Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as a philosopher), rather than greats such as Immanuel Kant and John Stuart Mill, or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of the latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from the one we evolved in—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide us in that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—sickle-cell red blood cells.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would have been entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty unlikely, actually: there were fewer than 200,000 crimes in all of New York City last year, in a city of 8,000,000 people—meaning the probability that any given person committed a crime in the previous year was only 2.5%, and the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of your mental category “criminals”, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
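
To see how strongly the base rate dominates, here is that arithmetic spelled out in a few lines of Python (a toy calculation using the rough figures above, not official crime statistics):

```python
# A toy base-rate calculation using the rough figures above.
# These are illustrative numbers, not official crime statistics.

crimes_per_year = 200_000     # reported crimes in the city, per year
population = 8_000_000        # city population

p_crime_this_year = crimes_per_year / population   # 2.5%
p_crime_today = p_crime_this_year / 365            # ~0.007%

print(f"P(random person committed a crime this year) = {p_crime_this_year:.1%}")
print(f"P(random person commits a crime today)      ~ {p_crime_today:.3%}")
# Even if 'seeming representative' multiplied the risk tenfold, you would
# still be more than 99.9% safe tonight. The base rate dominates.
```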

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy, where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle-class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the 1,000 richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, by paying only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—an effect called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.
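
If you want to feel the difference rather than just read about it, here’s a quick sketch; the $100,000-a-year spending rate is an arbitrary illustrative figure:

```python
# Scope neglect, made tangible: how long would each sum last
# at a (generous, illustrative) spending rate of $100,000 a year?

for name, amount in [("$1 million", 1e6),
                     ("$1 billion", 1e9),
                     ("$1 trillion", 1e12)]:
    print(f"{name:>11}: {amount / 100_000:>12,.0f} years")
# $1 million :           10 years
# $1 billion :       10,000 years
# $1 trillion:   10,000,000 years -- "big number" hides a factor of a million.
```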

Where it does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—you are a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to N*1/N = 1. It’s constant. It doesn’t matter how big the nation is, the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) rather than the Wayne County Commission (18% turnout).]
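
Here’s the whole argument in a few lines of Python, in case the algebra went by too fast; stake_per_person is a stand-in for the average utility at stake per person, not an empirical estimate:

```python
# The anti-Downs arithmetic sketched above: your vote moves the
# outcome by ~1/N, but the outcome matters to all N people.

def marginal_utility_of_vote(n_people, stake_per_person=1.0):
    influence = 1 / n_people          # your share of the decision
    total_stake = n_people * stake_per_person
    return influence * total_stake    # = stake_per_person, independent of N

for n in (100, 1_000, 300_000_000):
    print(n, marginal_utility_of_vote(n))   # always 1.0 (modulo float rounding)
```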

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has been so recently thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually, in a formal evolutionary sense, a “good trick”—we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to get it right 99% of the time, when getting it right 90% of the time is so much easier?

Why? Because it’s so incredibly important that we get these things right.

Pareto Efficiency: Why we need it—and why it’s not enough

JDN 2456914 PDT 11:45.

I already briefly mentioned the concept in an earlier post, but Pareto-efficiency is so fundamental to both ethics and economics that I decided I would spend some more time explaining exactly what it’s about.

This is the core idea: A system is Pareto-efficient if you can’t make anyone better off without also making someone else worse off. It is Pareto-inefficient if the opposite is true, and you could improve someone’s situation without hurting anyone else.

Improving someone’s situation without harming anyone else is called a Pareto-improvement. A system is Pareto-efficient if and only if there are no possible Pareto-improvements.

Zero-sum games are always Pareto-efficient. If the game is about how we distribute the same $10 between two people, any dollar I get is a dollar you don’t get, so no matter what we do, we can’t make either of us better off without harming the other. You may have ideas about what the fair or right solution is—and I’ll get back to that shortly—but all possible distributions are Pareto-efficient.

Where Pareto-efficiency gets interesting is in nonzero-sum games. The most famous and most important such game is the so-called Prisoner’s Dilemma; I don’t like the standard story used to set up the game, so I’m going to give you my own. Two corporations, Alphacomp and Betatech, make PCs. The computers they make are of basically the same quality and neither is a big brand name, so very few customers are going to choose on anything except price. Combining labor, materials, equipment and so on, it costs each company $300 to manufacture each PC, and most customers are willing to buy a PC as long as it’s no more than $1000. Suppose there are 1000 customers buying. Now the question is, what price do they set? They would both make the most profit if they set the price at $1000, because customers would still buy and they’d make $700 on each unit, each making $350,000. But now suppose Alphacomp sets its price at $1000; Betatech could undercut them by setting the price at $999, selling twice as many PCs and making $699,000. And then Alphacomp could respond by setting the price at $998, and so on. The only stable end result if they are both selfish profit-maximizers—the Nash equilibrium—is when they both set the price at $301, meaning each company profits only $1 per PC; splitting the 1000 customers, each makes a paltry $500. Indeed, this result is what we call in economics perfect competition. This is great for consumers, but not so great for the companies.

If you focus on the most important choice, $1000 versus $999—to collude or to compete—we can set up a table of how much each company would profit by making that choice (a payoff matrix or normal form game in game theory jargon).

             A: $999               A: $1000
B: $999      A: $349k, B: $349k    A: $0,    B: $699k
B: $1000     A: $699k, B: $0       A: $350k, B: $350k

Obviously the choice that makes both companies best off is for both to set the price at $1000; that is Pareto-efficient. But it’s also Pareto-efficient for Alphacomp to choose $999 while Betatech chooses $1000, because then Alphacomp sells twice as many computers. We have made someone worse off—Betatech—but the result is still Pareto-efficient, because we couldn’t give Betatech back what they lost without taking away some of what Alphacomp gained.

There’s only one option that’s not Pareto-efficient: If both companies charge $999, they could both have made more money if they’d charged $1000 instead. The problem is, that’s not the Nash equilibrium; the stable state is the one where they set the price lower.

This means that the only case that isn’t Pareto-efficient is precisely the one the system will naturally trend toward if both companies are selfish profit-maximizers. (And while most human beings are nothing like that, most corporations actually get pretty close. They aren’t infinite, but they’re huge; they aren’t identical, but they’re very similar; and they basically are psychopaths.)

In jargon, we say the Nash equilibrium of a Prisoner’s Dilemma is Pareto-inefficient. That one sentence is basically why John Nash was such a big deal; up until that point, everyone had assumed that if everyone acted in their own self-interest, the end result would have to be Pareto-efficient; Nash proved that this isn’t true at all. Everyone acting in their own self-interest can doom us all.
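
If you want to check this yourself, here’s a minimal sketch that encodes the payoff matrix above and tests which cells are Nash equilibria:

```python
# A minimal check of the pricing game above: encode the payoff matrix,
# find each firm's best responses, and confirm the Nash equilibrium
# ($999, $999) is the one Pareto-inefficient cell.

payoffs = {  # (A's price, B's price) -> (A's profit, B's profit), in $1000s
    (999, 999):   (349, 349),
    (999, 1000):  (699, 0),
    (1000, 999):  (0, 699),
    (1000, 1000): (350, 350),
}

def is_nash(a, b):
    """Neither firm can gain by unilaterally changing its own price."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= pa for a2 in (999, 1000))
    best_b = all(payoffs[(a, b2)][1] <= pb for b2 in (999, 1000))
    return best_a and best_b

for cell in payoffs:
    print(cell, "Nash equilibrium" if is_nash(*cell) else "")
# Only (999, 999) is a Nash equilibrium -- yet both firms would be strictly
# better off at (1000, 1000). That is exactly what Pareto-inefficiency means.
```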

It’s not hard to see why Pareto-efficiency would be a good thing: if we can make someone better off without hurting anyone else, why wouldn’t we? What’s harder for most people—and even most economists—to understand is that just because an outcome is Pareto-efficient, that doesn’t mean it’s good.

I think this is easiest to see in zero-sum games, so let’s go back to my little game of distributing the same $10. Let’s say the split is entirely within my power to choose—this simplest version is actually called the dictator game. If I take $9 for myself and only give you $1, is that Pareto-efficient? It sure is; for me to give you any more, I’d have to lose some for myself. But is it fair? Obviously not! The fair option is for me to go fifty-fifty, $5 and $5; and maybe you’d forgive me if I went sixty-forty, $6 and $4. But if I take $9 and only offer you $1, you know you’re getting a raw deal.

As the game is usually played, though, you have the choice to say, “Forget it; if that’s your offer, we both get nothing”—this version, with the power to reject, is the ultimatum game proper. In that case the game is nonzero-sum, and the choice you’ve just made is not Pareto-efficient! Neoclassicists are typically baffled by the fact that you would turn down that free $1, paltry as it may be; but I’m not baffled at all, and I’d probably do the same thing in your place. You’re willing to pay that $1 to punish me for being so stingy. And indeed, if you allow this punishment option, guess what? People aren’t as stingy! If you play the game without the rejection option, people typically take about $7 and give about $3 (still fairer than the $9/$1, you may notice; most people aren’t psychopaths), but if you allow it, people typically take about $6 and give about $4. Now, these are pretty small sums of money, so it’s a fair question what people might do if $100,000 were on the table and they were offered $10,000. But that doesn’t mean people aren’t willing to stand up for fairness; it just means that they’re only willing to go so far. They’ll take a $1 hit to punish someone for being unfair, but that $10,000 hit is just too much. I suppose this means most of us do what The Guess Who told us: “You can sell your soul, but don’t you sell it too cheap!”

Now, let’s move on to the more complicated—and more realistic—scenario of a nonzero-sum game. In fact, let’s make the “game” a real-world situation. Suppose Congress is debating a bill that would introduce a 70% marginal income tax on the top 1% to fund a basic income. (Please, can we debate that, instead of proposing a balanced-budget amendment that would cripple US fiscal policy indefinitely and lead to a permanent depression?)

This tax would raise about 14% of GDP in revenue, or about $2.4 trillion a year (yes, really). It would then provide, for every man, woman and child in America, a $7000 per year income, no questions asked. For a family of four, that would be $28,000, which is bound to make their lives better.

But of course it would also take a lot of money from the top 1%; Mitt Romney would only make $6 million a year instead of $20 million, and Bill Gates would have to settle for $2.4 billion a year instead of $8 billion. Since it’s the whole top 1%, it would also hit a lot of people with more moderate high incomes, like your average neurosurgeon or Paul Krugman, who each make about $500,000 a year. About $100,000 of that is above the cutoff for the top 1%, so they’d each pay the 70% rate on that last $100,000, or about $70,000 more in taxes; if they were paying $175,000 before, they’d now pay $245,000, taking home $255,000 instead of $325,000. (Probably not as big a difference as you thought, right? Most people do not seem to understand how marginal tax rates work, as evinced by “Joe the Plumber”, who thought that if he made $250,001 he would be taxed at the top rate on the whole amount—no, just on that last $1.)
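
Since marginal rates confuse so many people, here’s the computation spelled out; the brackets are illustrative round numbers, not the actual US tax schedule:

```python
# How marginal rates work: only income above each bracket's threshold
# is taxed at that bracket's rate. Brackets here are illustrative,
# not the actual US tax schedule.

def tax(income, brackets):
    """brackets: list of (threshold, rate), ascending by threshold."""
    owed = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            owed += (min(income, hi) - lo) * rate
    return owed

brackets = [(0, 0.15), (250_000, 0.35), (400_000, 0.70)]
print(tax(250_000, brackets))   # 37500.0
print(tax(250_001, brackets))   # 37500.35 -- the extra $1 is taxed at 35%
print(tax(500_000, brackets))   # 160000.0 -- only the last $100k pays 70%
```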

You can even suppose that it would hurt the economy as a whole, though in fact there’s no evidence of that—we had tax rates like this in the 1960s and our economy did just fine. The basic income itself would inject so much spending into the economy that we might actually see more growth. But okay, for the sake of argument let’s suppose it also drops our per-capita GDP by 5%, from $53,000 to $50,300; that really doesn’t sound so bad, and any bigger drop than that is a totally unreasonable estimate based on prejudice rather than data. At the same tax rate we might have to drop the basic income a bit too, say to $6600 instead of $7000.

So, this is not a Pareto-improvement; we’re making some people better off, but others worse off. In fact, the way economists usually estimate Pareto-efficiency based on so-called “economic welfare”, they really just count up the total number of dollars and divide by the number of people and call it a day; so if we lose 5% in GDP they would register this as a Pareto-loss. (Yes, that’s a ridiculous way to do it for obvious reasons—$1 to Mitt Romney isn’t worth as much as it is to you and me—but it’s still how it’s usually done.)

But does that mean that it’s a bad idea? Not at all. In fact, if you assume that the real value—the utility—of a dollar decreases exponentially with each dollar you have, this policy could almost double the total happiness in US society. If you use a logarithm instead, it’s not quite as impressive; it’s only about a 20% improvement in total happiness—in other words, “only” making as much difference to the happiness of Americans from 2014 to 2015 as the entire period of economic growth from 1900 to 2000.
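
Here’s a stylized version of that calculation: log utility, a rich 1% versus everyone else, with invented round-number incomes (don’t mistake them for the actual US income distribution):

```python
import math

# A stylized two-class version of the calculation above: log utility,
# a rich 1% and everyone else. All incomes are invented round numbers.

def total_log_utility(groups):
    """groups: list of (population share, income) pairs."""
    return sum(share * math.log(income) for share, income in groups)

CUTOFF = 400_000   # illustrative top-1% threshold

before = [(0.99, 45_000), (0.01, 1_500_000)]
after  = [(0.99, 45_000 + 7_000),                  # basic income added
          (0.01, 1_500_000 - 0.70 * (1_500_000 - CUTOFF) + 7_000)]

gain = total_log_utility(after) - total_log_utility(before)
print(f"change in average log-utility: {gain:+.3f}")   # about +0.14
# Positive: the many small gains outweigh the few large (log-scaled) losses.
```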

If right now you’re thinking, “Wow! Why aren’t we doing that?” that’s good, because I’ve been thinking the same thing for years. And maybe if we keep talking about it enough we can get people to start voting on it and actually make it happen.

But in order to make things like that happen, we must first get past the idea that Pareto-efficiency is the only thing that matters in moral decisions. And once again, that means overcoming the standard modes of thinking in neoclassical economics.

Something strange happened to economics in about 1950. Before that, economists from Smith to Marx to Keynes were always talking about differences in utility, marginal utility of wealth, how to maximize utility. But then economists stopped being comfortable talking about happiness, deciding (for reasons I still do not quite grasp) that it was “unscientific”, so they eschewed all discussion of the subject. Since we still needed to know why people choose what they do, a new framework was created revolving around “preferences”, which are a simple binary relation—you either prefer it or you don’t; you can’t like it “a lot more” or “a little more”—that is supposedly more measurable and therefore more “scientific”. But under this framework, there’s no way to say that giving a dollar to a homeless person makes a bigger difference to them than giving the same dollar to Mitt Romney, because a “bigger difference” is something you’ve defined out of existence. All you can say is that each would prefer to receive the dollar, and that both Mitt Romney and the homeless person would, given the choice, prefer to be Mitt Romney. While both of these things are true, it does seem to be kind of missing the point, doesn’t it?

There are stirrings of returning to actual talk about measuring actual (“cardinal”) utility, but still preferences (so-called “ordinal utility”) are the dominant framework. And in this framework, there’s really only one way to evaluate a situation as good or bad, and that’s Pareto-efficiency.

Actually, that’s not quite right; John Rawls cleverly came up with a way around this problem, by using the idea of “maximin”—maximize the minimum. Since each would prefer to be Romney, given the chance, we can say that the homeless person is worse off than Mitt Romney, and therefore say that it’s better to make the homeless person better off. We can’t say how much better, but at least we can say that it’s better, because we’re raising the floor instead of the ceiling. This is certainly a dramatic improvement, and on these grounds alone you can argue for the basic income—your floor is now explicitly set at the $6600 per year of the basic income.

But is that really all we can say? Think about how you make your own decisions; do you only speak in terms of strict preferences? I like Coke more than Pepsi; I like massages better than being stabbed. If preference theory is right, then there is no greater distance in the latter case than in the former, because this whole notion of “distance” is unscientific. I guess we could extend the preference relation to groups of goods (baskets, as they are generally called), and say that I prefer the basket “drink Pepsi and get a massage” to the basket “drink Coke and get stabbed”, which is certainly true. But do we really want to have to define that for every single possible combination of things that might happen to me? Suppose there are 1000 things that could happen to me at any given time, which is surely conservative. In that case there are 2^1000 ≈ 10^301 possible combinations. If I were really just reading off a table of unrelated preference relations, there wouldn’t be room in my brain—or my planet—to store it, nor enough time in the history of the universe to read it. Even imposing rational constraints like transitivity doesn’t shrink the set anywhere near small enough—at best maybe now it’s 10^20; well done, now I theoretically could make one decision every billion years or so. At some point doesn’t it become a lot more parsimonious—dare I say, more scientific—to think that I am using some more organized measure than that? It certainly feels like I am; even if I couldn’t exactly quantify it, I can definitely say that some differences in my happiness are large and others are small. The mild annoyance of drinking Pepsi instead of Coke will melt away in the massage, but no amount of Coke deliciousness is going to overcome the agony of being stabbed.
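
Here’s the storage argument made concrete; the counts are the whole point, and nothing here pretends to model real preferences:

```python
# The storage argument above, made concrete. An explicit preference
# relation over bundles of n yes/no events needs up to C(2^n, 2)
# entries; a utility function needs only n numbers.

n = 1000
bundles = 2 ** n                       # possible situations
pairs = bundles * (bundles - 1) // 2   # pairwise preferences to store

print(f"bundles: ~1e{len(str(bundles)) - 1}")              # ~1e301
print(f"pairwise preferences: ~1e{len(str(pairs)) - 1}")   # ~1e601

# Versus: assign each of the 1000 events a utility weight and rank any
# two bundles by summing -- 1000 numbers instead of ~1e601 entries.
```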

And indeed if you give people surveys and ask them how much they like things or how strongly they feel about things, they have no problem giving you answers out of 5 stars or on a scale from 1 to 10. Very few survey participants ever write in the comments box: “I was unable to take this survey because cardinal utility does not exist and I can only express binary preferences.” A few do write 1s and 10s on everything, but even those are fairly rare. This “cardinal utility” that supposedly doesn’t exist is the entire basis of the scoring system on Netflix and Amazon. In fact, if you use cardinal utility in voting, it is mathematically provable that you have the best possible voting system, which may have something to do with why Netflix and Amazon like it. (That’s another big “Why aren’t we doing this already?”)
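
In case you’ve never seen it, this is how simple that kind of cardinal-utility voting (usually called score or range voting) is to tally; the candidates and scores are made up:

```python
# A minimal score-voting tally: each voter rates every option on a
# fixed scale, and the highest total wins. Names and scores invented.

ballots = [
    {"A": 9, "B": 4, "C": 1},
    {"A": 2, "B": 8, "C": 7},
    {"A": 6, "B": 7, "C": 3},
]

totals = {}
for ballot in ballots:
    for option, score in ballot.items():
        totals[option] = totals.get(option, 0) + score

winner = max(totals, key=totals.get)
print(totals, "->", winner)   # {'A': 17, 'B': 19, 'C': 11} -> B
```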

If you can actually measure utility in this way, then there’s really not much reason to worry about Pareto-efficiency. If you just maximize utility, you’ll automatically get a Pareto-efficient result; but the converse is not true because there are plenty of Pareto-efficient scenarios that don’t maximize utility. Thinking back to our ultimatum game, all options are Pareto-efficient, but you can actually prove that the $5/$5 choice is the utility-maximizing one, if the two players have the same amount of wealth to start with. (Admittedly for those small amounts there isn’t much difference; but that’s also not too surprising, since $5 isn’t going to change anybody’s life.) And if they don’t—suppose I’m rich and you’re poor and we play the game—well, maybe I should give you more, precisely because we both know you need it more.
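
Here’s a quick check of that claim, assuming log utility and an arbitrary illustrative starting wealth of $100 apiece:

```python
import math

# Checking the claim above: with equal starting wealth and log utility,
# the even split of $10 maximizes total utility. The $100 starting
# wealth is an arbitrary illustrative figure.

wealth = 100
best = max(range(11),
           key=lambda you: math.log(wealth + you) + math.log(wealth + 10 - you))
print(f"utility-maximizing split: ${best} for you, ${10 - best} for me")
# -> $5 for you, $5 for me (by symmetry and the concavity of log)
```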

Perhaps even more significant, you can move from a Pareto-inefficient scenario to a Pareto-efficient one and make things worse in terms of utility. The scenario in which the top 1% are as wealthy as they can possibly be and the rest of us live on scraps may in fact be Pareto-efficient; but that doesn’t mean any of us should be interested in moving toward it (though sadly, we kind of are). If you’re only measuring in terms of Pareto-efficiency, your attempts at improvement can actually make things worse. It’s not that the concept is totally wrong; Pareto-efficiency is, other things equal, good; but other things are never equal.

So that’s Pareto-efficiency—and why you really shouldn’t care about it that much.

2014, a year of war

[First of all, let me apologize for missing last week’s post. It was quite a week for me; the weekend itself (actually Wednesday to Sunday) was taken up by Gen Con, after which I had my four-day road trip back to Long Beach, and then of course I had to unpack, clean my apartment, stock my refrigerator and so on. Now that I’m settled in back in Long Beach, I should be able to resume my regular blogging schedule. Classes start on Monday, but I won’t let that stop me.]

Things aren’t looking too good in the world lately. Russia launched a secret invasion of Ukraine and is now deploying “humanitarian convoys” with full military capability. The war between Israel and its neighbors has reached a new flashpoint. Assad continues to oppress Syria, but lately he’s looking like the lesser of two evils as he escalates the war against ISIS. Then again, ISIS is kind of his fault to begin with. But blame aside, ISIS is absolutely horrifying; they recently beheaded an American journalist. Even China just performed some belligerent maneuvers around a US spy plane (basically a dick-measuring contest that China hasn’t the faintest hope of winning).

Indeed, things have gotten so bad that the UN has declared level-3 humanitarian crises in three different countries at once—and level 3 is the worst rating the UN gives. People are making comparisons to the Rwandan genocide and even World War 2.

But it’s important to keep in mind that the reason this bothers us, the reason it seems so shocking and aberrant, is that the latter half of the 20th century and the start of the 21st have been the most peaceful period in recorded history. Technology notwithstanding, the level of violence we are seeing now would not have been out of place in the Middle Ages; even if they’d had world news broadcast media, it would have given these events only minor attention.

It’s also interesting to see how neoclassical economists try to understand the phenomenon of war. Right-wing economists who think that humans are rational are completely baffled by war, because it cripples infrastructure and kills millions of people (as neoclassical economists would say, “depreciation of human capital”, as though human beings were a special case of machinery), basically the opposite of what an economy is supposed to do.

Austrians use this fact as yet another plank in an anti-Keynesian platform; they frequently accuse Keynesians of thinking war is good—when I’ve yet to meet a Keynesian who actually said such a thing. Stiglitz, the one they cite most approvingly in that article, is a die-hard Keynesian; moreover the column in which Krugman talks about alien invasions was obviously tongue-in-cheek. It’s quite interesting to me how Austrians are always saying that humans are rational and economists are not, so apparently they don’t think economists are human. (I concur that economists are often irrational, but I never said humans weren’t.)

Krugman is more sensitive to the irrationality of human behavior than most neoclassicists, and as a result he does proportionally better; Krugman recognizes that war is done for political, not economic reasons. But as neoclassical Keynesians are wont to do, he doesn’t look deeper; human behavior is assumed to be a minor deviation from the infinite identical psychopath, rather than a fundamentally different paradigm.

Think about it: Why would it be that leaders become more popular when they start wars, especially if war is economically damaging? Shouldn’t people be angry at a leader who insists upon risking their lives and destroying their wealth?

To be fair, some are; anti-war protest is about as old as war. But the vast majority of people in the vast majority of wars have supported their leaders, sometimes even saying things like “I disagree with this war, but we must all stand together in order to win it.” If you think that humans are rational self-interested optimizers, this sort of behavior must seem absolutely nonsensical.

But it makes perfect sense once you realize what humans actually are. We’re not selfish. We’re also not altruistic, not in the broadest sense. We are tribal. We identify ourselves with a group, our tribe, and then act to advance the perceived interests of that group.

What tribe we choose can vary, even within one person: You can have varying degrees of solidarity (remember how I said solidarity can be quantitatively formalized?) with your family, your friends, your school, your home town, your state, your nation, your race, your culture, your religion, your species. You can be torn between these different identities when their interests conflict. At the two extremes lie your own self-interest and the interests of all sentient beings in the universe; one measure of your moral development as an individual is how much time you can spend toward the latter end rather than the former.

When a leader declares war, he—it is usually a ‘he’, though Margaret Thatcher is a notable exception—is either expressing that tribal instinct or capitalizing upon it. For examples of each, look no further than George W. Bush, who really believed in avenging 9/11 and toppling Saddam Hussein, and Dick Cheney, who saw the Iraq War as a great way to raise the value of Halliburton stock. (Among living people, Dick Cheney is the closest I can think of to a neoclassical rational agent. Among the dead, I think I’d go with Josef Stalin. Look upon your ‘rationality’ and despair.)

The reason Netanyahu’s popularity spiked during the invasion (it’s heading down now, but still over 50%) and Putin’s remains above an astonishing 80% is that they are playing to this tribal instinct, rallying the tribe to righteous war against its enemies. They are behaving like the alpha male our ape brains have long missed—I mean, seriously, Putin looks like a shaved gorilla. The aggression is driven by an ancient animus that we have spent millions of years trying to transcend.

The good news is, we actually are beginning to succeed. The process is slow and painful, and there are setbacks—2014 was definitely a setback—but still, we do make progress. We have expanded our notion of tribes over time, far beyond its original capacity. We evolved for a tribe of about 100 people, barely above what we’d now call “friends and family”; we now unite ourselves into nation-states of hundreds of millions or even billions. The very fact that I can say “China did X” and not be speaking utter nonsense is proof that humanity has made it quite far along the continuum toward universal altruism. We have already advanced seven orders of magnitude; we have less than one left before we include the entire human species. Another two or three after that, and we’ll have encompassed all sentient life on Earth. Another five or six past that, the galaxy; then another nine and we may well have the whole damn universe. 7 down, 18 to go.

Don’t lose hope; this year’s violence is an anomaly in the trend toward peace.

Schools of Thought

If you’re at all familiar with the schools of thought in economics, you may wonder where I stand. Am I a Keynesian? Or perhaps a post-Keynesian? A New Keynesian? A neo-Keynesian (not to be confused)? A neo-paleo-Keynesian? Or am I a Monetarist? Or a Modern Monetary Theorist? Or perhaps something more heterodox, like an Austrian or a Sraffian or a Marxist?

No, I am none of those things. I guess if you insist on labeling, you could call me a “cognitivist”; and in terms of policy I tend to agree with the Keynesians, but I also like the Modern Monetary Theorists.

But really I think this sort of labeling of ‘schools of thought’ is exactly the problem. There shouldn’t be schools of thought; the universe only works one way. When you don’t know the answer, you should have the courage to admit you don’t know. And once we actually have enough evidence to know something, people need to stop disagreeing about it. If you continue to disagree with what the evidence has shown, you’re not a ‘school of thought’; you’re just wrong.

The whole notion of ‘schools of thought’ smacks of cultural relativism; asking what the ‘Keynesian’ answer to a question is (and if you take enough economics classes I guarantee you will be asked exactly that) is rather like asking what religious beliefs prevail in a particular part of the world. It might be worth asking for some historical reason, but it’s not a question about economics; it’s a question about economic beliefs. This is the difference between asking how people believe the universe was created, and actually being a cosmologist. True, schools of thought aren’t as geographically localized as religions; but they do say the words ‘saltwater’ and ‘freshwater’ for a reason. I’m not all that interested in the Shinto myths versus the Hindu myths; I want to be a cosmologist.

At best, schools of thought are a sign of a field that hasn’t fully matured. Perhaps there were Newtonians and Einsteinians in 1910; but by 1930 there were just Einsteinians and bad physicists. Are there ‘schools of thought’ in physics today? Well, there are string theorists. But string theory hasn’t been a glorious success of physics advancement; on the contrary, it’s been a dead end from which the field has somehow failed to extricate itself for almost 50 years.

So where does that put us in economics? Well, some of the schools of thought are clearly dead ends, every bit as unfounded as string theory but far worse because they have direct influences on policy. String theory hasn’t ever killed anyone; bad economics definitely has. (How, you ask? Exposure to hazardous chemicals that were deregulated; poverty and starvation due to cuts to social welfare programs; and of course the Second Depression. I could go on.)

The worst offender is surely Austrian economics and its crazy cousin Randian libertarianism. Ayn Rand literally ruled a cult; Friedrich Hayek never took it quite that far, but there is certainly something cultish about Austrian economists. They insist that economics must be derived a priori, without recourse to empirical evidence (or at least that’s what they say when you point out that all the empirical evidence is against them). They are fond of ridiculous hyperbole about an inevitable slippery slope between raising taxes on capital gains and turning into Stalin’s Soviet Union, as well as rhetorical questions I find myself answering opposite to how they want (like “For are taxes not simply another form of robbery?” and “Once we allow the government to regulate what man can do, will they not continue until they control all aspects of our lives?”). They even co-opt and distort cognitivist concepts like herd instinct and asymmetric information; somehow Austrians think that asymmetric information is an argument for why markets are more efficient than government, even though Akerlof’s point was that asymmetric information is why we need regulations.

Marxists are on the opposite end of the political spectrum, but their ideas are equally nonsensical. (Marx himself was a bit more reasonable, and even he recognized that his followers were going too far: “All I know is that I am not a Marxist.”) They have this whole “labor theory of value” thing, where the value of something is the amount of work you have to put into it. This would mean that labor-saving innovations are pointless, because they devalue everything; it would also mean that putting an awful lot of work into something useless would nevertheless somehow make it enormously valuable. Really, it would never be worth doing much of anything, because the value you get out of something is exactly equal to the work you put in. Marxists also tend to think that what the world needs is a violent revolution to overthrow the bondage of capitalism; this is an absolutely terrible idea. During the transition it would be one of the bloodiest conflicts in history; afterward you’d probably get something like the Soviet Union or modern-day Venezuela. Even if you did somehow establish your glorious Communist utopia, you’d have destroyed so much productive capacity in the process that you’d make everyone poor. Socialist reforms make sense—and have worked well in Europe, particularly Scandinavia. But socialist revolution is a good way to get millions of innocent people killed.

Sraffians are also quite silly; they have this bizarre notion that capital must be valued as “dated labor”, basically a formalized Marxism. I’ll admit, it’s weird how neoclassicists try to value labor as “human capital”; frankly it’s a bit disturbing how it echoes slavery. (And if you think slavery is dead, think again; it’s dead in the First World, but very much alive elsewhere.) But the solution to that problem is not to pretend that capital is a form of labor; it’s to recognize that capital and labor are different. Capital can be owned, sold, and redistributed; labor cannot. Labor is done by human beings, who have intrinsic value and rights; capital is made of inanimate matter, which does not. (This is what makes Citizens United so outrageous; “corporations are people” and “money is speech” are such fundamental distortions of democratic principles that they are literally Orwellian. We’re not that far from “freedom is slavery” and “war is peace”.)

Neoclassical economists do better, at least. They do respond to empirical data, albeit slowly. Their models are mathematically consistent. They rarely take account of human irrationality or asymmetric information, but when they do they rightfully recognize them as obstacles to efficient markets. But they still model people as infinite identical psychopaths, and they still divide themselves into schools of thought. Keynesians and Monetarists are particularly prominent, and Modern Monetary Theorists seem to be the next rising star. Each of these schools gets some things right and other things wrong, and that’s exactly why we shouldn’t make ourselves beholden to a particular tribe.

Monetarists follow Friedman, who said, “inflation is always and everywhere a monetary phenomenon.” This is wrong. You can definitely cause inflation without expanding your money supply: just ramp up government spending as in World War 2, or suffer a supply shock like we did when OPEC cut the oil supply. (During World War 2, at least, the US money supply was still firmly tied to gold.) But they are right about one thing: To really have hyperinflation à la Weimar or Zimbabwe, you probably have to be printing money. If that were all there were to Monetarism, I could invert another Friedmanism: We’re all Monetarists now.

Keynesians are basically right about most things; in particular, they are the only branch of neoclassicists who understand recessions and know how to deal with them. The world’s most famous Keynesian is probably Krugman, who has the best track record of economic predictions in the popular media today. Keynesians also have a much better appreciation of the fact that humans are irrational; in fact, cognitivism can be partly traced to Keynes, who spoke often of the “animal spirits” that drive human behavior (Akerlof’s most recent book is called Animal Spirits). But even Keynesians have their sacred cows, like the Phillips Curve, the alleged inverse correlation between inflation and unemployment. This is fairly empirically accurate if you look just at First World economies after World War 2 and exclude major recessions. But Keynes himself said, “Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.” The Phillips Curve “shifts” sometimes, and it’s not always clear why—and empirically it’s not easy to tell the difference between a curve that shifts a lot and a relationship that just isn’t there. There is very little evidence for a “natural rate of unemployment”. Worst of all, it’s pretty clear that the original policy implications of the Phillips Curve are all wrong; you can’t get rid of unemployment just by ramping up inflation, and that way really does lie Zimbabwe.

Finally, Modern Monetary Theorists understand money better than everyone else. They recognize that a sovereign government doesn’t have to get its money “from somewhere”; it can create however much money it needs. The whole narrative that the US is “out of money” isn’t just wrong, it’s incoherent; if there is one entity in the world that can never be out of money, it’s the US government, which prints the world’s reserve currency.

The panicked fears of quantitative easing causing hyperinflation aren’t quite as crazy; if the economy were at full capacity, printing $4 trillion over 5 years (yes, we did that) would absolutely cause some inflation. Since that’s only about 6% of US GDP per year, we might be back to 8% or even 10% inflation like the 1970s, but we certainly would not be in Zimbabwe. Moreover, we aren’t at full capacity; we needed to expand the money supply that much just to maintain prices where they are. The Second Depression is the Red Queen: It took all the running we could do to stay in one place.

Modern Monetary Theorists also have some very good ideas about taxation; they point out that since the government only takes out the same thing it puts in—its own currency—it doesn’t make sense to say it is “taking” something (let alone “confiscating” it, as Austrians would have you believe). Instead, it’s more like pumping: taking money in and forcing it back out continuously. And just as pumping doesn’t take away water but rather makes it flow, taxation and spending don’t remove money from the economy but rather maintain its circulation.

Now that I’ve said what they get right, what do they get wrong? Basically, they focus too much on money, ignoring the real economy. They like to use double-entry accounting models—perfectly sensible for money, but absolutely nonsensical for real value. The whole point of an economy is that you can get more value out than you put in. From the Homo erectus who pulls apples from the trees to the software developer who buys a mansion, the reason they do it is that the value they get out (the gatherer gets to eat, the programmer gets to live in a mansion) is higher than the value they put in (the effort to climb the tree, the skill to write the code). If, as Modern Monetary Theorists are wont to do, you calculated a value for the human capital of the gatherer and the programmer equal to the value of the goods they purchase, you’d be missing the entire point.

Who are you? What is this new blog? Why “Infinite Identical Psychopaths”?

My name is Patrick Julius. I am about halfway through a master’s degree in economics, specializing in the new subfield of cognitive economics (closely related to the also quite new fields of cognitive science and behavioral economics). This makes me in one sense heterodox; I disagree adamantly with most things that typical neoclassical economists say. But in another sense, I am actually quite orthodox. All I’m doing is bringing the insights of psychology, sociology, history, and political science—not to mention ethics—to the study of economics. The problem is simply that economists have divorced themselves so far from the rest of social science.

Another way I differ from most critics of mainstream economics (I’m looking at you, Peter Schiff) is that, for lack of a better phrase, I’m good at math. (As Bill Clinton said, “It’s arithmetic!”) I understand things like partial differential equations and subgame perfect equilibria, and therefore I am equipped to criticize them on their own terms. In this blog I will do my best to explain the esoteric mathematical concepts in terms most readers can understand, but it’s not always easy. The important thing to keep in mind is that fancy math can’t make a lie true; no matter how sophisticated its equations, a model that doesn’t fit the real world can’t be correct.

This blog, which I plan to update every Saturday, is about the current state of economics, both as it is and how economists imagine it to be. One of my central points is that these two are quite far apart, which has exacerbated if not caused the majority of economic problems in the world today. (Economists didn’t invent world hunger, but for over a decade now we’ve had the power to end it and haven’t done so. You’d be amazed how cheap it would be; we’re talking about 1% of First World GDP at most.)

The reason I call it “infinite identical psychopaths” is that this is what neoclassical economists appear to believe human beings are, at least if we judge by the models they use. These are the typical assumptions of a neoclassical economic model:

      1. Perfect information: All individuals know everything they need to know about the state of the world and the actions of other individuals.
      2. Rational expectations: Predictions about the future can only be wrong within a normal distribution, and in the long run are on average correct.
      3. Representative agents: All individuals are identical and interchangeable; a single type represents them all.
      4. Perfect competition: There are infinitely many agents in the market, and none of them ever collude with one another.
      5. “Economic rationality”: Individuals act according to a monotonic increasing utility function that is only dependent upon their own present and future consumption of goods.

I put the last one in scare quotes because it is the worst of the bunch. What economists call “rationality” has only a distant relation to actual rationality, either as understood by common usage or by formal philosophical terminology.

Don’t be scared by the terminology; a “utility function” is just a formal model of the things you care about when you make decisions. Things you want have positive utility; things you don’t want have negative utility. Larger numbers reflect stronger feelings: a bar of chocolate has much less positive utility than a decade of happy marriage; a pinched finger has much less negative utility than a year of continual torture. Utility maximization just means that you try to get the things you want and avoid the things you don’t. By talking about expected utility, we make some allowance for an uncertain future—but not much, because we have so-called “rational expectations”.
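
To make the definitions concrete, here’s a bare-bones expected-utility chooser; the outcomes, probabilities, and utility numbers are all invented:

```python
# A bare-bones expected-utility maximizer, just to make the definitions
# concrete. Outcomes, probabilities, and utilities are all invented.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

options = {
    "safe bet":  [(1.0, 50)],                 # utility 50 for sure
    "coin flip": [(0.5, 0), (0.5, 120)],      # utility 0 or 120
}

choice = max(options, key=lambda k: expected_utility(options[k]))
print(choice)   # "coin flip": expected utility 60 beats a sure 50
```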

Since any action taken by an “economically rational” agent maximizes expected utility, it is impossible for such an agent to ever make a mistake in the usual sense. Whatever they do is always the best idea at the time. This is already an extremely strong assumption that doesn’t make a whole lot of sense applied to human beings; who among us can honestly say they’ve never done anything they later regretted?

The worst part, however, is the assumption that an individual’s utility function depends only upon their own consumption. What this means is that the only thing anyone cares about is how much stuff they have; considerations like family, loyalty, justice, honesty, and fairness cannot factor into their decisions. The “monotonic increasing” part means that more stuff is always better; if they already have twelve private jets, they’d still want a thirteenth; and even if children had to starve for it, they’d be just fine with that. They are, in other words, psychopaths. So that’s one word of my title.

I think “identical” is rather self-explanatory; by using representative agent models, neoclassicists effectively assume that there is no variation between human beings whatsoever. They all have the same desires, the same goals, the same capabilities, the same resources. Implicit in this assumption is the notion that there is no such thing as poverty or wealth inequality, not to mention diversity, disability, or even differences in taste. (One wonders why you’d even bother with economics if that were the case.)

As for “infinite”, that comes from the assumptions of perfect information and perfect competition. In order to really have perfect information, one would need a brain with enough storage capacity to contain the state of every particle in the visible universe. Maybe not quite infinite, but pretty darn close. Likewise, in order to have true perfect competition, there must be infinitely many individuals in the economy, all of whom are poised to instantly take any opportunity offered that allows them to make even the tiniest profit.

Now, you might be thinking this is a strawman; surely neoclassicists don’t actually believe that people are infinite identical psychopaths. They just model that way to simplify the mathematics, which is of course necessary because the world is far too vast and interconnected to analyze in its full complexity.

This is certainly true: A Go board has 361 spaces, each of which can be black, white, or empty—roughly 3^361, or about 10^172, possible positions. Suppose it took you one microsecond to consider each one; how long would it take you to go through them all? More time than we have left before the universe fades into heat death. Now imagine trying to understand a global economy of 7 billion people by brute-force analysis. Simplifying heuristics are unavoidable.
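
Here’s the Go arithmetic, if you don’t believe me:

```python
positions = 3 ** 361                 # each of 361 spaces: black, white, empty
seconds = positions / 1e6            # one microsecond per position
years = seconds / (3600 * 24 * 365)

print(f"positions: ~1e{len(str(positions)) - 1}")   # ~1e172
print(f"enumeration time: ~{years:.0e} years")      # ~6e+158 years
# Heat-death-era timescales are commonly put around 1e100 years; brute
# force loses by dozens of orders of magnitude. Hence: heuristics.
```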

And some neoclassical economists—for example Paul Krugman and Joseph Stiglitz—generally use these heuristics correctly; they understand the limitations of their models and don’t apply them in cases where they don’t belong. In that sort of case, there’s nothing particularly bad about these simplifying assumptions; they are like when a physicist models the trajectory of a spacecraft by assuming frictionless vacuum. Since outer space actually is close to a frictionless vacuum, this works pretty well; and if you need to make minor corrections (like the Pioneer Anomaly) you can.

However, this explanation already seems weird for the “economically rational” assumption (the psychopath part), because that doesn’t really make things much simpler. Why would we exclude the fact that people care about each other, like to cooperate, and have feelings of loyalty and trust? And don’t tell me it’s because that’s impossible to quantify; behavioral geneticists already have a simple equation (Hamilton’s rule, C < rB) designed precisely to quantify altruism. (C is cost, B is benefit, r is relatedness.) I’d make only one slight modification; instead of r for relatedness, use p for psychological closeness, or as I like to call it, solidarity. For humans, solidarity is usually much higher than relatedness, though the two are correlated. C < pB.
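
In code, this “impossible to quantify” theory of altruism is essentially a one-liner (the solidarity values below are invented for illustration):

```python
# The altruism rule from the paragraph above, C < p*B, as a function.
# The solidarity values here are invented for illustration.

def helps(cost, benefit, solidarity):
    """Help whenever your weighted stake in their gain exceeds your cost."""
    return cost < solidarity * benefit

print(helps(cost=10, benefit=100, solidarity=0.5))    # True: 10 < 50
print(helps(cost=10, benefit=100, solidarity=0.05))   # False: 10 > 5
```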

Worse, there are other neoclassical economists—those of the most fanatically “free-market” bent—who really don’t seem to do this. I don’t know if they honestly believe that people are infinite identical psychopaths, but they make policy as if they did.

We have people like Stephen Moore saying that unemployment is “like a paid vacation” because obviously anyone who truly wants a job can immediately find one, or people like N. Gregory Mankiw arguing—in a published paper no less!—that the reason Steve Jobs was a billionaire was that he was actually a million times as productive as the rest of us, and therefore it would be inefficient (and, he implies but does not say outright, immoral) to take the fruits of those labors from him. (Honestly, I think I could concede the point and still argue for redistribution, on the grounds that people do not deserve to starve to death simply because they aren’t productive; but that’s the sort of thing never even considered by most neoclassicists, and anyway it’s a topic for another time.)

These kinds of statements would only make sense if markets were really as efficient and competitive as neoclassical models—that is, if people were infinite identical psychopaths. Allow even a single monopoly or just a few bits of imperfect information, and that whole edifice collapses.

And indeed if you’ve ever been unemployed or known someone who was, you know that our labor markets just ain’t that efficient. If you want to cut unemployment payments, you need a better argument than that. Similarly, it’s obvious to anyone who isn’t wearing the blinders of economic ideology that many large corporations exert monopoly power to increase their profits at our expense (How can you not see that Apple is a monopoly!?).

This sort of reasoning is more like plotting the trajectory of an aircraft on the assumption of frictionless vacuum; you’d be baffled as to where the oxidizer comes from, or how the craft manages to lift itself off the ground when the exhaust vents are pointed sideways instead of downward. And then you’d be telling the aerospace engineers to cut off the wings because they’re useless mass.

Worst of all, if we continue this analogy, the engineers would listen to you—they’d actually be convinced by your differential equations and cut off the wings just as you requested. Then the plane would never fly, and they’d ask if they could put the wings back on—but you’d adamantly insist that it was just coincidence, you just happened to be hit by a random problem at the very same moment as you cut off the wings, and putting them back on will do nothing and only make things worse.

No, seriously; so-called “Real Business Cycle” theory, while thoroughly obfuscated in esoteric mathematics, ultimately boils down to the assertion that financial crises have nothing to do with recessions, which are actually caused by random shocks to the real economy—the actual production of goods and services. The fact that a financial crisis always seems to happen just beforehand is, apparently, sheer coincidence, or at best some kind of forward-thinking response investors make as they see the storm coming. I want you to think for a minute about the idea that the kind of people who make computer programs that accidentally collapse the Dow, who made Bitcoin the first example in history of hyperdeflation, and who bought up Tweeter thinking it was Twitter are forward-thinking predictors of future events in real production.

And yet, it is on this sort of basis that our policy is made.

Can otherwise intelligent people really believe that these insane models are true? I’m not sure.
Sadly, I think they may really believe that all people are psychopaths—because they themselves may be psychopaths. Economics students score higher on various psychopathic traits than other students. Part of this is self-selection—psychopaths are more likely to study economics—but the terrifying part is that part of it isn’t: studying economics may actually make you more of a psychopath. As I study for my master’s degree, I am actually somewhat afraid of being corrupted by this; I make sure to periodically disengage from the ideology and interact with normal people with normal human beliefs to recalibrate my moral compass.

Of course, it’s still pretty hard to imagine that anyone could honestly believe that the world economy is in a state of perfect information. But if they can’t really believe this insane assumption, why do they keep using models based on it?

The more charitable possibility is that they don’t appreciate just how sensitive the models are to the assumptions. They may think, for instance, that the General Welfare Theorems still basically apply if you relax the assumption of perfect information; maybe the result isn’t always Pareto-efficient, but it probably is most of the time, right? Or at least close? Actually, no. The Myerson-Satterthwaite Theorem says that once you give up perfect information, the whole theorem collapses; even a small amount of asymmetric information is enough to make a Pareto-efficient outcome impossible. And as you might expect, the more asymmetric the information, the further the result deviates from Pareto-efficiency. And since we always have some asymmetric information, it looks like the General Welfare Theorems really aren’t doing much for us. They apply only in a magical fantasy world. (In case you didn’t know, Pareto-efficiency is a state in which it’s impossible to make any person better off without making someone else worse off. The real world is not in a Pareto-efficient state, which means that by smarter policy we could improve some people’s lives without hurting anyone else.)

The more sinister possibility is that they know full well that the models are wrong, they just don’t care. The models are really just excuses for an underlying ideology, the unshakeable belief that rich people are inherently better than poor people and private corporations are inherently better than governments. Hence, it must be bad for the economy to raise the minimum wage and good to cut income taxes, even though the empirical evidence runs exactly the opposite way; it must be good to subsidize big oil companies and bad to subsidize solar power research, even though that makes absolutely no sense.

One should normally be hesitant to attribute to malice what can be explained by stupidity, but the “I trust the models” explanation just doesn’t work for some of the really extreme privatizations that the US has undergone since Reagan.

No neoclassical model says that you should privatize prisons; prisons are a classic example of a public good, which would be underfunded in a competitive market and basically has to be operated or funded by the government.

No neoclassical model would support the idea that the EPA is a terrorist organization (yes, a member of the US Congress said this). In fact, the economic case for environmental regulations is unassailable. (What else are we supposed to do, privatize the air?) The question is not whether to regulate and tax pollution, but how and how much.

No neoclassical model says that you should deregulate finance; in fact, most neoclassical models don’t even include a financial sector (as bizarre and terrifying as that is), and those that do generally assume it is in a state of perfect equilibrium with zero arbitrage. If the financial sector were actually in a state of zero arbitrage, no banks would make a profit at all.

In case you weren’t aware, arbitrage is the practice of making money off of money without actually making any goods or doing any services. Unlike manufacturing (which, oddly enough, almost all neoclassical models are based on—despite the fact that it is now a minority sector in First World GDP), there’s no value added. Under zero arbitrage, the interest rate a bank charges should be almost exactly the same as the interest rate it receives, with just enough gap between to barely cover their operating expenses—which should in turn be minimal, especially in a modern electronic system. If financial markets were at zero arbitrage equilibrium, it would be sensible to speak of a single “real interest rate” in the economy, the one that everyone pays and everyone receives. Of course, those of us who live in the real world know that not only do different people pay radically different rates, most people have multiple outstanding lines of credit, each with a different rate. My savings account is 0.5%, my car loan is 5.5%, and my biggest credit card is 19%. These basically span the entire range of sensible interest rates (frankly 19% may even exceed that; that’s a doubling time of 3.6 years), and I know I’m not the exception but the rule.
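
For the curious, the doubling-time arithmetic (with continuous compounding, the doubling time is ln(2) divided by the rate):

```python
import math

# Doubling time under continuous compounding: t = ln(2) / r.
for rate in (0.005, 0.055, 0.19):    # my savings, car loan, credit card
    print(f"{rate:.1%}: money doubles in {math.log(2) / rate:.1f} years")
# 0.5%: 138.6 years; 5.5%: 12.6 years; 19.0%: 3.6 years
```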

So that’s the mess we’re in. Stay tuned; in future weeks I’ll talk about what we can do about it.