Yes, but what about the next 5000 years?

JDN 2456991 PST 1:34.

This week’s post will be a bit different: I have a book to review. It’s called Debt: The First 5000 Years, by David Graeber. The book is long (about 400 pages plus endnotes), but such a compelling read that the hours melt away. “The First 5000 Years” is an incredibly ambitious subtitle, but Graeber actually manages to live up to it quite well; he really does tell us a story that is more or less continuous from 3000 BC to the present.

So who is this David Graeber fellow, anyway? None will be surprised that he is a founding member of Occupy Wall Street—he was in fact the man who coined “We are the 99%”. (As I’ve studied inequality more, I’ve learned he made a mistake; it really should be “We are the 99.99%”.) I had expected him to be a historian, or an economist; but in fact he is an anthropologist. He is looking at debt and its surrounding institutions in terms of a cultural ethnography—he takes a step outside our own cultural assumptions and tries to see them as he might if he were encountering them in a foreign society. This is what gives the book its freshest parts; Graeber recognizes, as few others seem willing to, that our institutions are not the inevitable product of impersonal deterministic forces, but decisions made by human beings.

(On a related note, I was pleasantly surprised to see in one of my economics textbooks yesterday a neoclassical economist acknowledging that the best explanation we have for why Botswana is doing so well—low corruption, low poverty by African standards, high growth—really has to come down to good leadership and good policy. For once they couldn’t remove all human agency and mark it down to grand impersonal ‘market forces’. It’s odd how strong the pressure is to do that, though; I even feel it in myself: Saying that civil rights progressed so much because Martin Luther King was a great leader isn’t very scientific, is it? Well, if that’s what the evidence points to… why not? At what point did ‘scientific’ come to mean ‘human beings are helplessly at the mercy of grand impersonal forces’? Honestly, doesn’t the link between science and technology make matters quite the opposite?)

Graeber provides a new perspective on many things we take for granted: in the introduction there is one particularly compelling passage where he starts talking—with a fellow left-wing activist—about the damage that has been done to the Third World by IMF policy, and she immediately interjects: “But surely one has to pay one’s debts.” The rest of the book is essentially an elaboration on why we say that—and why it is absolutely untrue.

Graeber has also made me think quite a bit differently about Medieval society and in particular Medieval Islam; this was certainly the society in which the writings of Plato and Aristotle were preserved and algebra was invented, so it couldn’t have been all bad. But in fact, assuming that Graeber’s account is accurate, Muslim societies in the 14th century actually had something approaching the idyllic fair and free market to which all neoclassicists aspire. They did so, however, by rejecting one of the core assumptions of neoclassical economics, and you can probably guess which one: the assumption that human beings are infinite identical psychopaths. Instead, merchants in Medieval Muslim society were held to high moral standards, and their livelihood was largely based upon the reputation they could maintain as upstanding good citizens. Theoretically they couldn’t even lend at interest, though in practice they had workarounds (like payment in installments that total slightly higher than the original price) that amounted to low rates of interest. They did not, however, have anything approaching the levels of interest that we have today in credit cards at 29% or (it still makes me shudder every time I think about it) payday loans at 400%. Paying on installments to a Muslim merchant would make you end up paying about a 2% to 4% rate of interest—which sounds to me like almost exactly what it should be, maybe even a bit low because we’re not taking inflation into account. In any case, the moral standards of society kept people from getting too poor or too greedy, and as a result there was little need for enforcement by the state. In spite of myself I have to admit that may not have been possible without the theological enforcement provided by Islam.
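To see how an installment markup amounts to a low interest rate, here is a toy calculation in Python; the specific numbers (a price of 100 paid in four installments of 26) are invented for illustration, not taken from Graeber's book:

```python
# Toy example: an installment plan that quietly embeds interest.
# All numbers are invented for illustration, not from Graeber's book.
price = 100.00       # cash price of the goods
installment = 26.00  # each of four payments spread over a year
n_payments = 4

total_paid = installment * n_payments            # 104.00
implied_interest = (total_paid - price) / price  # 0.04, i.e. 4% over the year

print(f"Total paid: {total_paid:.2f}")
print(f"Implied simple interest rate: {implied_interest:.0%}")
```

A 4% markup paid over a year is, economically, a 4% simple annual interest rate, whatever it is called theologically.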
Graeber also avoids one of the most common failings of anthropologists, the cultural relativism that makes them unwilling to criticize any cultural practice as immoral even when it obviously is (though they usually make an exception for modern Western capitalist imperialism). While at times I can see he was tempted to go that way, he generally avoids it; several times he goes out of his way to point out how women were sold into slavery in hunter-gatherer tribes and how that contributed to the institutions of chattel slavery that developed once Western powers invaded.

Anthropologists have another common failing that I don’t think he avoids as well, which is a primitivist bent in which anthropologists speak of ancient societies as idyllic and modern societies as horrific. That’s part of why I said ‘if Graeber’s account is accurate,’ because I’m honestly not sure it is. I’ll need to look more into the history of Medieval Islam to be sure. Graeber spends a great deal of time talking about how our current monetary system is fundamentally based on threats of violence—but I can tell you that I have honestly never been threatened with violence over money in my entire life. Not by the state, not by individuals, not by corporations. I haven’t even been mugged—and that’s the sort of thing the state exists to prevent. (Not that I’ve never been threatened with violence—but so far it’s always been either something personal, or, more often, bigotry against LGBT people.) If violence is the foundation of our monetary system, then it’s hiding itself extraordinarily well. Granted, the violence probably pops up more if you’re near the very bottom, but I think I speak for most of the American middle class when I say that I’ve been through a lot of financial troubles, but none of them have involved any guns pointed at my head. And you can’t counter this by saying that we theoretically have laws on the books that allow you to be arrested for financial insolvency—because such laws have always existed, and in fact they are less common now than at any other point in history, as Graeber himself freely admits. The important question is how many people actually have violence inflicted upon them, and at least within the United States that number seems to be quite small.

Graeber describes how money actually emerged historically: as the result of military conquest—a way to pay soldiers and buy supplies in an occupied territory where nobody trusts you. He demolishes the (always fishy) argument that money emerged as a way of mediating a barter system: If I catch fish and he makes shoes and I want some shoes but he doesn’t want fish right now, why not just make a deal to pay later? This is of course exactly what they did. Indeed Graeber uses the intentionally provocative word communism to describe the way that resources are typically distributed within families and small villages—because it basically is “from each according to his ability, to each according to his need”. (I would probably use the less-charged word “community”, but I have to admit that the two share the same Latin root.) He also describes something I’ve tried to explain many times to neoclassical economists to no avail: There is equally a communism of the rich, a solidarity of deal-making and collusion that undermines the competitive market that is supposed to keep the rich in check. Graeber points out that wine, women and feasting have been common parts of deals between villages throughout history—and yet are still common parts of top-level business deals in modern capitalism. Even as we claim to be atomistic rational agents we still fall back on the community norms that guided our ancestors.

Another one of my favorite lines in the book is on this very subject: “Why, if I took a free-market economic theorist out to an expensive dinner, would that economist feel somewhat diminished—uncomfortably in my debt—until he had been able to return the favor? Why, if he were feeling competitive with me, would he be inclined to take me someplace even more expensive?” That doesn’t make any sense at all under the theory of neoclassical rational agents (an infinite identical psychopath would just enjoy the dinner—free dinner!—and might never speak to you again), but it makes perfect sense under the cultural norms of community in which gifts form bonds and generosity is a measure of moral character. I also got to thinking about how introducing money directly into such exchanges can change them dramatically: For instance, suppose I took my professor out to a nice dinner with drinks in order to thank him for writing me recommendation letters. This seems entirely appropriate, right? But now suppose I just paid him $30 for writing the letters. All of a sudden it seems downright corrupt. But the dinner check said $30 on it! My bank account debit is the same! He might go out and buy a dinner with it! What’s the difference? I think the difference is that the dinner forms a relationship that ties the two of us together as individuals, while the cash creates a market transaction between two interchangeable economic agents. By giving my professor cash I would effectively be saying that we are infinite identical psychopaths.

While Graeber doesn’t get into it, a similar argument also applies to gift-giving on holidays and birthdays. There seriously is—I kid you not—a neoclassical economist who argues that Christmas is economically inefficient and should be abolished in favor of cash transfers. He wrote a book about it. He literally does not understand the concept of gift-giving as a way of sharing experiences and solidifying relationships. This man must be such a joy to have around! I can imagine it now: “Will you play catch with me, Daddy?” “Daddy has to work, but don’t worry dear, I hired a minor league catcher to play with you. Won’t that be much more efficient?”

This sort of thing is what makes Debt such a compelling read, and Graeber does make some good points and presents a wealth of historical information. So now it’s time to talk about what’s wrong with the book, the things Graeber gets wrong.

First of all, he’s clearly quite ignorant of the state of the art in economics, and I’m not even talking about the sort of cutting-edge cognitive economics experiments I want to be doing. (When I read what Molly Crockett has been working on lately in the neuroscience of moral judgments, I began to wonder if I should apply to University College London after all.)

No, I mean Graeber is ignorant of really basic stuff, like the nature of government debt—almost nothing of what I said in that post is controversial among serious economists; the equations certainly aren’t, though some of the interpretation and application might be. (One particularly likely sticking point called “Ricardian equivalence” is something I hope to get into in a future post. You already know the refrain: Ricardian equivalence only happens if you live in a world of infinite identical psychopaths.) Graeber has internalized the Republican talking points about how this is money our grandchildren will owe to China; it’s nothing of the sort, and most of it we “owe” to ourselves. In a particularly baffling passage Graeber talks about how there are no protections for creditors of the US government, when creditors of the US government have literally never suffered a single late payment in the last 200 years. There are literally no creditors in the world who are more protected from default—and only a few others that reach the same level, such as creditors to the Bank of England.

In an equally bizarre aside he also says in one endnote that “mainstream economists” favor the use of the gold standard and are suspicious of fiat money; exactly the opposite is the case. Mainstream economists—even the neoclassicists with whom I have my quarrels—are in almost total agreement that a fiat monetary system managed by a central bank is the only way to have a stable money supply. The gold standard is the pet project of a bunch of cranks and quacks like Peter Schiff. Like most quacks, they are quite vocal; but they are by no means supported by academic research or respected by top policymakers. (I suppose the latter could change if enough Tea Party Republicans get into office, but so far even that hasn’t happened and Janet Yellen continues to manage our fiat money supply.) In fact, it’s basically a consensus among economists that the gold standard caused the Great Depression—that in addition to some triggering event (my money is on Minsky-style debt deflation—and so is Krugman’s), the inability of the money supply to adjust was the reason why the world economy remained in such terrible shape for such a long period. The gold standard has not been a mainstream position among economists since roughly the mid-1980s—before I was born.

He makes a really bizarre argument that because Korea, Japan, Taiwan, and West Germany are major holders of US Treasury bonds and became so under US occupation—which is indisputably true—their development was really just some kind of smokescreen to sell more Treasury bonds. First of all, we’ve never had trouble selling Treasury bonds; people are literally accepting negative interest rates in order to have them right now. More importantly, Korea, Japan, Taiwan, and West Germany—those exact four countries, in that order—are the greatest economic success stories in the history of the human race. West Germany was rebuilt literally from rubble to become once again a world power. The Asian Tigers were even more impressive, raised from the most abject Third World poverty to full First World high-tech economy status in a few generations. If this is what happens when you buy Treasury bonds, we should all buy as many Treasury bonds as we possibly can. And while that seems intuitively ridiculous, I have to admit, China’s meteoric rise also came with an enormous investment in Treasury bonds. Maybe the secret to economic development isn’t physical capital or exports or institutions; nope, it’s buying Treasury bonds. (I don’t actually believe this, but the correlation is there, and it totally undermines Graeber’s argument that buying Treasury bonds makes you some kind of debt peon.)

Speaking of correlations, Graeber is absolutely terrible at econometrics; he doesn’t even seem to grasp the most basic concepts. On page 366 he shows this graph of the US defense budget and the US federal debt side by side in order to argue that the military is the primary reason for our national debt. First of all, he doesn’t even correct for inflation—so most of the exponential rise in the two curves is simply the purchasing power of the dollar declining over time. Second, he doesn’t account for GDP growth, which is most of what’s left after you account for inflation. He has two nonstationary time-series with obvious exponential trends and doesn’t even formally correlate them, let alone actually perform the proper econometrics to show that they are cointegrated. I actually think they probably are cointegrated, and that a large portion of national debt is driven by military spending, but Graeber’s graph doesn’t even begin to make that argument. You could just as well graph the number of murders and the number of cheesecakes sold, each on an annual basis; both of them would rise exponentially with population, thus proving that cheesecakes cause murder (or murders cause cheesecakes?).
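The cheesecakes-and-murders problem is easy to demonstrate with simulated data. Below is a minimal sketch in Python (with invented series, not real defense or debt figures): two completely independent trending series show a huge correlation in levels, which vanishes as soon as you look at year-to-year changes.

```python
import numpy as np

rng = np.random.default_rng(42)
t = 200

# Two completely independent random walks with upward drift --
# stand-ins for any pair of growing annual series.
murders = np.cumsum(1.0 + rng.normal(0, 1, t))
cheesecakes = np.cumsum(1.0 + rng.normal(0, 1, t))

# Correlating the raw levels "finds" a strong relationship...
level_corr = np.corrcoef(murders, cheesecakes)[0, 1]

# ...but correlating the year-to-year changes shows there is none.
diff_corr = np.corrcoef(np.diff(murders), np.diff(cheesecakes))[0, 1]

print(f"correlation of levels:  {level_corr:.2f}")   # close to 1
print(f"correlation of changes: {diff_corr:.2f}")    # close to 0
```

A real analysis would go further and run a formal cointegration test (for instance the Engle-Granger test in `statsmodels.tsa.stattools.coint`) rather than eyeballing two exponential curves side by side.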

And then where Graeber really loses me is when he develops his theory of how modern capitalism and the monetary and debt system that go with it are fundamentally corrupt to the core and must be abolished and replaced with something totally new. First of all, he never tells us what that new thing is supposed to be. You’d think in 400 pages he could at least give us some idea, but no; nothing. He apparently wants us to do “not capitalism”, which is an infinite space of possible systems, some of which might well be better, but none of which can actually be implemented without more specific ideas. Many have declared that Occupy has failed—I am convinced that those who say this appreciate neither how long it takes social movements to make change, nor how effective Occupy has already been at changing our discourse, so that Capital in the Twenty-First Century can be a bestseller and the President of the United States can mention income inequality and economic mobility in his speeches—but insofar as Occupy has failed to achieve its goals, it seems to me that this is because it was never clear just what Occupy’s goals were to begin with. Now that I’ve read Graeber’s work, I understand why: He wanted it that way. He didn’t want to go through the hard work (which is also risky: you could be wrong) of actually specifying what this new economic system would look like; instead he’d prefer to find flaws in the current system and then wait for someone else to figure out how to fix them. That has always been the easy part; any human system comes with flaws. The hard part is actually coming up with a better system—and Graeber doesn’t seem willing to even try.

I don’t know exactly how accurate Graeber’s historical account is, but it seems to check out, and even make sense of some things that were otherwise baffling about the sketchy account of the past I had previously learned. Why were African tribes so willing to sell their people into slavery? Well, because they didn’t think of it as their people—they were selling captives from other tribes taken in war, which is something they had done since time immemorial in the form of slaves for slaves rather than slaves for goods. Indeed, it appears that trade itself emerged originally as what Graeber calls a “human economy”, in which human beings are literally traded as a fungible commodity—but always humans for humans. When money was introduced, people continued selling other people, but now it was for goods—and apparently most of the people sold were young women. So much of the Bible makes more sense that way: Why would Job be all right with getting new kids after losing his old ones? Kids are fungible! Why would people sell their daughters for goats? We always sell women! How quickly do we flirt with the unconscionable, when first we say that all is fungible.

One of Graeber’s central points is that debt came long before money—you owed people apples or hours of labor long before you ever paid anybody in gold. Money only emerged when debt became impossible to enforce, usually because trade was occurring between soldiers and the villages they had just conquered, so nobody was going to trust anyone to pay anyone back. Immediate spot trades were the only way to ensure that trades were fair in the absence of trust or community. In other words, the first use of gold as money was really using it as collateral. All of this makes a good deal of sense, and I’m willing to believe that’s where money originally came from.

But then Graeber tries to use this horrific and violent origin of money—in war, rape, and slavery, literally some of the worst things human beings have ever done to one another—as an argument for why money itself is somehow corrupt and capitalism with it. This is nothing short of a genetic fallacy: I could agree completely that money had this terrible origin, and yet still say that money is a good thing and worth preserving. (Indeed, I’m rather strongly inclined to say exactly that.) The fact that it was born of violence does not mean that it is violence; we too were born of violence, literally millions of years of rape and murder. It is astronomically unlikely that any one of us does not have a murderer somewhere in our ancestry. (Supposedly I’m descended from Julius Caesar, hence my last name Julius—not sure I really believe that—but if so, there you go, a murderer and tyrant.) Are we therefore all irredeemably corrupt? No. Where you come from does not decide what you are or where you are going.

In fact, I could even turn the argument around: Perhaps money was born of violence because it is the only alternative to violence; without money we’d still be trading our daughters away because we had no other way of trading. I don’t think I believe that either; but it should show you how fragile an argument from origin really is.

This is why the whole book gives this strange feeling of non sequitur; all this history is very interesting and enlightening, but what does it have to do with our modern problems? Oh. Nothing, that’s what. The connection you saw doesn’t make any sense, so maybe there’s just no connection at all. Well all right then. This was an interesting little experience.

This is a shame, because I do think there are important things to be said about the nature of money culturally, philosophically, morally—but Graeber never gets around to saying them, seeming to think that merely pointing out money’s violent origins is a sufficient indictment. It’s worth talking about the fact that money is something we made, something we can redistribute or unmake if we choose. I had such high expectations after I read that little interchange about the IMF: Yes! Finally, someone gets it! No, you don’t have to repay debts if that means millions of people will suffer! But then he never really goes back to that. The closest he veers toward an actual policy recommendation is at the very end of the book, a short section entitled “Perhaps the world really does owe you a living” in which he very briefly suggests—doesn’t even argue for, just suggests—that perhaps people do deserve a certain basic standard of living even if they aren’t working. He could have filled 50 pages arguing the ins and outs of a basic income with graphs and charts and citations of experimental data—but no, he just spends a few paragraphs proposing the idea and then ends the book. (I guess I’ll have to write that chapter myself; I think it would go well in The End of Economics, which I hope to get back to writing in a few months—while I also hope to finally publish my already-written book The Mathematics of Tears and Joy.)

If you want to learn about the history of money and debt over the last 5000 years, this is a good book to do so—and that is, after all, what the title said it would be. But if you’re looking for advice on how to improve our current economic system for the benefit of all humanity, you’ll need to look elsewhere.

And so in the grand economic tradition of reducing complex systems into a single numeric utility value, I rate Debt: The First 5000 Years a 3 out of 5.

Who are the job creators?

JDN 2456956 PDT 11:30.

For about 20 years now, conservatives have opposed any economic measures that might redistribute wealth from the rich as hurting “job creators” and thereby damaging the economy. This has become so common that the phrase “job creator” has become a euphemism for “rich person”; indeed, when Paul Ryan was asked to define “rich” he stumbled over himself and ended up with “job creators”. A few years ago, John Boehner gave a speech saying that ‘the job creators are on strike’. During his presidential campaign, Mitt Romney said Obama was ‘waging war on job creators’.

If you get the impression that the “job creator” narrative is used more often now than ever, you’re not imagining things; the term was used almost as many times in a single month of Obama’s presidency as it was in George W. Bush’s entire second term.

This narrative is not just wrong; it’s utterly ludicrous. The vision seems to be something like this: Out there somewhere, beyond the view of ordinary mortals, there lives a race of beings known as Job Creators. Ours is not to judge them, not to influence them; ours is only to appease them so that they might look upon us with favor and bestow upon us our much-needed Jobs. Without these Jobs, we will surely die, and so all other concerns are secondary: We must appease the Job Creators.

Businesses don’t create jobs because they feel like it, or because they love us, or because we have gone through the appropriate appeasement rituals. They don’t create jobs because their taxes are low or because they have extra money lying around. They create jobs because they see profit in it. They create jobs because the marginal revenue of hiring an additional worker exceeds the marginal cost.

And of course they’ll gladly destroy jobs for the exact same reasons; if they think the marginal cost exceeds the marginal revenue, out come the pink slips. If demand for the product has fallen, if the raw materials have become more expensive, or if new technology has allowed some of the labor to be cheaply automated, workers will be laid off in the interests of the company. In fact, sometimes it won’t even be in the interests of the company; corporate executives are lately in the habit of using layoffs and stock buybacks to artificially boost the value of their stock options so they can exercise them, pocket the money, and run away as the company comes crashing to the ground. Because of market deregulation and the ridiculous theory of “shareholder value” (as though shareholders are the only ones who matter!), our stock market has changed from a system of value creation to a system of value extraction.

What actually creates jobs? Demand. If the demand for their product exceeds the company’s capacity to produce it, they will hire more people in order to produce more of the product. The marginal revenue has to go up, or companies will have no reason to hire new workers. (The marginal cost could also go down, but then you get low-paying jobs, which isn’t really what we’re aiming for.) They will continue hiring more people up until the point at which it costs more to hire someone than they’d make from selling the products that person could make for them.
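The hiring rule in the paragraph above can be sketched in a few lines of Python. The numbers are invented for illustration: a fixed wage of 20, and a marginal revenue schedule that diminishes with each additional worker.

```python
# Toy model: diminishing marginal revenue per worker vs. a fixed wage.
# All numbers are invented for illustration.
wage = 20.0  # marginal cost of one more worker

def marginal_revenue(n):
    """Revenue added by the n-th worker (diminishing returns)."""
    return 100.0 / n

workers = 0
while marginal_revenue(workers + 1) >= wage:
    workers += 1

print(f"Optimal headcount: {workers}")
# The firm stops at 5 workers: the 5th adds 100/5 = 20 (just worth it),
# while the 6th would add only 100/6 ~= 16.7, less than the 20 wage.
```

Notice that nothing in this decision depends on how much cash the firm has on hand or what its tax bill looks like; only the wage and the revenue schedule, which is to say demand, enter into it.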

What if they don’t have enough money? They’ll borrow it. As long as they know they are going to make a profit from that worker, they will gladly borrow money in order to hire them. Indeed, corporations do this sort of thing all the time. If banks stop lending, that’s a big problem—it’s called a credit crunch, and it’s a major part of just about any financial crisis. But that isn’t because rich people don’t have enough money, it’s because our banking system is fundamentally defective and corrupt. Yes, fixing the banking system would create jobs in a number of different ways. (The biggest three I can think of: There would be more credit for real businesses to fund investment, more credit for individuals to increase demand, and labor effort that is currently wasted on useless financial speculation would be once again returned to real production.) But that’s not what Paul Ryan and his ilk are talking about—indeed, Paul Ryan seems to think that we should undo the meager reforms we’ve already made. Unless we fundamentally change the financial system, the way to create jobs would be to create demand.

And what decides demand? Well, a lot of things I suppose; preferences, technologies, cultural norms, fads, advertising, and so on. But when you’re looking at short-run changes like the business cycle, the driving factor in most cases is actually quite simple: How much money does the middle class have to spend? The middle class is where most of the consumer spending comes from, and if the middle class has money to spend we will buy products. If we don’t have money to spend—we’re out of work, or we have too much debt to pay—then we won’t buy products. It’s not that we suddenly stopped wanting products; the utility value of those products to us is unchanged. The problem is that we simply can’t afford them anymore. This is what happens in a recession: After some sort of shock to the economy, the middle class stops being able to spend, which reduces demand. That causes corporations to lay off workers, which creates unemployment, which reduces demand even further. To correct for the lost demand, prices are supposed to go down (deflation); but this doesn’t actually work, for two reasons.

First, people absolutely hate seeing their wages go down; even if there is a legitimate economic reason, people still have a sense that they are being exploited by their employers (and sometimes they are). This is called downward nominal wage rigidity.

Second, when prices go down, the real value of debt doesn’t go down; it goes up. Your loans are denominated in dollars, not apples; so reducing the price of apples means that you actually owe more apples than you did before. Since debt is usually one of the big things holding back spending by the middle class in the first place, deflation doesn’t correct the imbalance; it makes it worse. This is called debt deflation. Maybe we shouldn’t call it that, since the problem isn’t the prices, it’s the debt. In 2008, the first thing that happened wasn’t that prices in general went down, which is what we normally mean by “deflation”; it was that housing prices went down, and so suddenly people owed vastly more on their mortgages than they had before, and many of them couldn’t afford to pay. It wasn’t a drop in prices so much as a rise in the real value of debt. (I actually think one of the reasons there is no successful comprehensive theory of the cause of business cycles is that there isn’t a single comprehensive cause of business cycles. It’s usually some form of financial crisis followed by debt deflation—and these are the ones to be worried about, 1929 and 2008—but that isn’t always what happens. In 2001, we actually had an unanticipated negative real economic shock—the 9/11 attacks. In 1973 we had a different kind of real economic shock when OPEC raised oil prices at the same time as the US hit peak oil. We should probably be distinguishing between financial recession and real recession.)
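The arithmetic of debt deflation is worth making concrete. Here is a toy sketch in Python, with invented numbers: the debt is fixed in dollars, so when the price level falls, the debt measured in real goods rises.

```python
# Toy illustration of debt deflation (all numbers invented).
debt_dollars = 200_000.0  # mortgage, fixed in nominal dollars
apple_price = 2.00        # stand-in for the general price level

# Real burden: how many "apples" the debt is worth.
debt_in_apples_before = debt_dollars / apple_price       # 100,000 apples

# Prices fall 20% (deflation) -- the dollar debt doesn't budge.
apple_price_after = apple_price * 0.80
debt_in_apples_after = debt_dollars / apple_price_after  # 125,000 apples

print(f"Real debt before: {debt_in_apples_before:,.0f} apples")
print(f"Real debt after:  {debt_in_apples_after:,.0f} apples")
```

A 20% fall in prices raises the real debt burden by 25%, which is why deflation makes an over-indebted middle class worse off rather than better.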

Notice how in this entire discussion of what drives aggregate demand, I have never mentioned rich people getting free money; I haven’t even mentioned tax rates. If you have the simplistic view “taxes are bad” (or the totally insane, yet still common, view “taxation is slavery”), then you’re going to look for excuses to lower taxes whenever you can. If you specifically love rich people more than poor people, you’re going to look for excuses to lower taxes on the rich and raise them on the poor (and there is really no other way to interpret Mitt Romney’s infamous “47%” comments). But none of this has anything to do with aggregate demand and job creation. It is pure ideology and has no basis in economics.

Indeed, there’s little reason to think that a tax on corporate profits or capital income would change hiring decisions at all. When we talk about the potential distortions of income taxes, we really have to be talking about labor income, because labor can actually be disincentivized. Say you’re making $15 an hour and not paying any taxes, but your tax rate is suddenly raised to 40%. You can see that after taxes your real wage is now only $9, and maybe you’ll decide that it’s just not worth it to work those hours. This is because you pay a real cost to work—it’s hard, it’s stressful, it’s frustrating, it takes up time.

Capital income can’t be disincentivized. You can have relative incentives, if you tax certain kinds of capital more than others. But if you tax all capital income at the same rate, the incentives remain exactly as they were before: Seek the highest return on investment. Your only costs were financial, and your only benefits are financial. Yes, you’ll be unhappy that your after-tax return on investment has gone down; but it won’t change your investment decisions. If you previously had the choice between investment A yielding a 5% return and investment B yielding a 10% return, you’d choose B. Now you pay a 40% tax on capital income; you now have a choice between a 3% after-tax return on A and a 6% after-tax return on B—you’re still going to choose B. That’s probably why high marginal tax rates on income don’t reduce job growth—because most high incomes are capital incomes of one form or another; even when a CEO reports ordinary income it’s really due to profits and stock options, it’s not like he was paid a wage for work he did.
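The invariance claim in the paragraph above is easy to check: a uniform tax multiplies every return by the same factor (1 - t), which changes the level of every after-tax return but never their ranking. A minimal sketch in Python, using the 5%/10% example from the text:

```python
# A uniform tax on capital income scales every return by (1 - t),
# which can never change which investment has the highest return.
tax_rate = 0.40
returns = {"A": 0.05, "B": 0.10}  # pre-tax returns from the example

after_tax = {name: r * (1 - tax_rate) for name, r in returns.items()}
# A falls to 3%, B falls to 6% -- both lower, same ordering.

best_before = max(returns, key=returns.get)
best_after = max(after_tax, key=after_tax.get)

print(best_before, best_after)  # B is still the best choice after the tax
```

Because the argmax is invariant under multiplication by a positive constant, distortions can only come from relative differences in tax treatment, across asset classes or across borders.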

To be fair, it does get more complicated when you include borrowing and interest rates (now you have the option of lending your money at interest or borrowing more from someone else, which may be taxed differently), and because it’s so easy to move money across borders you can have a relative incentive even when tax rates within a given nation are all the same. Don’t take this literally as saying that you can do whatever you want with taxes on capital income. But in fact you can do quite a lot, because you can change the real rate of return and have no incentive effect as long as you don’t change the relative rate of return. That’s different from wages, for which the real value of the wage can have a direct effect on employers and employees. (The only way to have the same effect on workers would be to somehow lower the real cost of working—make working easier or more fun—which actually sounds like a great idea if you can do it.) The people who are constantly telling us that workers need to tighten their belts but we mustn’t dare tax the “job creators” have the whole situation exactly backwards.

There’s something else I should bring up as well. In everything I’ve said above, I have taken as given the assumption that we need jobs. For many people, probably most Americans in fact, this is an unquestioned assumption, seemingly so obvious as to be self-evident; of course we need jobs, right? But no, actually, we don’t; what we need is production and distribution of wealth. We need to make food and clothing and houses—those are truly basic needs. We could even say we “need” (or at least want) to make televisions and computers and cars. As individuals and as a society we benefit from having these goods. And in our present capitalist economy, the way that we produce and distribute goods is through a system of jobs—you are paid to make goods, and then you can use that money to buy other goods. Don’t get me wrong; this system works pretty well, and for the most part I want to make small adjustments and reforms around the edges rather than throw the whole thing out. Thus far, other systems have not worked as well; when we have attempted to centrally plan production and distribution, the best-case scenario has been inefficiency and the worst-case scenario has been mass starvation.

But we should also be open to the possibility of other systems that are better than capitalism. We should be open to the possibility of a culture like, well, The Culture (and if you haven’t read any Iain Banks novels you should; I’d probably start with Player of Games), in which artificial intelligence and automation allows central planning to finally achieve efficient production and distribution. We should be open to the possibility of a culture like the Federation (and don’t tell me you haven’t seen Star Trek!), in which resources are so plentiful that anyone can have whatever they want, and people work not because they have to, but because they want to—it gives them meaning and purpose in their lives. Fanciful? Perhaps. But lightspeed worldwide communication and landing robots on other planets would have seemed pretty fanciful a century ago.
Capitalism is really an Industrial Era system. It was designed in, and for, a world in which the most important determinants of production are machines, raw materials, and labor hours. But we don’t live in that world anymore. The most important determinants of production are now ideas; software, research, patents, copyrights. Microsoft, Google, and Amazon don’t make things at all, they make ideas; Sony, IBM, Apple, and Toshiba make things, but those things are primarily for the production and dissemination of ideas. Ideas are just as valuable as things—if not more so—but they obey different rules.

Capitalism was designed for a world of rival, excludable goods with increasing marginal cost. Rival, meaning that if one person has it, someone else can’t have it anymore. We speak of piracy as “stealing”, but that’s totally wrong; if you steal something I have, I don’t have it anymore. If you pirate something I have, I still have it. If I gave you my computer, I wouldn’t have it anymore; but I can give you the ideas in this blog post and then we’ll both have them. Excludable, meaning that there is a way to prevent someone else from getting it if you don’t want them to. And increasing marginal cost, meaning that the more you make, the more it costs to make each one. Under these conditions, you get a very nice equilibrium that is efficient under competition.

But ideas are nonrival, they have nearly zero marginal cost, and we are increasingly finding that they aren’t even very excludable; DRM is astonishingly ineffective. Under these conditions, your nice efficient equilibrium completely evaporates. There can be many different equilibria, or no equilibrium at all; and the results are almost always inefficient. We have shoehorned capitalism onto an economy that it was not designed to deal with. Capitalism was designed for the Industrial Era; but we are now in the Information Era.

Indeed, you can see this in all our neoclassical growth models: K is physical capital—machines—and L is labor, and sometimes it is augmented with N—natural resources. But these typically only explain about 50% of the variation in economic output, so we add an extra term, A, which goes by many names: “productivity”, “efficiency”, “technology”; I think the most informative one is actually “the Solow residual”. It’s the residual; it’s the part we can’t explain, dare I say, the part capitalism isn’t designed to explain. It is, in short, made of ideas. One of my thesis papers is actually about this “total factor productivity”, and how a major component of it is made up of one class of ideas in particular: Corruption. Corruption isn’t a thing, some object in space. It’s a cultural norm, a systemic idea that permeates the thoughts and actions of the whole society. It affects what we do, whom we trust, how the rules are made, and how well we follow those rules. You can even think of capitalism as an idea, a system, a culture—and a good part of “productivity” can be accounted for by “market orientation”, which is to say how capitalist a nation is. I would like to see someday a new model that actually includes these factors as terms in the equation, instead of throwing them all together in the mysterious A that we don’t understand.
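The growth-accounting point above can be sketched in a few lines: with a Cobb-Douglas production function Y = A·K^α·L^(1−α), the Solow residual A is simply whatever is left of output after capital and labor are accounted for. The figures below are purely illustrative, not real data.

```python
# Backing out the "Solow residual" from a Cobb-Douglas production
# function Y = A * K**alpha * L**(1 - alpha). Numbers are made up.

def solow_residual(Y, K, L, alpha=0.3):
    """Solve Y = A * K**alpha * L**(1-alpha) for A."""
    return Y / (K ** alpha * L ** (1 - alpha))

# Two hypothetical economies with identical capital and labor...
A1 = solow_residual(Y=1000, K=5000, L=100)
A2 = solow_residual(Y=1500, K=5000, L=100)

# ...differ in output only through the residual: ideas, institutions,
# norms -- everything the K and L terms don't capture.
print(A1, A2, A2 / A1)  # ratio is about 1.5
```

With K and L held fixed, the entire 50% output gap lands in A, which is exactly the point: the model measures what it cannot explain.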

With this in mind, we should be asking ourselves whether we need jobs at all, because jobs are a system designed for the production of physical goods in the Industrial Era. Now that we live in the Information Era and most of our production is in the form of ideas, do we still need jobs? Does everyone need a job? If you’re trying to make cars for a million people, it may not take a million people to do it, but it’s going to take thousands. But if you’re trying to design a car for a million people, or make a computer game about cars for a million people to play, that can be done with a lot fewer people. Ideas can be made by a few and then disseminated to the world. General Motors has 200,000 employees (and used to have about twice as many in the 1970s); Blizzard Entertainment has less than 5,000. It’s not because they produce for fewer people; GM sells about 3 million cars a year, and Starcraft sold over 11 million copies. Starcraft came out in 1998, so I added up how many cars GM sold in the US since 1998: 61 million. That’s still 3.28 employees per thousand cars sold, but only 0.45 employees per thousand computer games sold.
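The comparison above is simple arithmetic, using the figures as cited in the text (GM ~200,000 employees and ~61 million US cars sold since 1998; Blizzard under 5,000 employees and over 11 million copies of Starcraft):

```python
# Employees per thousand units sold, using the figures from the text.

def employees_per_thousand(employees, units_sold):
    return employees / (units_sold / 1000)

gm = employees_per_thousand(200_000, 61_000_000)
blizzard = employees_per_thousand(5_000, 11_000_000)

print(f"GM: {gm:.2f} employees per thousand cars")          # ~3.28
print(f"Blizzard: {blizzard:.2f} employees per thousand")   # ~0.45
```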

Still, I don’t have a detailed account of what this new jobless economic system might look like. For now, it’s probably best if people have jobs. But if we really want to create jobs, we need to increase aggregate demand. That most likely means either reducing debt or giving more money to consumers. It certainly doesn’t have anything to do with tax cuts for the rich.

And really, this is pretty obvious; if you stop and think for a minute about why businesses create jobs, you realize that it has to do with demand for products, not how nicely the government treats them or how much extra cash they have lying around. I actually have trouble believing that the people who say “job creators” unironically actually believe the words they are saying. Do they honestly think that rich people create jobs out of sheer brilliance and benevolence, but are constrained by how much money they have and “go on strike” if the government doesn’t kowtow to them?

The only way I can see that they could actually believe this sort of thing would be if they read so much Ayn Rand that it totally infested their brains and rendered them incapable of thinking outside that framework. Perhaps Krugman is right, and Rand Paul really does believe that he is John Galt. Maybe they really do honestly believe that this is how economics works—in which case it’s no wonder that our economy is in trouble. Indeed, the marvel is that it works at all.

What are the limits to growth?

JDN 2456941 PDT 12:25.

Paul Krugman recently wrote a column about the “limits to growth” community, and as usual, it’s good stuff; his example of how steamships substituted more ships for less fuel is quite compelling. But there’s a much stronger argument to be made against “limits to growth”, and I thought I’d make it here.

The basic idea, most famously propounded by Jay Forrester but still with many proponents today (and actually owing quite a bit to Thomas Malthus), is this: There’s only so much stuff in the world. If we keep adding more people and trying to give people higher standards of living, we’re going to exhaust all the stuff, and then we’ll be in big trouble.

This argument seems intuitively reasonable, but turns out to be economically naïve. It can take several specific forms, from the basically reasonable to the utterly ridiculous. On the former end is “peak oil”, the point at which we reach a maximum rate of oil extraction. We’re actually past that point in most places, and it won’t be long before the whole world crosses that line. So yes, we really are running out of oil, and we need to transition to other fuels as quickly as possible. On the latter end is the original Malthusian argument (we now have much more food per person worldwide than they did in Malthus’s time—that’s why ending world hunger is a realistic option now), and, sadly, the argument Mark Buchanan made a few days ago. No, you don’t always need more energy to produce more economic output—as Krugman’s example cleverly demonstrates. You can use other methods to improve your energy efficiency, and that doesn’t necessarily require new technology.

Here’s the part that Krugman missed: Even if we need more energy, there’s plenty of room at the top. The solar constant is about 1.36 kW/m^2, and the Earth’s cross-section intercepts about 1.3e14 m^2 of sunlight, so about 1.7e17 W of solar power hits the Earth—about 1.5e18 kilowatt-hours per year. Total world energy consumption is about 140,000 terawatt-hours per year, which is 1.4e14 kilowatt-hours per year. That means that if we could somehow capture all the sunlight that hits the Earth, we could increase energy consumption by a factor of ten thousand just using Earth-based solar power (Covering the oceans with synthetic algae? A fleet of high-altitude balloons covered in high-efficiency solar panels?). That’s not including fission power, which is already economically efficient, or fusion power, which has passed break-even and may soon become economically feasible as well. Fusion power is only limited by the size of your reactor and your quantity of deuterium, and deuterium is found in ocean water (about 33 milligrams per liter), not to mention permeating all of outer space. If we can figure out how to fuse ordinary hydrogen, well now our fuel is literally the most abundant substance in the universe.
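The solar budget above is a back-of-envelope check you can redo yourself; all constants below are approximate, and the point is only the order of magnitude:

```python
import math

# Back-of-envelope solar energy budget for the Earth.

SOLAR_CONSTANT = 1361          # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6         # m
HOURS_PER_YEAR = 8766

# Sunlight is intercepted by the Earth's cross-section, pi * r^2.
cross_section = math.pi * EARTH_RADIUS**2            # ~1.3e14 m^2
intercepted_watts = SOLAR_CONSTANT * cross_section   # ~1.7e17 W
kwh_per_year = intercepted_watts / 1000 * HOURS_PER_YEAR

world_consumption_kwh = 1.4e14   # ~140,000 TWh per year

print(f"sunlight hitting Earth: {kwh_per_year:.2e} kWh/yr")
print(f"ratio to consumption: {kwh_per_year / world_consumption_kwh:.0f}")
```

The ratio comes out around ten thousand, so even very lossy capture leaves enormous headroom.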

And what if we move beyond the Earth? What if we somehow captured not just the solar energy that hits the Earth, but the totality of solar energy that the Sun itself releases? The Sun’s output is about 3.8e26 W, or roughly 3.3e31 joules per day—about 9e24 kilowatt-hours per day, on the order of twenty trillion times as much energy as we currently consume. It is literally enough to annihilate entire planets, which the Sun would certainly do if you put a planet near enough to it. A theoretical construct to capture all this energy is called a Dyson Sphere, and the ability to construct one officially makes you a Type 2 Kardashev civilization. (We currently stand at about Type 0.7. Building that worldwide solar network would raise us to Type 1.)
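The Dyson Sphere figure is the same kind of arithmetic, one step up (again, approximate constants, order of magnitude only):

```python
# Comparing the Sun's total output with current human energy use.

SOLAR_LUMINOSITY = 3.8e26        # W
SECONDS_PER_YEAR = 3.15e7
world_consumption_kwh = 1.4e14   # kWh per year

# Convert the Sun's annual output from joules to kilowatt-hours.
sun_kwh_per_year = SOLAR_LUMINOSITY * SECONDS_PER_YEAR / 3.6e6
ratio = sun_kwh_per_year / world_consumption_kwh

print(f"Sun's annual output: {sun_kwh_per_year:.1e} kWh")
print(f"times current consumption: {ratio:.1e}")  # on the order of 1e13
```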

Can we actually capture all that energy with our current technology? Of course not. Indeed, we probably won’t have that technology for centuries if not millennia. But if your claim—as Mark Buchanan’s was—is about fundamental physical limits, then you should be talking about Dyson Spheres. If you’re not, then we are really talking about practical economic limits.

Are there practical economic limits to growth? Of course there are; indeed, they are what actually constrains growth in the real world. That’s why the US can’t grow above 2% and China won’t be growing at 7% much longer. (I am rather disturbed by the fact that many of the Chinese nationals I know don’t appreciate this; they seem to believe the propaganda that this rapid growth is something fundamentally better about the Chinese system, rather than the simple economic fact that it’s easier to grow rapidly when you are starting very small. I had a conversation with a man the other day who honestly seemed to think that Macau could sustain its 12% annual GDP growth—driven by gambling, no less! Zero real productivity!—into the indefinite future. Don’t get me wrong, I’m thrilled that China is growing so fast and lifting so many people out of poverty. But no remotely credible economist believes they can sustain this growth forever. The best-case scenario is to follow the pattern of Korea, rising from Third World to First World status in a few generations. Korea grew astonishingly fast from about 1950 to 1990, but now that they’ve made it, their growth rate is only 3%.)

There is also a reasonable argument to be made about the economic tradeoffs involved in fighting climate change and natural resource depletion. While the people of Brazil may like to have more firewood and space for farming, the fact is the rest of us need that Amazon in order to breathe. While any given fisherman may be rational in the amount of fish he catches, worldwide we are running out of fish. And while we Americans may love our low gas prices (and become furious when they rise even slightly), the fact is, our oil subsidies are costing hundreds of billions of dollars and endangering millions of lives.

We may in fact have to bear some short-term cost in economic output in order to ensure long-term environmental sustainability (though to return to Krugman, that cost may be a lot less than many people think!). Economic growth does slow down as you reach high standards of living, and it may even continue to slow down as technology begins to reach diminishing returns (though this is much harder to forecast). So yes, in that sense there are limits to growth. But the really fundamental limits aren’t something we have to worry about for at least a thousand years. Right now, it’s just a question of good economic policy.

Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as such), rather than the greats such as Immanuel Kant and John Stuart Mill or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of these latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from how we evolved—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—sickle-cell red blood cells.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would be entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty small actually, because there were fewer than 200,000 crimes in all of New York City last year in a city of 8,000,000 people—meaning the probability that any given person committed a crime in the previous year was only 2.5%; the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of the category in your mind “criminals”, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
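The base-rate calculation above, spelled out with the rough figures from the text:

```python
# Base rate neglect, in numbers: ~200,000 crimes in a year
# in a city of 8,000,000 people.

crimes_per_year = 200_000
population = 8_000_000

p_year = crimes_per_year / population   # ~2.5% per year
p_day = p_year / 365                    # well under 0.01% per day

print(f"P(committed a crime this year) ~= {p_year:.1%}")
print(f"P(on any given day) ~= {p_day:.4%}")
```

Whatever the representative traits suggest, the starting point is a daily base rate under one in ten thousand.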

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the 1,000 or so richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, by paying only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—it’s called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.

Where it does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—you are a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to N*1/N = 1. It’s constant. It doesn’t matter how big the nation is, the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) rather than the Wayne County Commission (18% turnout).]
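The aside’s arithmetic is easy to verify: the 1/N influence and the N-person stakes cancel exactly, so the marginal utility of a vote doesn’t shrink with the size of the electorate.

```python
# The Downs Paradox argument from the aside: your vote is 1/N of a
# decision that affects N people, so its expected social impact is
# constant in N.

def marginal_utility_of_vote(n_voters, utility_per_person=1.0):
    influence = 1 / n_voters                 # your share of the decision
    total_stakes = n_voters * utility_per_person
    return influence * total_stakes          # = utility_per_person

for n in (100, 1_000, 300_000_000):
    print(n, marginal_utility_of_vote(n))    # ~1.0 every time
```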

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has been recently thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually, in a formal evolutionary sense, a “good trick”—which we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real-time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to make it work 99% of the time when making it work 90% of the time is so much easier?

Why? Because it’s so incredibly important that we get these things right.

Pareto Efficiency: Why we need it—and why it’s not enough

JDN 2456914 PDT 11:45.

I already briefly mentioned the concept in an earlier post, but Pareto-efficiency is so fundamental to both ethics and economics that I decided I would spend some more time explaining exactly what it’s about.

This is the core idea: A system is Pareto-efficient if you can’t make anyone better off without also making someone else worse off. It is Pareto-inefficient if the opposite is true, and you could improve someone’s situation without hurting anyone else.

Improving someone’s situation without harming anyone else is called a Pareto-improvement. A system is Pareto-efficient if and only if there are no possible Pareto-improvements.

Zero-sum games are always Pareto-efficient. If the game is about how we distribute the same $10 between two people, any dollar I get is a dollar you don’t get, so no matter what we do, we can’t make either of us better off without harming the other. You may have ideas about what the fair or right solution is—and I’ll get back to that shortly—but all possible distributions are Pareto-efficient.

Where Pareto-efficiency gets interesting is in nonzero-sum games. The most famous and most important such game is the so-called Prisoner’s Dilemma; I don’t like the standard story to set up the game, so I’m going to give you my own. Two corporations, Alphacomp and Betatech, make PCs. The computers they make are of basically the same quality and neither is a big brand name, so very few customers are going to choose on anything except price. Combining labor, materials, equipment and so on, it costs each company $300 to manufacture a new PC, and most customers are willing to buy a PC as long as it’s no more than $1000. Suppose there are 1000 customers buying. Now the question is, what price do they set? They would both make the most profit if they both set the price at $1000: customers would still buy, the two would split the market, and each would make $700 per unit on 500 sales, for $350,000 each. But now suppose Alphacomp sets its price at $1000; Betatech could undercut at $999, capture all 1000 customers, and make $699,000. And then Alphacomp could respond by setting the price at $998, and so on. The only stable end result if they are both selfish profit-maximizers—the Nash equilibrium—is when the price they both set is $301, meaning each company profits only $1 per PC and makes a paltry $500. Indeed, this result is what we call in economics perfect competition. This is great for consumers, but not so great for the companies.

If you focus on the most important choice, $1000 versus $999—to collude or to compete—we can set up a table of how much each company would profit by making that choice (a payoff matrix or normal form game in game theory jargon).

              A: $999               A: $1000
B: $999       A: $349k, B: $349k    A: $0, B: $699k
B: $1000      A: $699k, B: $0       A: $350k, B: $350k

Obviously the choice that makes both companies best-off is for both to charge $1000; that is Pareto-efficient. But it’s also Pareto-efficient for Alphacomp to charge $999 while Betatech charges $1000, because then Alphacomp sells twice as many computers. We have made someone worse off—Betatech—but it’s still Pareto-efficient, because we couldn’t give Betatech back what it lost without taking away some of what Alphacomp gained.

There’s only one option that’s not Pareto-efficient: If both companies charge $999, they could both have made more money if they’d charged $1000 instead. The problem is, that’s not the Nash equilibrium; the stable state is the one where they set the price lower.

This means that the only case that isn’t Pareto-efficient is precisely the one the system will naturally trend toward if both companies are selfish profit-maximizers. (And while most human beings are nothing like that, most corporations actually get pretty close. They aren’t infinite, but they’re huge; they aren’t identical, but they’re very similar; and they basically are psychopaths.)

In jargon, we say the Nash equilibrium of a Prisoner’s Dilemma is Pareto-inefficient. That one sentence is basically why John Nash was such a big deal; up until that point, everyone had assumed that if everyone acted in their own self-interest, the end result would have to be Pareto-efficient; Nash proved that this isn’t true at all. Everyone acting in their own self-interest can doom us all.
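You can verify these claims about the payoff matrix mechanically; the payoffs below are the ones from the table (with $349k written out as $349,500):

```python
# The pricing game: both charging $1000 is best for both, but only
# both charging $999 is a Nash equilibrium.

payoffs = {  # (A's price, B's price) -> (A's profit, B's profit)
    (999, 999):   (349_500, 349_500),
    (999, 1000):  (699_000, 0),
    (1000, 999):  (0, 699_000),
    (1000, 1000): (350_000, 350_000),
}

def is_nash(a, b):
    """Neither firm can profit by unilaterally switching its price."""
    pa, pb = payoffs[(a, b)]
    other_a = 1000 if a == 999 else 999
    other_b = 1000 if b == 999 else 999
    return payoffs[(other_a, b)][0] <= pa and payoffs[(a, other_b)][1] <= pb

for (a, b) in payoffs:
    print(a, b, "Nash" if is_nash(a, b) else "not Nash")
```

Only (999, 999) survives the check, even though (1000, 1000) pays both firms more: the Nash equilibrium is the Pareto-inefficient cell.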

It’s not hard to see why Pareto-efficiency would be a good thing: if we can make someone better off without hurting anyone else, why wouldn’t we? What’s harder for most people—and even most economists—to understand is that just because an outcome is Pareto-efficient, that doesn’t mean it’s good.

I think this is easiest to see in zero-sum games, so let’s go back to my little game of distributing the same $10. Let’s say it’s all within my power to choose—this is called the ultimatum game. If I take $9 for myself and only give you $1, is that Pareto-efficient? It sure is; for me to give you any more, I’d have to lose some for myself. But is it fair? Obviously not! The fair option is for me to go fifty-fifty, $5 and $5; and maybe you’d forgive me if I went sixty-forty, $6 and $4. But if I take $9 and only offer you $1, you know you’re getting a raw deal.

Actually, as the game is often played, you have the choice to say, “Forget it; if that’s your offer, we both get nothing.” In that case the game is nonzero-sum, and the choice you’ve just taken is not Pareto-efficient! Neoclassicists are typically baffled at the fact that you would turn down that free $1, paltry as it may be; but I’m not baffled at all, and I’d probably do the same thing in your place. You’re willing to pay that $1 to punish me for being so stingy. And indeed, if you allow this punishment option, guess what? People aren’t as stingy! If you play the game without the rejection option, people typically take about $7 and give about $3 (still fairer than the $9/$1, you may notice; most people aren’t psychopaths), but if you allow it, people typically take about $6 and give about $4. Now, these are pretty small sums of money, so it’s a fair question what people might do if $100,000 were on the table and they were offered $10,000. But that doesn’t mean people aren’t willing to stand up for fairness; it just means that they’re only willing to go so far. They’ll take a $1 hit to punish someone for being unfair, but that $10,000 hit is just too much. I suppose this means most of us do what the Guess Who told us: “You can sell your soul, but don’t you sell it too cheap!”

Now, let’s move on to the more complicated—and more realistic—scenario of a nonzero-sum game. In fact, let’s make the “game” a real-world situation. Suppose Congress is debating a bill that would introduce a 70% marginal income tax on the top 1% to fund a basic income. (Please, can we debate that, instead of proposing a balanced-budget amendment that would cripple US fiscal policy indefinitely and lead to a permanent depression?)

This tax would raise about 14% of GDP in revenue, or about $2.4 trillion a year (yes, really). It would then provide, for every man, woman and child in America, a $7000 per year income, no questions asked. For a family of four, that would be $28,000, which is bound to make their lives better.
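As a sanity check on those figures, here’s the back-of-envelope arithmetic, using rough 2014 values for US GDP and population (both numbers are my assumptions for illustration):

```python
# Back-of-envelope check of the revenue and basic-income figures, using
# rough 2014 values (the GDP and population numbers are my assumptions).
gdp = 17e12          # US GDP: roughly $17 trillion
population = 320e6   # roughly 320 million people

revenue = 0.14 * gdp             # 14% of GDP
per_person = revenue / population

print(f"revenue: ${revenue / 1e12:.2f} trillion per year")
print(f"basic income: ${per_person:,.0f} per person per year")
```

This lands near the $2.4 trillion figure and a bit above $7000 per person; the gap just reflects my rounded inputs.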

But of course it would also take a lot of money from the top 1%; Mitt Romney would only make $6 million a year instead of $20 million, and Bill Gates would have to settle for $2.4 billion a year instead of $8 billion. Since it’s the whole top 1%, it would also hurt a lot of people with more moderate high incomes, like your average neurosurgeon or Paul Krugman, who each make about $500,000 a year. About $100,000 of that is above the cutoff for the top 1%, so they’d each have to pay about $70,000 more than they currently do in taxes: if they were paying $175,000, they’d now pay $245,000. Where they were once taking home $325,000, they’d now take home $255,000. (Probably not as big a difference as you thought, right? Most people do not seem to understand how marginal tax rates work, as evinced by “Joe the Plumber”, who thought that if he made $250,001 he would be taxed at the top rate on the whole amount—no, just that last $1.)
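To make the marginal-rate point concrete, here’s the arithmetic as I read it, treating the new 70% bracket as an extra 70 cents of tax on each dollar above the top-1% cutoff (the $400,000 cutoff is an assumption inferred from the $500,000 and $100,000 figures above):

```python
# The new bracket, as I read the post's arithmetic: an extra 70 cents of
# tax per dollar of income above the top-1% cutoff. The $400,000 cutoff
# is an assumption inferred from the $500,000 / $100,000 figures.

def extra_tax(income, cutoff=400_000, surtax=0.70):
    """Additional tax owed under the new bracket: only the excess is hit."""
    return surtax * max(income - cutoff, 0)

print(extra_tax(500_000))   # the neurosurgeon's extra $70,000
print(extra_tax(400_001))   # $1 over the line costs 70 cents, not 70% of everything
print(extra_tax(350_000))   # below the cutoff: nothing changes at all
```

That second line is the whole “Joe the Plumber” confusion in one call: crossing the cutoff by a dollar costs 70 cents, not a re-tax of your entire income.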

You can even suppose that it would hurt the economy as a whole, though in fact there’s no evidence of that—we had tax rates like this in the 1960s and our economy did just fine. The basic income itself would inject so much spending into the economy that we might actually see more growth. But okay, for the sake of argument let’s suppose it also drops our per-capita GDP by 5%, from $53,000 to $50,300; that really doesn’t sound so bad, and any bigger drop than that is a totally unreasonable estimate based on prejudice rather than data. For the same tax rate, we might have to drop the basic income a bit too, say to $6600 instead of $7000.

So, this is not a Pareto-improvement; we’re making some people better off, but others worse off. In fact, when economists estimate efficiency in terms of so-called “economic welfare”, they really just count up the total number of dollars, divide by the number of people, and call it a day; so if we lose 5% in GDP they would register this as a welfare loss. (Yes, that’s a ridiculous way to do it for obvious reasons—$1 to Mitt Romney isn’t worth as much as it is to you and me—but it’s still how it’s usually done.)

But does that mean that it’s a bad idea? Not at all. In fact, if you assume that the real value—the utility—of a dollar decreases exponentially with each dollar you have, this policy could almost double the total happiness in US society. If you use a logarithm instead, it’s not quite as impressive; it’s only about a 20% improvement in total happiness—in other words, “only” making as much difference to the happiness of Americans from 2014 to 2015 as the entire period of economic growth from 1900 to 2000.
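Here’s a quick illustration of why diminishing marginal utility makes this work, using logarithmic utility; the two incomes are arbitrary stand-ins, not a model of the actual policy:

```python
# Why diminishing marginal utility makes redistribution a net gain: under
# log utility, $7,000 means almost nothing at $20 million a year and a
# great deal at $15,000 a year. Both incomes are arbitrary stand-ins.
from math import log

def total_utility(incomes):
    return sum(log(income) for income in incomes)

before = [20_000_000, 15_000]                   # one rich, one poor
after = [20_000_000 - 7_000, 15_000 + 7_000]    # transfer $7,000

print(total_utility(after) - total_utility(before))  # positive: total utility rises
```

The transfer barely dents the rich person’s log utility but substantially raises the poor person’s, so the total goes up even though total dollars are unchanged.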

If right now you’re thinking, “Wow! Why aren’t we doing that?” that’s good, because I’ve been thinking the same thing for years. And maybe if we keep talking about it enough we can get people to start voting on it and actually make it happen.

But in order to make things like that happen, we must first get past the idea that Pareto-efficiency is the only thing that matters in moral decisions. And once again, that means overcoming the standard modes of thinking in neoclassical economics.

Something strange happened to economics in about 1950. Before that, economists from Marx to Smith to Keynes were always talking about differences in utility, marginal utility of wealth, how to maximize utility. But then economists stopped being comfortable talking about happiness, deciding (for reasons I still do not quite grasp) that it was “unscientific”, so they eschewed all discussion of the subject. Since we still needed to know why people choose what they do, a new framework was created revolving around “preferences”, which are a simple binary relation—you either prefer it or you don’t, you can’t like it “a lot more” or “a little more”—that is supposedly more measurable and therefore more “scientific”. But under this framework, there’s no way to say that giving a dollar to a homeless person makes a bigger difference to them than giving the same dollar to Mitt Romney, because a “bigger difference” is something you’ve defined out of existence. All you can say is that each would prefer to receive the dollar, and that both Mitt Romney and the homeless person would, given the choice, prefer to be Mitt Romney. While both of these things are true, it does seem to be kind of missing the point, doesn’t it?

There are stirrings of returning to actual talk about measuring actual (“cardinal”) utility, but still preferences (so-called “ordinal utility”) are the dominant framework. And in this framework, there’s really only one way to evaluate a situation as good or bad, and that’s Pareto-efficiency.

Actually, that’s not quite right; John Rawls cleverly came up with a way around this problem, by using the idea of “maximin”—maximize the minimum. Since each would prefer to be Romney, given the chance, we can say that the homeless person is worse off than Mitt Romney, and therefore say that it’s better to make the homeless person better off. We can’t say how much better, but at least we can say that it’s better, because we’re raising the floor instead of the ceiling. This is certainly a dramatic improvement, and on these grounds alone you can argue for the basic income—your floor is now explicitly set at the $6600 per year of the basic income.

But is that really all we can say? Think about how you make your own decisions; do you only speak in terms of strict preferences? I like Coke more than Pepsi; I like massages better than being stabbed. If preference theory is right, then there is no greater distance in the latter case than the former, because this whole notion of “distance” is unscientific. I guess we could expand the preference over groups of goods (baskets as they are generally called), and say that I prefer the set “drink Pepsi and get a massage” to the set “drink Coke and get stabbed”, which is certainly true. But do we really want to have to define that for every single possible combination of things that might happen to me? Suppose there are 1000 things that could happen to me at any given time, which is surely conservative. In that case there are 2^1000 ≈ 10^301 possible combinations. If I were really just reading off a table of unrelated preference relations, there wouldn’t be room in my brain—or my planet—to store it, nor enough time in the history of the universe to read it. Even imposing rational constraints like transitivity doesn’t shrink the set anywhere near small enough—at best maybe now it’s 10^20, well done; now I theoretically could make one decision every billion years or so. At some point doesn’t it become a lot more parsimonious—dare I say, more scientific—to think that I am using some more organized measure than that? It certainly feels like I am; even if I couldn’t exactly quantify it, I can definitely say that some differences in my happiness are large and others are small. The mild annoyance of drinking Pepsi instead of Coke will melt away in the massage, but no amount of Coke deliciousness is going to overcome the agony of being stabbed.
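For what it’s worth, the combinatorial claim is easy to check directly:

```python
# Checking the combinatorial claim directly: 1000 binary events yield
# 2**1000 possible combinations, which Python can count exactly.
combinations = 2 ** 1000
print(len(str(combinations)))  # 302 digits, i.e. about 10**301
```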

And indeed if you give people surveys and ask them how much they like things or how strongly they feel about things, they have no problem giving you answers out of 5 stars or on a scale from 1 to 10. Very few survey participants ever write in the comments box: “I was unable to take this survey because cardinal utility does not exist and I can only express binary preferences.” A few do write 1s and 10s on everything, but even those are fairly rare. This “cardinal utility” that supposedly doesn’t exist is the entire basis of the scoring system on Netflix and Amazon. In fact, if you use cardinal utility in voting, it is mathematically provable that you have the best possible voting system, which may have something to do with why Netflix and Amazon like it. (That’s another big “Why aren’t we doing this already?”)
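The voting system alluded to here is usually called score (or range) voting: every voter rates every candidate, and the highest total score wins. A minimal sketch, with made-up ballots:

```python
# A minimal score ("range") voting tally: every voter rates every
# candidate, and the highest total score wins. The ballots and the
# 0-10 scale here are made-up examples.

def score_vote(ballots):
    """ballots: list of dicts mapping candidate name -> score."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get)

ballots = [
    {"A": 10, "B": 6, "C": 0},
    {"A": 0, "B": 7, "C": 10},
    {"A": 2, "B": 9, "C": 3},
]
print(score_vote(ballots))  # B wins: broadly liked beats narrowly loved
```

Notice that B wins despite being nobody’s first choice, because cardinal scores let voters express how strongly they feel—exactly the information that binary preferences throw away.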

If you can actually measure utility in this way, then there’s really not much reason to worry about Pareto-efficiency. If you just maximize utility, you’ll automatically get a Pareto-efficient result; but the converse is not true because there are plenty of Pareto-efficient scenarios that don’t maximize utility. Thinking back to our ultimatum game, all options are Pareto-efficient, but you can actually prove that the $5/$5 choice is the utility-maximizing one, if the two players have the same amount of wealth to start with. (Admittedly for those small amounts there isn’t much difference; but that’s also not too surprising, since $5 isn’t going to change anybody’s life.) And if they don’t—suppose I’m rich and you’re poor and we play the game—well, maybe I should give you more, precisely because we both know you need it more.
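Here’s a quick numerical check of that claim, assuming logarithmic utility and an arbitrary equal starting wealth of $1,000 for both players:

```python
# Numerical check that the even split maximizes total utility, assuming
# log utility and an arbitrary equal starting wealth of $1,000 each.
from math import log

def total_utility(my_share, total=10, wealth=1_000):
    """Sum of both players' log utilities over their final wealth."""
    return log(wealth + my_share) + log(wealth + total - my_share)

best_split = max(range(11), key=total_utility)
print(best_split)  # 5: the fifty-fifty split
```

And if the starting wealths are unequal, the same maximization pushes the split toward the poorer player—which is the “maybe I should give you more” intuition above.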

Perhaps even more significantly, you can move from a Pareto-inefficient scenario to a Pareto-efficient one and make things worse in terms of utility. The scenario in which the top 1% are as wealthy as they can possibly be and the rest of us live on scraps may in fact be Pareto-efficient; but that doesn’t mean any of us should be interested in moving toward it (though sadly, we kind of are). If you’re only measuring in terms of Pareto-efficiency, your attempts at improvement can actually make things worse. It’s not that the concept is totally wrong; Pareto-efficiency is, other things equal, good; but other things are never equal.

So that’s Pareto-efficiency—and why you really shouldn’t care about it that much.

2014, a year of war

[First of all, let me apologize for missing last week’s post. It was quite a week for me; the weekend itself (actually Wednesday to Sunday) was taken up by Gen Con, after which I had my four-day road trip back to Long Beach, and then of course I had to unpack, clean my apartment, stock my refrigerator and so on. Now that I’m settled in back in Long Beach, I should be able to resume my regular blogging schedule. Classes start on Monday, but I won’t let that stop me.]

Things aren’t looking too good in the world lately. Russia launched a secret invasion of Ukraine and is now deploying “humanitarian convoys” with full military capability. The war between Israel and its neighbors has reached a new flashpoint. Assad continues to oppress Syria, but lately he’s looking like the lesser of two evils as he escalates the war against ISIS. Then again, ISIS is kind of his fault to begin with. But blame aside, ISIS is absolutely horrifying; they recently beheaded an American journalist. Even China just did some belligerent maneuvers around a US spy plane (basically a dick-measuring contest that China hasn’t the faintest hope of winning).

Indeed, things have gotten so bad that the UN has rated three different countries level 3 humanitarian crises, the worst rating any crisis has received as long as the UN has existed. People are making comparisons to the Rwandan genocide and even World War 2.

But it’s important to keep in mind that the reason this bothers us, the reason it is so shocking and aberrant, is that the latter half of the 20th century and the start of the 21st have been the most peaceful period in recorded history. Technology notwithstanding, the level of violence we are seeing now would not have been out of place in the Middle Ages; even if they’d had world news broadcast media, these events would have received only minor attention.

It’s also interesting to see how neoclassical economists try to understand the phenomenon of war. Right-wing economists who think that humans are rational are completely baffled by war, because it cripples infrastructure and kills millions of people (as neoclassical economists would say, “depreciation of human capital”, as though human beings were a special case of machinery), basically the opposite of what an economy is supposed to do.

Austrians use this fact as yet another plank in an anti-Keynesian platform; they frequently accuse Keynesians of thinking war is good—when I’ve yet to meet a Keynesian who actually said such a thing. Stiglitz, the one they cite most approvingly in that article, is a die-hard Keynesian; moreover the column in which Krugman talks about alien invasions was obviously tongue-in-cheek. It’s quite interesting to me how Austrians are always saying that humans are rational and economists are not, so apparently they don’t think economists are human. (I concur that economists are often irrational, but I never said humans weren’t.)

Krugman is more sensitive to the irrationality of human behavior than most neoclassicists, and as a result he does proportionally better; Krugman recognizes that war is done for political, not economic reasons. But as neoclassical Keynesians are wont to do, he doesn’t look deeper; human behavior is assumed to be a minor deviation from the infinite identical psychopath, rather than a fundamentally different paradigm.

Think about it: Why would it be that leaders become more popular when they start wars, especially if war is economically damaging? Shouldn’t people be angry at a leader who insists upon risking their lives and destroying their wealth?

To be fair, some are; anti-war protest is about as old as war. But the vast majority of people in the vast majority of wars have supported their leaders, sometimes even saying things like “I disagree with this war, but we must all stand together in order to win it.” If you think that humans are rational self-interested optimizers, this sort of behavior must seem absolutely nonsensical.

But it makes perfect sense once you realize what humans actually are. We’re not selfish. We’re also not altruistic, not in the broadest sense. We are tribal. We identify ourselves with a group, our tribe, and then act to advance the perceived interests of that group.

What tribe we choose can vary, even within one person: You can have varying degrees of solidarity (remember how I said solidarity can be quantitatively formalized?) with your family, your friends, your school, your home town, your state, your nation, your race, your culture, your religion, your species. You can be torn between these different identities when their interests conflict. At the two extremes lie your own self-interest and the interests of all sentient beings in the universe; one measure of your moral development as an individual is how much time you can spend toward the latter end rather than the former.

When a leader declares war, he—it is usually a ‘he’, though Margaret Thatcher is a notable exception—is either expressing that tribal instinct or capitalizing upon it. For examples of each, look no further than George W. Bush, who really believed in avenging 9/11 and toppling Saddam Hussein, and Dick Cheney, who saw the Iraq War as a great way to raise the value of Halliburton stock. (Among living people, Dick Cheney is the closest I can think of to a neoclassical rational agent. Among the dead, I think I’d go with Josef Stalin. Look upon your ‘rationality’ and despair.)

The reason Netanyahu’s popularity spiked in the invasion (it’s heading down now, but still over 50%) and Putin’s remains above an astonishing 80% is that they are channeling this tribal instinct, rallying the tribe to righteous war against its enemies. They are behaving like the alpha male our ape brains have long missed—I mean, seriously, Putin looks like a shaved gorilla. The aggression is driven by an ancient animus that we have spent millions of years trying to transcend.

The good news is, we actually are beginning to succeed. The process is slow and painful, and there are setbacks—2014 was definitely a setback—but still, we do make progress. We have expanded our notion of tribes over time, far beyond its original capacity. We evolved for a tribe of about 100 people, barely above what we’d now call “friends and family”; we now unite ourselves into nation-states of hundreds of millions or even billions. The very fact that I can say “China did X” and not be speaking utter nonsense is proof that humanity has made it quite far along the continuum toward universal altruism. We have already advanced seven orders of magnitude; we have less than one left before we include the entire human species. Another two or three after that, and we’ll have encompassed all sentient life on Earth. Another five or six past that, the galaxy; then another nine and we may well have the whole damn universe. 7 down, 18 to go.

Don’t lose hope; this year’s violence is an anomaly in the trend toward peace.

Schools of Thought

If you’re at all familiar with the schools of thought in economics, you may wonder where I stand. Am I a Keynesian? Or perhaps a post-Keynesian? A New Keynesian? A neo-Keynesian (not to be confused)? A neo-paleo-Keynesian? Or am I a Monetarist? Or a Modern Monetary Theorist? Or perhaps something more heterodox, like an Austrian or a Sraffian or a Marxist?

No, I am none of those things. I guess if you insist on labeling, you could call me a “cognitivist”; and in terms of policy I tend to agree with the Keynesians, but I also like the Modern Monetary Theorists.

But really I think this sort of labeling of ‘schools of thought’ is exactly the problem. There shouldn’t be schools of thought; the universe only works one way. When you don’t know the answer, you should have the courage to admit you don’t know. And once we actually have enough evidence to know something, people need to stop disagreeing about it. If you continue to disagree with what the evidence has shown, you’re not a ‘school of thought’; you’re just wrong.

The whole notion of ‘schools of thought’ smacks of cultural relativism; asking what the ‘Keynesian’ answer to a question is (and if you take enough economics classes I guarantee you will be asked exactly that) is rather like asking what religious beliefs prevail in a particular part of the world. It might be worth asking for some historical reason, but it’s not a question about economics; it’s a question about economic beliefs. This is the difference between asking how people believe the universe was created, and actually being a cosmologist. True, schools of thought aren’t as geographically localized as religions; but they do say the words ‘saltwater’ and ‘freshwater’ for a reason. I’m not all that interested in the Shinto myths versus the Hindu myths; I want to be a cosmologist.

At best, schools of thought are a sign of a field that hasn’t fully matured. Perhaps there were Newtonians and Einsteinians in 1910; but by 1930 there were just Einsteinians and bad physicists. Are there ‘schools of thought’ in physics today? Well, there are string theorists. But string theory hasn’t been a glorious success of physics advancement; on the contrary, it’s been a dead end from which the field has somehow failed to extricate itself for almost 50 years.

So where does that put us in economics? Well, some of the schools of thought are clearly dead ends, every bit as unfounded as string theory but far worse because they have direct influences on policy. String theory hasn’t ever killed anyone; bad economics definitely has. (How, you ask? Exposure to hazardous chemicals that were deregulated; poverty and starvation due to cuts to social welfare programs; and of course the Second Depression. I could go on.)

The worst offender is surely Austrian economics and its crazy cousin Randian libertarianism. Ayn Rand literally ruled a cult; Friedrich Hayek never took it quite that far, but there is certainly something cultish about Austrian economists. They insist that economics must be derived a priori, without recourse to empirical evidence (or at least that’s what they say when you point out that all the empirical evidence is against them). They are fond of ridiculous hyperbole about an inevitable slippery slope between raising taxes on capital gains and turning into Stalin’s Soviet Union, as well as rhetorical questions I find myself answering opposite to how they want (like “For are taxes not simply another form of robbery?” and “Once we allow the government to regulate what man can do, will they not continue until they control all aspects of our lives?”). They even co-opt and distort cognitivist concepts like herd instinct and asymmetric information; somehow Austrians think that asymmetric information is an argument for why markets are more efficient than government, even though Akerlof’s point was that asymmetric information is why we need regulations.

Marxists are on the opposite end of the political spectrum, but their ideas are equally nonsensical. (Marx himself was a bit more reasonable, but even he recognized they were going too far: “All I know is that I am not a Marxist.”) They have this whole “labor theory of value” thing where the value of something is the amount of work you have to put into it. This would mean that labor-saving innovations are pointless, because they devalue everything; it would also mean that putting an awful lot of work into something useless would nevertheless somehow make it enormously valuable. Really, it would never be worth doing much of anything, because the value you get out of something is exactly equal to the work you put in. Marxists also tend to think that what the world needs is a violent revolution to overthrow the bondage of capitalism; this is an absolutely terrible idea. During the transition it would be one of the bloodiest conflicts in history; afterward you’d probably get something like the Soviet Union or modern-day Venezuela. Even if you did somehow establish your glorious Communist utopia, you’d have destroyed so much productive capacity in the process that you’d make everyone poor. Socialist reforms make sense—and have worked well in Europe, particularly Scandinavia. But socialist revolution is a good way to get millions of innocent people killed.

Sraffians are also quite silly; they have this bizarre notion that capital must be valued as “dated labor”, basically a formalized Marxism. I’ll admit, it’s weird how neoclassicists try to value labor as “human capital”; frankly it’s a bit disturbing how it echoes slavery. (And if you think slavery is dead, think again; it’s dead in the First World, but very much alive elsewhere.) But the solution to that problem is not to pretend that capital is a form of labor; it’s to recognize that capital and labor are different. Capital can be owned, sold, and redistributed; labor cannot. Labor is done by human beings, who have intrinsic value and rights; capital is made of inanimate matter, which does not. (This is what makes Citizens United so outrageous; “corporations are people” and “money is speech” are such fundamental distortions of democratic principles that they are literally Orwellian. We’re not that far from “freedom is slavery” and “war is peace”.)

Neoclassical economists do better, at least. They do respond to empirical data, albeit slowly. Their models are mathematically consistent. They rarely take account of human irrationality or asymmetric information, but when they do they rightfully recognize them as obstacles to efficient markets. But they still model people as infinite identical psychopaths, and they still divide themselves into schools of thought. Keynesians and Monetarists are particularly prominent, and Modern Monetary Theorists seem to be the next rising star. Each of these schools gets some things right and other things wrong, and that’s exactly why we shouldn’t make ourselves beholden to a particular tribe.

Monetarists follow Friedman, who said, “inflation is always and everywhere a monetary phenomenon.” This is wrong. You can definitely cause inflation without expanding your money supply; just ramp up government spending as in World War 2 or suffer a supply shock like we did when OPEC cut the oil supply. (In both cases, the US money supply was still tied to gold by the Bretton Woods system.) But they are right about one thing: To really have hyperinflation à la Weimar or Zimbabwe, you probably have to be printing money. If that were all there were to Monetarism, I could invert another Friedmanism: We’re all Monetarists now.

Keynesians are basically right about most things; in particular, they are the only branch of neoclassicists who understand recessions and know how to deal with them. The world’s most famous Keynesian is probably Krugman, who has the best track record of economic predictions in the popular media today. Keynesians much better appreciate the fact that humans are irrational; in fact, cognitivism can be partly traced to Keynes, who spoke often of the “animal spirits” that drive human behavior (Akerlof’s most recent book is called Animal Spirits). But even Keynesians have their sacred cows, like the Phillips Curve, the alleged inverse correlation between inflation and unemployment. This is fairly empirically accurate if you look just at First World economies after World War 2 and exclude major recessions. But Keynes himself said, “Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.” The Phillips Curve “shifts” sometimes, and it’s not always clear why—and empirically it’s not easy to tell the difference between a curve that shifts a lot and a relationship that just isn’t there. There is very little evidence for a “natural rate of unemployment”. Worst of all, it’s pretty clear that the original policy implications of the Phillips Curve are all wrong; you can’t get rid of unemployment just by ramping up inflation, and that way really does lie Zimbabwe.

Finally, Modern Monetary Theorists understand money better than everyone else. They recognize that a sovereign government doesn’t have to get its money “from somewhere”; it can create however much money it needs. The whole narrative that the US is “out of money” isn’t just wrong, it’s incoherent; if there is one entity in the world that can never be out of money, it’s the US government, which prints the world’s reserve currency. The panicked fears of quantitative easing causing hyperinflation aren’t quite as crazy; if the economy were at full capacity, printing $4 trillion over 5 years (yes, we did that) would absolutely cause some inflation. Since that’s only about 6% of US GDP per year, we might be back to 8% or even 10% inflation like the 1970s, but we certainly would not be in Zimbabwe. Moreover, we aren’t at full capacity; we needed to expand the money supply that much just to maintain prices where they are. The Second Depression is the Red Queen: It took all the running we could do to stay in one place. Modern Monetary Theorists also have some very good ideas about taxation; they point out that since the government only takes out the same thing it puts in—its own currency—it doesn’t make sense to say they are “taking” something (let alone “confiscating” it as Austrians would have you believe). Instead, it’s more like they are pumping it, taking money in and forcing it back out continuously. And just as pumping doesn’t take away water but rather makes it flow, taxation and spending don’t remove money from the economy but rather maintain its circulation. Now that I’ve said what they get right, what do they get wrong? Basically they focus too much on money, ignoring the real economy. They like to use double-entry accounting models, which are perfectly sensible for money, but absolutely nonsensical for real value. The whole point of an economy is that you can get more value out than you put in.
From the Homo erectus who pulls apples from the trees to the software developer who buys a mansion, the reason they do it is that the value they get out (the gatherer gets to eat, the programmer gets to live in a mansion) is higher than the value they put in (the effort to climb the tree, the skill to write the code). If, as Modern Monetary Theorists are wont to do, you calculated a value for the human capital of the gatherer and the programmer equal to the value of the goods they purchase, you’d be missing the entire point.
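The “pump” metaphor can be sketched as a toy double-entry ledger; this is my own illustration, not a model from the MMT literature. Government spending credits the private sector, taxation debits it, and the two balances always sum to zero—the government’s deficit is exactly the private sector’s surplus:

```python
# A toy double-entry ledger for the "pump" metaphor: spending credits
# the private sector, taxation debits it, and the government's deficit
# is exactly the private sector's surplus. This is my own illustration,
# not a model taken from the MMT literature.

class Ledger:
    def __init__(self):
        self.government = 0  # cumulative net position; a deficit is negative
        self.private = 0

    def spend(self, amount):
        """Government spending creates money in private hands."""
        self.government -= amount
        self.private += amount

    def tax(self, amount):
        """Taxation pulls that same money back out again."""
        self.government += amount
        self.private -= amount

economy = Ledger()
economy.spend(100)
economy.tax(80)
print(economy.government, economy.private)  # -20 20: the entries always cancel
```

And that cancellation is exactly the limitation described above: a double-entry model can only ever show money changing hands, never the surplus of real value that makes economic activity worth doing.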

Who are you? What is this new blog? Why “Infinite Identical Psychopaths”?

My name is Patrick Julius. I am about halfway through a master’s degree in economics, specializing in the new subfield of cognitive economics (closely related to the also quite new fields of cognitive science and behavioral economics). This makes me in one sense heterodox; I disagree adamantly with most things that typical neoclassical economists say. But in another sense, I am actually quite orthodox. All I’m doing is bringing the insights of psychology, sociology, history, and political science—not to mention ethics—to the study of economics. The problem is simply that economists have divorced themselves so far from the rest of social science.

Another way I differ from most critics of mainstream economics (I’m looking at you, Peter Schiff) is that, for lack of a better phrase, I’m good at math. (As Bill Clinton said, “It’s arithmetic!”) I understand things like partial differential equations and subgame perfect equilibria, and therefore I am equipped to criticize them on their own terms. In this blog I will do my best to explain the esoteric mathematical concepts in terms most readers can understand, but it’s not always easy. The important thing to keep in mind is that fancy math can’t make a lie true; no matter how sophisticated its equations, a model that doesn’t fit the real world can’t be correct.

This blog, which I plan to update every Saturday, is about the current state of economics, both as it is and how economists imagine it to be. One of my central points is that these two are quite far apart, which has exacerbated if not caused the majority of economic problems in the world today. (Economists didn’t invent world hunger, but for over a decade now we’ve had the power to end it and haven’t done so. You’d be amazed how cheap it would be; we’re talking about 1% of First World GDP at most.)

The reason I call it “infinite identical psychopaths” is that this is what neoclassical economists appear to believe human beings are, at least if we judge by the models they use. These are the typical assumptions of a neoclassical economic model:

      1. Perfect information: All individuals know everything they need to know about the state of the world and the actions of other individuals.
      2. Rational expectations: Predictions about the future can only be wrong within a normal distribution, and in the long run are on average correct.
      3. Representative agents: All individuals are identical and interchangeable; a single type represents them all.
      4. Perfect competition: There are infinitely many agents in the market, and none of them ever collude with one another.
      5. “Economic rationality”: Individuals act according to a monotonic increasing utility function that is only dependent upon their own present and future consumption of goods.

I put the last one in scare quotes because it is the worst of the bunch. What economists call “rationality” has only a distant relation to actual rationality, either as understood by common usage or by formal philosophical terminology.

Don’t be scared by the terminology; a “utility function” is just a formal model of the things you care about when you make decisions. Things you want have positive utility; things you don’t want have negative utility. Larger numbers reflect stronger feelings: a bar of chocolate has much less positive utility than a decade of happy marriage; a pinched finger has much less negative utility than a year of continual torture. Utility maximization just means that you try to get the things you want and avoid the things you don’t. By talking about expected utility, we make some allowance for an uncertain future—but not much, because we have so-called “rational expectations”.

Since any action taken by an “economically rational” agent maximizes expected utility, it is impossible for such an agent to ever make a mistake in the usual sense. Whatever they do is always the best idea at the time. This is already an extremely strong assumption that doesn’t make a whole lot of sense applied to human beings; who among us can honestly say they’ve never done anything they later regretted?

The worst part, however, is the assumption that an individual’s utility function depends only upon their own consumption. What this means is that the only thing anyone cares about is how much stuff they have; considerations like family, loyalty, justice, honesty, and fairness cannot factor into their decisions. The “monotonic increasing” part means that more stuff is always better; if they already have twelve private jets, they’d still want a thirteenth; and even if children had to starve for it, they’d be just fine with that. They are, in other words, psychopaths. So that’s one word of my title.

I think “identical” is rather self-explanatory; by using representative agent models, neoclassicists effectively assume that there is no variation between human beings whatsoever. They all have the same desires, the same goals, the same capabilities, the same resources. Implicit in this assumption is the notion that there is no such thing as poverty or wealth inequality, not to mention diversity, disability, or even differences in taste. (One wonders why you’d even bother with economics if that were the case.)

As for “infinite”, that comes from the assumptions of perfect information and perfect competition. In order to really have perfect information, one would need a brain with enough storage capacity to contain the state of every particle in the visible universe. Maybe not quite infinite, but pretty darn close. Likewise, in order to have true perfect competition, there must be infinitely many individuals in the economy, all of whom are poised to instantly take any opportunity offered that allows them to make even the tiniest profit.

Now, you might be thinking this is a strawman; surely neoclassicists don’t actually believe that people are infinite identical psychopaths. They just model that way to simplify the mathematics, which is of course necessary because the world is far too vast and interconnected to analyze in its full complexity.

This is certainly true. A Go board has 361 intersections, each of which can be empty, black, or white; suppose it took you one microsecond to consider each possible position, how long would it take you to go through them all? More time than we have left before the universe fades into heat death. Now imagine trying to understand a global economy of 7 billion people by brute-force analysis. Simplifying heuristics are unavoidable.
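The arithmetic behind that claim is easy to verify. Here is a quick back-of-envelope check; the one-microsecond figure is the hypothetical from above, and the roughly 10^100-year heat-death horizon is a commonly cited order-of-magnitude estimate:

```python
# Back-of-envelope check of the Go claim. A board has 361 intersections,
# each empty, black, or white, giving an upper bound of 3**361 positions.
positions = 3 ** 361

# At one microsecond per position:
seconds = positions // 10 ** 6
years = seconds // (60 * 60 * 24 * 365)

print(len(str(positions)) - 1)  # 172: about 10^172 possible positions
print(len(str(years)) - 1)      # 158: about 10^158 years of checking,
                                # versus roughly 10^100 years to heat death
```

So even this tiny 19-by-19 grid is hopeless to brute-force, by some 58 orders of magnitude.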

And some neoclassical economists—for example Paul Krugman and Joseph Stiglitz—generally use these heuristics correctly; they understand the limitations of their models and don’t apply them in cases where they don’t belong. In that sort of case, there’s nothing particularly bad about these simplifying assumptions; they are like when a physicist models the trajectory of a spacecraft by assuming frictionless vacuum. Since outer space actually is close to a frictionless vacuum, this works pretty well; and if you need to make minor corrections (like the Pioneer Anomaly) you can.

However, this explanation already seems weird for the “economically rational” assumption (the psychopath part), because that doesn’t really make things much simpler. Why would we exclude the fact that people care about each other, like to cooperate, and feel loyalty and trust? And don’t tell me it’s because that’s impossible to quantify; evolutionary biologists already have a simple inequality, Hamilton’s rule (C < rB), designed precisely to quantify altruism. (C is the cost to the actor, B is the benefit to the recipient, r is their relatedness.) I’d make only one slight modification; instead of r for relatedness, use p for psychological closeness, or as I like to call it, solidarity. For humans, solidarity is usually much higher than relatedness, though the two are correlated. C < pB.
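Here is how that rule looks as code. The costs, benefits, and closeness values below are entirely hypothetical, chosen only to show the inequality at work:

```python
# The rule C < p*B (the solidarity variant of Hamilton's C < r*B).
# All numbers here are hypothetical, purely for illustration.
def act_altruistically(cost, benefit, closeness):
    """Return True if an altruistic act 'pays' under the rule C < p*B."""
    return cost < closeness * benefit

# A full sibling has genetic relatedness r = 0.5; a close friend may have
# psychological closeness p = 0.8 despite relatedness near zero.
print(act_altruistically(cost=10, benefit=15, closeness=0.5))  # False: 10 >= 7.5
print(act_altruistically(cost=10, benefit=15, closeness=0.8))  # True: 10 < 12
```

The same sacrifice that genetic relatedness alone wouldn’t justify can easily pass the threshold once solidarity is what’s doing the work, which is the point of the modification.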

Worse, there are other neoclassical economists—those of the most fanatically “free-market” bent—who really don’t seem to do this. I don’t know if they honestly believe that people are infinite identical psychopaths, but they make policy as if they did.

We have people like Stephen Moore saying that unemployment is “like a paid vacation” because obviously anyone who truly wants a job can immediately find one, or people like N. Gregory Mankiw arguing—in a published paper no less!—that the reason Steve Jobs was a billionaire was that he was actually a million times as productive as the rest of us, and therefore it would be inefficient (and, he implies but does not say outright, immoral) to take the fruits of those labors from him. (Honestly, I think I could concede the point and still argue for redistribution, on the grounds that people do not deserve to starve to death simply because they aren’t productive; but that’s the sort of thing never even considered by most neoclassicists, and anyway it’s a topic for another time.)

These kinds of statements would only make sense if markets were really as efficient and competitive as neoclassical models—that is, if people were infinite identical psychopaths. Allow even a single monopoly or just a few bits of imperfect information, and that whole edifice collapses.

And indeed if you’ve ever been unemployed or known someone who was, you know that our labor markets just ain’t that efficient. If you want to cut unemployment payments, you need a better argument than that. Similarly, it’s obvious to anyone who isn’t wearing the blinders of economic ideology that many large corporations exert monopoly power to increase their profits at our expense (How can you not see that Apple is a monopoly!?).

This sort of reasoning is more like plotting the trajectory of an aircraft on the assumption of frictionless vacuum; you’d be baffled as to where the oxidizer comes from, or how the craft manages to lift itself off the ground when the exhaust vents are pointed sideways instead of downward. And then you’d be telling the aerospace engineers to cut off the wings because they’re useless mass.

Worst of all, if we continue this analogy, the engineers would listen to you—they’d actually be convinced by your differential equations and cut off the wings just as you requested. Then the plane would never fly, and they’d ask if they could put the wings back on—but you’d adamantly insist that it was just coincidence, you just happened to be hit by a random problem at the very same moment as you cut off the wings, and putting them back on will do nothing and only make things worse.

No, seriously; so-called “Real Business Cycle” theory, while thoroughly obfuscated in esoteric mathematics, ultimately boils down to the assertion that financial crises have nothing to do with recessions, which are actually caused by random shocks to the real economy—the actual production of goods and services. The fact that a financial crisis always seems to happen just beforehand is, apparently, sheer coincidence, or at best some kind of forward-thinking response investors make as they see the storm coming. I want you to think for a minute about the idea that the kind of people who make computer programs that accidentally collapse the Dow, who made Bitcoin the first example in history of hyperdeflation, and who bought up Tweeter thinking it was Twitter are forward-thinking predictors of future events in real production.

And yet, it is on this sort of basis that our policy is made.

Can otherwise intelligent people really believe that these insane models are true? I’m not sure.
Sadly, I think they may really believe that all people are psychopaths—because they themselves may be psychopaths. Economics students score higher on various psychopathic traits than other students. Part of this is self-selection—psychopaths are more likely to study economics—but the terrifying part is that part of it isn’t: studying economics may actually make you more of a psychopath. As I study for my master’s degree, I actually am somewhat afraid of being corrupted by this; I make sure to periodically disengage from the ideology and interact with normal people with normal human beliefs to recalibrate my moral compass.

Of course, it’s still pretty hard to imagine that anyone could honestly believe that the world economy is in a state of perfect information. But if they can’t really believe this insane assumption, why do they keep using models based on it?

The more charitable possibility is that they don’t appreciate just how sensitive the models are to the assumptions. They may think, for instance, that the General Welfare Theorems still basically apply if you relax the assumption of perfect information; maybe the outcome isn’t always Pareto-efficient, but it probably is most of the time, right? Or at least close? Actually, no. The Myerson-Satterthwaite Theorem says that once you give up perfect information, the whole theorem collapses; even a small amount of asymmetric information is enough to make a Pareto-efficient outcome impossible. And as you might expect, the more asymmetric the information, the further the result deviates from Pareto-efficiency. Since we always have some asymmetric information, it looks like the General Welfare Theorems really aren’t doing much for us. They apply only in a magical fantasy world. (In case you didn’t know, Pareto-efficiency is a state in which it’s impossible to make any person better off without making someone else worse off. The real world is not in a Pareto-efficient state, which means that smarter policy could improve some people’s lives without hurting anyone else.)
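For the unfamiliar, here is a minimal sketch of what Pareto-dominance means, with made-up utility numbers standing in for how well off each person is under each policy:

```python
# An outcome Pareto-dominates another if it makes at least one person
# better off and no one worse off. All numbers are hypothetical.
def pareto_dominates(a, b):
    """a and b are tuples of utilities, one entry per person."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Utilities of (person 1, person 2) under three hypothetical policies:
status_quo = (3, 5)
policy_a = (4, 5)   # person 1 gains, person 2 is unaffected
policy_b = (6, 2)   # person 1 gains, person 2 loses

print(pareto_dominates(policy_a, status_quo))  # True: a free improvement exists,
                                               # so the status quo is inefficient
print(pareto_dominates(policy_b, status_quo))  # False: someone is hurt
```

The claim in the parenthetical above is exactly that the real world looks like the status quo here: Pareto-dominated, with free improvements left on the table.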

The more sinister possibility is that they know full well that the models are wrong, they just don’t care. The models are really just excuses for an underlying ideology, the unshakeable belief that rich people are inherently better than poor people and private corporations are inherently better than governments. Hence, it must be bad for the economy to raise the minimum wage and good to cut income taxes, even though the empirical evidence runs exactly the opposite way; it must be good to subsidize big oil companies and bad to subsidize solar power research, even though that makes absolutely no sense.

One should normally be hesitant to attribute to malice what can be explained by stupidity, but the “I trust the models” explanation just doesn’t work for some of the really extreme privatizations that the US has undergone since Reagan.

No neoclassical model says that you should privatize prisons; prisons are a classic example of a public good, which would be underfunded in a competitive market and basically has to be operated or funded by the government.

No neoclassical model would support the idea that the EPA is a terrorist organization (yes, a member of the US Congress said this). In fact, the economic case for environmental regulations is unassailable. (What else are we supposed to do, privatize the air?) The question is not whether to regulate and tax pollution, but how and how much.

No neoclassical model says that you should deregulate finance; in fact, most neoclassical models don’t even include a financial sector (as bizarre and terrifying as that is), and those that do generally assume it is in a state of perfect equilibrium with zero arbitrage. If the financial sector were actually in a state of zero arbitrage, no banks would make a profit at all.

In case you weren’t aware, arbitrage is the practice of making money off of money without actually making any goods or providing any services. Unlike manufacturing (which, oddly enough, almost all neoclassical models are based on—despite the fact that it is now a minority sector in First World GDP), there’s no value added. Under zero arbitrage, the interest rate a bank charges should be almost exactly the same as the interest rate it receives, with just enough gap between them to barely cover its operating expenses—which should in turn be minimal, especially in a modern electronic system. If financial markets were at zero arbitrage equilibrium, it would be sensible to speak of a single “real interest rate” in the economy, the one that everyone pays and everyone receives. Of course, those of us who live in the real world know that not only do different people pay radically different rates, most people have multiple outstanding lines of credit, each with a different rate. My savings account is 0.5%, my car loan is 5.5%, and my biggest credit card is 19%. These basically span the entire range of sensible interest rates (frankly 19% may even exceed that; that’s a doubling time of about four years), and I know I’m not the exception but the rule.
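The doubling times follow directly from the compound-interest formula: solving (1 + r)^t = 2 for t gives t = ln 2 / ln(1 + r). Here is the calculation for the three rates quoted above:

```python
import math

# Doubling time under annual compound interest: solve (1 + r)**t == 2 for t.
def doubling_time(rate):
    return math.log(2) / math.log(1 + rate)

# The three rates quoted above:
for rate in (0.005, 0.055, 0.19):
    print(f"{rate:.1%}: doubles in {doubling_time(rate):.1f} years")
# 0.5%: ~139 years; 5.5%: ~12.9 years; 19.0%: ~4.0 years
```

The spread is striking: money at the savings-account rate takes longer than a human lifetime to double, while the credit-card debt doubles roughly every four years.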

So that’s the mess we’re in. Stay tuned; in future weeks I’ll talk about what we can do about it.