How to fix economics publishing

Aug 7 JDN 2459806

The current system of academic publishing in economics is absolutely horrible. It seems practically designed to undermine the mental health of junior faculty.

1. Tenure decisions, and even most hiring decisions, are almost entirely based upon publication in five (5) specific journals.

2. One of those “top five” journals is owned by Elsevier, a corrupt monopoly that has no basis for its legitimacy yet somehow controls nearly one-fifth of all scientific publishing.

3. Acceptance rates in all of these journals are between 5% and 10%—greatly decreased from what they were a generation or two ago. Given a typical career span, the senior faculty evaluating you on whether you were published in these journals had about three times the chance you have of getting their own papers published there.

4. Submissions are only single-blinded, so while you have no idea who is reading your papers, they know exactly who you are and can base their decision on whether you are well-known in the profession—or simply whether they like you.

5. Simultaneous submissions are forbidden, so when submitting to journals you must go one at a time, waiting to hear back from one before trying the next.

6. Peer reviewers are typically unpaid and generally uninterested, and so procrastinate as long as possible on doing their reviews.

7. As a result, review times for a paper are often measured in months, for every single cycle.

So, a highly successful paper goes like this: You submit it to a top journal, wait three months, it gets rejected. You submit it to another one, wait another four months, it gets rejected. You submit it to a third one, wait another two months, and you are told to revise and resubmit. You revise and resubmit, wait another three months, and then finally get accepted.

You have now spent an entire year getting one paper published. And this was a success.

Now consider a paper that doesn’t make it into a top journal. You submit, wait three months, rejected; you submit again, wait four months, rejected; you submit again, wait two months, rejected. You submit again, wait another five months, rejected; you submit to the fifth and final top-five, wait another four months, and get rejected again.

Now, after a year and a half, you can turn to other journals. You submit to a sixth journal, wait three months, rejected. You submit to a seventh journal, wait four months, get told to revise and resubmit. You revise and resubmit, wait another two months, and finally—finally, after two years—actually get accepted, but not to a top-five journal. So it may not even help you get tenure, unless maybe a lot of people cite it or something.

And what if you submit to an eighth, a ninth, a tenth journal, and still keep getting rejected? At what point do you simply give up on that paper and try to move on with your life?
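It’s worth pausing on just how bad the arithmetic here is. With sequential submission, the time to acceptance is essentially a geometric waiting game; here’s a back-of-the-envelope sketch (the 7% acceptance rate and 3.5 months per cycle are my own illustrative figures, in the ballpark of the numbers above, not data from any study):

```python
def expected_months_to_acceptance(accept_rate, months_per_cycle):
    """Expected wall-clock time to acceptance under sequential submission.

    Treat each submission cycle as an independent coin flip with
    probability `accept_rate`; the expected number of cycles is the
    mean of a geometric distribution, 1 / accept_rate.
    """
    return months_per_cycle / accept_rate

# Illustrative: 7% acceptance per journal, 3.5 months per review cycle.
print(expected_months_to_acceptance(0.07, 3.5))  # about 50 months
```

Roughly four years of waiting, in expectation, for a single paper—before accounting for revise-and-resubmit cycles.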

That’s a trick question: Because what really happens, at least to me, is I can’t move on with my life. I get so disheartened from all the rejections of that paper that I can’t bear to look at it anymore, much less go through the work of submitting it to yet another journal that will no doubt reject it again. But worse than that, I become so depressed about my academic work in general that I become unable to move on to any other research either. And maybe it’s me, but it isn’t just me: 28% of academic faculty suffer from severe depression, and 38% from severe anxiety. And that’s across all faculty—if you look just at junior faculty it’s even worse: 43% of junior academic faculty suffer from severe depression. When a problem is that prevalent, at some point we have to look at the system that’s making us this way.

I can blame the challenges of moving across the Atlantic during a pandemic, and the fact that my chronic migraines have been the most frequent and severe they have been in years, but the fact remains: I have accomplished basically nothing towards the goal of producing publishable research in the past year. I have two years left at this job; if I started right now, I might be able to get something published before my contract is done. That assumes the project goes smoothly, that I can start submitting it as soon as it’s done, and that it doesn’t get rejected as many times as the last one.

I just can’t find the motivation to do it. When the pain is so immediate and so intense, and the rewards are so distant and so uncertain, I just can’t bring myself to do the work. I had hoped that talking about this with my colleagues would help me cope, but it hasn’t; in fact it only seems to make me feel worse, because so few of them seem to understand how I feel. Maybe I’m talking to the wrong people; maybe the ones who understand are themselves suffering too much to reach out to help me. I don’t know.

But it doesn’t have to be this way. Here are some simple changes that could make the entire process of academic publishing in economics go better:

1. Boycott Elsevier and all for-profit scientific journal publishers. Stop reading their journals. Stop submitting to their journals. Stop basing tenure decisions on their journals. Act as though they don’t exist, because they shouldn’t—and then hopefully soon they won’t.

2. Peer reviewers should be paid for their time, and in return required to respond promptly—no more than a few weeks. A lack of response should be considered a positive vote on that paper.

3. Allow simultaneous submissions; if multiple journals accept, let the author choose between them. This is already how it works in fiction publishing, which you’ll note has not collapsed.

4. Increase acceptance rates. You are not actually limited by the physical constraints of print anymore; everything is digital now. Most of the work—even in the publishing process—already has to be done just to go through peer review, so you may as well publish it. Moreover, most papers that are submitted are actually worthy of publishing, and this whole process is really just an idiotic status hierarchy. If the prestige of your journal decreases because you accept more papers, we are measuring prestige wrong. Papers should be accepted something like 50% of the time, not 5-10%.

5. Double blind submissions, and insist on ethical standards that maintain that blinding. No reviewer should know whether they are reading the work of a grad student or a Nobel Laureate. Reputation should mean nothing; scientific rigor should mean everything.

And, most radical of all, what I really need in my life right now:

6. Faculty should not have to submit their own papers. Each university department should have administrative staff whose job it is to receive papers from their faculty, format them appropriately, and submit them to journals. They should deal with all rejections, and only report to the faculty member when they have received an acceptance or a request to revise and resubmit. Faculty should simply do the research, write the papers, and then fire and forget them. We have highly specialized skills, and our valuable time is being wasted on the clerical tasks of formatting and submitting papers, which many other people could do as well or better. Worse, we are uniquely vulnerable to the emotional impact of the rejection—seeing someone else’s paper rejected is an entirely different feeling from having your own rejected.

Do all that, and I think I could be happy to work in academia. As it is, I am seriously considering leaving and never coming back.

The replication crisis, and the future of science

Aug 27, JDN 2457628 [Sat]

After settling in a little bit in Irvine, I’m now ready to resume blogging, but for now it will be on a reduced schedule. I’ll release a new post every Saturday, at least for the time being.

Today’s post was chosen by Patreon vote, though only one person voted (this whole Patreon voting thing has not been as successful as I’d hoped). It’s about something we scientists really don’t like to talk about, but definitely need to: We are in the middle of a major crisis of scientific replication.

Whenever large studies attempt to replicate published scientific results, their success rate is almost always dismal.

Psychology is the one everyone likes to pick on, because their record is particularly bad. Only 39% of studies were really replicated with the published effect size, though a further 36% were at least qualitatively but not quantitatively similar. Yet economics has its own replication problem, and even medical research is not immune to replication failure.

It’s important not to overstate the crisis; the majority of scientific studies do at least qualitatively replicate. We are doing better than flipping a coin, which is better than one can say of financial forecasters.

There are three kinds of replication, and only one of them should be expected to give near-100% results. That kind is reanalysis: when you take the same data and use the same methods, you absolutely should get the exact same results. I favor making reanalysis a routine requirement of publication; if we can’t get your results by applying your statistical methods to your data, then your paper needs revision before we can entrust it to publication. A number of papers have failed on reanalysis, which is absurd and embarrassing; the worst offender was probably Reinhart-Rogoff, which was used in public policy decisions around the world despite having spreadsheet errors.

The second kind is direct replication—when you do the exact same experiment again and see if you get the same result within error bounds. This kind of replication should work something like 90% of the time, but in fact works more like 60% of the time.

The third kind is conceptual replication—when you do a similar experiment designed to test the same phenomenon from a different perspective. This kind of replication should work something like 60% of the time, but actually only works about 20% of the time.

Economists are well equipped to understand and solve this crisis, because it’s not actually about science. It’s about incentives. I facepalm every time I see another article by an aggrieved statistician about the “misunderstanding” of p-values; no, scientists aren’t misunderstanding anything. They know damn well how p-values are supposed to work. So why do they keep using them wrong? Because their jobs depend on doing so.

The first key point to understand here is “publish or perish”; academics in an increasingly competitive system are required to publish their research in order to get tenure, and frequently required to get tenure in order to keep their jobs at all. (Or they could become adjuncts, who are paid one-fifth as much.)

The second is the fundamentally defective way our research journals are run (as I have discussed in a previous post). As private for-profit corporations whose primary interest is in raising more revenue, our research journals aren’t trying to publish what will genuinely advance scientific knowledge. They are trying to publish what will draw attention to themselves. It’s a similar flaw to what has arisen in our news media; they aren’t trying to convey the truth, they are trying to get ratings to draw advertisers. This is how you get hours of meaningless fluff about a missing airliner and then a single chyron scroll about a war in Congo or a flood in Indonesia. Research journals haven’t fallen quite so far because they have reputations to uphold in order to attract scientists to read them and publish in them; but still, their fundamental goal is and has always been to raise attention in order to raise revenue.

The best way to do that is to publish things that are interesting. But if a scientific finding is interesting, that means it is surprising. It has to be unexpected or unusual in some way. And above all, it has to be positive; you have to have actually found an effect. Except in very rare circumstances, the null result is never considered interesting. This adds up to making journals publish what is improbable.

In particular, it creates a perfect storm for the abuse of p-values. A p-value, roughly speaking, is the probability you would get the observed result if there were no effect at all—for instance, the probability that you’d observe this wage gap between men and women in your sample if in the real world men and women were paid the exact same wages. The standard heuristic is a p-value of 0.05; indeed, it has become so enshrined that it is almost an explicit condition of publication now. Your result must be less than 5% likely to happen if there is no real difference. But if you will only publish results that show a p-value of 0.05, then the papers that get published and read will only be the ones that found such p-values—which renders the p-values meaningless.

It was never particularly meaningful anyway; as we Bayesians have been trying to explain since time immemorial, it matters how likely your hypothesis was in the first place. For something like wage gaps where we’re reasonably sure, but maybe could be wrong, the p-value is not too unreasonable. But if the theory is almost certainly true (“does gravity fall off as the inverse square of distance?”), even a high p-value like 0.35 is still supportive, while if the theory is almost certainly false (“are human beings capable of precognition?”—actual study), even a tiny p-value like 0.001 is still basically irrelevant. We really should be using much more sophisticated inference techniques, but those are harder to do, and don’t provide the nice simple threshold of “Is it below 0.05?”
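The Bayesian point can be made concrete with a standard back-of-the-envelope calculation (the priors and power figures below are my own illustrative assumptions, not numbers from any particular study): the probability that a “statistically significant” result reflects a real effect depends heavily on how plausible the hypothesis was to begin with.

```python
def prob_real_given_significant(prior, power, alpha):
    """P(effect is real | result crossed the significance threshold).

    true_pos:  real effects that correctly show up as significant
    false_pos: null effects that cross the threshold by chance
    """
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

# A toss-up hypothesis (we're reasonably sure, but could be wrong):
# a significant result is genuinely informative.
print(prob_real_given_significant(prior=0.5, power=0.8, alpha=0.05))     # ~0.94

# Precognition (prior maybe 1 in 1000): even p < 0.001 leaves the
# effect more likely false than true.
print(prob_real_given_significant(prior=0.001, power=0.8, alpha=0.001))  # ~0.44
```

Same threshold, wildly different conclusions—which is exactly why a bare p-value, stripped of the prior, tells you so little.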

But okay, p-values can be useful in many cases—if they are used correctly and you see all the results. If you have effect X with p-values 0.03, 0.07, 0.01, 0.06, and 0.09, effect X is probably a real thing. If you have effect Y with p-values 0.04, 0.02, 0.29, 0.35, and 0.74, effect Y is probably not a real thing. But I’ve just set it up so that these would be published exactly the same. They each have two published papers with “statistically significant” results. The other papers never get published and therefore never get seen, so we throw away vital information. This is called the file drawer problem.
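To make the file drawer problem concrete, here’s what happens if you combine all five p-values for each effect using Fisher’s method (a standard technique for pooling independent p-values; this sketch is my illustration, not something from the studies themselves):

```python
import math

def fisher_combined_p(pvals):
    """Fisher's method: under the global null, -2 * sum(ln p) follows a
    chi-squared distribution with 2k degrees of freedom (k = number of
    p-values). For even degrees of freedom the survival function has a
    closed form, so no stats library is needed."""
    stat = -2 * sum(math.log(p) for p in pvals)
    k = len(pvals)       # degrees of freedom = 2k
    half = stat / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

effect_x = [0.03, 0.07, 0.01, 0.06, 0.09]  # two "significant", but all point one way
effect_y = [0.04, 0.02, 0.29, 0.35, 0.74]  # the same two "significant" papers get published

print(fisher_combined_p(effect_x))  # ~0.0004: strong combined evidence
print(fisher_combined_p(effect_y))  # ~0.035: far weaker
```

The combined evidence differs by roughly two orders of magnitude, yet the published record—two significant papers each—looks identical. That is the information the file drawer destroys.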

Researchers often have a lot of flexibility in designing their experiments. If their only goal were to find truth, they would use this flexibility to test a variety of scenarios and publish all the results, so they can be compared holistically. But that isn’t their only goal; they also care about keeping their jobs so they can pay rent and feed their families. And under our current system, the only way to ensure that you can do that is by publishing things, which basically means only including the parts that showed up as statistically significant—otherwise, journals aren’t interested. And so we get huge numbers of papers published that tell us basically nothing, because we set up such strong incentives for researchers to give misleading results.

The saddest part is that this could be easily fixed.

First, reduce the incentives to publish by finding other ways to evaluate the skill of academics—like teaching for goodness’ sake. Working papers are another good approach. Journals already get far more submissions than they know what to do with, and most of these papers will never be read by more than a handful of people. We don’t need more published findings, we need better published findings—so stop incentivizing mere publication and start finding ways to incentivize research quality.

Second, eliminate private for-profit research journals. Science should be done by government agencies and nonprofits, not for-profit corporations. (And yes, I would apply this to pharmaceutical companies as well, which should really be pharmaceutical manufacturers who make cheap drugs based off of academic research and carry small profit margins.) Why? Again, it’s all about incentives. Corporations have no reason to want to find truth and every reason to want to tilt it in their favor.

Third, increase the number of tenured faculty positions. Instead of building so many new grand edifices to please your plutocratic donors, use your (skyrocketing) tuition money to hire more professors so that you can teach more students better. You can find even more funds if you cut the salaries of your administrators and football coaches. Come on, universities; you are the one industry in the world where labor demand and labor supply are the same people a few years later. You have no excuse for not having the smoothest market clearing in the world. You should never have gluts or shortages.

Fourth, require pre-registration of research studies (as some branches of medicine already do). If the study is sound, an optimal rational agent shouldn’t care in the slightest whether it had a positive or negative result, and if our ape brains won’t let us think that way, we need to establish institutions to force it to happen. Journals shouldn’t even see the effect size and p-value before they make the decision to publish; all they should care about is that the experiment makes sense and the proper procedure was followed.

If we did all that, the replication crisis could be almost completely resolved, as the incentives would be realigned to more closely match the genuine search for truth.

Alas, I don’t see universities or governments or research journals having the political will to actually make such changes, which is very sad indeed.

Yes, but what about the next 5000 years?

JDN 2456991 PST 1:34.

This week’s post will be a bit different: I have a book to review. It’s called Debt: The First 5000 Years, by David Graeber. The book is long (about 400 pages plus endnotes), but such a compelling read that the hours melt away. “The First 5000 Years” is an incredibly ambitious subtitle, but Graeber actually manages to live up to it quite well; he really does tell us a story that is more or less continuous from 3000 BC to the present.

So who is this David Graeber fellow, anyway? None will be surprised that he is a founding member of Occupy Wall Street—he was in fact the man who coined “We are the 99%”. (As I’ve studied inequality more, I’ve learned he made a mistake; it really should be “We are the 99.99%”.) I had expected him to be a historian, or an economist; but in fact he is an anthropologist. He is looking at debt and its surrounding institutions in terms of a cultural ethnography—he takes a step outside our own cultural assumptions and tries to see them as he might if he were encountering them in a foreign society. This is what gives the book its freshest parts; Graeber recognizes, as few others seem willing to, that our institutions are not the inevitable product of impersonal deterministic forces, but decisions made by human beings.

(On a related note, I was pleasantly surprised to see in one of my economics textbooks yesterday a neoclassical economist acknowledging that the best explanation we have for why Botswana is doing so well—low corruption, low poverty by African standards, high growth—really has to come down to good leadership and good policy. For once they couldn’t remove all human agency and mark it down to grand impersonal ‘market forces’. It’s odd how strong the pressure is to do that, though; I even feel it in myself: Saying that civil rights progressed so much because Martin Luther King was a great leader isn’t very scientific, is it? Well, if that’s what the evidence points to… why not? At what point did ‘scientific’ come to mean ‘human beings are helplessly at the mercy of grand impersonal forces’? Honestly, doesn’t the link between science and technology make matters quite the opposite?)

Graeber provides a new perspective on many things we take for granted: in the introduction there is one particularly compelling passage where he starts talking—with a fellow left-wing activist—about the damage that has been done to the Third World by IMF policy, and she immediately interjects: “But surely one has to pay one’s debts.” The rest of the book is essentially an elaboration on why we say that—and why it is absolutely untrue.

Graeber has also made me think quite a bit differently about Medieval society and in particular Medieval Islam; this was certainly the society in which the writings of Plato and Aristotle were preserved and algebra was invented, so it couldn’t have been all bad. But in fact, assuming that Graeber’s account is accurate, Muslim societies in the 14th century actually had something approaching the idyllic fair and free market to which all neoclassicists aspire. They did so, however, by rejecting one of the core assumptions of neoclassical economics, and you can probably guess which one: the assumption that human beings are infinite identical psychopaths. Instead, merchants in Medieval Muslim society were held to high moral standards, and their livelihood was largely based upon the reputation they could maintain as upstanding good citizens. Theoretically they couldn’t even lend at interest, though in practice they had workarounds (like payment in installments that total slightly higher than the original price) that amounted to low rates of interest. They did not, however, have anything approaching the levels of interest that we have today in credit cards at 29% or (it still makes me shudder every time I think about it) payday loans at 400%. Paying on installments to a Muslim merchant would make you end up paying about a 2% to 4% rate of interest—which sounds to me like almost exactly what it should be, maybe even a bit low because we’re not taking inflation into account. In any case, the moral standards of society kept people from getting too poor or too greedy, and as a result there was little need for enforcement by the state. In spite of myself I have to admit that may not have been possible without the theological enforcement provided by Islam.
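The installment arithmetic above is easy to check. A minimal sketch with hypothetical numbers (the 100-unit price, the 104 total, and the two-year term are all my made-up illustration of the kind of workaround described):

```python
def implied_annual_rate(cash_price, total_paid, years):
    """Annualized interest rate implied by paying `total_paid` in
    installments for a good whose up-front cash price is `cash_price`."""
    return (total_paid / cash_price) ** (1 / years) - 1

# Hypothetical: installments totaling 104 for a 100-unit good, paid over two years.
rate = implied_annual_rate(100, 104, 2)
print(f"{rate:.1%}")  # ~2.0% per year
```

A small markup spread over a couple of years really does land in that 2% to 4% range—no formal interest required.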
Graeber also avoids one of the most common failings of anthropologists, the cultural relativism that makes them unwilling to criticize any cultural practice as immoral even when it obviously is (except usually making exceptions for modern Western capitalist imperialism). While at times I can see he was tempted to go that way, he generally avoids it; several times he goes out of his way to point out how women were sold into slavery in hunter-gatherer tribes and how that contributed to the institutions of chattel slavery that developed once Western powers invaded.

Anthropologists have another common failing that I don’t think he avoids as well, which is a primitivist bent in which anthropologists speak of ancient societies as idyllic and modern societies as horrific. That’s part of why I said ‘if Graeber’s account is accurate,’ because I’m honestly not sure it is. I’ll need to look more into the history of Medieval Islam to be sure. Graeber spends a great deal of time talking about how our current monetary system is fundamentally based on threats of violence—but I can tell you that I have honestly never been threatened with violence over money in my entire life. Not by the state, not by individuals, not by corporations. I haven’t even been mugged—and that’s the sort of thing the state exists to prevent. (Not that I’ve never been threatened with violence—but so far it’s always been either something personal, or, more often, bigotry against LGBT people.) If violence is the foundation of our monetary system, then it’s hiding itself extraordinarily well. Granted, the violence probably pops up more if you’re near the very bottom, but I think I speak for most of the American middle class when I say that I’ve been through a lot of financial troubles, but none of them have involved any guns pointed at my head. And you can’t counter this by saying that we theoretically have laws on the books that allow you to be arrested for financial insolvency—because that’s always been true, in fact it’s less true now than any other point in history, and Graeber himself freely admits this. The important question is how many people actually get violence enforced upon them, and at least within the United States that number seems to be quite small.

Graeber describes the true story of the emergence of money historically, as the result of military conquest—a way to pay soldiers and buy supplies when in an occupied territory where nobody trusts you. He demolishes the (always fishy) argument that money emerged as a way of mediating a barter system: If I catch fish and he makes shoes and I want some shoes but he doesn’t want fish right now, why not just make a deal to pay later? This is of course exactly what they did. Indeed Graeber uses the intentionally provocative word communism to describe the way that resources are typically distributed within families and small villages—because it basically is “from each according to his ability, to each according to his need”. (I would probably use the less-charged word “community”, but I have to admit that those come from the same Latin root.) He also describes something I’ve tried to explain many times to neoclassical economists to no avail: There is equally a communism of the rich, a solidarity of deal-making and collusion that undermines the competitive market that is supposed to keep the rich in check. Graeber points out that wine, women and feasting have been common parts of deals between villages throughout history—and yet are still common parts of top-level business deals in modern capitalism. Even as we claim to be atomistic rational agents we still fall back on the community norms that guided our ancestors.

Another one of my favorite lines in the book is on this very subject: “Why, if I took a free-market economic theorist out to an expensive dinner, would that economist feel somewhat diminished—uncomfortably in my debt—until he had been able to return the favor? Why, if he were feeling competitive with me, would he be inclined to take me someplace even more expensive?” That doesn’t make any sense at all under the theory of neoclassical rational agents (an infinite identical psychopath would just enjoy the dinner—free dinner!—and might never speak to you again), but it makes perfect sense under the cultural norms of community in which gifts form bonds and generosity is a measure of moral character. I also got thinking about how introducing money directly into such exchanges can change them dramatically: For instance, suppose I took my professor out to a nice dinner with drinks in order to thank him for writing me recommendation letters. This seems entirely appropriate, right? But now suppose I just paid him $30 for writing the letters. All of a sudden it seems downright corrupt. But the dinner check said $30 on it! My bank account debit is the same! He might go out and buy a dinner with it! What’s the difference? I think the difference is that the dinner forms a relationship that ties the two of us together as individuals, while the cash creates a market transaction between two interchangeable economic agents. By giving my professor cash I would effectively be saying that we are infinite identical psychopaths.

While Graeber doesn’t get into it, a similar argument also applies to gift-giving on holidays and birthdays. There seriously is—I kid you not—a neoclassical economist who argues that Christmas is economically inefficient and should be abolished in favor of cash transfers. He wrote a book about it. He literally does not understand the concept of gift-giving as a way of sharing experiences and solidifying relationships. This man must be such a joy to have around! I can imagine it now: “Will you play catch with me, Daddy?” “Daddy has to work, but don’t worry dear, I hired a minor league catcher to play with you. Won’t that be much more efficient?”

This sort of thing is what makes Debt such a compelling read, and Graeber does make some good points and presents a wealth of historical information. So now it’s time to talk about what’s wrong with the book, the things Graeber gets wrong.

First of all, he’s clearly quite ignorant about the state-of-the-art in economics, and I’m not even talking about the sort of cutting-edge cognitive economics experiments I want to be doing. (When I read what Molly Crockett has been working on lately in the neuroscience of moral judgments, I began to wonder if I should apply to University College London after all.)

No, I mean Graeber is ignorant of really basic stuff, like the nature of government debt—almost nothing of what I said in that post is controversial among serious economists; the equations certainly aren’t, though some of the interpretation and application might be. (One particularly likely sticking point called “Ricardian equivalence” is something I hope to get into in a future post. You already know the refrain: Ricardian equivalence only happens if you live in a world of infinite identical psychopaths.) Graeber has internalized the Republican talking points about how this is money our grandchildren will owe to China; it’s nothing of the sort, and most of it we “owe” to ourselves. In a particularly baffling passage Graeber talks about how there are no protections for creditors of the US government, when creditors of the US government have literally never suffered a single late payment in the last 200 years. There are literally no creditors in the world who are more protected from default—and only a few others that reach the same level, such as creditors to the Bank of England.

In an equally bizarre aside he also says in one endnote that “mainstream economists” favor the use of the gold standard and are suspicious of fiat money; exactly the opposite is the case. Mainstream economists—even the neoclassicists with whom I have my quarrels—are in almost total agreement that a fiat monetary system managed by a central bank is the only way to have a stable money supply. The gold standard is the pet project of a bunch of cranks and quacks like Peter Schiff. Like most quacks, they are quite vocal; but they are by no means supported by academic research or respected by top policymakers. (I suppose the latter could change if enough Tea Party Republicans get into office, but so far even that hasn’t happened and Janet Yellen continues to manage our fiat money supply.) In fact, it’s basically a consensus among economists that the gold standard caused the Great Depression—that in addition to some triggering event (my money is on Minsky-style debt deflation—and so is Krugman’s), the inability of the money supply to adjust was the reason why the world economy remained in such terrible shape for such a long period. The gold standard has not been a mainstream position among economists since roughly the mid-1980s—before I was born.

He makes this really bizarre argument about how because Korea, Japan, Taiwan, and West Germany are major holders of US Treasury bonds and became so under US occupation—which is indisputably true—that means that their development was really just some kind of smokescreen to sell more Treasury bonds. First of all, we’ve never had trouble selling Treasury bonds; people are literally accepting negative interest rates in order to have them right now. More importantly, Korea, Japan, Taiwan, and West Germany—those exact four countries, in that order—are the greatest economic success stories in the history of the human race. West Germany was rebuilt literally from rubble to become once again a world power. The Asian Tigers were even more impressive, raised from the most abject Third World poverty to full First World high-tech economy status in a few generations. If this is what happens when you buy Treasury bonds, we should all buy as many Treasury bonds as we possibly can. And while that seems intuitively ridiculous, I have to admit, China’s meteoric rise also came with an enormous investment in Treasury bonds. Maybe the secret to economic development isn’t physical capital or exports or institutions; nope, it’s buying Treasury bonds. (I don’t actually believe this, but the correlation is there, and it totally undermines Graeber’s argument that buying Treasury bonds makes you some kind of debt peon.)

Speaking of correlations, Graeber is absolutely terrible at econometrics; he doesn’t even seem to grasp the most basic concepts. On page 366 he shows this graph of the US defense budget and the US federal debt side by side in order to argue that the military is the primary reason for our national debt. First of all, he doesn’t even correct for inflation—so most of the exponential rise in the two curves is simply the purchasing power of the dollar declining over time. Second, he doesn’t account for GDP growth, which is most of what’s left after you account for inflation. He has two nonstationary time-series with obvious exponential trends and doesn’t even formally correlate them, let alone actually perform the proper econometrics to show that they are cointegrated. I actually think they probably are cointegrated, and that a large portion of national debt is driven by military spending, but Graeber’s graph doesn’t even begin to make that argument. You could just as well graph the number of murders and the number of cheesecakes sold, each on an annual basis; both of them would rise exponentially with population, thus proving that cheesecakes cause murder (or murders cause cheesecakes?).
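To make the cheesecake point concrete, here is a minimal sketch (all numbers invented, nothing from Graeber's actual graph) of how two completely independent exponentially trending series come out strongly correlated:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(60)  # 60 hypothetical years

# Two independent series: pure exponential trends plus random-walk noise.
# Neither causes the other; they share nothing but growth.
defense = np.exp(0.05 * t + rng.normal(0, 0.05, 60).cumsum())
cheesecakes = np.exp(0.04 * t + rng.normal(0, 0.05, 60).cumsum())

r = np.corrcoef(defense, cheesecakes)[0, 1]
print(f"correlation: {r:.3f}")  # high, despite no causal link whatsoever
```

This is exactly why nonstationary series need differencing or a formal cointegration test before anyone gets to tell a causal story about them.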

And then where Graeber really loses me is when he develops his theory of how modern capitalism and the monetary and debt system that go with it are fundamentally corrupt to the core and must be abolished and replaced with something totally new. First of all, he never tells us what that new thing is supposed to be. You’d think in 400 pages he could at least give us some idea, but no; nothing. He apparently wants us to do “not capitalism”, which is an infinite space of possible systems, some of which might well be better, but none of which can actually be implemented without more specific ideas. Many have declared that Occupy has failed—I am convinced that those who say this appreciate neither how long it takes social movements to make change, nor how effective Occupy has already been at changing our discourse, so that Capital in the Twenty-First Century can be a bestseller and the President of the United States can mention income inequality and economic mobility in his speeches—but insofar as Occupy has failed to achieve its goals, it seems to me that this is because it was never clear just what Occupy’s goals were to begin with. Now that I’ve read Graeber’s work, I understand why: He wanted it that way. He didn’t want to go through the hard work (which is also risky: you could be wrong) of actually specifying what this new economic system would look like; instead he’d prefer to find flaws in the current system and then wait for someone else to figure out how to fix them. That has always been the easy part; any human system comes with flaws. The hard part is actually coming up with a better system—and Graeber doesn’t seem willing to even try.

I don’t know exactly how accurate Graeber’s historical account is, but it seems to check out, and even make sense of some things that were otherwise baffling about the sketchy account of the past I had previously learned. Why were African tribes so willing to sell their people into slavery? Well, because they didn’t think of it as their people—they were selling captives from other tribes taken in war, which is something they had done since time immemorial in the form of slaves for slaves rather than slaves for goods. Indeed, it appears that trade itself emerged originally as what Graeber calls a “human economy”, in which human beings are literally traded as a fungible commodity—but always humans for humans. When money was introduced, people continued selling other people, but now it was for goods—and apparently most of the people sold were young women. So much of the Bible makes more sense that way: Why would Job be all right with getting new kids after losing his old ones? Kids are fungible! Why would people sell their daughters for goats? We always sell women! How quickly do we flirt with the unconscionable, when first we say that all is fungible.

One of Graeber’s central points is that debt came long before money—you owed people apples or hours of labor long before you ever paid anybody in gold. Money only emerged when debt became impossible to enforce, usually because trade was occurring between soldiers and the villages they had just conquered, so nobody was going to trust anyone to pay anyone back. Immediate spot trades were the only way to ensure that trades were fair in the absence of trust or community. In other words, the first use of gold as money was really using it as collateral. All of this makes a good deal of sense, and I’m willing to believe that’s where money originally came from.

But then Graeber tries to use this horrific and violent origin of money—in war, rape, and slavery, literally some of the worst things human beings have ever done to one another—as an argument for why money itself is somehow corrupt and capitalism with it. This is nothing short of a genetic fallacy: I could agree completely that money had this terrible origin, and yet still say that money is a good thing and worth preserving. (Indeed, I’m rather strongly inclined to say exactly that.) The fact that it was born of violence does not mean that it is violence; we too were born of violence, literally millions of years of rape and murder. It is astronomically unlikely that any one of us does not have a murderer somewhere in our ancestry. (Supposedly I’m descended from Julius Caesar, hence my last name Julius—not sure I really believe that—but if so, there you go, a murderer and tyrant.) Are we therefore all irredeemably corrupt? No. Where you come from does not decide what you are or where you are going.

In fact, I could even turn the argument around: Perhaps money was born of violence because it is the only alternative to violence; without money we’d still be trading our daughters away because we had no other way of trading. I don’t think I believe that either; but it should show you how fragile an argument from origin really is.

This is why the whole book gives this strange feeling of non sequitur; all this history is very interesting and enlightening, but what does it have to do with our modern problems? Oh. Nothing, that’s what. The connection you saw doesn’t make any sense, so maybe there’s just no connection at all. Well all right then. This was an interesting little experience.

This is a shame, because I do think there are important things to be said about the nature of money culturally, philosophically, morally—but Graeber never gets around to saying them, seeming to think that merely pointing out money’s violent origins is a sufficient indictment. It’s worth talking about the fact that money is something we made, something we can redistribute or unmake if we choose. I had such high expectations after I read that little interchange about the IMF: Yes! Finally, someone gets it! No, you don’t have to repay debts if that means millions of people will suffer! But then he never really goes back to that. The closest he veers toward an actual policy recommendation is at the very end of the book, a short section entitled “Perhaps the world really does owe you a living” in which he very briefly suggests—doesn’t even argue for, just suggests—that perhaps people do deserve a certain basic standard of living even if they aren’t working. He could have filled 50 pages arguing the ins and outs of a basic income with graphs and charts and citations of experimental data—but no, he just spends a few paragraphs proposing the idea and then ends the book. (I guess I’ll have to write that chapter myself; I think it would go well in The End of Economics, which I hope to get back to writing in a few months—while I also hope to finally publish my already-written book The Mathematics of Tears and Joy.)

If you want to learn about the history of money and debt over the last 5000 years, this is a good book to do so—and that is, after all, what the title said it would be. But if you’re looking for advice on how to improve our current economic system for the benefit of all humanity, you’ll need to look elsewhere.

And so in the grand economic tradition of reducing complex systems into a single numeric utility value, I rate Debt: The First 5000 Years a 3 out of 5.

Pareto Efficiency: Why we need it—and why it’s not enough

JDN 2456914 PDT 11:45.

I already briefly mentioned the concept in an earlier post, but Pareto-efficiency is so fundamental to both ethics and economics that I decided I would spend some more time explaining exactly what it’s about.

This is the core idea: A system is Pareto-efficient if you can’t make anyone better off without also making someone else worse off. It is Pareto-inefficient if the opposite is true, and you could improve someone’s situation without hurting anyone else.

Improving someone’s situation without harming anyone else is called a Pareto-improvement. A system is Pareto-efficient if and only if there are no possible Pareto-improvements.

Zero-sum games are always Pareto-efficient. If the game is about how we distribute the same $10 between two people, any dollar I get is a dollar you don’t get, so no matter what we do, we can’t make either of us better off without harming the other. You may have ideas about what the fair or right solution is—and I’ll get back to that shortly—but all possible distributions are Pareto-efficient.

Where Pareto-efficiency gets interesting is in nonzero-sum games. The most famous and most important such game is the so-called Prisoner’s Dilemma; I don’t like the standard story to set up the game, so I’m going to give you my own. Two corporations, Alphacomp and Betatech, make PCs. The computers they make are of basically the same quality and neither is a big brand name, so very few customers are going to choose on anything except price. Combining labor, materials, equipment and so on, it costs each company $300 to manufacture a new PC, and most customers are willing to buy a PC as long as it’s no more than $1000. Suppose there are 1000 customers buying. Now the question is, what price do they set? They would both make the most profit if they set the price at $1000, because customers would still buy and they’d make $700 on each unit; splitting the 1000 customers evenly, each company sells 500 PCs and makes $350,000. But now suppose Alphacomp sets a price at $1000; Betatech could undercut them by making the price $999 and sell twice as many PCs, making $699,000. And then Alphacomp could respond by setting the price at $998, and so on. The only stable end result if they are both selfish profit-maximizers—the Nash equilibrium—is when the price they both set is $301, meaning each company only profits $1 per PC, making a mere $500. Indeed, this result is what we call in economics perfect competition. This is great for consumers, but not so great for the companies.

If you focus on the most important choice, $1000 versus $999—to collude or to compete—we can set up a table of how much each company would profit by making that choice (a payoff matrix or normal form game in game theory jargon).

                  A: $999               A: $1000
B: $999     A: $349k, B: $349k     A: $0,    B: $699k
B: $1000    A: $699k, B: $0        A: $350k, B: $350k

Obviously the choice that makes both companies best-off is for both companies to make the price $1000; that is Pareto-efficient. But it’s also Pareto-efficient for Alphacomp to choose $999 and Betatech to choose $1000, because then Alphacomp sells twice as many computers. We have made someone worse off—Betatech—but it’s still Pareto-efficient because we couldn’t give Betatech back what they lost without taking some of what Alphacomp gained.

There’s only one option that’s not Pareto-efficient: If both companies charge $999, they could both have made more money if they’d charged $1000 instead. The problem is, that’s not the Nash equilibrium; the stable state is the one where they set the price lower.
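A quick sketch can verify this mechanically; the payoffs (in thousands of dollars) come from the game above, and the helper names are just mine:

```python
from itertools import product

# Profit in $1000s for each (Alphacomp price, Betatech price) pair
payoffs = {
    (999, 999):   (349, 349),
    (999, 1000):  (699, 0),
    (1000, 999):  (0, 699),
    (1000, 1000): (350, 350),
}
prices = [999, 1000]

def is_nash(a, b):
    # Nash: neither company can gain by unilaterally changing its price
    pa, pb = payoffs[(a, b)]
    return (all(payoffs[(a2, b)][0] <= pa for a2 in prices) and
            all(payoffs[(a, b2)][1] <= pb for b2 in prices))

nash = [s for s in product(prices, prices) if is_nash(*s)]
print(nash)  # [(999, 999)]: the only stable outcome is mutual undercutting

# ...and yet both companies would do strictly better at (1000, 1000)
assert all(payoffs[(1000, 1000)][i] > payoffs[(999, 999)][i] for i in (0, 1))
```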

This means that the only case that isn’t Pareto-efficient is the one the system will naturally trend toward if both companies are selfish profit-maximizers. (And while most human beings are nothing like that, most corporations actually get pretty close. They aren’t infinite, but they’re huge; they aren’t identical, but they’re very similar; and they basically are psychopaths.)

In jargon, we say the Nash equilibrium of a Prisoner’s Dilemma is Pareto-inefficient. That one sentence is basically why John Nash was such a big deal; up until that point, everyone had assumed that if everyone acted in their own self-interest, the end result would have to be Pareto-efficient; Nash proved that this isn’t true at all. Everyone acting in their own self-interest can doom us all.

It’s not hard to see why Pareto-efficiency would be a good thing: if we can make someone better off without hurting anyone else, why wouldn’t we? What’s harder for most people—and even most economists—to understand is that just because an outcome is Pareto-efficient, that doesn’t mean it’s good.

I think this is easiest to see in zero-sum games, so let’s go back to my little game of distributing the same $10. Let’s say it’s all within my power to choose—this is called the ultimatum game (strictly speaking, when you have no say at all it’s the dictator game; the version where you can push back comes in a moment). If I take $9 for myself and only give you $1, is that Pareto-efficient? It sure is; for me to give you any more, I’d have to lose some for myself. But is it fair? Obviously not! The fair option is for me to go fifty-fifty, $5 and $5; and maybe you’d forgive me if I went sixty-forty, $6 and $4. But if I take $9 and only offer you $1, you know you’re getting a raw deal.

Actually as the game is often played, you have the choice to say, “Forget it; if that’s your offer, we both get nothing.” In that case the game is nonzero-sum, and the choice you’ve just taken is not Pareto-efficient! Neoclassicists are typically baffled at the fact that you would turn down that free $1, paltry as it may be; but I’m not baffled at all, and I’d probably do the same thing in your place. You’re willing to pay that $1 to punish me for being so stingy. And indeed, if you allow this punishment option, guess what? People aren’t as stingy! If you play the game without the rejection option, people typically take about $7 and give about $3 (still fairer than the $9/$1, you may notice; most people aren’t psychopaths), but if you allow it, people typically take about $6 and give about $4. Now, these are pretty small sums of money, so it’s a fair question what people might do if $100,000 were on the table and they were offered $10,000. But that doesn’t mean people aren’t willing to stand up for fairness; it just means that they’re only willing to go so far. They’ll take a $1 hit to punish someone for being unfair, but that $10,000 hit is just too much. I suppose this means most of us do what The Guess Who told us: “You can sell your soul, but don’t you sell it too cheap!”

Now, let’s move on to the more complicated—and more realistic—scenario of a nonzero-sum game. In fact, let’s make the “game” a real-world situation. Suppose Congress is debating a bill that would introduce a 70% marginal income tax on the top 1% to fund a basic income. (Please, can we debate that, instead of proposing a balanced-budget amendment that would cripple US fiscal policy indefinitely and lead to a permanent depression?)

This tax would raise about 14% of GDP in revenue, or about $2.4 trillion a year (yes, really). It would then provide, for every man, woman and child in America, a $7000 per year income, no questions asked. For a family of four, that would be $28,000, which is bound to make their lives better.
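The per-person arithmetic roughly checks out; a back-of-the-envelope sketch (the 318 million population figure is my assumption, not from the post):

```python
revenue = 2.4e12      # $2.4 trillion per year in new tax revenue
population = 318e6    # approximate US population (assumed)
per_person = revenue / population
print(round(per_person))  # about $7,500: in the ballpark of the $7000 figure
```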

But of course it would also take a lot of money from the top 1%; Mitt Romney would only make $6 million a year instead of $20 million, and Bill Gates would have to settle for $2.4 billion a year instead of $8 billion. Since it’s the whole top 1%, it would also hurt a lot of people with more moderate high incomes, like your average neurosurgeon or Paul Krugman, who each make about $500,000 a year. About $100,000 of that is above the cutoff for the top 1%, so they’d each have to pay about $70,000 more than they currently do in taxes; if they were paying $175,000, they’d now pay $245,000—once taking home $325,000, now only $255,000. (Probably not as big a difference as you thought, right? Most people do not seem to understand how marginal tax rates work, as evinced by “Joe the Plumber” who thought that if he made $250,001 he would be taxed at the top rate on the whole amount—no, just that last $1.)
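The marginal-rate point is worth spelling out in code. A minimal sketch with illustrative brackets (these thresholds and rates are my assumptions, not the actual tax code):

```python
def marginal_tax(income, brackets):
    """brackets: list of (threshold, rate); each rate applies only
    to the slice of income above its threshold."""
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

# Illustrative: 35% on everything up to $250k, 39.6% on the slice above it
brackets = [(0, 0.35), (250_000, 0.396)]

# "Joe the Plumber": crossing $250,000 taxes only the marginal dollar
extra = marginal_tax(250_001, brackets) - marginal_tax(250_000, brackets)
print(extra)  # about $0.40 more tax, not a jump of tens of thousands
```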

You can even suppose that it would hurt the economy as a whole, though in fact there’s no evidence of that—we had tax rates like this in the 1960s and our economy did just fine. The basic income itself would inject so much spending into the economy that we might actually see more growth. But okay, for the sake of argument let’s suppose it also drops our per-capita GDP by 5%, from $53,000 to $50,300; that really doesn’t sound so bad, and any bigger drop than that is a totally unreasonable estimate based on prejudice rather than data. To keep the same tax rate, we might have to drop the basic income a bit too, to say $6600 instead of $7000.

So, this is not a Pareto-improvement; we’re making some people better off, but others worse off. In fact, the way economists usually estimate Pareto-efficiency based on so-called “economic welfare”, they really just count up the total number of dollars and divide by the number of people and call it a day; so if we lose 5% in GDP they would register this as a Pareto-loss. (Yes, that’s a ridiculous way to do it for obvious reasons—$1 to Mitt Romney isn’t worth as much as it is to you and me—but it’s still how it’s usually done.)

But does that mean that it’s a bad idea? Not at all. In fact, if you assume that the real value—the utility—of a dollar decreases exponentially with each dollar you have, this policy could almost double the total happiness in US society. If you use a logarithm instead, it’s not quite as impressive; it’s only about a 20% improvement in total happiness—in other words, “only” making as much difference to the happiness of Americans from 2014 to 2015 as the entire period of economic growth from 1900 to 2000.
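Here is a toy version of that calculation under logarithmic utility; the two-group income distribution is entirely my invention, just to show the direction of the effect:

```python
import math

# Stylized society: 99 people at $40,000 and one at $1,300,000 (assumed)
incomes = [40_000] * 99 + [1_300_000]

def total_utility(xs):
    # log utility: each extra dollar matters less the richer you are
    return sum(math.log(x) for x in xs)

before = total_utility(incomes)

# Pure transfer: tax $6,600 * 99 = $653,400 from the top earner,
# give $6,600 to each of the other 99 people
after = total_utility([40_000 + 6_600] * 99 + [1_300_000 - 6_600 * 99])

print(after > before)  # True: total utility rises, total dollars unchanged
```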

If right now you’re thinking, “Wow! Why aren’t we doing that?” that’s good, because I’ve been thinking the same thing for years. And maybe if we keep talking about it enough we can get people to start voting on it and actually make it happen.

But in order to make things like that happen, we must first get past the idea that Pareto-efficiency is the only thing that matters in moral decisions. And once again, that means overcoming the standard modes of thinking in neoclassical economics.

Something strange happened to economics in about 1950. Before that, economists from Marx to Smith to Keynes were always talking about differences in utility, marginal utility of wealth, how to maximize utility. But then economists stopped being comfortable talking about happiness, deciding (for reasons I still do not quite grasp) that it was “unscientific”, so they eschewed all discussion of the subject. Since we still needed to know why people choose what they do, a new framework was created revolving around “preferences”, which are a simple binary relation—you either prefer it or you don’t, you can’t like it “a lot more” or “a little more”—that is supposedly more measurable and therefore more “scientific”. But under this framework, there’s no way to say that giving a dollar to a homeless person makes a bigger difference to them than giving the same dollar to Mitt Romney, because a “bigger difference” is something you’ve defined out of existence. All you can say is that each would prefer to receive the dollar, and that both Mitt Romney and the homeless person would, given the choice, prefer to be Mitt Romney. While both of these things are true, it does seem to be kind of missing the point, doesn’t it?

There are stirrings of returning to actual talk about measuring actual (“cardinal”) utility, but still preferences (so-called “ordinal utility”) are the dominant framework. And in this framework, there’s really only one way to evaluate a situation as good or bad, and that’s Pareto-efficiency.

Actually, that’s not quite right; John Rawls cleverly came up with a way around this problem, by using the idea of “maximin”—maximize the minimum. Since each would prefer to be Romney, given the chance, we can say that the homeless person is worse off than Mitt Romney, and therefore say that it’s better to make the homeless person better off. We can’t say how much better, but at least we can say that it’s better, because we’re raising the floor instead of the ceiling. This is certainly a dramatic improvement, and on these grounds alone you can argue for the basic income—your floor is now explicitly set at the $6600 per year of the basic income.

But is that really all we can say? Think about how you make your own decisions; do you only speak in terms of strict preferences? I like Coke more than Pepsi; I like massages better than being stabbed. If preference theory is right, then there is no greater distance in the latter case than the former, because this whole notion of “distance” is unscientific. I guess we could expand the preference over groups of goods (baskets as they are generally called), and say that I prefer the set “drink Pepsi and get a massage” to the set “drink Coke and get stabbed”, which is certainly true. But do we really want to have to define that for every single possible combination of things that might happen to me? Suppose there are 1000 things that could happen to me at any given time, which is surely conservative. In that case there are 2^1000 ≈ 10^300 possible combinations. If I were really just reading off a table of unrelated preference relations, there wouldn’t be room in my brain—or my planet—to store it, nor enough time in the history of the universe to read it. Even imposing rational constraints like transitivity doesn’t shrink the set anywhere near small enough—at best maybe now it’s 10^20, well done; now I theoretically could make one decision every billion years or so. At some point doesn’t it become a lot more parsimonious—dare I say, more scientific—to think that I am using some more organized measure than that? It certainly feels like I am; even if I couldn’t exactly quantify it, I can definitely say that some differences in my happiness are large and others are small. The mild annoyance of drinking Pepsi instead of Coke will melt away in the massage, but no amount of Coke deliciousness is going to overcome the agony of being stabbed.

And indeed if you give people surveys and ask them how much they like things or how strongly they feel about things, they have no problem giving you answers out of 5 stars or on a scale from 1 to 10. Very few survey participants ever write in the comments box: “I was unable to take this survey because cardinal utility does not exist and I can only express binary preferences.” A few do write 1s and 10s on everything, but even those are fairly rare. This “cardinal utility” that supposedly doesn’t exist is the entire basis of the scoring system on Netflix and Amazon. In fact, if you use cardinal utility in voting, it is mathematically provable that you have the best possible voting system, which may have something to do with why Netflix and Amazon like it. (That’s another big “Why aren’t we doing this already?”)

If you can actually measure utility in this way, then there’s really not much reason to worry about Pareto-efficiency. If you just maximize utility, you’ll automatically get a Pareto-efficient result; but the converse is not true because there are plenty of Pareto-efficient scenarios that don’t maximize utility. Thinking back to our ultimatum game, all options are Pareto-efficient, but you can actually prove that the $5/$5 choice is the utility-maximizing one, if the two players have the same amount of wealth to start with. (Admittedly for those small amounts there isn’t much difference; but that’s also not too surprising, since $5 isn’t going to change anybody’s life.) And if they don’t—suppose I’m rich and you’re poor and we play the game—well, maybe I should give you more, precisely because we both know you need it more.
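That proof is easy to sketch numerically for log utility; the $100 starting wealth is an assumption (any equal starting wealth gives the same answer):

```python
import math

wealth = 100  # both players start with the same wealth (assumed)

def total_utility(my_share):
    # sum of both players' log utilities after splitting the $10
    return math.log(wealth + my_share) + math.log(wealth + 10 - my_share)

best = max(range(11), key=total_utility)
print(best)  # 5: the even split maximizes total utility
```

Concavity does the work here: because log utility is strictly concave, moving a dollar from the richer player to the poorer one always raises the total, so the maximum sits at the symmetric split.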

Perhaps even more significant, you can move from a Pareto-inefficient scenario to a Pareto-efficient one and make things worse in terms of utility. The scenario in which the top 1% are as wealthy as they can possibly be and the rest of us live on scraps may in fact be Pareto-efficient; but that doesn’t mean any of us should be interested in moving toward it (though sadly, we kind of are). If you’re only measuring in terms of Pareto-efficiency, your attempts at improvement can actually make things worse. It’s not that the concept is totally wrong; Pareto-efficiency is, other things equal, good; but other things are never equal.

So that’s Pareto-efficiency—and why you really shouldn’t care about it that much.

Schools of Thought

If you’re at all familiar with the schools of thought in economics, you may wonder where I stand. Am I a Keynesian? Or perhaps a post-Keynesian? A New Keynesian? A neo-Keynesian (not to be confused)? A neo-paleo-Keynesian? Or am I a Monetarist? Or a Modern Monetary Theorist? Or perhaps something more heterodox, like an Austrian or a Sraffian or a Marxist?

No, I am none of those things. I guess if you insist on labeling, you could call me a “cognitivist”; and in terms of policy I tend to agree with the Keynesians, but I also like the Modern Monetary Theorists.

But really I think this sort of labeling of ‘schools of thought’ is exactly the problem. There shouldn’t be schools of thought; the universe only works one way. When you don’t know the answer, you should have the courage to admit you don’t know. And once we actually have enough evidence to know something, people need to stop disagreeing about it. If you continue to disagree with what the evidence has shown, you’re not a ‘school of thought’; you’re just wrong.

The whole notion of ‘schools of thought’ smacks of cultural relativism; asking what the ‘Keynesian’ answer to a question is (and if you take enough economics classes I guarantee you will be asked exactly that) is rather like asking what religious beliefs prevail in a particular part of the world. It might be worth asking for some historical reason, but it’s not a question about economics; it’s a question about economic beliefs. This is the difference between asking how people believe the universe was created, and actually being a cosmologist. True, schools of thought aren’t as geographically localized as religions; but they do say the words ‘saltwater’ and ‘freshwater’ for a reason. I’m not all that interested in the Shinto myths versus the Hindu myths; I want to be a cosmologist.

At best, schools of thought are a sign of a field that hasn’t fully matured. Perhaps there were Newtonians and Einsteinians in 1910; but by 1930 there were just Einsteinians and bad physicists. Are there ‘schools of thought’ in physics today? Well, there are string theorists. But string theory hasn’t been a glorious success of physics advancement; on the contrary, it’s been a dead end from which the field has somehow failed to extricate itself for almost 50 years.

So where does that put us in economics? Well, some of the schools of thought are clearly dead ends, every bit as unfounded as string theory but far worse because they have direct influences on policy. String theory hasn’t ever killed anyone; bad economics definitely has. (How, you ask? Exposure to hazardous chemicals that were deregulated; poverty and starvation due to cuts to social welfare programs; and of course the Second Depression. I could go on.)

The worst offender is surely Austrian economics and its crazy cousin Randian libertarianism. Ayn Rand literally ruled a cult; Friedrich Hayek never took it quite that far, but there is certainly something cultish about Austrian economists. They insist that economics must be derived a priori, without recourse to empirical evidence (or at least that’s what they say when you point out that all the empirical evidence is against them). They are fond of ridiculous hyperbole about an inevitable slippery slope between raising taxes on capital gains and turning into Stalin’s Soviet Union, as well as rhetorical questions I find myself answering opposite to how they want (like “For are taxes not simply another form of robbery?” and “Once we allow the government to regulate what man can do, will they not continue until they control all aspects of our lives?”). They even co-opt and distort cognitivist concepts like herd instinct and asymmetric information; somehow Austrians think that asymmetric information is an argument for why markets are more efficient than government, even though Akerlof’s point was that asymmetric information is why we need regulations.

Marxists are on the opposite end of the political spectrum, but their ideas are equally nonsensical. (Marx himself was a bit more reasonable, but even he recognized they were going too far: “All I know is that I am not a Marxist.”) They have this whole “labor theory of value” thing where the value of something is the amount of work you have to put into it. This would mean that labor-saving innovations are pointless, because they devalue everything; it would also mean that putting an awful lot of work into something useless would nevertheless somehow make it enormously valuable. Really, it would never be worth doing much of anything, because the value you get out of something is exactly equal to the work you put in. Marxists also tend to think that what the world needs is a violent revolution to overthrow the bondage of capitalism; this is an absolutely terrible idea. During the transition it would be one of the bloodiest conflicts in history; afterward you’d probably get something like the Soviet Union or modern-day Venezuela. Even if you did somehow establish your glorious Communist utopia, you’d have destroyed so much productive capacity in the process that you’d make everyone poor. Socialist reforms make sense—and have worked well in Europe, particularly Scandinavia. But socialist revolution is a good way to get millions of innocent people killed.

Sraffians are also quite silly; they have this bizarre notion that capital must be valued as “dated labor”, basically a formalized Marxism. I’ll admit, it’s weird how neoclassicists try to value labor as “human capital”; frankly it’s a bit disturbing how it echoes slavery. (And if you think slavery is dead, think again; it’s dead in the First World, but very much alive elsewhere.) But the solution to that problem is not to pretend that capital is a form of labor; it’s to recognize that capital and labor are different. Capital can be owned, sold, and redistributed; labor cannot. Labor is done by human beings, who have intrinsic value and rights; capital is made of inanimate matter, which does not. (This is what makes Citizens United so outrageous; “corporations are people” and “money is speech” are such fundamental distortions of democratic principles that they are literally Orwellian. We’re not that far from “freedom is slavery” and “war is peace”.)

Neoclassical economists do better, at least. They do respond to empirical data, albeit slowly. Their models are mathematically consistent. They rarely take account of human irrationality or asymmetric information, but when they do they rightfully recognize them as obstacles to efficient markets. But they still model people as infinite identical psychopaths, and they still divide themselves into schools of thought. Keynesians and Monetarists are particularly prominent, and Modern Monetary Theorists seem to be the next rising star. Each of these schools gets some things right and other things wrong, and that’s exactly why we shouldn’t make ourselves beholden to a particular tribe.

Monetarists follow Friedman, who said, “inflation is always and everywhere a monetary phenomenon.” This is wrong. You can definitely cause inflation without expanding your money supply; just ramp up government spending as in World War 2 or suffer a supply shock like we did when OPEC cut the oil supply. (In both cases, the US money supply was still tied to gold by the Bretton Woods system.) But they are right about one thing: To really have hyperinflation à la Weimar or Zimbabwe, you probably have to be printing money. If that were all there were to Monetarism, I could invert another Friedmanism: We’re all Monetarists now.

Keynesians are basically right about most things; in particular, they are the only branch of neoclassicists who understand recessions and know how to deal with them. The world’s most famous Keynesian is probably Krugman, who has the best track record of economic predictions in the popular media today. Keynesians also have a much better appreciation of the fact that humans are irrational; in fact, cognitivism can be partly traced to Keynes, who spoke often of the “animal spirits” that drive human behavior (Akerlof and Shiller’s recent book is called Animal Spirits). But even Keynesians have their sacred cows, like the Phillips Curve, the alleged inverse correlation between inflation and unemployment. This is fairly accurate empirically if you look just at First World economies after World War 2 and exclude major recessions. But Keynes himself said, “Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.” The Phillips Curve “shifts” sometimes, and it’s not always clear why—and empirically it’s hard to tell the difference between a curve that shifts a lot and a relationship that just isn’t there. There is very little evidence for a “natural rate of unemployment”. Worst of all, it’s pretty clear that the original policy implications of the Phillips Curve are all wrong; you can’t get rid of unemployment just by ramping up inflation, and that way really does lie Zimbabwe.

Finally, Modern Monetary Theorists understand money better than everyone else. They recognize that a sovereign government doesn’t have to get its money “from somewhere”; it can create however much money it needs. The whole narrative that the US is “out of money” isn’t just wrong, it’s incoherent; if there is one entity in the world that can never be out of money, it’s the US government, which prints the world’s reserve currency. The panicked fears of quantitative easing causing hyperinflation aren’t quite as crazy; if the economy were at full capacity, printing $4 trillion over 5 years (yes, we did that) would absolutely cause some inflation. Since that’s only about 6% of US GDP per year, we might be back to 8% or even 10% inflation like the 1970s, but we certainly would not be in Zimbabwe. Moreover, we aren’t at full capacity; we needed to expand the money supply that much just to maintain prices where they are. The Second Depression is the Red Queen: It took all the running we could do to stay in one place.

Modern Monetary Theorists also have some very good ideas about taxation; they point out that since the government only takes out the same thing it puts in—its own currency—it doesn’t make sense to say it is “taking” something (let alone “confiscating” it, as Austrians would have you believe). Instead, it’s more like a pump, taking money in and forcing it back out continuously. And just as pumping doesn’t take away water but rather makes it flow, taxation and spending don’t remove money from the economy but rather maintain its circulation.

Now that I’ve said what they get right, what do they get wrong? Basically, they focus too much on money, ignoring the real economy. They like to use double-entry accounting models, which are perfectly sensible for money but absolutely nonsensical for real value. The whole point of an economy is that you can get more value out than you put in.
From the Homo erectus who pulls apples from the trees to the software developer who buys a mansion, the reason they do it is that the value they get out (the gatherer gets to eat, the programmer gets to live in a mansion) is higher than the value they put in (the effort to climb the tree, the skill to write the code). If, as Modern Monetary Theorists are wont to do, you calculated a value for the human capital of the gatherer and the programmer equal to the value of the goods they purchase, you’d be missing the entire point.

Who are you? What is this new blog? Why “Infinite Identical Psychopaths”?

My name is Patrick Julius. I am about halfway through a master’s degree in economics, specializing in the new subfield of cognitive economics (closely related to the also quite new fields of cognitive science and behavioral economics). This makes me in one sense heterodox; I disagree adamantly with most things that typical neoclassical economists say. But in another sense, I am actually quite orthodox. All I’m doing is bringing the insights of psychology, sociology, history, and political science—not to mention ethics—to the study of economics. The problem is simply that economists have divorced themselves so far from the rest of social science.

Another way I differ from most critics of mainstream economics (I’m looking at you, Peter Schiff) is that, for lack of a better phrase, I’m good at math. (As Bill Clinton said, “It’s arithmetic!”) I understand things like partial differential equations and subgame perfect equilibria, and therefore I am equipped to criticize them on their own terms. In this blog I will do my best to explain the esoteric mathematical concepts in terms most readers can understand, but it’s not always easy. The important thing to keep in mind is that fancy math can’t make a lie true; no matter how sophisticated its equations, a model that doesn’t fit the real world can’t be correct.

This blog, which I plan to update every Saturday, is about the current state of economics, both as it is and how economists imagine it to be. One of my central points is that these two are quite far apart, which has exacerbated if not caused the majority of economic problems in the world today. (Economists didn’t invent world hunger, but for over a decade now we’ve had the power to end it and haven’t done so. You’d be amazed how cheap it would be; we’re talking about 1% of First World GDP at most.)

The reason I call it “infinite identical psychopaths” is that this is what neoclassical economists appear to believe human beings are, at least if we judge by the models they use. These are the typical assumptions of a neoclassical economic model:

1. Perfect information: All individuals know everything they need to know about the state of the world and the actions of other individuals.

2. Rational expectations: Predictions about the future can only be wrong within a normal distribution, and in the long run are on average correct.

3. Representative agents: All individuals are identical and interchangeable; a single type represents them all.

4. Perfect competition: There are infinitely many agents in the market, and none of them ever collude with one another.

5. “Economic rationality”: Individuals act according to a monotonic increasing utility function that is only dependent upon their own present and future consumption of goods.

I put the last one in scare quotes because it is the worst of the bunch. What economists call “rationality” has only a distant relation to actual rationality, either as understood by common usage or by formal philosophical terminology.

Don’t be scared by the terminology; a “utility function” is just a formal model of the things you care about when you make decisions. Things you want have positive utility; things you don’t want have negative utility. Larger numbers reflect stronger feelings: a bar of chocolate has much less positive utility than a decade of happy marriage; a pinched finger has much less negative utility than a year of continual torture. Utility maximization just means that you try to get the things you want and avoid the things you don’t. By talking about expected utility, we make some allowance for an uncertain future—but not much, because we have so-called “rational expectations”.
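To make the term concrete, here is a minimal toy sketch (my own illustration, not any economist’s actual model; all the numbers are made up) of what “maximizing expected utility” amounts to:

```python
# Toy model of expected-utility maximization: the agent weights each
# outcome's utility by its probability and picks the option with the
# highest probability-weighted average.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# A 50/50 gamble (win 100 utils or lose 50) versus a sure 20 utils:
gamble = [(0.5, 100), (0.5, -50)]
sure_thing = [(1.0, 20)]

print(expected_utility(gamble))      # 25.0
print(expected_utility(sure_thing))  # 20.0

# The "economically rational" agent takes the gamble, full stop;
# fear, regret, and loss aversion never enter into it.
best = max([gamble, sure_thing], key=expected_utility)
```

Notice how much the model leaves out: a real person might reasonably refuse that gamble, but within this framework such a refusal can only be represented by redefining the utilities.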

Since any action taken by an “economically rational” agent maximizes expected utility, it is impossible for such an agent to ever make a mistake in the usual sense. Whatever they do is always the best idea at the time. This is already an extremely strong assumption that doesn’t make a whole lot of sense applied to human beings; who among us can honestly say they’ve never done anything they later regretted?

The worst part, however, is the assumption that an individual’s utility function depends only upon their own consumption. What this means is that the only thing anyone cares about is how much stuff they have; considerations like family, loyalty, justice, honesty, and fairness cannot factor into their decisions. The “monotonic increasing” part means that more stuff is always better; if they already have twelve private jets, they’d still want a thirteenth; and even if children had to starve for it, they’d be just fine with that. They are, in other words, psychopaths. So that’s one word of my title.

I think “identical” is rather self-explanatory; by using representative agent models, neoclassicists effectively assume that there is no variation between human beings whatsoever. They all have the same desires, the same goals, the same capabilities, the same resources. Implicit in this assumption is the notion that there is no such thing as poverty or wealth inequality, not to mention diversity, disability, or even differences in taste. (One wonders why you’d even bother with economics if that were the case.)

As for “infinite”, that comes from the assumptions of perfect information and perfect competition. In order to really have perfect information, one would need a brain with enough storage capacity to contain the state of every particle in the visible universe. Maybe not quite infinite, but pretty darn close. Likewise, in order to have true perfect competition, there must be infinitely many individuals in the economy, all of whom are poised to instantly take any opportunity offered that allows them to make even the tiniest profit.

Now, you might be thinking this is a strawman; surely neoclassicists don’t actually believe that people are infinite identical psychopaths. They just model them that way to simplify the mathematics, which is of course necessary, because the world is far too vast and interconnected to analyze in its full complexity.

This is certainly true. A Go board has 361 points, each of which can be black, white, or empty, so there are roughly 3^361 ≈ 10^172 possible positions. Suppose it took you one microsecond to consider each one; going through them all would take more time than we have left before the universe fades into heat death. Now imagine trying to understand a global economy of 7 billion people by brute-force analysis. Simplifying heuristics are unavoidable.
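The Go arithmetic is easy to check; here is a quick sketch (my own calculation, treating 3^361 as a loose upper bound on positions, since it ignores the rules of the game):

```python
# Back-of-the-envelope check: 3**361 board positions at one microsecond
# each, expressed in years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

positions = 3 ** 361                   # about 1.7e172 positions
seconds = positions * 1e-6             # one microsecond per position
years = seconds / SECONDS_PER_YEAR

print(f"{years:.1e} years")            # roughly 5.5e+158 years
# Even generous heat-death timescales (on the order of 1e100 years for
# black holes to evaporate) fall short of this by dozens of orders of
# magnitude.
```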

And some neoclassical economists—for example Paul Krugman and Joseph Stiglitz—generally use these heuristics correctly; they understand the limitations of their models and don’t apply them in cases where they don’t belong. In that sort of case, there’s nothing particularly bad about these simplifying assumptions; they are like when a physicist models the trajectory of a spacecraft by assuming frictionless vacuum. Since outer space actually is close to a frictionless vacuum, this works pretty well; and if you need to make minor corrections (like the Pioneer Anomaly) you can.

However, this explanation already seems weird for the “economically rational” assumption (the psychopath part), because that doesn’t really make things much simpler. Why would we exclude the fact that people care about each other, like to cooperate, and have feelings of loyalty and trust? And don’t tell me it’s because that’s impossible to quantify; evolutionary biologists already have a simple inequality, Hamilton’s rule (C < r B), designed precisely to quantify altruism. (C is the cost to the actor, B is the benefit to the recipient, r is their genetic relatedness.) I’d make only one slight modification: instead of r for relatedness, use p for psychological closeness, or as I like to call it, solidarity. For humans, solidarity is usually much higher than relatedness, though the two are correlated. C < p B.
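Here is that inequality as a tiny decision rule (a toy sketch of my own; the dollar amounts and the solidarity values are invented for illustration):

```python
# Hamilton's rule, with the post's tweak: an altruistic act is favored
# when its cost to you is less than the benefit to the other person,
# weighted by your solidarity p with them (p = 1 would mean you value
# their welfare exactly as much as your own).

def act_altruistically(cost, benefit, solidarity):
    """True if the act satisfies C < p * B."""
    return cost < solidarity * benefit

# Spending $50 so a close friend gains $500 worth of benefit:
print(act_altruistically(cost=50, benefit=500, solidarity=0.3))   # True: 50 < 150
# The same act toward a total stranger, with much lower solidarity:
print(act_altruistically(cost=50, benefit=500, solidarity=0.05))  # False: 50 > 25
```

The point is not that people literally compute this, but that altruism is perfectly quantifiable, so “we can’t model it” is no excuse for leaving it out.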

Worse, there are other neoclassical economists—those of the most fanatically “free-market” bent—who really don’t seem to do this. I don’t know if they honestly believe that people are infinite identical psychopaths, but they make policy as if they did.

We have people like Stephen Moore saying that unemployment is “like a paid vacation” because obviously anyone who truly wants a job can immediately find one, or people like N. Gregory Mankiw arguing—in a published paper no less!—that the reason Steve Jobs was a billionaire was that he was actually a million times as productive as the rest of us, and therefore it would be inefficient (and, he implies but does not say outright, immoral) to take the fruits of those labors from him. (Honestly, I think I could concede the point and still argue for redistribution, on the grounds that people do not deserve to starve to death simply because they aren’t productive; but that’s the sort of thing never even considered by most neoclassicists, and anyway it’s a topic for another time.)

These kinds of statements would only make sense if markets were really as efficient and competitive as neoclassical models—that is, if people were infinite identical psychopaths. Allow even a single monopoly or just a few bits of imperfect information, and that whole edifice collapses.

And indeed if you’ve ever been unemployed or known someone who was, you know that our labor markets just ain’t that efficient. If you want to cut unemployment payments, you need a better argument than that. Similarly, it’s obvious to anyone who isn’t wearing the blinders of economic ideology that many large corporations exert monopoly power to increase their profits at our expense (How can you not see that Apple is a monopoly!?).

This sort of reasoning is more like plotting the trajectory of an aircraft on the assumption of frictionless vacuum; you’d be baffled as to where the oxidizer comes from, or how the craft manages to lift itself off the ground when the exhaust vents are pointed sideways instead of downward. And then you’d be telling the aerospace engineers to cut off the wings because they’re useless mass.

Worst of all, if we continue this analogy, the engineers would listen to you—they’d actually be convinced by your differential equations and cut off the wings just as you requested. Then the plane would never fly, and they’d ask if they could put the wings back on—but you’d adamantly insist that it was just coincidence, you just happened to be hit by a random problem at the very same moment as you cut off the wings, and putting them back on will do nothing and only make things worse.

No, seriously; so-called “Real Business Cycle” theory, while thoroughly obfuscated in esoteric mathematics, ultimately boils down to the assertion that financial crises have nothing to do with recessions, which are actually caused by random shocks to the real economy—the actual production of goods and services. The fact that a financial crisis always seems to happen just beforehand is, apparently, sheer coincidence, or at best some kind of forward-thinking response investors make as they see the storm coming. I want you to think for a minute about the idea that the kind of people who make computer programs that accidentally collapse the Dow, who made Bitcoin the first example in history of hyperdeflation, and who bought up Tweeter thinking it was Twitter are forward-thinking predictors of future events in real production.

And yet, it is on this sort of basis that our policy is made.

Can otherwise intelligent people really believe that these insane models are true? I’m not sure. Sadly, I think they may really believe that all people are psychopaths—because they themselves may be psychopaths. Economics students score higher on various psychopathic traits than other students. Part of this is self-selection—psychopaths are more likely to study economics—but the terrifying part is that part of it isn’t: studying economics may actually make you more of a psychopath. As I study for my master’s degree, I am somewhat afraid of being corrupted by this; I make sure to periodically disengage from the ideology and interact with normal people with normal human beliefs, to recalibrate my moral compass.

Of course, it’s still pretty hard to imagine that anyone could honestly believe that the world economy is in a state of perfect information. But if they can’t really believe this insane assumption, why do they keep using models based on it?

The more charitable possibility is that they don’t appreciate just how sensitive the models are to the assumptions. They may think, for instance, that the General Welfare Theorems still basically apply if you relax the assumption of perfect information; maybe the outcome isn’t always Pareto-efficient, but it probably is most of the time, right? Or at least close? Actually, no. The Myerson–Satterthwaite Theorem says that once you give up perfect information, the whole result collapses; even a small amount of asymmetric information is enough to make a Pareto-efficient outcome impossible. And as you might expect, the more asymmetric the information, the further the result deviates from Pareto-efficiency. Since we always have some asymmetric information, it looks like the General Welfare Theorems really aren’t doing much for us; they apply only in a magical fantasy world. (In case you didn’t know, Pareto-efficiency is a state in which it’s impossible to make any person better off without making someone else worse off. The real world is not in a Pareto-efficient state, which means that smarter policy could improve some people’s lives without hurting anyone else.)
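To see the flavor of this, here is a toy simulation of my own (it is not the Myerson–Satterthwaite proof, which covers every possible trading mechanism, not just this one). A buyer’s valuation and a seller’s cost are private information, and trade can only happen at a fixed posted price; many mutually beneficial trades then simply fail to occur:

```python
# Toy illustration: buyer value v and seller cost c are private, each
# drawn uniformly from [0, 1]. Trade is Pareto-improving whenever v > c,
# but with a posted price p, trade only occurs when c <= p <= v.
import random

random.seed(0)
N = 100_000
p = 0.5  # posted price

beneficial = traded = 0
for _ in range(N):
    v, c = random.random(), random.random()
    if v > c:                # a mutually beneficial trade exists
        beneficial += 1
        if c <= p <= v:      # ...but it only happens at the posted price
            traded += 1

# Roughly half of all beneficial trades are simply lost.
print(f"share of beneficial trades realized: {traded / beneficial:.2f}")
```

No choice of p fixes this, and the theorem says no cleverer mechanism can either, so long as values stay private.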

The more sinister possibility is that they know full well that the models are wrong, they just don’t care. The models are really just excuses for an underlying ideology, the unshakeable belief that rich people are inherently better than poor people and private corporations are inherently better than governments. Hence, it must be bad for the economy to raise the minimum wage and good to cut income taxes, even though the empirical evidence runs exactly the opposite way; it must be good to subsidize big oil companies and bad to subsidize solar power research, even though that makes absolutely no sense.

One should normally be hesitant to attribute to malice what can be explained by stupidity, but the “I trust the models” explanation just doesn’t work for some of the really extreme privatizations that the US has undergone since Reagan.

No neoclassical model says that you should privatize prisons; prisons are a classic example of a public good, which would be underfunded in a competitive market and basically has to be operated or funded by the government.

No neoclassical model would support the idea that the EPA is a terrorist organization (yes, a member of the US Congress said this). In fact, the economic case for environmental regulations is unassailable. (What else are we supposed to do, privatize the air?) The question is not whether to regulate and tax pollution, but how and how much.

No neoclassical model says that you should deregulate finance; in fact, most neoclassical models don’t even include a financial sector (as bizarre and terrifying as that is), and those that do generally assume it is in a state of perfect equilibrium with zero arbitrage. If the financial sector were actually in a state of zero arbitrage, no banks would make a profit at all.

In case you weren’t aware, arbitrage is the practice of making money off of money—profiting from price differences without actually producing any goods or performing any services. Unlike manufacturing (which, oddly enough, almost all neoclassical models are based on—despite the fact that it is now a minority sector of First World GDP), there’s no value added. Under zero arbitrage, the interest rate a bank charges should be almost exactly the same as the interest rate it pays, with just enough of a gap to barely cover its operating expenses—which should in turn be minimal, especially in a modern electronic system. If financial markets were at a zero-arbitrage equilibrium, it would be sensible to speak of a single “real interest rate” in the economy, the one that everyone pays and everyone receives. Of course, those of us who live in the real world know that not only do different people pay radically different rates, most people have multiple outstanding lines of credit, each with a different rate. My savings account pays 0.5%, my car loan charges 5.5%, and my biggest credit card charges 19%. These basically span the entire range of sensible interest rates (frankly, 19% may even exceed it; that’s a doubling time of 3.6 years), and I know I’m not the exception but the rule.
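As a sanity check on that doubling-time remark, here is a quick calculation of my own (the exact figure depends on how often interest compounds: continuous compounding at 19% gives about 3.6 years, monthly about 3.7):

```python
# Doubling times for the three rates quoted above, assuming monthly
# compounding (typical for bank accounts and credit cards).
import math

def doubling_time_years(apr, periods_per_year=12):
    """Years for a balance to double at the given annual rate."""
    per_period = apr / periods_per_year
    periods = math.log(2) / math.log(1 + per_period)
    return periods / periods_per_year

for apr in (0.005, 0.055, 0.19):
    print(f"{apr:.1%}: {doubling_time_years(apr):.1f} years")
# The 0.5% savings account takes over a century to double;
# the 19% credit card doubles in under four years.
```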

So that’s the mess we’re in. Stay tuned; in future weeks I’ll talk about what we can do about it.