The irrationality of racism

JDN 2457039 EST 12:07.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday, which completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Biologically, what exactly does that mean? Next to nothing.

Human genetic diversity is a real thing, and there are genetic links to not only dozens of genetic diseases and propensity toward certain types of cancer, but also personality and intelligence. There are also of course genes for skin pigmentation.

The human population does exhibit some genetic clustering, but the categories are not what you’re probably used to: Good examples of relatively well-defined genetic clusters include Ashkenazi, Papuan, and Mbuti. There are also many different haplogroups, such as mitochondrial haplogroups L3 and CZ.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

Let’s start with “Black”; defining a “Black” race is like defining a category of animals by the fact that they are all tinted red—foxes yes, dogs no; robins yes, swallows no; ladybirds yes, cockroaches no. There is more genetic diversity within Africa than there is outside of it. There are African populations that are more closely related to European populations than they are to other African populations. The only thing “Black” people have in common is that their skin is dark, which is due to convergent evolution: It’s not due to common ancestry, but a common environment. Dark skin has a direct survival benefit in climates with intense sunlight.  The similarity is literally skin deep.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.
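The claim that all outcomes of a zero-sum game are Pareto-efficient is easy to check directly. Here is a toy sketch in Python (the payoff numbers are hypothetical, chosen only for illustration): in the zero-sum game, any change that helps one player hurts the other by exactly as much, so no outcome dominates another.

```python
def pareto_dominates(x, y):
    # x Pareto-dominates y if everyone is at least as well off under x
    # and someone is strictly better off
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def pareto_efficient_outcomes(outcomes):
    # The Pareto-efficient outcomes are those not dominated by any other outcome
    return [y for y in outcomes if not any(pareto_dominates(x, y) for x in outcomes)]

# Zero-sum: whatever one player gains, the other loses.
zero_sum = [(a, -a) for a in (-2, -1, 0, 1, 2)]
assert pareto_efficient_outcomes(zero_sum) == zero_sum  # every outcome is efficient

# Positive-sum: (1, 1) is dominated by (3, 3), so not every outcome is efficient.
positive_sum = [(1, 1), (3, 3), (4, 0), (0, 4)]
assert pareto_efficient_outcomes(positive_sum) == [(3, 3), (4, 0), (0, 4)]
```

The contrast with the positive-sum game is the whole point: once gains from cooperation exist, some outcomes (like everyone staying at (1, 1)) are genuinely wasteful—and that is where racism actually lives.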

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are any, it would mean that technically racism is not Pareto-inefficient—but it is definitely very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits incurred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand to gain the most from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else, right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy is thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.
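The output loss from mismatched hiring can be made concrete with a toy two-worker, two-job example (all the names and productivity figures here are hypothetical, invented purely to illustrate the mechanism):

```python
# Hypothetical productivity of each worker at each job, in dollars of output:
productivity = {
    ("worker_A", "engineer"): 40_000, ("worker_A", "clerk"): 30_000,
    ("worker_B", "engineer"): 60_000, ("worker_B", "clerk"): 20_000,
}

def total_output(assignment):
    # Sum each worker's productivity at the job they were assigned
    return sum(productivity[(worker, job)] for worker, job in assignment.items())

# Merit-based assignment: each job goes to whoever is best suited for it.
merit = {"worker_A": "clerk", "worker_B": "engineer"}
# Discriminatory assignment: worker_B is shut out of the better job.
discriminatory = {"worker_A": "engineer", "worker_B": "clerk"}

assert total_output(merit) == 90_000
assert total_output(discriminatory) == 60_000  # total output falls for everyone
```

Notice that worker_A’s $40,000 “gain” from discrimination is an illusion at the level of the whole economy: $30,000 of output simply vanished, and in a richer economy worker_A’s wage would likely have been higher too.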

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—came precisely at times when these countries suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators, it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that left later generations with less to inherit; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

I imagine most of my readers are more sophisticated than this, but just in case you’re wondering about the difference in IQ scores between Whites and Blacks, that is indeed a real observation, but IQ isn’t entirely genetic. The reason IQ scores are rising worldwide (the Flynn Effect) is due to improvements in environmental conditions: Fewer environmental pollutants—particularly lead and mercury, the removal of which is responsible for most of the reduction in crime in America over the last 20 years—better nutrition, better education, less stress. Being stupid does not make you poor (or how would we explain Donald Trump?), but being poor absolutely does make you stupid. Combine that with the challenges and inconsistencies in cross-national IQ comparisons, and it’s pretty clear that the higher IQ scores in rich nations are an effect, not a cause, of their affluence. Likewise, the lower IQ scores of Black people in the US are entirely explained by their poorer living conditions, with no need for any genetic hypothesis—which would also be very difficult in the first place precisely because “Black” is such a weird genetic category.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

How is the economy doing?

JDN 2457033 EST 12:22.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, and I don’t have very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

Why are our forecasts so bad? Some argue that the task is just inherently too difficult due to the chaotic system involved; but they used to say that about weather forecasts, and yet with satellites and computer models our forecasts are now far more accurate than they were 20 years ago. Others have argued that “politics always dominates over economics”, as though politics were somehow a fundamentally separate thing, forever exogenous, a parameter in our models that cannot be predicted. I have a number of economic aphorisms I’m trying to popularize; the one for this occasion is: “Nothing is exogenous.” (Maybe fundamental constants of physics? But actually many physicists think that those constants can be derived from even more fundamental laws.) My most common is “It’s the externalities, stupid.”; next is “It’s not the incentives, it’s the opportunities.”; and the last is “Human beings are 90% rational. But woe betide that other 10%.” In fact, it’s not quite true that all our macroeconomic forecasters are bad; a few, such as Krugman, are actually quite good. The Klein Award is given each year to the best macroeconomic forecasters, and the same names pop up too often for it to be completely random. (Sadly, one of the most common is Citigroup, meaning that our banksters know perfectly well what they’re doing when they destroy our economy—they just don’t care.) So in fact I think our failures of forecasting are not inevitable or permanent.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I’m not just interested in how human irrationality affects individuals or corporations, I’m also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn’t that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we’re going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality. Frankly maybe we should just use poverty, since I’d be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This is of course yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn’t actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn’t something ridiculous like $100,000 per year.
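The weighting scheme suggested above can be sketched in a few lines of Python. To be clear, the functional form and all the weights below are my own illustrative assumptions, not any official index:

```python
def misery_index(unemployment, inflation, poverty, inequality,
                 w_unemployment=10.0, w_inflation=1.0,
                 w_poverty=10.0, w_inequality=5.0):
    """A hypothetical weighted misery index: unemployment weighted 10 times
    as much as inflation, with added terms for poverty and inequality.
    All inputs are fractions (e.g. 5% unemployment -> 0.05; Gini -> 0.40)."""
    return (w_unemployment * unemployment + w_inflation * inflation
            + w_poverty * poverty + w_inequality * inequality)

# 5% unemployment, 2% inflation, 15% poverty rate, Gini coefficient of 0.40:
print(round(misery_index(0.05, 0.02, 0.15, 0.40), 3))  # prints 4.02
```

One consequence of these weights: 10% unemployment scores exactly as miserable as 100% inflation, which captures the intuition in the text that hyperinflation-level price growth is about as bad as Depression-level joblessness.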

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of 1% and the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat what I’ve last read from mainstream macroeconomic forecasts, which is usually rather banal—but maybe that’s the idea? Most small talk is pretty banal I suppose (I never was very good at that sort of thing). It sounds a bit like this: No, we’re not on the verge of horrible inflation—actually inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and actually job growth is looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: Much of our reduction in unemployment has come not from more people being employed but instead by more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and is still trending downward. Most of these people aren’t getting jobs; they’re giving up. 
At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly. And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will do so efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take their request literally and set out to mathematically prove that if allele distributions in a population change according to a stochastic trend then the alleles with highest expected fitness have, on average, the highest fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.
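The “stochastic trend” claim above can even be illustrated with a toy simulation—a minimal Wright-Fisher model with selection, a standard textbook construction; the fitness values, population size, and run counts here are arbitrary choices of mine:

```python
import random

def wright_fisher(p0, fitness_a, fitness_b, pop_size, generations, rng):
    """One run of a Wright-Fisher model with selection: returns the final
    frequency of allele A, which has relative fitness fitness_a vs. fitness_b."""
    p = p0
    for _ in range(generations):
        # Selection tilts the sampling probability toward the fitter allele...
        w = p * fitness_a / (p * fitness_a + (1 - p) * fitness_b)
        # ...then drift: the next generation is a binomial sample of the gene pool.
        p = sum(rng.random() < w for _ in range(pop_size)) / pop_size
    return p

rng = random.Random(0)
# 50 runs: allele A starts at 50% frequency with a 10% fitness advantage.
runs = [wright_fisher(0.5, 1.1, 1.0, 200, 100, rng) for _ in range(50)]
avg = sum(runs) / len(runs)
print(avg > 0.5)  # on average, the fitter allele ends up more common
```

Any individual run can still lose the fitter allele to drift—that is the “stochastic” part—but averaged over runs, higher expected fitness wins, which is exactly the near-tautology described above.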

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist after all), they usually just respond with something like “Wow, you must be really smart,” which I suppose is true enough but always strikes me as an odd response. I think they just didn’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do. Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: That is, most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept; but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion; but the Pope has long since rendered that one inert.
No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrödinger’s Cat, but are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps they’ll be asked the question “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would be either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “The swirliness added up inside an object is equal to the swirliness added up around the edges.”
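For what it’s worth, the incomprehensible-sounding version of Stokes’ Theorem is just one compact equation (in standard notation):

```latex
\int_{\partial M} \omega = \int_{M} d\omega
```

where M is an orientable n-dimensional manifold with boundary ∂M and ω is an (n−1)-form on M. The “swirliness” version is the same equation read aloud.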

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”

What just happened in that election?

JDN 2456970 PST 11:12.

My head is still spinning from the election results on Tuesday. Republicans gained a net of 12 seats to secure their majority in the House. Even worse, Republicans gained at least 7 seats in the Senate (note that each Senate seat should count for 4.35 House seats because there are 100 Senators and 435 Representatives) and may gain two more depending on how runoffs go. This gives them a majority in both houses of Congress. So people like Republicans then? Maybe they’re fed up with Obama and dissatisfied with his handling of the economy (even though it has actually been spectacular given what he had to work with).
But then when we look at actual ballot proposals, the ones that passed were mostly liberal issues. California passed proposition 47, which will reduce sentences for minor drug and theft crimes and substantially reduce our incidence of incarceration. (There’s no sign of releasing current prisoners, unfortunately; but at least we won’t be adding as many new ones.) Marijuana was legalized—fully legalized, for all purposes—in Alaska, Oregon, and DC, further reducing incarceration. At last, the US may finally stop being the incarceration capital of the world! We currently hold the title in both per-capita and total incarceration, so there can be no dispute. (Technically the Seychelles has a higher per-capita rate, but come on, they don’t count as a real country; they have a population smaller than Ann Arbor—or for that matter the annual throughput of Rikers Island.)

The proposals to allow wolf hunting in Michigan failed, for which many wolves would thank you if they could. Minimum wages were raised in five states, four of which are Republican-leaning states. The most extreme minimum wage hike was in San Francisco, where the minimum wage is going to be raised as high as $18 over the next four years. So people basically agree with Democrats on policy, but decided to hand the Senate over to Republicans.

I think the best explanation for what happened is the voting demographics. When we have a Senate election, we aren’t sampling randomly from the American population; we’re pulling from specific states, and specific populations within those states. Geography played a huge role in these election results. So did age; the voting population was much older on average than the general population, because most young people simply didn’t vote. I know some of these young people, who tell me things like “I’m not voting because I won’t be part of that system!” Apparently their level of understanding of social change approaches that of the Lonely Island song “I Threw it on the Ground”. Not voting isn’t rebellion, it’s surrender. (I’m not sure who said that first, but it’s clearly right.) Rebellion would be voting for a radical third-party candidate, or running as one yourself. Rebellion would be leading rallies to gather support—that is, votes—for that candidate. Alternatively, you could say that rebellion is too risky and simply aim for reform, in which case you’d vote for Democrats as I did.

Your failure to vote did not help change that system. On the contrary, it was because of your surrender that we got two houses of Congress controlled by Republicans who have veered so far to the right they are bordering on fascism and feudalism. It is strange living in a society where the “mainstream” has become so extremist. You end up feeling like a radical far-left Marxist when in fact you agree—as I do—with the core policies of FDR or even Eisenhower. You have been told that the right is capitalism and the left is socialism; this is wrong. The left is capitalism; the right is feudalism. When I tell you I want a basic income funded by a progressive income tax, I am agreeing with Milton Friedman.

This must be how it feels to be a secularist in an Islamist theocracy like Iran. Now that Colorado has elected a state legislator who is so extreme that he literally has performed exorcisms to make people not gay or transgender (his name is apparently Gordon Klingenschmitt), I fear we’re dangerously on the verge of a theocracy of our own.

Of course, I shouldn’t just blame the people who didn’t vote; I should also blame the people who did vote, and voted for candidates who are completely insane. Even though it’s just a state legislature, tens of thousands of people voted for that guy in Colorado; tens of thousands of Americans were okay with the fact that he thinks gay and transgender people have demons inside us that need to be removed by exorcism. Even in Iran theocracy is astonishingly popular. People are voting for these candidates, and we must find out why and change their minds. We must show them that the people they are voting for are not going to make good decisions that benefit America, they are going to make selfish decisions that benefit themselves or their corporate cronies, or even just outright bad decisions that hurt everyone. As an example of the latter (which is arguably worse), there is literally no benefit to discrimination against women or racial minorities or LGBT people. It’s just absolute pure deadweight loss that causes massive harm without any benefit at all. It’s deeply, deeply irrational, and one of the central projects of cognitive economics must be figuring out what makes people discriminate and figuring out how to make them stop.

To be fair, some of the candidates that were elected are not so extreme. Tom Cotton of Arkansas (whose name is almost offensively down-homey rural American; I don’t think I could give a character that name in a novel without people thinking it was satire) supported the state minimum wage increase and is sponsoring a bill that would ban abortions after 20 weeks, which is actually pretty reasonable, rather than at conception, which is absurd.

Thom Tillis of North Carolina is your standard old rich White male corporate stooge, but I don’t see anything in his platform that is particularly terrifying. David Perdue of Georgia is the same; he’s one of those business owners who thinks he knows how to run the economy because he can own a business while it makes money. (Even if he did have something to do with the profitability of the business—which is not entirely clear—that’s still like a fighter pilot saying he’s a great aerospace engineer.) Cory Gardner is similar (not old, but rich White male corporate stooge), but he’s scary simply because he came from the Colorado state legislature, where they just installed that exorcist guy.

Thad Cochran of Mississippi was re-elected, so he was already there; he generally votes along whatever lines the Republican leadership asks him to, so he is not so much a villain as a henchman. Shelley Moore Capito of West Virginia also seems to basically vote whatever the party says.

Joni Ernst of Iowa is an interesting character; despite being a woman, she basically agrees with all the standard Republican positions, including those that are obviously oppressive of women. She voted for an abortion ban at conception, which is totally different from what Cotton wants. She even takes the bizarre confederalist view of Paul Ryan that a federal minimum wage is “big government” but a state minimum wage is just fine. The one exception is that she supports reform of sexual harassment policy in the military, probably because she experienced it herself.

But I’m supposed to be an economist, so what do I think is going to happen to the economy? (Of course, don’t forget, the economy is made of people. One of the best things that can ever happen to an economy is the empowerment of women, racial minorities, and LGBT people, all of which are now in jeopardy under a Republican Congress.)

The best-case scenario is “not much”; the obstructionism continues, and despite an utterly useless government the market repairs itself as it will always do eventually. Job growth will continue at its slow but steady pace, GDP will get back to potential trend. Inequality will continue to increase as it has been doing for about 30 years now. In a couple years there will be another election and hopefully Republicans will lose their majority.

The worst-case scenario is “Republicans get what they want”. The budget will finally be balanced—by cutting education, infrastructure, and social services. Then they’ll unbalance it again by cutting taxes on the rich and starting a couple more wars, because that kind of government spending doesn’t count. (They are weaponized Keynesians all.) They’ll restrict immigration even though immigration is what the First World needs right now (not to mention the fact that the people coming here need it even more). They’ll impose draconian regulations on abortion, they’ll stop or reverse the legalization of marijuana and same-sex marriage.

Democrats must not cave in to demands for “compromise” and “bipartisanship”. If the Republicans truly believed in those things, they wouldn’t have cost the economy $24 billion and downgraded the credit rating of the US government by their ridiculous ploy to shut down the government. They wouldn’t have refused to deal until the sequester forced nonsensical budget cuts. They wouldn’t make it a central part of their platform to undermine or repeal the universal healthcare system that they invented just so that Democrats can’t take credit for it. They have become so committed to winning political arguments at any cost that they are willing to do real harm to America and its people in order to do it. They are overcome by the tribal paradigm, and we all suffer for it.

No, the Republicans in Congress today are like 3-year-olds who throw a tantrum when they don’t get everything exactly their way. You can’t negotiate with these people, you can’t compromise with them. I wish you could, I really do. I’ve heard of days long gone when Congress actually accomplished things, but I have only vague recollections, for I was young in the Clinton era. (I do remember times under Bush II when Congress did things, but they were mostly bad things.) Maybe if we’re firm enough or persuasive enough some of them will even come around. But the worst thing Democrats could do right now is start caving to Republican demands thinking that it will restore unity to our government—because that unity would come only at the price of destroying people’s lives.

Unfortunately I fear that Democrats will appease Republicans in this way, because they’ve been doing that so far. In the campaign, hardly any of the Democrats mentioned Obama’s astonishing economic record or the numerous benefits of Obamacare—which by the way is quite popular among its users, at least more so than getting rid of it entirely (most people want to fix it, not eliminate it). Most of the Democratic candidates barely ran a campaign deserving of the name.

To be clear: Do not succumb to the tribal paradigm yourself. Do not think that everyone who votes Republican is a bad person—the vast majority are good people who were misled. Do not even assume that every Republican politician is evil; a few obviously are (see also Dick Cheney), but most are actually not so much evil as blinded by the ideology of their tribe. I believe that Paul Ryan and Rand Paul think that what they do is in the best interests of America; the problem is not their intentions but their results and their unwillingness to learn from those results. We do need to find ways to overcome partisanship and restore unity and compromise—but we must not simply bow to their demands in order to do that.

Democrats: Do not give in. Stand up for your principles. Every time you give in to their obstructionism, you are incentivizing that obstructionism. And maybe next election you could actually talk about the good things your party does for people—or the bad things their party does—instead of running away from your own party and apologizing for everything?

Who are the job creators?

JDN 2456956 PDT 11:30.

For about 20 years now, conservatives have opposed any economic measures that might redistribute wealth from the rich as hurting “job creators” and thereby damaging the economy. This has become so common that the phrase “job creator” has become a euphemism for “rich person”; indeed, when Paul Ryan was asked to define “rich” he stumbled over himself and ended up with “job creators”. A few years ago, John Boehner gave a speech saying that ‘the job creators are on strike’. During his presidential campaign, Mitt Romney said Obama was ‘waging war on job creators’.

If you get the impression that the “job creator” narrative is used more often now than ever, you’re not imagining things; the term was used almost as many times in a single month of Obama’s presidency as it was in George W. Bush’s entire second term.

This narrative is not just wrong; it’s utterly ludicrous. The vision seems to be something like this: Out there somewhere, beyond the view of ordinary mortals, there lives a race of beings known as Job Creators. Ours is not to judge them, not to influence them; ours is only to appease them so that they might look upon us with favor and bestow upon us our much-needed Jobs. Without these Jobs, we will surely die, and so all other concerns are secondary: We must appease the Job Creators.

Businesses don’t create jobs because they feel like it, or because they love us, or because we have gone through the appropriate appeasement rituals. They don’t create jobs because their taxes are low or because they have extra money lying around. They create jobs because they see profit in it. They create jobs because the marginal revenue of hiring an additional worker exceeds the marginal cost.

And of course they’ll gladly destroy jobs for the exact same reasons; if they think the marginal cost exceeds the marginal revenue, out come the pink slips. If demand for the product has fallen, if the raw materials have become more expensive, or if new technology has allowed some of the labor to be cheaply automated, workers will be laid off in the interests of the company. In fact, sometimes it won’t even be in the interests of the company; corporate executives are lately in the habit of using layoffs and stock buybacks to artificially boost the value of their stock options so they can exercise them, pocket the money, and run away as the company comes crashing to the ground. Because of market deregulation and the ridiculous theory of “shareholder value” (as though shareholders are the only ones who matter!), our stock market has changed from a system of value creation to a system of value extraction.

What actually creates jobs? Demand. If the demand for their product exceeds the company’s capacity to produce it, they will hire more people in order to produce more of the product. The marginal revenue has to go up, or companies will have no reason to hire new workers. (The marginal cost could also go down, but then you get low-paying jobs, which isn’t really what we’re aiming for.) They will continue hiring more people up until the point at which it costs more to hire someone than they’d make from selling the products that person could make for them.
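The hiring rule in the last two paragraphs can be sketched as a toy model (the wage and demand figures here are made-up for illustration, not real data):

```python
# Toy model: a firm keeps hiring while the marginal revenue of one
# more worker exceeds the marginal cost (the wage).
def workers_hired(wage, marginal_revenue):
    """marginal_revenue[n] is the extra revenue from hiring the
    (n+1)-th worker; returns how many workers the firm hires."""
    n = 0
    while n < len(marginal_revenue) and marginal_revenue[n] > wage:
        n += 1
    return n

# Diminishing marginal revenue: each extra worker adds less than the last.
mr = [100, 80, 60, 40, 20]
print(workers_hired(wage=50, marginal_revenue=mr))  # hires 3 workers

# A surge in demand raises marginal revenue, so more workers get hired
# at the SAME wage -- no tax cut or extra cash required.
mr_boom = [r * 1.5 for r in mr]
print(workers_hired(wage=50, marginal_revenue=mr_boom))  # hires 4 workers
```

Notice that handing the firm extra cash changes nothing in this model; only shifting the marginal revenue schedule (demand) or the wage (marginal cost) moves the hiring decision.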

What if they don’t have enough money? They’ll borrow it. As long as they know they are going to make a profit from that worker, they will gladly borrow money in order to hire them. Indeed, corporations do this sort of thing all the time. If banks stop lending, that’s a big problem—it’s called a credit crunch, and it’s a major part of just about any financial crisis. But that isn’t because rich people don’t have enough money, it’s because our banking system is fundamentally defective and corrupt. Yes, fixing the banking system would create jobs in a number of different ways. (The biggest three I can think of: There would be more credit for real businesses to fund investment, more credit for individuals to increase demand, and labor effort that is currently wasted on useless financial speculation would be once again returned to real production.) But that’s not what Paul Ryan and his ilk are talking about—indeed, Paul Ryan seems to think that we should undo the meager reforms we’ve already made. Unless we fundamentally change the financial system, the way to create jobs would be to create demand.

And what decides demand? Well, a lot of things I suppose; preferences, technologies, cultural norms, fads, advertising, and so on. But when you’re looking at short-run changes like the business cycle, the driving factor in most cases is actually quite simple: How much money does the middle class have to spend? The middle class is where most of the consumer spending comes from, and if the middle class has money to spend we will buy products. If we don’t have money to spend—we’re out of work, or we have too much debt to pay—then we won’t buy products. It’s not that we suddenly stopped wanting products; the utility value of those products to us is unchanged. The problem is that we simply can’t afford them anymore. This is what happens in a recession: After some sort of shock to the economy, the middle class stops being able to spend, which reduces demand. That causes corporations to lay off workers, which creates unemployment, which reduces demand even further. To correct for the lost demand, prices are supposed to go down (deflation); but this doesn’t actually work, for two reasons.

First, people absolutely hate seeing their wages go down; even if there is a legitimate economic reason, people still have a sense that they are being exploited by their employers (and sometimes they are). This is called downward nominal wage rigidity.

Second, when prices go down, the real value of debt doesn’t go down; it goes up. Your loans are denominated in dollars, not apples; so reducing the price of apples means that you actually owe more apples than you did before. Since debt is usually one of the big things holding back spending by the middle class in the first place, deflation doesn’t correct the imbalance; it makes it worse. This is called debt deflation. Maybe we shouldn’t call it that, since the problem isn’t the prices, it’s the debt. In 2008, the first thing that happened wasn’t that prices in general went down, which is what we normally mean by “deflation”; it was that housing prices went down, and so suddenly people owed vastly more on their mortgages than they had before, and many of them couldn’t afford to pay. It wasn’t a drop in prices so much as a rise in the real value of debt. (I actually think one of the reasons there is no successful comprehensive theory of the cause of business cycles is that there isn’t a single comprehensive cause of business cycles. It’s usually some form of financial crisis followed by debt deflation—and these are the ones to be worried about, 1929 and 2008—but that isn’t always what happens. In 2001, we actually had an unanticipated negative real economic shock—the 9/11 attacks. In 1973 we had a different kind of real economic shock when OPEC raised oil prices at the same time as the US hit peak oil. We should probably be distinguishing between financial recession and real recession.)
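The debt-deflation mechanism is easy to make concrete with a back-of-the-envelope calculation (the figures are illustrative, not historical):

```python
# Debt is fixed in dollars by contract, so when prices (and nominal
# incomes) fall, the REAL burden of the debt rises. Figures illustrative.
debt = 100_000           # dollars owed, fixed in nominal terms
price_level = 1.00       # price index before the deflation

real_debt_before = debt / price_level

# A 10% deflation: prices and nominal incomes fall by a tenth...
price_level_after = 0.90
real_debt_after = debt / price_level_after

print(round(real_debt_before))  # 100000
print(round(real_debt_after))   # 111111 -- the real burden GREW by ~11%
```

This is the apples point in miniature: the same $100,000 of debt now costs about 11% more in real goods to repay, which is why deflation makes an over-indebted middle class cut spending further rather than correcting the imbalance.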

Notice how in this entire discussion of what drives aggregate demand, I have never mentioned rich people getting free money; I haven’t even mentioned tax rates. If you have the simplistic view “taxes are bad” (or the totally insane, yet still common, view “taxation is slavery”), then you’re going to look for excuses to lower taxes whenever you can. If you specifically love rich people more than poor people, you’re going to look for excuses to lower taxes on the rich and raise them on the poor (and there is really no other way to interpret Mitt Romney’s infamous “47%” comments). But none of this has anything to do with aggregate demand and job creation. It is pure ideology and has no basis in economics.

Indeed, there’s little reason to think that a tax on corporate profits or capital income would change hiring decisions at all. When we talk about the potential distortions of income taxes, we really have to be talking about labor income, because labor can actually be disincentivized. Say you’re making $15 an hour and not paying any taxes, but your tax rate is suddenly raised to 40%. You can see that after taxes your real wage is now only $9, and maybe you’ll decide that it’s just not worth it to work those hours. This is because you pay a real cost to work—it’s hard, it’s stressful, it’s frustrating, it takes up time.

Capital income can’t be disincentivized. You can have relative incentives, if you tax certain kinds of capital more than others. But if you tax all capital income at the same rate, the incentives remain exactly as they were before: Seek the highest return on investment. Your only costs were financial, and your only benefits are financial. Yes, you’ll be unhappy that your after-tax return on investment has gone down; but it won’t change your investment decisions. If you previously had the choice between investment A yielding a 5% return and investment B yielding a 10% return, you’d choose B. Now you pay a 40% tax on capital income; you now have a choice between a 3% real return on A and a 6% real return on B—you’re still going to choose B. That’s probably why high marginal tax rates on income don’t reduce job growth—because most high incomes are capital incomes of one form or another; even when a CEO reports ordinary income it’s really due to profits and stock options, it’s not like he was paid a wage for work he did.
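The claim that a uniform capital income tax leaves the investment choice unchanged is easy to verify numerically (a minimal sketch with made-up returns):

```python
# A flat tax on all capital income scales every return by the same
# factor (1 - tax_rate), so the RANKING of investments -- and thus
# the choice -- is unchanged. Returns are illustrative.
returns = {"A": 0.05, "B": 0.10, "C": 0.02}
tax_rate = 0.40

after_tax = {name: r * (1 - tax_rate) for name, r in returns.items()}

best_before = max(returns, key=returns.get)
best_after = max(after_tax, key=after_tax.get)

print(best_before, best_after)          # B B -- the same choice
print(round(after_tax["B"], 2))         # 0.06 -- 10% becomes 6%, as in the text
```

Multiplying every option by the same positive constant is an order-preserving transformation, which is the whole argument in one line of arithmetic.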

To be fair, it does get more complicated when you include borrowing and interest rates (now you have the option of lending your money at interest or borrowing more from someone else, which may be taxed differently), and because it’s so easy to move money across borders you can have a relative incentive even when tax rates within a given nation are all the same. Don’t take this literally as saying that you can do whatever you want with taxes on capital income. But in fact you can do quite a lot, because you can change the real rate of return and have no incentive effect as long as you don’t change the relative rate of return. That’s different from wages, for which the real value of the wage can have a direct effect on employers and employees. (The only way to have the same effect on workers would be to somehow lower the real cost of working—make working easier or more fun—which actually sounds like a great idea if you can do it.) The people who are constantly telling us that workers need to tighten their belts but we mustn’t dare tax the “job creators” have the whole situation exactly backwards.

There’s something else I should bring up as well. In everything I’ve said above, I have taken as given the assumption that we need jobs. For many people, probably most Americans in fact, this is an unquestioned assumption, seemingly so obvious as to be self-evident; of course we need jobs, right? But no, actually, we don’t; what we need is production and distribution of wealth. We need to make food and clothing and houses—those are truly basic needs. We could even say we “need” (or at least want) to make televisions and computers and cars. As individuals and as a society we benefit from having these goods. And in our present capitalist economy, the way that we produce and distribute goods is through a system of jobs—you are paid to make goods, and then you can use that money to buy other goods. Don’t get me wrong; this system works pretty well, and for the most part I want to make small adjustments and reforms around the edges rather than throw the whole thing out. Thus far, other systems have not worked as well; when we have attempted to centrally plan production and distribution, the best-case scenario has been inefficiency and the worst-case scenario has been mass starvation.

But we should also be open to the possibility of other systems that are better than capitalism. We should be open to the possibility of a culture like, well, The Culture (and if you haven’t read any Iain Banks novels you should; I’d probably start with Player of Games), in which artificial intelligence and automation allows central planning to finally achieve efficient production and distribution. We should be open to the possibility of a culture like the Federation (and don’t tell me you haven’t seen Star Trek!), in which resources are so plentiful that anyone can have whatever they want, and people work not because they have to, but because they want to—it gives them meaning and purpose in their lives. Fanciful? Perhaps. But lightspeed worldwide communication and landing robots on other planets would have seemed pretty fanciful a century ago.

Capitalism is really an Industrial Era system. It was designed in, and for, a world in which the most important determinants of production are machines, raw materials, and labor hours. But we don’t live in that world anymore. The most important determinants of production are now ideas: software, research, patents, copyrights. Microsoft, Google, and Amazon don’t make things at all; they make ideas. Sony, IBM, Apple, and Toshiba make things, but those things are primarily for the production and dissemination of ideas. Ideas are just as valuable as things—if not more so—but they obey different rules.

Capitalism was designed for a world of rival, excludable goods with increasing marginal cost. Rival, meaning that if one person has it, someone else can’t have it anymore. We speak of piracy as “stealing”, but that’s totally wrong; if you steal something I have, I don’t have it anymore. If you pirate something I have, I still have it. If I gave you my computer, I wouldn’t have it anymore; but I can give you the ideas in this blog post and then we’ll both have them. Excludable, meaning that there is a way to prevent someone else from getting it if you don’t want them to. And increasing marginal cost, meaning that the more you make, the more it costs to make each one. Under these conditions, you get a very nice equilibrium that is efficient under competition.

But ideas are nonrival, they have nearly zero marginal cost, and we are increasingly finding that they aren’t even very excludable; DRM is astonishingly ineffective. Under these conditions, your nice efficient equilibrium completely evaporates. There can be many different equilibria, or no equilibrium at all; and the results are almost always inefficient. We have shoehorned capitalism onto an economy that it was not designed to deal with. Capitalism was designed for the Industrial Era; but we are now in the Information Era.

Indeed, you can see this in all our neoclassical growth models: K is physical capital—machines—and L is labor, and sometimes it is augmented with N—natural resources. But these typically only explain about 50% of the variation in economic output, so we add an extra term, A, which goes by many names: “productivity”, “efficiency”, “technology”; I think the most informative one is actually “the Solow residual”. It’s the residual; it’s the part we can’t explain, dare I say, the part capitalism isn’t designed to explain. It is, in short, made of ideas. One of my thesis papers is actually about this “total factor productivity”, and how a major component of it is made up of one class of ideas in particular: Corruption. Corruption isn’t a thing, some object in space. It’s a cultural norm, a systemic idea that permeates the thoughts and actions of the whole society. It affects what we do, whom we trust, how the rules are made, and how well we follow those rules. You can even think of capitalism as an idea, a system, a culture—and a good part of “productivity” can be accounted for by “market orientation”, which is to say how capitalist a nation is. I would like to see someday a new model that actually includes these factors as terms in the equation, instead of throwing them all together in the mysterious A that we don’t understand.
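The Solow residual described above can be computed directly from a Cobb-Douglas production function, the standard textbook form of these growth models (the numbers and the capital share alpha = 0.3 here are illustrative assumptions, not figures from the text):

```python
# Cobb-Douglas production: Y = A * K^alpha * L^(1 - alpha).
# Given measured output Y, capital K, and labor L, the residual A is
# the part of output the measured inputs can't explain.
def solow_residual(Y, K, L, alpha=0.3):
    return Y / (K**alpha * L**(1 - alpha))

# Illustrative numbers:
Y, K, L = 1000.0, 5000.0, 200.0
A = solow_residual(Y, K, L)
print(A)  # everything not explained by K and L gets lumped in here
```

Whatever A turns out to be, the point stands: it is computed as a leftover, not measured directly, which is exactly why corruption, norms, and market orientation all end up hiding inside it.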

With this in mind, we should be asking ourselves whether we need jobs at all, because jobs are a system designed for the production of physical goods in the Industrial Era. Now that we live in the Information Era and most of our production is in the form of ideas, do we still need jobs? Does everyone need a job? If you’re trying to make cars for a million people, it may not take a million people to do it, but it’s going to take thousands. But if you’re trying to design a car for a million people, or make a computer game about cars for a million people to play, that can be done with a lot fewer people. Ideas can be made by a few and then disseminated to the world. General Motors has 200,000 employees (and used to have about twice as many in the 1970s); Blizzard Entertainment has fewer than 5,000. It’s not because they produce for fewer people; GM sells about 3 million cars a year, and Starcraft sold over 11 million copies. Starcraft came out in 1998, so I added up how many cars GM sold in the US since 1998: 61 million. That’s still 3.28 employees per thousand cars sold, but only 0.45 employees per thousand computer games sold.
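The employees-per-unit comparison works out as follows, using the figures from the paragraph above:

```python
# Rough measure of how labor-intensive each product is:
# employees per thousand units sold. Figures from the text.
gm_employees = 200_000
gm_cars_sold = 61_000_000        # US sales since 1998

blizzard_employees = 5_000
starcraft_copies = 11_000_000

gm_ratio = gm_employees / (gm_cars_sold / 1000)
blizzard_ratio = blizzard_employees / (starcraft_copies / 1000)

print(round(gm_ratio, 2))        # 3.28 employees per thousand cars
print(round(blizzard_ratio, 2))  # 0.45 employees per thousand copies
```

Roughly a sevenfold difference in labor per unit, which is the nonrival-goods point in miniature: an idea is made once and copied, while each car must be built.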

Still, I don’t have a detailed account of what this new jobless economic system might look like. For now, it’s probably best if people have jobs. But if we really want to create jobs, we need to increase aggregate demand. That most likely means either reducing debt or giving more money to consumers. It certainly doesn’t have anything to do with tax cuts for the rich.

And really, this is pretty obvious; if you stop and think for a minute about why businesses create jobs, you realize that it has to do with demand for products, not how nicely the government treats them or how much extra cash they have lying around. I actually have trouble believing that the people who say “job creators” unironically actually believe the words they are saying. Do they honestly think that rich people create jobs out of sheer brilliance and benevolence, but are constrained by how much money they have and “go on strike” if the government doesn’t kowtow to them?

The only way I can see that they could actually believe this sort of thing would be if they read so much Ayn Rand that it totally infested their brains and rendered them incapable of thinking outside that framework. Perhaps Krugman is right, and Rand Paul really does believe that he is John Galt. Maybe they really do honestly believe that this is how economics works—in which case it’s no wonder that our economy is in trouble. Indeed, the marvel is that it works at all.

Should we raise the minimum wage?

JDN 2456949 PDT 10:22.

The minimum wage is an economic issue that most people are familiar with; a large portion of the population has worked for minimum wage at some point in their lives, and those who haven’t generally know someone who has. As Chris Rock famously remarked (in the recording, Chris Rock, as usual, uses some foul language), “You know what that means when they pay you minimum wage? You know what they’re trying to tell you? It’s like, ‘Hey, if I could pay you less, I would; but it’s against the law.’ ”

The minimum wage was last raised in 2009, but adjusted for inflation its real value has been trending downward since 1968. The dollar values are going up, but not fast enough to keep up with inflation.

So, should we raise it again? How much? Should we just match it to inflation, or actually raise it higher in real terms? Productivity (in terms of GDP per worker) has more than doubled since 1968, so perhaps the minimum wage should double as well?
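As a rough illustration of the difference between the two adjustments (the 1968 minimum wage of $1.60 is the historical figure; the price and productivity multipliers are approximate assumptions, in line with the “more than doubled” claim above):

```python
# Two ways to "restore" the 1968 minimum wage. Multipliers approximate.
wage_1968 = 1.60               # nominal minimum wage in 1968

cpi_multiplier = 6.8           # rough rise in consumer prices, 1968 -> mid-2010s
productivity_multiplier = 2.2  # rough growth in output per worker over the same span

inflation_adjusted = wage_1968 * cpi_multiplier
productivity_adjusted = inflation_adjusted * productivity_multiplier

print(round(inflation_adjusted, 2))    # ~10.88: just keeping up with prices
print(round(productivity_adjusted, 2)) # ~23.94: also sharing productivity gains
```

The gap between those two figures is the entire policy debate in two numbers: matching inflation merely restores 1968 purchasing power, while matching productivity would more than double it.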

There are two major sides in this debate, and I basically disagree with both of them.

The first is the right-wing view (here espoused by the self-avowed “Objectivist” Don Watkins) that the minimum wage should be abolished entirely because it is an arbitrary price floor that prevents workers from selling their labor at whatever wage the market will bear. He argues that the free market is the only way the value of labor should be assessed and the government has no business getting involved.

On the other end of the spectrum we have Robert Reich, who thinks we should definitely raise the minimum wage and it would be the best way to lift workers out of poverty. He argues that by providing minimum-wage workers with welfare and Medicaid, we are effectively subsidizing employers to pay lower wages. While I sympathize a good deal more with this view, I still don’t think it’s quite right.

Why not? Because Watkins is right about one thing: The minimum wage is, in fact, an arbitrary price floor. Out of all the possible wages that an employer could pay, how did we decide that this one should be the lowest? And the same applies to everyone, no matter who they are or what sort of work they do?

What Watkins gets wrong—and Reich gets right—is that wages are not actually set in a free and competitive market. Large corporations have market power; they can influence wages and prices to their own advantage. They use monopoly power to raise prices, and its inverse, monopsony power, to lower wages. The workers who are making a minimum wage of $7.25 wouldn’t necessarily make $7.25 in a competitive market; they could make more than that. All we know, actually, is that they would make at least this much, because if a worker’s marginal productivity is below the minimum wage the corporation simply wouldn’t have hired them.

Monopsony power doesn’t just lower wages; it also reduces employment. One of the ways that corporations can control wages is by controlling hiring; if they tried to hire more people, they’d have to offer a higher wage, so instead they hire fewer people. Under these circumstances, a higher minimum wage can actually create jobs, as Reich argues it will. And in this particular case I think he’s right about that, because corporations have enormous market power to hold wages down and in the Second Depression we have a huge amount of unused productive capacity. But this isn’t true in general. If markets are competitive, then raising minimum wage just causes unemployment. Even when corporations have market power, if there isn’t much unused capacity then raising minimum wage will just lead them to raise prices instead of hiring more workers.

Reich is also wrong about this idea that welfare payments subsidize low wages. On the contrary, the stronger your welfare system, the higher your wages will be. The reason is quite simple: A stronger welfare system gives workers more bargaining power. If not getting this job means you turn to prostitution or starve to death, then you’re going to take just about any wage they offer you. (I don’t entirely agree with Krugman’s defense of sweatshops—I believe there are ways to increase trade without allowing oppressive working conditions—but he makes this point quite vividly.) On the other hand, if you live in the US with a moderate welfare system, you can sometimes afford to say no; you might end up broke or worse, homeless, but you’re unlikely to starve to death because at least you have food stamps. And in a nation with a really robust welfare system like Sweden, you can walk away from any employer who offers to pay you less than your labor is worth, because you know that even if you can’t find a job for a while your basic livelihood will be protected. As a result, stronger welfare programs make labor markets more competitive and raise wages. Welfare and Medicaid do not subsidize low-wage employers; they exert pressure on employers to raise their low wages. Indeed, a sufficiently strong welfare system could render the minimum wage redundant, as I’ll get back to at the end of this post.

Of course, I am above all an empiricist; all theory must bow down before the data. So what does the data say? Does raising the minimum wage create jobs or destroy jobs? Our best answer from compiling various studies is… neither. Moderate increases in the minimum wage have no discernible effect on employment. In some studies we’ve found increases, in others decreases, but the overall average effect across many studies is indistinguishable from zero.

Of course, a sufficiently large increase is going to decrease employment; a Fox News reporter once famously asked: “Why not raise the minimum wage to $100,000 an hour!?” (which Jon Stewart aptly satirized as “Why not pay people in cocaine and unicorns!?”) Yes, raising the minimum wage to $100,000 an hour would create massive inflation and unemployment. But that really says nothing about whether raising the minimum wage to $10 or $20 would be a good idea. Dousing your car in 4,000 gallons of gasoline is a bad idea, but putting 10 gallons in the tank is generally necessary for its proper functioning.

This kind of argument is actually pretty common among Republicans, come to think of it. Take the Laffer Curve, for instance; the usual invocation amounts to saying that since a 99% tax on everyone would damage the economy (which is obviously true), a 40% tax specifically on millionaires must have the same effect. Another good one is Rush Limbaugh’s argument that if unemployment benefits are good, why not just put everyone on unemployment benefits? Well, again, because there’s a difference between doing something for some people sometimes and doing it for everyone all the time. There are these things called numbers; they measure whether something is bigger or smaller instead of just “there” or “not there”. You might want to learn about that.

Since moderate increases in the minimum wage have no effect on unemployment, and we are currently under conditions of extremely low—in fact, dangerously low—inflation, I think on balance we should go with Reich: Raising the minimum wage would do more good than harm.

But in general, is the minimum wage the best way to help workers out of poverty? No, I don’t think it is. It’s awkward and heavy-handed; it involves trying to figure out what the optimal wage should be and writing it down in legislation, instead of regulating markets so that they will naturally seek that optimal level and respond to changes in circumstances. It only helps workers at the very bottom: Someone making $12 an hour is hardly rich, but they won’t benefit from raising the minimum wage to $10; in fact they might be worse off, if that increase triggers inflation that lowers the real value of their $12 wage.

What do I propose instead? A basic income. There should be a cash payment that every adult citizen receives, once a month, directly from the government—no questions asked. You don’t have to be unemployed, you don’t have to be disabled, you don’t have to be looking for work. You don’t have to spend it on anything in particular; you can use it for food, for housing, for transportation; or if you like you can use it for entertainment or save it for a rainy day. We don’t keep track of what you do with it, because it’s your own freedom and none of our business. We just give you this money as your dividends for being a shareholder in the United States of America.

This would be extremely easy to implement—the IRS already has all the necessary infrastructure, they just need to turn some minus signs into plus signs. We could remove all the bureaucracy involved in administering TANF and SNAP and Medicaid, because there’s no longer any reason to keep track of who is in poverty since nobody is. We could in fact fold the $500 billion a year we currently spend on means-tested programs into the basic income itself. We could pull another $300 billion from defense spending while still solidly retaining the world’s most powerful military.

Which brings me to the next point: How much would this cost? Probably less than you think. I propose indexing the basic income to the poverty line for households of 2 or more; since currently a household of 2 or more at the poverty line makes $15,730 per year, the basic income would be $7,865 per person per year. The total cost of giving that amount to each of the 243 million adults in the United States would be $1.9 trillion, or about 12% of our GDP. If we fold in the means-tested programs, that lowers the net cost to $1.4 trillion, 9% of GDP. This means that an additional flat tax of 9% would be enough to cover the entire amount, even if we don’t cut any other government spending.
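The cost arithmetic above can be sanity-checked in a few lines. The GDP figure is my round assumption (about $16.2 trillion, approximately the US GDP of the period); the other inputs come straight from the text.

```python
# Back-of-the-envelope cost of the proposed basic income.
POVERTY_LINE_HH2 = 15_730      # $/year, poverty line for a household of 2+
ADULTS = 243e6                 # US adults
GDP = 16.2e12                  # assumed approximate US GDP, $/year
MEANS_TESTED = 0.5e12          # current means-tested spending, $/year

basic_income = POVERTY_LINE_HH2 / 2          # $7,865 per adult per year
gross_cost = basic_income * ADULTS           # ~$1.9 trillion
net_cost = gross_cost - MEANS_TESTED         # ~$1.4 trillion

print(f"Per-adult income: ${basic_income:,.0f}")
print(f"Gross cost: ${gross_cost / 1e12:.2f}T ({gross_cost / GDP:.0%} of GDP)")
print(f"Net cost:   ${net_cost / 1e12:.2f}T ({net_cost / GDP:.0%} of GDP)")
```

Run it and the 12%-of-GDP gross figure and 9% net figure in the text fall right out.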

If you use a progressive tax system like I recommended a couple of posts ago, you could raise this much with a tax on less than 5% of utility, which means that someone making the median income of $30,000 would only pay 5.3% more than they presently do. At the mean income of $50,000, you’d only pay 7.7%. And keep in mind that you are also receiving the additional $7,865; so in fact in both cases you actually end up with more than you had before the basic income was implemented. The break-even point is at about $80,000, where you pay an extra 9.9% ($7,920) and receive $7,865, so your after-tax income is now $79,945. Anyone making less than $80,000 per year actually gains from this deal; the only people who pay more than they receive are those who make more than $80,000. This is about the average income of someone in the fourth quintile (the range where 60% to 80% of the population is below you), so this means that roughly 70% of Americans would benefit from this program.

With this system in place, we wouldn’t need a minimum wage. Working full-time at our current minimum wage makes you $7.25*40*52 = $15,080 per year. If you are a single person, you’re already getting $7,865 from the basic income, which means you’ll still have more than you presently do as long as your employer pays you at least $3.47 per hour. And if they don’t? Well then you can just quit, knowing that at least you have that $7,865. If you’re married, it’s even better; the two of you already get $15,730 from the basic income. If you were previously raising a family working full-time on minimum wage while your spouse was unemployed, guess what: You will actually make more money after the policy no matter what wage your employer pays you.
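The $3.47 break-even figure is simple arithmetic; a quick sketch:

```python
# How the break-even hourly wage falls out of the basic income proposal.
FULL_TIME_HOURS = 40 * 52          # 2,080 hours/year
MIN_WAGE = 7.25                    # current federal minimum, $/hour
BASIC_INCOME = 7_865               # per adult per year, from the proposal

current_fulltime = MIN_WAGE * FULL_TIME_HOURS        # $15,080/year
# The wage at which (wage * hours) + basic income equals today's
# full-time minimum-wage income:
breakeven_wage = (current_fulltime - BASIC_INCOME) / FULL_TIME_HOURS

print(f"Full-time minimum-wage income today: ${current_fulltime:,.0f}")
print(f"Break-even hourly wage with basic income: ${breakeven_wage:.2f}")
```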

This system can adapt to changes in the market, because it is indexed to the poverty level (which is indexed to inflation), and also because it doesn’t say anything about what wage an employer pays. They can pay as little or as much as the market will bear; but the market is going to bear more, because workers can afford to quit. Billionaires are going to hate this plan, because it raises their taxes (by about 40%) and makes it harder for them to exploit workers. But for 70% of Americans, this plan is a pretty good deal.

What are the limits to growth?

JDN 2456941 PDT 12:25.

Paul Krugman recently wrote a column about the “limits to growth” community, and as usual, it’s good stuff; his example of how steamships substituted more ships for less fuel is quite compelling. But there’s a much stronger argument to be made against “limits to growth”, and I thought I’d make it here.

The basic idea, most famously propounded by Jay Forrester but still with many proponents today (and actually owing quite a bit to Thomas Malthus), is this: There’s only so much stuff in the world. If we keep adding more people and trying to give people higher standards of living, we’re going to exhaust all the stuff, and then we’ll be in big trouble.

This argument seems intuitively reasonable, but turns out to be economically naïve. It can take several specific forms, from the basically reasonable to the utterly ridiculous. On the former end is “peak oil”, the point at which we reach a maximum rate of oil extraction. We’re actually past that point in most places, and it won’t be long before the whole world crosses that line. So yes, we really are running out of oil, and we need to transition to other fuels as quickly as possible. On the latter end is the original Malthusian argument (we now have much more food per person worldwide than they did in Malthus’s time—that’s why ending world hunger is a realistic option now), and, sadly, the argument Mark Buchanan made a few days ago. No, you don’t always need more energy to produce more economic output—as Krugman’s example cleverly demonstrates. You can use other methods to improve your energy efficiency, and that doesn’t necessarily require new technology.

Here’s the part that Krugman missed: Even if we need more energy, there’s plenty of room at the top. Sunlight delivers about 1.3 kW/m^2 above the atmosphere, and the Earth intercepts it over its cross-section of about pi*R^2 = 1.3e14 m^2 (only the lit disk collects light, not the full 5e14 m^2 of surface area). That means that if we could somehow capture all the sunlight that hits the Earth, we’d have about 1.7e17 W, which is about 1.5e18 kilowatt-hours per year. Total world energy consumption is about 140,000 terawatt-hours per year, which is 1.4e14 kilowatt-hours per year. That means we could increase energy consumption by a factor of ten thousand just using Earth-based solar power (Covering the oceans with synthetic algae? A fleet of high-altitude balloons covered in high-efficiency solar panels?). That’s not including fission power, which is already economically efficient, or fusion power, which has passed break-even and may soon become economically feasible as well. Fusion power is only limited by the size of your reactor and your quantity of deuterium, and deuterium is found in ocean water (about 33 milligrams per liter), not to mention permeating all of outer space. If we can figure out how to fuse ordinary hydrogen, well now our fuel is literally the most abundant substance in the universe.

And what if we move beyond the Earth? What if we somehow captured not just the solar energy that hits the Earth, but the totality of solar energy that the Sun itself releases? The Sun’s luminosity is about 3.8e26 W, which works out to about 3.3e31 joules per day, or roughly 9e24 kilowatt-hours per day—over twenty trillion times as much energy as we currently consume. It is literally enough to annihilate entire planets, which the Sun would certainly do if you put a planet near enough to it. A theoretical construct to capture all this energy is called a Dyson Sphere, and the ability to construct one officially makes you a Type 2 Kardashev Civilization. (We currently stand at about Type 0.7. Building that worldwide solar network would raise us to Type 1.)
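These Fermi estimates can be sketched in a few lines. I use the cross-section pi*R^2 for the Earth-based figure, since only the lit disk intercepts sunlight; all inputs are round textbook constants, so treat the outputs as order-of-magnitude only.

```python
import math

# Order-of-magnitude energy ceilings: Earth-based solar vs. a Dyson Sphere.
SOLAR_CONSTANT = 1.3e3        # W/m^2 above the atmosphere
EARTH_RADIUS = 6.37e6         # m
SUN_LUMINOSITY = 3.8e26       # W
WORLD_USE = 1.4e14            # kWh/year (~140,000 TWh)
HOURS_PER_YEAR = 8766

# Sunlight intercepted by Earth: solar constant times the lit disk, pi*R^2.
earth_intercept_W = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
earth_kwh_year = earth_intercept_W * HOURS_PER_YEAR / 1e3

# Total output of the Sun over a year, in kWh.
sun_kwh_year = SUN_LUMINOSITY * HOURS_PER_YEAR / 1e3

print(f"Earth-based solar ceiling: {earth_kwh_year / WORLD_USE:,.0f}x current use")
print(f"Dyson Sphere ceiling: {sun_kwh_year / WORLD_USE:.1e}x current use")
```

The first ratio comes out around ten thousand, the second around twenty trillion.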

Can we actually capture all that energy with our current technology? Of course not. Indeed, we probably won’t have that technology for centuries if not millennia. But if your claim—as Mark Buchanan’s was—is about fundamental physical limits, then you should be talking about Dyson Spheres. If you’re not, then we are really talking about practical economic limits.

Are there practical economic limits to growth? Of course there are; indeed, they are what actually constrains growth in the real world. That’s why the US can’t grow above 2% and China won’t be growing at 7% much longer. (I am rather disturbed by the fact that many of the Chinese nationals I know don’t appreciate this; they seem to believe the propaganda that this rapid growth is something fundamentally better about the Chinese system, rather than the simple economic fact that it’s easier to grow rapidly when you are starting very small. I had a conversation with a man the other day who honestly seemed to think that Macau could sustain its 12% annual GDP growth—driven by gambling, no less! Zero real productivity!—into the indefinite future. Don’t get me wrong, I’m thrilled that China is growing so fast and lifting so many people out of poverty. But no remotely credible economist believes they can sustain this growth forever. The best-case scenario is to follow the pattern of Korea, rising from Third World to First World status in a few generations. Korea grew astonishingly fast from about 1950 to 1990, but now that they’ve made it, their growth rate is only 3%.)

There is also a reasonable argument to be made about the economic tradeoffs involved in fighting climate change and natural resource depletion. While the people of Brazil may like to have more firewood and space for farming, the fact is the rest of us need that Amazon in order to breathe. While any given fisherman may be rational in the amount of fish he catches, worldwide we are running out of fish. And while we Americans may love our low gas prices (and become furious when they rise even slightly), the fact is, our oil subsidies are costing hundreds of billions of dollars and endangering millions of lives.

We may in fact have to bear some short-term cost in economic output in order to ensure long-term environmental sustainability (though to return to Krugman, that cost may be a lot less than many people think!). Economic growth does slow down as you reach high standards of living, and it may even continue to slow down as technology begins to reach diminishing returns (though this is much harder to forecast). So yes, in that sense there are limits to growth. But the really fundamental limits aren’t something we have to worry about for at least a thousand years. Right now, it’s just a question of good economic policy.

Pareto Efficiency: Why we need it—and why it’s not enough

JDN 2456914 PDT 11:45.

I already briefly mentioned the concept in an earlier post, but Pareto-efficiency is so fundamental to both ethics and economics that I decided I would spend some more time explaining exactly what it’s about.

This is the core idea: A system is Pareto-efficient if you can’t make anyone better off without also making someone else worse off. It is Pareto-inefficient if the opposite is true, and you could improve someone’s situation without hurting anyone else.

Improving someone’s situation without harming anyone else is called a Pareto-improvement. A system is Pareto-efficient if and only if there are no possible Pareto-improvements.

Zero-sum games are always Pareto-efficient. If the game is about how we distribute the same $10 between two people, any dollar I get is a dollar you don’t get, so no matter what we do, we can’t make either of us better off without harming the other. You may have ideas about what the fair or right solution is—and I’ll get back to that shortly—but all possible distributions are Pareto-efficient.

Where Pareto-efficiency gets interesting is in nonzero-sum games. The most famous and most important such game is the so-called Prisoner’s Dilemma; I don’t like the standard story used to set up the game, so I’m going to give you my own. Two corporations, Alphacomp and Betatech, make PCs. The computers they make are of basically the same quality and neither is a big brand name, so very few customers are going to choose on anything except price. Each PC costs $300 to manufacture, and most customers are willing to buy a PC as long as it’s no more than $1000. Suppose there are 1000 customers buying. Now the question is, what price do they set? They would both make the most profit if they set the price at $1000: customers would still buy, the two companies would split the market, and each would make $700 on each of its 500 units, for $350,000 apiece. But now suppose Alphacomp sets a price of $1000; Betatech could undercut them by setting a price of $999 and selling twice as many PCs, making $699,000. And then Alphacomp could respond by setting the price at $998, and so on. The only stable end result if they are both selfish profit-maximizers—the Nash equilibrium—is when the price they both set is $301, meaning each company only profits $1 per PC, making just $500. Indeed, this result is what we call in economics perfect competition. This is great for consumers, but not so great for the companies.

If you focus on the most important choice, $1000 versus $999—to collude or to compete—we can set up a table of how much each company would profit by making that choice (a payoff matrix or normal form game in game theory jargon).

             A: $999             A: $1000
B: $999      A: $349k, B: $349k  A: $0, B: $699k
B: $1000     A: $699k, B: $0     A: $350k, B: $350k
Obviously the choice that makes both companies best-off is for both to charge $1000; that is Pareto-efficient. But it’s also Pareto-efficient for Alphacomp to choose $999 while Betatech chooses $1000, because Alphacomp then sells twice as many computers. We have made someone worse off—Betatech—but it’s still Pareto-efficient because we couldn’t give Betatech back what they lost without taking some of what Alphacomp gained.

There’s only one option that’s not Pareto-efficient: If both companies charge $999, they could both have made more money if they’d charged $1000 instead. The problem is, that’s not the Nash equilibrium; the stable state is the one where they set the price lower.
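A quick way to verify that $999/$999 is the unique Nash equilibrium is to check best responses mechanically; here is a minimal sketch using the payoffs above (in thousands of dollars):

```python
# Best-response check for the two-price game: a cell is a Nash equilibrium
# if neither firm can gain by unilaterally switching its price.
PAYOFFS = {  # (A's price, B's price) -> (A's profit, B's profit), in $k
    (999, 999):   (349, 349),
    (999, 1000):  (699, 0),
    (1000, 999):  (0, 699),
    (1000, 1000): (350, 350),
}
PRICES = [999, 1000]

def is_nash(a, b):
    """True if neither player profits from a unilateral deviation."""
    pa, pb = PAYOFFS[(a, b)]
    return (all(PAYOFFS[(a2, b)][0] <= pa for a2 in PRICES)
            and all(PAYOFFS[(a, b2)][1] <= pb for b2 in PRICES))

equilibria = [cell for cell in PAYOFFS if is_nash(*cell)]
print(equilibria)  # [(999, 999)] -- the one Pareto-inefficient cell
```

Note that $1000/$1000, which both firms prefer, fails the check: either firm gains $349k by undercutting.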

This means that the only case that isn’t Pareto-efficient is the one that the system will naturally trend toward if both companies are selfish profit-maximizers. (And while most human beings are nothing like that, most corporations actually get pretty close. They aren’t infinite, but they’re huge; they aren’t identical, but they’re very similar; and they basically are psychopaths.)

In jargon, we say the Nash equilibrium of a Prisoner’s Dilemma is Pareto-inefficient. That one sentence is basically why John Nash was such a big deal; up until that point, everyone had assumed that if everyone acted in their own self-interest, the end result would have to be Pareto-efficient; Nash proved that this isn’t true at all. Everyone acting in their own self-interest can doom us all.

It’s not hard to see why Pareto-efficiency would be a good thing: if we can make someone better off without hurting anyone else, why wouldn’t we? What’s harder for most people—and even most economists—to understand is that just because an outcome is Pareto-efficient, that doesn’t mean it’s good.

I think this is easiest to see in zero-sum games, so let’s go back to my little game of distributing the same $10. Let’s say it’s all within my power to choose—this is called the ultimatum game. If I take $9 for myself and only give you $1, is that Pareto-efficient? It sure is; for me to give you any more, I’d have to lose some for myself. But is it fair? Obviously not! The fair option is for me to go fifty-fifty, $5 and $5; and maybe you’d forgive me if I went sixty-forty, $6 and $4. But if I take $9 and only offer you $1, you know you’re getting a raw deal.

Actually, as the game is often played, you have the choice to say, “Forget it; if that’s your offer, we both get nothing.” In that case the game is nonzero-sum, and the choice you’ve just made is not Pareto-efficient! Neoclassicists are typically baffled at the fact that you would turn down that free $1, paltry as it may be; but I’m not baffled at all, and I’d probably do the same thing in your place. You’re willing to pay that $1 to punish me for being so stingy. And indeed, if you allow this punishment option, guess what? People aren’t as stingy! If you play the game without the rejection option, people typically take about $7 and give about $3 (still fairer than the $9/$1 split, you may notice; most people aren’t psychopaths), but if you allow it, people typically take about $6 and give about $4. Now, these are pretty small sums of money, so it’s a fair question what people might do if $100,000 were on the table and they were offered $10,000. But that doesn’t mean people aren’t willing to stand up for fairness; it just means that they’re only willing to go so far. They’ll take a $1 hit to punish someone for being unfair, but that $10,000 hit is just too much. I suppose this means most of us do what the Guess Who told us: “You can sell your soul, but don’t you sell it too cheap!”

Now, let’s move on to the more complicated—and more realistic—scenario of a nonzero-sum game. In fact, let’s make the “game” a real-world situation. Suppose Congress is debating a bill that would introduce a 70% marginal income tax on the top 1% to fund a basic income. (Please, can we debate that, instead of proposing a balanced-budget amendment that would cripple US fiscal policy indefinitely and lead to a permanent depression?)

This tax would raise about 14% of GDP in revenue, or about $2.4 trillion a year (yes, really). It would then provide, for every man, woman and child in America, a $7000 per year income, no questions asked. For a family of four, that would be $28,000, which is bound to make their lives better.

But of course it would also take a lot of money from the top 1%; Mitt Romney would only make $6 million a year instead of $20 million, and Bill Gates would have to settle for $2.4 billion a year instead of $8 billion. Since it’s the whole top 1%, it would also hurt a lot of people with more moderate high incomes, like your average neurosurgeon or Paul Krugman, who each make about $500,000 a year. About $100,000 of that is above the cutoff for the top 1%, and since they already pay roughly 35–40% on that slice, raising the rate to 70% would cost them about $30,000 more than they currently pay; so if they were paying $175,000 they’re now paying about $205,000. Once taking home $325,000, now about $295,000. (Probably not as big a difference as you thought, right? Most people do not seem to understand how marginal tax rates work, as evinced by “Joe the Plumber” who thought that if he made $250,001 he would be taxed at the top rate on the whole amount—no, just that last $1.)
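To make the marginal-rate point concrete, here is a toy bracket calculator. The bracket boundaries and rates are illustrative assumptions, not the actual US schedule; the top bracket is the hypothetical 70% rate on income above an assumed $400,000 top-1% cutoff.

```python
# Toy marginal tax schedule: each rate applies only to income within its
# bracket. Brackets are (lower bound, marginal rate) -- illustrative only.
BRACKETS = [(0, 0.10), (50_000, 0.25), (250_000, 0.35), (400_000, 0.70)]

def tax(income):
    """Total tax owed, applying each rate only to its own bracket."""
    total = 0.0
    for (lo, rate), nxt in zip(BRACKETS, BRACKETS[1:] + [(float("inf"), 0)]):
        hi = nxt[0]
        if income > lo:
            total += (min(income, hi) - lo) * rate
    return total

# "Joe the Plumber": crossing $250,000 by $1 raises your tax by 35 cents,
# not by re-taxing the whole amount at the higher rate.
print(round(tax(250_001) - tax(250_000), 2))  # 0.35
```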

You can even suppose that it would hurt the economy as a whole, though in fact there’s no evidence of that—we had tax rates like this in the 1960s and our economy did just fine. The basic income itself would inject so much spending into the economy that we might actually see more growth. But okay, for the sake of argument let’s suppose it also drops our per-capita GDP by 5%, from $53,000 to $50,300; that really doesn’t sound so bad, and any bigger drop than that is a totally unreasonable estimate based on prejudice rather than data. For the same tax rate, we might then have to drop the basic income a bit too, to say $6,600 instead of $7,000.

So, this is not a Pareto-improvement; we’re making some people better off, but others worse off. In fact, the way economists usually estimate Pareto-efficiency based on so-called “economic welfare”, they really just count up the total number of dollars and divide by the number of people and call it a day; so if we lose 5% in GDP they would register this as a Pareto-loss. (Yes, that’s a ridiculous way to do it for obvious reasons—$1 to Mitt Romney isn’t worth as much as it is to you and me—but it’s still how it’s usually done.)

But does that mean that it’s a bad idea? Not at all. In fact, if you assume that the real value—the utility—of a dollar decreases exponentially with each dollar you have, this policy could almost double the total happiness in US society. If you use a logarithm instead, it’s not quite as impressive; it’s only about a 20% improvement in total happiness—in other words, “only” making as much difference to the happiness of Americans from 2014 to 2015 as the entire period of economic growth from 1900 to 2000.
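The logic behind these happiness estimates can be illustrated with a toy example: with diminishing marginal utility (logarithmic here), shifting dollars from a rich household to poor ones raises total utility even if some value is lost along the way. The incomes and the 5% efficiency loss below are made-up illustrations, not the calculation behind the figures above.

```python
import math

def total_log_utility(incomes):
    """Sum of log utilities -- one common diminishing-returns model."""
    return sum(math.log(y) for y in incomes)

before = [10_000, 30_000, 50_000, 500_000]
# Take $20,000 from the top earner, deliver only $19,000 (5% lost to a
# hypothetical efficiency cost), split among the bottom three.
after = [10_000 + 6_333, 30_000 + 6_333, 50_000 + 6_334, 480_000]

# Total dollars went DOWN, yet total utility went UP.
print(sum(after) < sum(before))                              # True
print(total_log_utility(after) > total_log_utility(before))  # True
```

This is exactly why counting up dollars and calling it “welfare” misses the point: a measure that only tracks the total can register a loss where every reasonable utility measure registers a gain.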

If right now you’re thinking, “Wow! Why aren’t we doing that?” that’s good, because I’ve been thinking the same thing for years. And maybe if we keep talking about it enough we can get people to start voting on it and actually make it happen.

But in order to make things like that happen, we must first get past the idea that Pareto-efficiency is the only thing that matters in moral decisions. And once again, that means overcoming the standard modes of thinking in neoclassical economics.

Something strange happened to economics in about 1950. Before that, economists from Marx to Smith to Keynes were always talking about differences in utility, marginal utility of wealth, how to maximize utility. But then economists stopped being comfortable talking about happiness, deciding (for reasons I still do not quite grasp) that it was “unscientific”, so they eschewed all discussion of the subject. Since we still needed to know why people choose what they do, a new framework was created revolving around “preferences”, which are a simple binary relation—you either prefer it or you don’t, you can’t like it “a lot more” or “a little more”—that is supposedly more measurable and therefore more “scientific”. But under this framework, there’s no way to say that giving a dollar to a homeless person makes a bigger difference to them than giving the same dollar to Mitt Romney, because a “bigger difference” is something you’ve defined out of existence. All you can say is that each would prefer to receive the dollar, and that both Mitt Romney and the homeless person would, given the choice, prefer to be Mitt Romney. While both of these things are true, it does seem to be kind of missing the point, doesn’t it?

There are stirrings of returning to actual talk about measuring actual (“cardinal”) utility, but still preferences (so-called “ordinal utility”) are the dominant framework. And in this framework, there’s really only one way to evaluate a situation as good or bad, and that’s Pareto-efficiency.

Actually, that’s not quite right; John Rawls cleverly came up with a way around this problem, by using the idea of “maximin”—maximize the minimum. Since each would prefer to be Romney, given the chance, we can say that the homeless person is worse off than Mitt Romney, and therefore say that it’s better to make the homeless person better off. We can’t say how much better, but at least we can say that it’s better, because we’re raising the floor instead of the ceiling. This is certainly a dramatic improvement, and on these grounds alone you can argue for the basic income—your floor is now explicitly set at the $6600 per year of the basic income.

But is that really all we can say? Think about how you make your own decisions; do you only speak in terms of strict preferences? I like Coke more than Pepsi; I like massages better than being stabbed. If preference theory is right, then there is no greater distance in the latter case than the former, because this whole notion of “distance” is unscientific. I guess we could expand the preference over groups of goods (baskets as they are generally called), and say that I prefer the set “drink Pepsi and get a massage” to the set “drink Coke and get stabbed”, which is certainly true. But do we really want to have to define that for every single possible combination of things that might happen to me? Suppose there are 1000 things that could happen to me at any given time, which is surely conservative. In that case there are 2^1000 ≈ 10^301 possible combinations. If I were really just reading off a table of unrelated preference relations, there wouldn’t be room in my brain—or my planet—to store it, nor enough time in the history of the universe to read it. Even imposing rational constraints like transitivity doesn’t shrink the set anywhere near small enough—at best maybe now it’s 10^20; well done, now I theoretically could make one decision every billion years or so. At some point doesn’t it become a lot more parsimonious—dare I say, more scientific—to think that I am using some more organized measure than that? It certainly feels like I am; even if I couldn’t exactly quantify it, I can definitely say that some differences in my happiness are large and others are small. The mild annoyance of drinking Pepsi instead of Coke will melt away in the massage, but no amount of Coke deliciousness is going to overcome the agony of being stabbed.

And indeed if you give people surveys and ask them how much they like things or how strongly they feel about things, they have no problem giving you answers out of 5 stars or on a scale from 1 to 10. Very few survey participants ever write in the comments box: “I was unable to take this survey because cardinal utility does not exist and I can only express binary preferences.” A few do write 1s and 10s on everything, but even those are fairly rare. This “cardinal utility” that supposedly doesn’t exist is the entire basis of the scoring system on Netflix and Amazon. In fact, if you use cardinal utility in voting, it is mathematically provable that you have the best possible voting system, which may have something to do with why Netflix and Amazon like it. (That’s another big “Why aren’t we doing this already?”)

If you can actually measure utility in this way, then there’s really not much reason to worry about Pareto-efficiency. If you just maximize utility, you’ll automatically get a Pareto-efficient result; but the converse is not true because there are plenty of Pareto-efficient scenarios that don’t maximize utility. Thinking back to our ultimatum game, all options are Pareto-efficient, but you can actually prove that the $5/$5 choice is the utility-maximizing one, if the two players have the same amount of wealth to start with. (Admittedly for those small amounts there isn’t much difference; but that’s also not too surprising, since $5 isn’t going to change anybody’s life.) And if they don’t—suppose I’m rich and you’re poor and we play the game—well, maybe I should give you more, precisely because we both know you need it more.
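Here is a minimal sketch of that claim about the ultimatum game, assuming both players start with the same (hypothetical) baseline wealth and log utility; the $100 baseline is an illustrative assumption, not anything from the game itself.

```python
import math

WEALTH = 100.0   # assumed common starting wealth for both players
POT = 10.0       # the $10 being divided

def total_utility(my_share):
    """Sum of both players' log utilities after the split."""
    return math.log(WEALTH + my_share) + math.log(WEALTH + POT - my_share)

# Search splits from $0.00 to $10.00 in 10-cent steps.
splits = [i / 100 * POT for i in range(101)]
best = max(splits, key=total_utility)
print(best)  # 5.0 -- the fifty-fifty split maximizes total utility
```

With equal wealth the symmetry makes this obvious; with unequal wealth the same maximization pushes the optimum toward giving the poorer player more, which is the point about playing rich-versus-poor.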

Perhaps even more significant, you can move from a Pareto-inefficient scenario to a Pareto-efficient one and make things worse in terms of utility. The scenario in which the top 1% are as wealthy as they can possibly be and the rest of us live on scraps may in fact be Pareto-efficient; but that doesn’t mean any of us should be interested in moving toward it (though sadly, we kind of are). If you’re only measuring in terms of Pareto-efficiency, your attempts at improvement can actually make things worse. It’s not that the concept is totally wrong; Pareto-efficiency is, other things equal, good; but other things are never equal.

So that’s Pareto-efficiency—and why you really shouldn’t care about it that much.

Schools of Thought

If you’re at all familiar with the schools of thought in economics, you may wonder where I stand. Am I a Keynesian? Or perhaps a post-Keynesian? A New Keynesian? A neo-Keynesian (not to be confused)? A neo-paleo-Keynesian? Or am I a Monetarist? Or a Modern Monetary Theorist? Or perhaps something more heterodox, like an Austrian or a Sraffian or a Marxist?

No, I am none of those things. I guess if you insist on labeling, you could call me a “cognitivist”; and in terms of policy I tend to agree with the Keynesians, but I also like the Modern Monetary Theorists.

But really I think this sort of labeling of ‘schools of thought’ is exactly the problem. There shouldn’t be schools of thought; the universe only works one way. When you don’t know the answer, you should have the courage to admit you don’t know. And once we actually have enough evidence to know something, people need to stop disagreeing about it. If you continue to disagree with what the evidence has shown, you’re not a ‘school of thought’; you’re just wrong.

The whole notion of ‘schools of thought’ smacks of cultural relativism; asking what the ‘Keynesian’ answer to a question is (and if you take enough economics classes I guarantee you will be asked exactly that) is rather like asking what religious beliefs prevail in a particular part of the world. It might be worth asking for some historical reason, but it’s not a question about economics; it’s a question about economic beliefs. This is the difference between asking how people believe the universe was created, and actually being a cosmologist. True, schools of thought aren’t as geographically localized as religions; but economists say the words ‘saltwater’ and ‘freshwater’ for a reason (the former cluster at coastal universities, the latter near the Great Lakes). I’m not all that interested in the Shinto myths versus the Hindu myths; I want to be a cosmologist.

At best, schools of thought are a sign of a field that hasn’t fully matured. Perhaps there were Newtonians and Einsteinians in 1910; but by 1930 there were just Einsteinians and bad physicists. Are there ‘schools of thought’ in physics today? Well, there are string theorists. But string theory hasn’t been a glorious advance for physics; on the contrary, it’s been a dead end from which the field has somehow failed to extricate itself for almost 50 years.

So where does that put us in economics? Well, some of the schools of thought are clearly dead ends, every bit as unfounded as string theory but far worse because they have direct influences on policy. String theory hasn’t ever killed anyone; bad economics definitely has. (How, you ask? Exposure to hazardous chemicals that were deregulated; poverty and starvation due to cuts to social welfare programs; and of course the Second Depression. I could go on.)

The worst offender is surely Austrian economics and its crazy cousin Randian libertarianism. Ayn Rand literally ruled a cult; Friedrich Hayek never took it quite that far, but there is certainly something cultish about Austrian economists. They insist that economics must be derived a priori, without recourse to empirical evidence (or at least that’s what they say when you point out that all the empirical evidence is against them). They are fond of ridiculous hyperbole about an inevitable slippery slope from raising taxes on capital gains to Stalin’s Soviet Union, as well as rhetorical questions I find myself answering opposite to how they want (like “For are taxes not simply another form of robbery?” and “Once we allow the government to regulate what man can do, will they not continue until they control all aspects of our lives?”). They even co-opt and distort cognitivist concepts like herd instinct and asymmetric information; somehow Austrians think that asymmetric information is an argument for why markets are more efficient than government, even though Akerlof’s point was that asymmetric information is why we need regulations.

Marxists are on the opposite end of the political spectrum, but their ideas are equally nonsensical. (Marx himself was a bit more reasonable, but even he recognized that his followers were going too far: “All I know is that I am not a Marxist.”) They have this whole “labor theory of value” thing where the value of something is the amount of work you have to put into it. This would mean that labor-saving innovations are pointless, because they devalue everything; it would also mean that putting an awful lot of work into something useless would nevertheless somehow make it enormously valuable. Really, it would never be worth doing much of anything, because the value you get out of something is exactly equal to the work you put in. Marxists also tend to think that what the world needs is a violent revolution to overthrow the bondage of capitalism; this is an absolutely terrible idea. During the transition it would be one of the bloodiest conflicts in history; afterward you’d probably get something like the Soviet Union or modern-day Venezuela. Even if you did somehow establish your glorious Communist utopia, you’d have destroyed so much productive capacity in the process that you’d make everyone poor. Socialist reforms make sense—and have worked well in Europe, particularly Scandinavia. But socialist revolution is a good way to get millions of innocent people killed.

Sraffians are also quite silly; they have this bizarre notion that capital must be valued as “dated labor”, basically a formalized Marxism. I’ll admit, it’s weird how neoclassicists try to value labor as “human capital”; frankly it’s a bit disturbing how it echoes slavery. (And if you think slavery is dead, think again; it’s dead in the First World, but very much alive elsewhere.) But the solution to that problem is not to pretend that capital is a form of labor; it’s to recognize that capital and labor are different. Capital can be owned, sold, and redistributed; labor cannot. Labor is done by human beings, who have intrinsic value and rights; capital is made of inanimate matter, which does not. (This is what makes Citizens United so outrageous; “corporations are people” and “money is speech” are such fundamental distortions of democratic principles that they are literally Orwellian. We’re not that far from “freedom is slavery” and “war is peace”.)

Neoclassical economists do better, at least. They do respond to empirical data, albeit slowly. Their models are mathematically consistent. They rarely take account of human irrationality or asymmetric information, but when they do they rightly recognize them as obstacles to efficient markets. But they still model people as infinite identical psychopaths, and they still divide themselves into schools of thought. Keynesians and Monetarists are particularly prominent, and Modern Monetary Theorists seem to be the next rising star. Each of these schools gets some things right and other things wrong, and that’s exactly why we shouldn’t make ourselves beholden to a particular tribe.

Monetarists follow Friedman, who said, “Inflation is always and everywhere a monetary phenomenon.” This is wrong. You can definitely cause inflation without expanding your money supply; just ramp up government spending as in World War 2, or suffer a supply shock like we did when OPEC cut the oil supply. (In the first case the US money supply was still tied to gold; in the second, the embargo, not the printing press, was what drove prices up.) But they are right about one thing: To really have hyperinflation à la Weimar or Zimbabwe, you probably have to be printing money. If that were all there were to Monetarism, I could invert another Friedmanism: We’re all Monetarists now.

Keynesians are basically right about most things; in particular, they are the only branch of neoclassicists who understand recessions and know how to deal with them. The world’s most famous Keynesian is probably Krugman, who has the best track record of economic predictions in the popular media today. Keynesians also have a much better appreciation of the fact that humans are irrational; in fact, cognitivism can be partly traced to Keynes, who spoke often of the “animal spirits” that drive human behavior (Akerlof’s most recent book, written with Shiller, is titled Animal Spirits). But even Keynesians have their sacred cows, like the Phillips Curve, the alleged inverse correlation between inflation and unemployment. This is fairly empirically accurate if you look just at First World economies after World War 2 and exclude major recessions. But Keynes himself said, “Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.” The Phillips Curve “shifts” sometimes, and it’s not always clear why—and empirically it’s not easy to tell the difference between a curve that shifts a lot and a relationship that just isn’t there. There is very little evidence for a “natural rate of unemployment”. Worst of all, it’s pretty clear that the original policy implications of the Phillips Curve are all wrong; you can’t get rid of unemployment just by ramping up inflation, and that way really does lie Zimbabwe.

Finally, Modern Monetary Theorists understand money better than everyone else. They recognize that a sovereign government doesn’t have to get its money “from somewhere”; it can create however much money it needs. The whole narrative that the US is “out of money” isn’t just wrong, it’s incoherent; if there is one entity in the world that can never be out of money, it’s the US government, which prints the world’s reserve currency.

The panicked fears of quantitative easing causing hyperinflation aren’t quite as crazy; if the economy were at full capacity, printing $4 trillion over 5 years (yes, we did that) would absolutely cause some inflation. Since that’s only about 6% of US GDP per year, we might be back to 8% or even 10% inflation like the 1970s, but we certainly would not be in Zimbabwe. Moreover, we aren’t at full capacity; we needed to expand the money supply that much just to maintain prices where they are. The Second Depression is the Red Queen: It took all the running we could do to stay in one place.

Modern Monetary Theorists also have some very good ideas about taxation; they point out that since the government only takes out the same thing it puts in—its own currency—it doesn’t make sense to say it is “taking” something (let alone “confiscating” it, as Austrians would have you believe). Instead, it’s more like the government is pumping money, taking it in and forcing it back out continuously. And just as pumping doesn’t take away water but rather makes it flow, taxation and spending don’t remove money from the economy but rather maintain its circulation.

Now that I’ve said what they get right, what do they get wrong? Basically they focus too much on money, ignoring the real economy. They like to use double-entry accounting models, which are perfectly sensible for money but absolutely nonsensical for real value. The whole point of an economy is that you can get more value out than you put in. From the Homo erectus who pulls apples from the trees to the software developer who buys a mansion, the reason they do it is that the value they get out (the gatherer gets to eat, the programmer gets to live in a mansion) is higher than the value they put in (the effort to climb the tree, the skill to write the code). If, as Modern Monetary Theorists are wont to do, you calculated a value for the human capital of the gatherer and the programmer equal to the value of the goods they purchase, you’d be missing the entire point.