In honor of Pi Day, I for one welcome our new robot overlords

JDN 2457096 EDT 16:08

Despite my preference for the Julian Day Number system, it has not escaped my attention that this weekend was the Pi Day of the Century: 3/14/15. Yesterday morning we had the Moment of Pi: 3/14/15 9:26:53.58979… We arguably got an encore that evening, if we allow 9:00 PM instead of 21:00.

Though it is perhaps a stereotype and a cheesy segue, pi and other mathematical concepts are closely associated with computers and robots. Robots are an increasing part of our lives, from the industrial robots that manufacture our cars to the precision-timed satellites that provide our GPS navigation. When you want to know how to get somewhere, you pull out your pocket thinking machine and ask it to commune with the space robots who will guide you to your destination.

There are obvious upsides to these robots—they are enormously productive, and allow us to produce great quantities of useful goods at astonishingly low prices, including computers themselves, creating a positive feedback loop that has literally lowered the price of a given amount of computing power by a factor of one trillion in the latter half of the 20th century. We now very much live in the early parts of a cyberpunk future, and it is due almost entirely to the power of computer automation.

But if you know your SF you may also remember another major part of cyberpunk futures aside from their amazing technology; they also tend to be dystopias, largely because of their enormous inequality. In the cyberpunk future corporations own everything, governments are virtually irrelevant, and most individuals can barely scrape by—and that sounds all too familiar, doesn’t it? This isn’t just something SF authors made up; there really are a number of ways that computer technology can exacerbate inequality and give more power to corporations.

Why? The reason that seems to get the most attention among economists is skill-biased technological change; that’s weird because it’s almost certainly the least important. The idea is that computers can automate many routine tasks (no one disputes that part) and that routine tasks tend to be the sort of thing that uneducated workers generally do more often than educated ones (already this is looking fishy; think about accountants versus artists). But educated workers are better at using computers and the computers need people to operate them (clearly true). Hence while uneducated workers are substitutes for computers—you can use the computers instead—educated workers are complements for computers—you need programmers and engineers to make the computers work. As computers get cheaper, their substitutes also get cheaper—and thus wages for uneducated workers go down. But their complements get more valuable—and so wages for educated workers go up. Thus, we get more inequality, as high wages get higher and low wages get lower.
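To see the mechanism in symbols, here is a minimal sketch, assuming (purely for illustration) a CES production function in which capital is a perfect substitute for unskilled labor and a complement to skilled labor:

```latex
Y = \left[ \alpha (K + L_u)^{\rho} + (1 - \alpha) L_s^{\rho} \right]^{1/\rho}, \quad \rho < 1
% Competitive wages equal marginal products:
w_u = \frac{\partial Y}{\partial L_u} = \alpha \left( \frac{Y}{K + L_u} \right)^{1-\rho}, \qquad
w_s = \frac{\partial Y}{\partial L_s} = (1 - \alpha) \left( \frac{Y}{L_s} \right)^{1-\rho}
```

When cheaper computers raise $K$, the average product $Y/(K+L_u)$ falls while $Y/L_s$ rises, so $w_u$ falls and $w_s$ rises: exactly the substitute-and-complement pattern just described. (This is a toy version of the capital–skill complementarity models in the literature, not a claim about the true production function.)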

Or, to put it more succinctly, robots are taking our jobs. Not all our jobs—actually they’re creating jobs at the top for software programmers and electrical engineers—but a lot of our jobs, like welders and metallurgists and even nurses. As the technology improves more and more jobs will be replaced by automation.

The theory seems plausible enough—and in some form is almost certainly true—but as David Card has pointed out, it fails to explain most of the actual variation in inequality in the US and other countries. Card is one of my favorite economists; he is also famous for completely revolutionizing the economics of the minimum wage, showing that the prevailing theory that minimum wages must hurt employment simply doesn’t match the empirical data.

If it were just that college education is getting more valuable, we’d see a rise in income for roughly the top 40%, since over 40% of American adults have at least an associate’s degree. But we don’t actually see that; in fact, contrary to popular belief, we don’t even really see it in the top 1%. The really huge increases in income over the last 40 years have been at the top 0.01%—the top 1% of the top 1%.

Many of the jobs that are now automated also haven’t seen a fall in income; despite the fact that high-frequency trading algorithms do what stockbrokers do a thousand times better (“better” at making markets more unstable and siphoning wealth from the rest of the economy, that is), stockbrokers have seen no such loss in income. Indeed, they simply appropriate the additional income from those computer algorithms—which raises the question of why welders couldn’t do the same thing. And as I’ll explain in a moment, that is exactly what we must do: the robot revolution must also come with a revolution in property rights and income distribution.

No, the real reasons why technology exacerbates inequality are twofold: Patent rents and the winner-takes-all effect.

In an earlier post I already talked about the winner-takes-all effect, so I’ll just briefly summarize it this time around. Under certain competitive conditions, a small fraction of individuals can reap a disproportionate share of the rewards despite being only slightly more productive than those beneath them. This often happens when we have network externalities, in which a product becomes more valuable when more people use it, creating a positive feedback loop that makes already-successful products wildly so and consigns the unsuccessful ones to obscurity.
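To make that feedback loop concrete, here is a minimal simulation sketch, assuming a simple Pólya-urn-style rule I chose purely for illustration (the platform counts and parameters are invented, not data): each new user joins a platform with probability proportional to its current user count, with a small chance of trying one at random.

```python
import random

def simulate_network_effects(n_platforms=10, n_users=100_000, explore=0.01):
    """Each new user joins a platform with probability proportional to its
    current user count (the network externality), with a small chance of
    picking one uniformly at random instead (exploration)."""
    users = [1] * n_platforms  # every platform starts with a single user
    for _ in range(n_users):
        if random.random() < explore:
            choice = random.randrange(n_platforms)  # try an unknown platform
        else:
            choice = random.choices(range(n_platforms), weights=users)[0]
        users[choice] += 1
    return sorted(users, reverse=True)

if __name__ == "__main__":
    counts = simulate_network_effects()
    total = sum(counts)
    print([round(u / total, 3) for u in counts])
    # A typical run ends with one platform holding most of the users,
    # even though all ten were identical at the start.
```

Run it a few times and a different platform wins each time; the lock-in is driven by early luck rather than quality, which is the instability described below.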

Computer technology—more specifically, the Internet—is particularly good at creating such situations. Facebook, Google, and Amazon are all examples of companies that (1) could not exist without Internet technology and (2) depend almost entirely upon network externalities for their business model. They are the winners who take all; thousands of other software companies that were just as good or nearly so are now long forgotten. The winners are not always the same, because the system is unstable; for instance MySpace used to be much more important—and much more profitable—until Facebook came along.

But the fact that a different handful of upper-middle-class individuals can find themselves suddenly and inexplicably thrust into fame and fortune while the rest of us toil in obscurity really isn’t much comfort, now is it? While technically the rise and fall of MySpace can be called “income mobility”, it’s clearly not what we actually mean when we say we want a society with a high level of income mobility. We don’t want a society where the top 10% can, by little more than chance, find themselves becoming the top 0.01%; we want a society where you don’t have to be in the top 10% to live well in the first place.

Even without network externalities the Internet still nurtures winner-takes-all markets, because digital information can be copied infinitely. When it comes to sandwiches or even cars, each new one is costly to make and costly to transport; it can be more cost-effective to choose the ones that are made near you even if they are of slightly lower quality. But with books (especially e-books), video games, songs, or movies, each individual copy costs nothing to create, so why would you settle for anything but the best? This may well increase the overall quality of the content consumers get—but it also ensures that the creators of that content are in fierce winner-takes-all competition. Hence J.K. Rowling and James Cameron on the one hand, and millions of authors and independent filmmakers barely scraping by on the other. Compare a field like engineering; you probably don’t know a lot of rich and famous engineers (unless you count engineers who became CEOs like Bill Gates and Thomas Edison), but nor is there a large segment of “starving engineers” barely getting by. Though the richest engineers (CEOs excepted) are not nearly as rich as the richest authors, the typical engineer is much better off than the typical author, because engineering is not nearly as winner-takes-all.

But the main topic for today is actually patent rents. These are a greatly underappreciated segment of our economy, and they grow more important all the time. A patent rent is more or less what it sounds like; it’s the extra money you get from owning a patent on something. You can get that money either by literally renting it—charging license fees for other companies to use it—or simply by being the only company who is allowed to manufacture something, letting you sell it at monopoly prices. It’s surprisingly difficult to assess the real value of patent rents—there’s a whole literature on different econometric methods of trying to tackle this—but one thing is clear: Some of the largest, wealthiest corporations in the world are built almost entirely upon patent rents. Drug companies, R&D companies, software companies—even many manufacturing companies like Boeing and GM obtain a substantial portion of their income from patents.

What is a patent? It’s a rule that says you “own” an idea, and anyone else who wants to use it has to pay you for the privilege. The very concept of owning an idea should trouble you—ideas aren’t limited in number, you can easily share them with others. But now think about the fact that most of these patents are owned by corporations—not by inventors themselves—and you’ll realize that our system of property rights is built around the notion that an abstract entity can own an idea—that one idea can own another.

The rationale behind patents is that they are supposed to provide incentives for innovation—in exchange for investing the time and effort to invent something, you receive a certain amount of time where you get to monopolize that product so you can profit from it. But how long should we give you? And is this really the best way to incentivize innovation?

I contend it is not; when you look at the really important world-changing innovations, very few of them were done for patent rents, and virtually none of them were done by corporations. Jonas Salk was indignant at the suggestion he should patent the polio vaccine; it might have made him a billionaire, but only by letting thousands of children die. (To be fair, here’s a scholar arguing that he probably couldn’t have gotten the patent even if he wanted to—but going on to admit that even then the patent incentive had basically nothing to do with why penicillin and the polio vaccine were invented.)

Who landed on the moon? Hint: It wasn’t Microsoft. Who built the Hubble Space Telescope? Not Sony. The Internet that made Google and Facebook possible was originally invented by DARPA. Even when corporations seem to do useful innovation, it’s usually by profiting from the work of individuals: Edison’s corporation stole most of its good ideas from Nikola Tesla, and by the time the Wright Brothers founded a company their most important work was already done (though at least then you could argue that they did it in order to later become rich, which they ultimately did). Universities and nonprofits brought you the laser, light-emitting diodes, fiber optics, penicillin and the polio vaccine. Governments brought you liquid-fuel rockets, the Internet, GPS, and the microchip. Corporations brought you, uh… Viagra, the Snuggie, and Furbies. Indeed, even Google’s vaunted search algorithms were originally developed with funding from the NSF. I can think of literally zero examples of a world-changing technology that was actually invented by a corporation in order to secure a patent. I’m hesitant to say that none exist, but clearly the vast majority of seminal inventions have been created by governments and universities.

This has always been true throughout history. Rome’s fire departments were notorious for shoddy service—and wholly privately-owned—but their great aqueducts that still stand today were built as government projects. When China invented paper, turned it into money, and defended it with the Great Wall, it was all done on government funding.

The whole idea that patents are necessary for innovation is simply a lie; even the idea that patents lead to more innovation is quite hard to defend. Imagine if, instead of letting Google and Facebook patent their technology, all the money they receive in patent rents were turned into tax-funded research—frankly, is there even any doubt that the results would be better for the future of humanity? Instead of better ad-targeting algorithms we could have had better cancer treatments, or better macroeconomic models, or better spacecraft engines.

When they feel their “intellectual property” (stop and think about that phrase for a while, and it will begin to seem nonsensical) has been violated, corporations become indignant about “free-riding”; but who is really free-riding here? The people who copy music albums for free—albums that cost nothing to copy—or the corporations who make hundreds of billions of dollars selling zero-marginal-cost products using government-invented technology over government-funded infrastructure? (Many of these companies also continue to receive tens or hundreds of millions of dollars in subsidies every year.) In the immortal words of Barack Obama, “you didn’t build that!”

Strangely, most economists seem to be supportive of patents, despite the fact that their own neoclassical models point strongly in the opposite direction. There’s no logical connection between the fixed cost of inventing a technology and the monopoly rents that can be extracted from its patent. There is some connection—albeit a very weak one—between the benefits of the technology and its monopoly profits, since people are likely to be willing to pay more for more beneficial products. But most of the really great benefits are either in the form of public goods that are unenforceable even with patents (go ahead, try enforcing a patent on a space telescope against everyone who benefits from its astronomical discoveries!) or else apply to people who are so needy they can’t possibly pay you (like anti-malaria drugs in Africa), so that willingness-to-pay link really doesn’t get you very far.
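That missing connection can be stated precisely. A profit-maximizing monopolist prices according to the standard Lerner condition:

```latex
\frac{p - c}{p} = \frac{1}{|\epsilon|}
```

where $p$ is the price, $c$ the marginal cost, and $\epsilon$ the price elasticity of demand. The fixed cost of invention appears nowhere in the formula; the size of the rent depends entirely on how badly buyers need the product, not on what it cost to create.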

I guess a lot of neoclassical economists still seem to believe that willingness-to-pay is actually a good measure of utility, so maybe that’s what’s going on here; if it were, we could at least say that patents are a second-best solution to incentivizing the most important research.

But even then, why use second-best when you have best? Why not devote more of our society’s resources to governments and universities that have centuries of superior track record in innovation? When this is proposed the deadweight loss of taxation is always brought up, but somehow the deadweight loss of monopoly rents never seems to bother anyone. At least taxes can be designed to minimize deadweight loss—and democratic governments actually have incentives to do that; corporations have no interest whatsoever in minimizing the deadweight loss they create so long as their profit is maximized.

I’m not saying we shouldn’t have corporations at all—they are very good at one thing and one thing only, and that is manufacturing physical goods. Cars and computers should continue to be made by corporations—but their technologies are best invented by government. Will this dramatically reduce the profits of corporations? Of course—but I have difficulty seeing that as anything but a good thing.

Why am I talking so much about patents, when I said the topic was robots? Well, it’s typically because of the way these patents are assigned that robots taking people’s jobs becomes a bad thing. The patent is owned by the company, which is owned by the shareholders; so when the company makes more money by using robots instead of workers, the workers lose.

If, when a robot took your job, you simply received the income produced by the robot as capital income, you’d probably be better off—you get paid more and you also don’t have to work. (Of course, if you define yourself by your career or can’t stand the idea of getting “handouts”, you might still be unhappy losing your job even though you still get paid for it.)

There’s a subtler problem here though; robots could have a comparative advantage without having an absolute advantage—that is, they could produce less than the workers did before, but at a much lower cost. Where it cost $5 million in wages to produce $10 million in products, it might cost only $3 million in robot maintenance to produce $9 million in products. Hence you can’t just say that we should give the extra profits to the workers; in some cases those extra profits only exist because we are no longer paying the workers.
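Running the numbers from that example makes the point stark:

```latex
\text{before: } \pi = \$10\text{M} - \$5\text{M} = \$5\text{M}, \qquad
\text{after: } \pi = \$9\text{M} - \$3\text{M} = \$6\text{M}
```

Profit rises by $1 million even though output falls by $1 million, and that extra $1 million of profit is far smaller than the $5 million in wages the workers lost; there is no way to pay the workers their old wages out of the gain alone.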

As a society, we still want those transactions to happen, because producing less at lower cost can still make our economy more efficient and more productive than it was before. Those displaced workers can—in theory at least—go on to other jobs where they are needed more.

The problem is that this often doesn’t happen, or it takes such a long time that workers suffer in the meantime. Hence the Luddites; they don’t want to be made obsolete even if it does ultimately make the economy more productive.

But this is where patents become important. The robots were probably invented at a university, but then a corporation took them and patented them, and is now selling them to other corporations at a monopoly price. The manufacturing company that buys the robots now has to spend more in order to use the robots, which drives their profits down unless they stop paying their workers.

If instead those robots were cheap because there were no patents and we were only paying for the manufacturing costs, the workers could be shareholders in the company and the increased efficiency would allow both the employers and the workers to make more money than before.

What if we don’t want to make the workers into shareholders who can keep their shares after they leave the company? There is a real downside here: once you get your shares, why stay at the company? But we call that a “golden parachute” when CEOs do it, which they do all the time; and most economists are in favor of stock-based compensation for CEOs, so once again I’m having trouble seeing why it’s okay when rich people do it but not when middle-class people do.

Another alternative would be my favorite policy, the basic income: If everyone knows they can depend on a basic income, losing your job to a robot isn’t such a terrible outcome. If the basic income is designed to grow with the economy, then the increased efficiency also raises everyone’s standard of living, as economic growth is supposed to do—instead of simply increasing the income of the top 0.01% and leaving everyone else where they were. (There is a good reason not to make the basic income track economic growth too closely, namely the business cycle; you don’t want the basic income payments to fall in a recession, because that would make the recession worse. Instead they should be smoothed out over multiple years or designed to follow a nominal GDP target, so that they continue to rise even in a recession.)
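Here is a minimal sketch of such a smoothing rule (the payment level, window, and NGDP figures are all invented for illustration; this is not a worked-out policy proposal):

```python
def basic_income_payment(ngdp_index, base_payment=12_000, window=5):
    """Index the basic income to a multi-year moving average of nominal
    GDP rather than to last year's value, so that payments keep rising
    through a short recession instead of falling with it."""
    recent = ngdp_index[-window:]
    smoothed = sum(recent) / len(recent)
    # Scale the payment by smoothed NGDP relative to the earliest year,
    # with a floor so it never drops below the base amount.
    growth_factor = smoothed / ngdp_index[0]
    return max(base_payment, base_payment * growth_factor)

# Hypothetical NGDP index; the final year is a recession, but the
# five-year average is still above its starting level.
ngdp = [100, 105, 110, 116, 113]
print(round(basic_income_payment(ngdp)))  # 13056: the payment still rises
```

The same idea works with an explicit nominal GDP target path in place of the moving average; the essential feature is just that the payment tracks the trend rather than the cycle.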

We could also combine this with expanded unemployment insurance (explain to me again why you can’t collect unemployment if you weren’t working full-time before being laid off, even if you wanted to be or you’re a full-time student?) and active labor market policies that help people re-train and find new and better jobs. These policies also help people who are displaced for reasons other than robots making their jobs obsolete—obviously there are all sorts of market conditions that can lead to people losing their jobs, and many of these we actually want to happen, because they involve reallocating the resources of our society to more efficient ends.

Why aren’t these sorts of policies on the table? I think it’s largely because we don’t think of it in terms of distributing goods—we think of it in terms of paying for labor. Since the worker is no longer laboring, why pay them?

This sounds reasonable at first, but consider this: Why give that money to the shareholder? What did they do to earn it? All they do is own a piece of the company. They may not have contributed to the goods at all. Honestly, on a pay-for-work basis, we should be paying the robot!

If it bothers you that the worker collects dividends even when he’s not working—why doesn’t it bother you that shareholders do exactly the same thing? By definition, a shareholder is paid according to what they own, not what they do. All this reform would do is make workers into owners.

If you justify the shareholder’s wealth by his past labor, again you can do exactly the same to justify worker shares. (And as I said above, if you’re worried about the moral hazard of workers collecting shares and leaving, you should worry just as much about golden parachutes.)

You can even justify a basic income this way: You paid taxes so that you could live in a society that would protect you from losing your livelihood—and if you’re just starting out, your parents paid those taxes, and you will soon enough. Theoretically there could be “welfare queens” who live their whole lives on the basic income, but empirical data shows that very few people actually want to do this, and when given opportunities most people try to find work. Indeed, even the people who don’t seek work rarely seem to be motivated by greed (even though, capitalists tell us, “greed is good”); instead they seem to be de-motivated by learned helplessness after trying and failing for so long. They don’t actually want to sit on the couch all day and collect welfare payments; they simply don’t see how they can compete in the modern economy well enough to actually make a living from work.

One thing is certain: We need to detach income from labor. As a society we need to get over the idea that a human being’s worth is decided by the amount of work they do for corporations. We need to get over the idea that our purpose in life is a job, a career, in which our lives are defined by the work we do that can be neatly monetized. (I admit, I suffer from the same cultural blindness at times, feeling like a failure because I can’t secure the high-paying and prestigious employment I want. I feel this clear sense that my society does not value me because I am not making money, and it damages my ability to value myself.)

As robots do more and more of our work, we will need to redefine the way we live by something else, like play, or creativity, or love, or compassion. We will need to learn to see ourselves as valuable even if nothing we do ever sells for a penny to anyone else.

A basic income can help us do that; it can redefine our sense of what it means to earn money. Instead of the default being that you receive nothing because you are worthless unless you work, the default is that you receive enough to live on because you are a human being of dignity and a citizen. This is already the experience of people who have substantial amounts of capital income; they can fall back on their dividends if they ever can’t or don’t want to find employment. A basic income would turn us all into capital owners, shareholders in the centuries of established capital that has been built by our forebears in the form of roads, schools, factories, research labs, cars, airplanes, satellites, and yes—robots.

Oppression is quantitative.

JDN 2457082 EDT 11:15.

Economists are often accused of assigning dollar values to everything, of being Oscar Wilde’s definition of a cynic, someone who knows the price of everything and the value of nothing. And there is more than a little truth to this, particularly among neoclassical economists; I was alarmed a few days ago to receive an email response from an economist that included the word ‘altruism’ in scare quotes as though this were somehow a problematic or unrealistic concept. (Actually, altruism is already formally modeled by biologists, and my claim that human beings are altruistic would be so uncontroversial among evolutionary biologists as to be considered trivial.)

But sometimes this accusation is based upon something economists do that is actually tremendously useful, even necessary to good policymaking: We make everything quantitative. Nothing is ever “yes” or “no” to an economist (sometimes even when it probably should be; the debate among economists in the 1960s over whether slavery was economically efficient does seem rather beside the point), but always more or less; never good or bad but always better or worse. For example, as I discussed in my post on minimum wage, the mainstream position among economists is not that minimum wage is always harmful nor that minimum wage is always beneficial, but that minimum wage is a policy with costs and benefits that on average neither increases nor decreases unemployment. The mainstream position among economists about climate policy is that we should institute either a high carbon tax or a system of cap-and-trade permits; no economist I know wants us to either do nothing and let the market decide (a position most Republicans currently seem to take) or suddenly ban coal and oil (the latter is a strawman position I’ve heard environmentalists accused of, but I’ve never actually heard advocated; even Greenpeace wants to ban offshore drilling, not oil in general).

This makes people uncomfortable, I think, because they want moral issues to be simple. They want “good guys” who are always right and “bad guys” who are always wrong. (Speaking of strawman environmentalism, a good example of this is Captain Planet, in which no one ever seems to pollute the environment in order to help people or even to make money; no, they simply do it because they hate clean water and baby animals.) They don’t want to talk about options that are more good or less bad; they want one option that is good and all other options that are bad.

This attitude tends to become infused with righteousness, such that anyone who disagrees is an agent of the enemy. Politics is the mind-killer, after all. If you acknowledge that there might be some downside to a policy you agree with, that’s like betraying your team.

But in reality, the failure to acknowledge downsides can lead to disaster. Problems that could have been prevented are instead ignored and denied. Getting the other side to recognize the downsides of their own policies might actually help you persuade them to your way of thinking. And appreciating that there is a continuum of possibilities that are better and worse in various ways to various degrees is what allows us to make the world a better place even as we know that it will never be perfect.

There is a common refrain you’ll hear from a lot of social justice activists which sounds really nice and egalitarian, but actually has the potential to completely undermine the entire project of social justice.

This is the idea that oppression can’t be measured quantitatively, and we shouldn’t try to compare different levels of oppression. The notion that some people are more oppressed than others is often derided as the Oppression Olympics. (Some use this term more narrowly to mean when a discussion is derailed by debate over who has it worse—but then the problem is really discussions being derailed, isn’t it?)

This sounds nice, because it means we don’t have to ask hard questions like, “Which is worse, sexism or racism?” or “Who is worse off, people with cancer or people with diabetes?” These are very difficult questions, and maybe they aren’t the right ones to ask—after all, there’s no reason to think that fighting racism and fighting sexism are mutually exclusive; they can in fact be complementary. Research into cancer only prevents us from doing research into diabetes if our total research budget is fixed—this is more than anything else an argument for increasing research budgets.

But we must not throw out the baby with the bathwater. Oppression is quantitative. Some kinds of oppression are clearly worse than others.

Why is this important? Because otherwise you can’t measure progress. If you have a strictly qualitative notion of oppression where it’s black-and-white, on-or-off, oppressed-or-not, then we haven’t made any progress on just about any kind of oppression. There is still racism, there is still sexism, there is still homophobia, there is still religious discrimination. Maybe these things will always exist to some extent. This makes the fight for social justice a hopeless Sisyphean task.

But in fact, that’s not true at all. We’ve made enormous progress. Unbelievably fast progress. Mind-boggling progress. For hundreds of millennia humanity made almost no progress at all, and then in the last few centuries we have suddenly leapt toward justice.

Sexism used to mean that women couldn’t own property, they couldn’t vote, they could be abused and raped with impunity—or even beaten or killed for being raped (which Saudi Arabia still does by the way). Now sexism just means that women aren’t paid as well, are underrepresented in positions of power like Congress and Fortune 500 CEOs, and they are still sometimes sexually harassed or raped—but when men are caught doing this they go to prison for years. This change happened in only about 100 years. That’s fantastic.

Racism used to mean that Black people were literally property to be bought and sold. They were slaves. They had no rights at all, they were treated like animals. They were frequently beaten to death. Now they can vote, hold office—one is President!—and racism means that our culture systematically discriminates against them, particularly in the legal system. Racism used to mean you could be lynched; now it just means that it’s a bit harder to get a job and the cops will sometimes harass you. This took only about 200 years. That’s amazing.

Homophobia used to mean that gay people were criminals. We could be sent to prison or even executed for the crime of making love in the wrong way. If we were beaten or murdered, it was our fault for being faggots. Now, homophobia means that we can’t get married in some states (and fewer all the time!), we’re depicted on TV in embarrassing stereotypes, and a lot of people say bigoted things about us. This has only taken about 50 years! That’s astonishing.

And above all, the most extreme example: Religious discrimination used to mean you could be burned at the stake for not being Catholic. It used to mean—and in some countries still does mean—that it’s illegal to believe in certain religions. Now, it means that Muslims are stereotyped because, well, to be frank, there are some really scary things about Muslim culture and some really scary people who are Muslim leaders. (Personally, I think Muslims should be more upset about Ahmadinejad and Al Qaeda than they are about being profiled in airports.) It means that we atheists are annoyed by “In God We Trust”, but we’re no longer burned at the stake. This has taken longer, more like 500 years. But even though it took a long time, I’m going to go out on a limb and say that this progress is wonderful.

Obviously, there’s a lot more progress remaining to be made on all these issues, and others—like economic inequality, ableism, nationalism, and animal rights—but the point is that we have made a lot of progress already. Things are better than they used to be—a lot better—and keeping this in mind will help us preserve the hope and dedication necessary to make things even better still.

If you think that oppression is either-or, on-or-off, you can’t celebrate this progress, and as a result the whole fight seems hopeless. Why bother, when it’s always been on, and will probably never be off? But we started with oppression that was absolutely horrific, and now it’s considerably milder. That’s real progress. At least within the First World we have gone from 90% oppressed to 25% oppressed, and we can bring it down to 10% or 1% or 0.1% or even 0.01%. Those aren’t just numbers, those are the lives of millions of people. As democracy spreads worldwide and poverty is eradicated, oppression declines. Step by step, social changes are made, whether by protest marches or forward-thinking politicians or even by lawyers and lobbyists (they aren’t all corrupt).

And indeed, a four-year-old Black girl with a mental disability living in Ghana whose entire family’s income is $3 a day is more oppressed than I am, and not only do I have no qualms about saying that, it would feel deeply unseemly to deny it. I am not totally unoppressed—I am a bisexual atheist with chronic migraines and depression in a country that is suspicious of atheists, systematically discriminates against LGBT people, and does not make proper accommodations for chronic disorders, particularly mental ones. But I am far less oppressed, and that little girl (she does exist, though I know not her name) could be made much less oppressed than she is even by relatively simple interventions (like a basic income). In order to make her fully and totally unoppressed, we would need such a radical restructuring of human society that I honestly can’t really imagine what it would look like. Maybe something like The Culture? Even then, as Iain Banks imagines it, there is inequality between those within The Culture and those outside it, and there have been wars like the Idiran-Culture War which killed billions, and among those trillions of people on thousands of vast orbital habitats someone, somewhere is probably making a speciesist remark. Yet I can state unequivocally that life in The Culture would be better than my life here now, which is better than the life of that poor disabled girl in Ghana.

To be fair, we can’t actually put a precise number on it—though many economists try, and one of my goals is to convince them to improve their methods so that they stop using willingness-to-pay and instead try to actually measure utility by something like QALY. A precise number would help, actually—it would allow us to do cost-benefit analyses to decide where to focus our efforts. But while we don’t need a precise number to tell when we are making progress, we do need to acknowledge that there are degrees of oppression, some worse than others.

Oppression is quantitative. And our goal should be minimizing that quantity.

The irrationality of racism

JDN 2457039 EST 12:07.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday; thus completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Biologically, what exactly does that mean? Next to nothing.

Human genetic diversity is a real thing, and there are genetic links to not only dozens of genetic diseases and propensity toward certain types of cancer, but also personality and intelligence. There are also of course genes for skin pigmentation.

The human population does exhibit some genetic clustering, but the categories are not what you’re probably used to: Good examples of relatively well-defined genetic clusters include Ashkenazi, Papuan, and Mbuti. There are also many different haplogroups, such as mitochondrial haplogroups L3 and CZ.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

Let’s start with “Black”; defining a “Black” race is like defining a category of animals by the fact that they are all tinted red—foxes yes, dogs no; robins yes, swallows no; ladybirds yes, cockroaches no. There is more genetic diversity within Africa than there is outside of it. There are African populations that are more closely related to European populations than they are to other African populations. The only thing “Black” people have in common is that their skin is dark, which is due to convergent evolution: It’s not due to common ancestry, but a common environment. Dark skin has a direct survival benefit in climates with intense sunlight.  The similarity is literally skin deep.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.
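The proof is one line of algebra: in a zero-sum (or constant-sum) game the payoffs always add up to the same constant,

```latex
\sum_i u_i = C \quad\Longrightarrow\quad \Delta u_j > 0 \text{ for some } j \text{ forces } \Delta u_k < 0 \text{ for some } k
```

so no one can be made better off without making someone else worse off, which is precisely the definition of Pareto-efficiency holding at every outcome.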

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are any, it would mean that technically racism is not Pareto-inefficient—but it is definitely very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits incurred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand the most to gain from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy will be thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—came precisely when those countries suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early, while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators, it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that led them to inherit less generations later; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

I imagine most of my readers are more sophisticated than this, but just in case you’re wondering about the difference in IQ scores between Whites and Blacks, that is indeed a real observation, but IQ isn’t entirely genetic. IQ scores are rising worldwide (the Flynn Effect) due to improvements in environmental conditions: Fewer environmental pollutants—particularly lead and mercury, the removal of which is responsible for most of the reduction in crime in America over the last 20 years—better nutrition, better education, less stress. Being stupid does not make you poor (or how would we explain Donald Trump?), but being poor absolutely does make you stupid. Combine that with the challenges and inconsistencies in cross-national IQ comparisons, and it’s pretty clear that the higher IQ scores in rich nations are an effect, not a cause, of their affluence. Likewise, the lower IQ scores of Black people in the US are entirely explained by their poorer living conditions, with no need for any genetic hypothesis—which would also be very difficult in the first place precisely because “Black” is such a weird genetic category.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

How is the economy doing?

JDN 2457033 EST 12:22.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, and I don’t have very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

Why are our forecasts so bad? Some argue that the task is just inherently too difficult due to the chaotic system involved; but they used to say that about weather forecasts, and yet with satellites and computer models our forecasts are now far more accurate than they were 20 years ago. Others have argued that “politics always dominates over economics”, as though politics were somehow a fundamentally separate thing, forever exogenous, a parameter in our models that cannot be predicted. I have a number of economic aphorisms I’m trying to popularize; the one for this occasion is: “Nothing is exogenous.” (Maybe fundamental constants of physics? But actually many physicists think that those constants can be derived from even more fundamental laws.) My most common is “It’s the externalities, stupid.”; next is “It’s not the incentives, it’s the opportunities.”; and the last is “Human beings are 90% rational. But woe betide that other 10%.” In fact, it’s not quite true that all our macroeconomic forecasters are bad; a few, such as Krugman, are actually quite good. The Klein Award is given each year to the best macroeconomic forecasters, and the same names pop up too often for it to be completely random. (Sadly, one of the most common is Citigroup, meaning that our banksters know perfectly well what they’re doing when they destroy our economy—they just don’t care.) So in fact I think our failures of forecasting are not inevitable or permanent.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I’m not just interested in how human irrationality affects individuals or corporations, I’m also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn’t that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we’re going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality. Frankly, maybe we should just use poverty, since I’d be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This is, of course, yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn’t actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn’t something ridiculous like $100,000 per year.
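As an illustration (the weights here are mine, chosen only to match the “at least 10 times” suggestion), such an index might look like

```latex
M = 10u + \pi + \lambda_1 P + \lambda_2 G
```

where $u$ is the unemployment rate, $\pi$ the inflation rate, $P$ the poverty rate, $G$ a measure of inequality such as the Gini coefficient, and $\lambda_1, \lambda_2$ weights to be argued over. The original misery index is just $M = u + \pi$, weighting unemployment and inflation equally.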

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of the 1% with the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat what I’ve last read from mainstream macroeconomic forecasts, which is usually rather banal—but maybe that’s the idea? Most small talk is pretty banal I suppose (I never was very good at that sort of thing). It sounds a bit like this: No, we’re not on the verge of horrible inflation—actually inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and actually job growth is looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: Much of our reduction in unemployment has come not from more people being employed but instead by more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and is still trending downward. Most of these people aren’t getting jobs; they’re giving up. At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly. And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will do so efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take such requests literally and set out to mathematically prove that if allele frequencies in a population evolve stochastically, then the alleles with the highest expected fitness have, on average, the highest realized fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.
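
For the morbidly curious, the formal statement is easy enough to write down; here is one minimal version, in the standard one-locus haploid selection model (my own formalization of the joke, not a quote from any actual textbook):

```latex
% One-locus haploid selection: allele A has frequency p_t and fitness w_A;
% the alternative allele a has fitness w_a, and mean fitness is
% \bar{w} = p_t w_A + (1 - p_t) w_a. Then, drift notwithstanding,
\mathbb{E}[\,p_{t+1} \mid p_t\,] \;=\; \frac{p_t\, w_A}{\bar{w}}
```

The expected frequency of allele A rises exactly when w_A exceeds the mean fitness, which is true by the very definition of fitness; there is your tautology, formalized and suitably eye-glazing.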

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist after all), they usually just respond with something like “Wow, you must be really smart,” which I suppose is true enough, but it always strikes me as an odd response. I think they just don’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do. Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: That is, most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept; but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion; but the Pope has long since rendered that one inert. No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrödinger’s cat, but are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps they’ll be asked the question “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would be either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “The swirliness added up inside an object is equal to the swirliness added up around the edges.”

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll pursue any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia—the joy of a life well-lived—and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

Technically this is called ordinal utility, as opposed to cardinal utility; but that terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)
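
If you want the measurement-theory point in symbols (my notation, not anyone’s in particular):

```latex
% Interval scale (Fahrenheit vs. Celsius): the zero is arbitrary, so
% the scale is only defined up to a positive affine transformation:
u'(x) = a\,u(x) + b, \qquad a > 0
% Ratio scale (Kelvin): fix the zero at "not existing," and all that
% remains free is the unit of measurement:
u'(x) = a\,u(x), \qquad a > 0
```

That is all the “up to a linear transformation” business amounts to.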

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or—the technique of revealed preference—use the choices they make to infer their preferences, by assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice—choosing the first option that is above a certain threshold—or engage in constrained optimization—choosing whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.
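
Here is a minimal sketch of the difference in code; the scoring function and the threshold value are stand-ins I made up, but the structure is the point: with a cardinal utility you can stop searching at a principled threshold, while with only an ordinal ranking there is nothing for a threshold to refer to.

```python
import random

def utility(option):
    # Stand-in cardinal utility; in reality this is the decision-maker's
    # actual valuation of the option.
    return option["quality"] - option["price"]

def satisfice(options, threshold):
    """Take the first option whose utility clears the threshold."""
    for opt in options:
        if utility(opt) >= threshold:
            return opt
    return None  # nothing cleared the bar: keep searching, or lower the bar

options = ({"quality": random.uniform(0, 10), "price": random.uniform(0, 5)}
           for _ in range(10_000))
choice = satisfice(options, threshold=6.0)

# Under pure preference theory there is no number like 6.0 to compare
# against -- only pairwise rankings -- so a stopping rule must either
# search the whole (possibly enormous) space or cut off arbitrarily.
```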

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
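
The arithmetic is easy to check, and if anything the paragraph above understates it, since the real objects of preference are bundles rather than items:

```python
import math

print(math.factorial(10))              # 3628800: ~3.6 million orderings of 10 items
print(len(str(math.factorial(100))))   # 158 digits, i.e. about 9e157 orderings

# Since combinations matter, the real objects of preference are bundles:
# 10 yes/no items generate 2**10 = 1024 bundles, and a complete ordering
# over bundles is one of (2**10)! possibilities.
print(2 ** 10)                             # 1024 bundles
print(len(str(math.factorial(2 ** 10))))   # 2640 digits in (2**10)!
```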

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
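
To put numbers on that, here is a toy certainty-equivalent calculation; logarithmic utility is just a convenient stand-in for diminishing marginal utility of wealth, and all the wealth figures are invented:

```python
import math

def certainty_equivalent(wealth, prize, p=0.5):
    """The sure gain giving the same expected log-utility as a
    p-chance of winning `prize` on top of current wealth."""
    eu = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return math.exp(eu) - wealth

w = 10_000  # starting wealth
print(certainty_equivalent(w, 100))          # ~49.88: the $100 coin flip is
                                             # worth nearly the full $50
print(certainty_equivalent(w, 10_000_000))   # ~306,000: the $10M coin flip is
                                             # worth far less than a sure $5M

# For someone already holding tens of billions, utility is nearly linear
# over a mere $10M, so the same gamble is worth almost its expected value:
print(certainty_equivalent(50_000_000_000, 10_000_000))  # ~4,999,750
```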

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system: no system in which honestly reporting your true beliefs is always your best strategy. The Myerson–Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
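
Here is what a range-vote tally actually looks like (a toy example of mine; the ballot numbers are invented):

```python
def range_vote(ballots):
    """Each ballot scores every candidate 0-100; highest total wins."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get), totals

ballots = [
    {"Gore": 90, "Nader": 85, "Bush": 20},
    {"Gore": 60, "Nader": 100, "Bush": 0},
    {"Gore": 0, "Nader": 0, "Bush": 100},  # a "bullet vote": favorite 100, rest 0
]
winner, totals = range_vote(ballots)
print(winner, totals)  # Nader {'Gore': 150, 'Nader': 185, 'Bush': 120}

# Note the third ballot: if every voter bullet-votes, a range vote
# degenerates into exactly our current plurality system.
```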

But the fact that utility exists and matters unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we specify whose wealth we are spending.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
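
Spelled out, the arithmetic behind those two numbers is just this (my restatement of the figures above):

```python
# "I'd give up 20 of 80 remaining years to escape daily migraines":
# 80 migraine-years are worth 60 healthy years.
migraine_quality = (80 - 20) / 80
print(migraine_quality)  # 0.75

# "I'd accept a 20% risk of total paralysis to escape waist-down paralysis":
# at indifference, 0.2 * loss(total) = loss(waist-down), so total paralysis
# is 1/0.2 = 5 times as bad.
waist_down_quality = 0.90  # the value assumed in the text
total_quality = 1 - 5 * (1 - waist_down_quality)
print(round(total_quality, 2))  # 0.5
```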

You can probably already see that there are lots of problems: What if people don’t agree? What if due to framing effects the same person gives different answers to slightly different phrasing? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide means some people say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?

Well, actually… it had better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend annually about $20,000—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million a year on consumption, so $1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.

That’s an extremely rough estimate, of course; it assumes you are in perfect health, that all your time is equally valuable, and that all your purchases are optimized so that prices track marginal utility. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.
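
For what it’s worth, here is the whole conversion as code, reproducing the numbers above; don’t take the inputs any more literally than I do:

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8766

def dollars_to_qaly(dollars, annual_consumption):
    """One dollar buys 1/annual_consumption of a quality-adjusted life year."""
    return dollars / annual_consumption

for spending in (20_000, 200, 10_000_000):
    q = dollars_to_qaly(1, spending)
    print(f"${spending:,}/yr: $1 = {q:.0e} QALY = "
          f"{q * HOURS_PER_YEAR * 3600:,.0f} quality-adjusted seconds")
# $20,000/yr:     $1 = 5e-05 QALY = 1,578 seconds  (~26 minutes)
# $200/yr:        $1 = 5e-03 QALY = 157,788 seconds (~1.8 days)
# $10,000,000/yr: $1 = 1e-07 QALY = 3 seconds

# The hourly sanity check from the paragraph above:
hourly = 20_000 / HOURS_PER_YEAR
print(round(hourly, 2), round(3 * hourly, 2))  # 2.28 and 6.84: close to the
                                               # $2.30 and $6.90 in the text
```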

So let’s hear it, readers: How do you think we can best measure happiness?

Why we give gifts

JDN 2457020 EST 18:28.

You’ll notice it’s Sunday, not Saturday; I apologize for not actually posting on time this week. Due to the holiday season I was whisked away to family activities in Cleveland, and could not find wifi that was both free and reliable.

But since it is the Christmas season—Christmas Day was last Thursday—the time during which most Americans spend more than we can probably afford buying gifts (the highest rate of consumer spending all year long, much of it on credit cards, a significant boost for the economy in these times of depression), I thought it would be worthwhile to talk about why gifts are so important to us.

As I mentioned a few posts ago, neoclassical economists are typically baffled by gift-giving, and several have written research papers and books about why Christmas gifts are economically inefficient and should be stopped. Oddly it never seems to occur to them that if this is true, then there is widespread irrational consumer behavior that has nothing to do with government intervention or perverse incentives—which already means neoclassical economic theory is in serious trouble. Nobody forces you to buy gifts, so if it’s such a bad idea but we do it anyway, we must not be rational agents.

But in fact it’s not such a bad idea, and it’s “inefficient” only in a very narrow-minded sense that takes no account of relationships or human emotions. Gifts only make us not “rational agents” in that we are not infinite identical psychopaths. There is in fact nothing irrational about gifts.

Gift-giving is a human universal; it has been with us far longer than money or markets or indeed civilization itself. Everyone from tribal hunter-gatherers to neoclassical economists gives gifts, and in fact most people who descend from populations that lived in higher latitudes (that is, “White people”, though perhaps in a later post I’ll explain why our “race” categories are genetically absurd) actually celebrate some sort of gift-giving ceremony around the time of the Winter Solstice. Many of our Christmas traditions actually come from the Germanic holiday Yule, which is why we say things like “Yuletide greetings” even though that has absolutely nothing to do with Jesus. We celebrate around the Solstice because it was such a momentous season for us, the darkest night of the year; as if the darkness and cold weren’t bad enough by themselves, they are the harbinger of the dreaded winter that prevents our crops from growing and may not allow us all to survive. We reaffirm our family ties and promise to help each other through this dangerous time. Music, gifts, and feasting are simply the way that humans organize our celebrations—again this is universal.

What do gifts accomplish that a simple transfer of cash would not? I can think of three things:

      1. Convey closeness: First of all there is of course the fact that by buying someone a gift at all, you are expressing the fact that you care about them and want to be close to them. But the choice of the gift also matters. Your closest friends always buy you the best gifts, because they know you the best. Thus the sort of gift you receive from someone is a measure of how well they know you. Many of us give each other lists of ideas to buy, but I always include more on the list than I expect to receive and encourage people to buy things that are not on the list that they think I might enjoy. A computer program can buy things off a list; the point is that we express our relationships by choosing things we know people want without them having to ask. We trust people to know us well enough to get it right most of the time; they’ll probably make mistakes (most people think they know others better than they actually do), but the mistakes are made up for by the successes. The disappointment in getting something you didn’t want isn’t even so much in the thing as it is in the fear that your loved ones don’t know you as well as you thought they did; this is why I consider it important to express—gently and tactfully of course—when you really don’t like a gift you received; you want them to know you better and do better next time, not keep giving you things you hate while you brood behind fake smiles. What you choose to buy conveys what you know and how you feel; this is why the best gift is one you love to have but didn’t ask for. That’s why I’m honestly more excited about my new travel pillow and copy of Randall Munroe’s What If? than I am about my new Bluetooth headset; of course the headset is more expensive and more useful, but I specifically asked for it. My sister and my mother knew me well enough to pick the book and the travel pillow without my having to ask.
      2. Grant permission to indulge: This is particularly important in the United States, because our society has Puritanical roots that make us suspicious of any activity that isn’t directly linked to productive efficiency. Honestly when those economists criticize Christmas as “inefficient” they are not so much making a serious economic argument as they are expressing in terms familiar to them the centuries-old Puritanical norm. It is considered unseemly to buy things for yourself that are purely for fun, particularly if they are expensive. You are expected to buy only the minimum you need, because any more is greedy; the notion seems to be that there is only so much stuff to go around, and if you take more others will have less. (This could scarcely be further from the truth; your frivolous consumer purchases can save children from starvation by giving their parents jobs in factories.) Neoclassical economists often think they are immune to this sort of norm, but aside from their discomfort with Christmas, the sense of righteousness they often have around “raising the savings rate” says otherwise. The link from savings to investment is tenuous at best, but one thing saving definitely does do is prevent you from spending indulgently. But since buying things that make us happy is actually kind of the entire point of having an economy in the first place, it is necessary to find workarounds for this oppressive ethic. One solution is gifts; to give someone else an indulgent gift allows them to engage in indulgent activities, while preserving their own status as someone who wouldn’t normally waste money in that way, and since you are not the one indulging you can hardly be accused of frivolity either. This is also what gift cards accomplish; in economic terms gift cards seem weird, because they are at best as good as cash, and often far worse. But gift cards are typically for retail stores where it is hard to buy something that’s not indulgent, thus offering permission to indulge. This is why a gift card for GameStop or Dick’s Sporting Goods makes sense, but a gift card to Walmart or Kroger seems odd. This is also why receiving cash or an Amazon gift card doesn’t feel as good; since you can buy just about anything, the social norm toward spending responsibly returns. (Never buy anyone a VISA gift card; it’s basically the same as cash except you’re giving some of the money to VISA.)
      3. Convey your own status: By buying expensive things for other people, you raise your own reputation as an individual. This one is easy to become cynical about, so it’s important to be clear what it actually means. Conveying your own status doesn’t necessarily mean arrogantly domineering over other people. It certainly can mean that, which is why if your cousin has $20 million and buys everyone in the family a new car every year, you’d honestly not be that thrilled about it; yeah, it’s nice getting a new car, but your cousin is clearly showboating his superior wealth and trying to make everyone else look cheap and/or poor. But there is a way to elevate your own status without downgrading everyone else’s, and truly generous gifts are a way of doing that. If the things you buy are really things your loved ones truly need, then you express your generosity and love for them by buying more than you can easily afford. Philanthropy is also a means of conveying status, and again comes in both forms. When Carnegie built buildings and named them after himself, he was being arrogant and domineering. When Bill Gates established a foundation to combat malaria and poverty in Africa, he was being genuinely generous. This kind of status is always a bit paradoxical: The best way to earn a reputation as a good person is to honestly try to help people and have little concern for your own reputation; people who try too hard to improve their own reputations just end up seeming arrogant and narcissistic. In order to deserve status, it is necessary not to directly seek it. The clearest example here is Jonas Salk: He invented a vaccine that saved the lives of thousands of children, making him more deserving of a billion dollars than anyone else I can think of. And he had a chance at a billion dollars, but he specifically gave it up, because in order to get it he would have had to enforce a patent that would raise the price of the vaccine and allow children to needlessly suffer and die. It was the very character that made him deserve the wealth that caused him to refuse it. The only way to hit the target is to aim much higher.

If you really want to insist, yes, there’s also some sort of net transfer of wealth involved in gift-giving, because it is expected that the richer you are the more you’ll spend on gifts. But that’s a very small part; even in hunter-gatherer societies that have negligible levels of inequality, human beings still give each other gifts. Gifts are a part of us; they are written in the language of life itself upon the ancient thread that binds us to our ancestors and makes us who we are—by which I mean, of course, DNA. We could probably no more stop giving gifts than we could stop feeling love.

The World Development Report is on cognitive economics this year!

JDN 2457013 EST 21:01.

On a personal note, I can now proudly report that I have successfully defended my thesis “Corruption, ‘the Inequality Trap’, and ‘the 1% of the 1%’”, and I have now completed a master’s degree in economics. I’m back home in Michigan for the holidays (hence my use of Eastern Standard Time), and then, well… I’m not entirely sure. I have a gap of about six months before PhD programs start. I have a number of job applications out, but unless I get a really good offer (such as the position at the International Food Policy Research Institute in DC) I think I may just stay in Michigan for a while and work on my own projects, particularly publishing two of my books (my nonfiction magnum opus, The Mathematics of Tears and Joy, and my first novel, First Contact) and making some progress on a couple of research papers—ideally publishing one of them as well. But the future for me right now is quite uncertain, and that is now my major source of stress. Ironically I’d probably be less stressed if I were working full-time, because I would have a clear direction and sense of purpose. If I could have any job in the world, it would be a hard choice between a professorship at UC Berkeley and a research position at the World Bank.

Which brings me to the topic of today’s post: The people who do my dream job have just released a report showing that they basically agree with me on how it should be done.

If you have some extra time, please take a look at the World Bank World Development Report. They put one out each year, and it provides a rigorous and thorough (236 pages) but quite readable summary of the most important issues in the world economy today. It’s not exactly light summer reading, but nor is it the usual morass of arcane jargon. If you like my blog, you can probably follow most of the World Development Report. If you don’t have time to read the whole thing, you can at least skim through all the sidebars and figures to get a general sense of what it’s all about. Much of the report is written in the form of personal vignettes that make the general principles more vivid; but these are not mere anecdotes, for the report rigorously cites an enormous volume of empirical research.

The title of the 2015 report? “Mind, Society, and Behavior”. In other words, cognitive economics. The world’s foremost international economic institution has just endorsed cognitive economics and rejected neoclassical economics, and their report on the subject provides a brilliant introduction to the subject replete with direct applications to international development.

For someone like me who lives and breathes cognitive economics, the report is pure joy. It’s all there, from the anchoring heuristic to social proof, from corruption to discrimination. The report is broadly divided into three parts.

Part 1 explains the theory and evidence of cognitive economics, subdivided into “thinking automatically” (heuristics), “thinking socially” (social cognition), and “thinking with mental models” (bounded rationality). (If I wrote it I’d also include sections on the tribal paradigm and narrative, but of course I’ll have to publish that stuff in the actual research literature first.) Anyway the report is so amazing as it is I really can’t complain. It includes some truly brilliant deorbits on neoclassical economics, such as this one from page 47: “In other words, the canonical model of human behavior is not supported in any society that has been studied.”

Part 2 uses cognitive economic theory to analyze and improve policy. This is the core of the report, with chapters on poverty, childhood, finance, productivity, ethnography, health, and climate change. So many different policies are analyzed I’m not sure I can summarize them with any justice, but a few particularly stuck out: First, the high cognitive demands of poverty can basically explain the whole observed difference in IQ between rich and poor people—so contrary to the right-wing belief that people are poor because they are stupid, in fact people seem stupid because they are poor. Simplifying the procedures for participation in social welfare programs (which is desperately needed, I say with a stack of incomplete Medicaid paperwork on my table—even I find these packets confusing, and I have a master’s degree in economics) not only increases their uptake but also makes people more satisfied with them—and of course a basic income could simplify social welfare programs enormously. “Are you a US citizen? Is it the first of the month? Congratulations, here’s $670.” Another finding that I found particularly noteworthy is that productivity is in many cases enhanced by unconditional gifts more than it is by incentives that are conditional on behavior—which goes against the very core of neoclassical economic theory. (It also gives us yet another item on the enormous list of benefits of a basic income: Far from reducing work incentives by the income effect, an unconditional basic income, as a shared gift from your society, may well motivate you even more than the same payment as a wage.)

Part 3 is a particularly bold addition: It turns the tables and applies cognitive economics to economists themselves, showing that human irrationality is by no means limited to idiots or even to poor people (as the report discusses in chapter 4, there are certain biases that poor people exhibit more—but there are also some they exhibit less); all human beings are limited by the same basic constraints, and economists are human beings. We like to think of ourselves as infallibly rational, but we are nothing of the sort. Even after years of studying cognitive economics I still sometimes catch myself making mistakes based on heuristics, particularly when I’m stressed or tired. As a long-term example, I have a number of vague notions of entrepreneurial projects I’d like to do, but none for which I have been able to muster the effort and confidence to actually seek loans or investors. Rationally, I should either commit or abandon them, yet cannot quite bring myself to do either. And then of course I’ve never met anyone who didn’t procrastinate to some extent, and actually those of us who are especially smart often seem especially prone—though we often adopt the strategy of “active procrastination”, in which we end up doing something else useful when procrastinating (my apartment becomes cleanest when I have an important project to work on), or purposefully choose to work under pressure because we are more effective that way.

And the World Bank pulled no punches here, showing experiments on World Bank economists clearly demonstrating confirmation bias, sunk-cost fallacy, and what the report calls “home team advantage”, more commonly called ingroup-outgroup bias—which is basically a form of the much more general principle that I call the tribal paradigm.

If there is one flaw in the report, it’s that it’s quite long and fairly exhausting to read, which means that many people won’t even try and many who do won’t make it all the way through. (The fact that it doesn’t seem to be available in hard copy makes it worse; it’s exhausting to read lengthy texts online.) We only have so much attention and processing power to devote to a task, after all—which is kind of the whole point, really.

Yes, but what about the next 5000 years?

JDN 2456991 PST 1:34.

This week’s post will be a bit different: I have a book to review. It’s called Debt: The First 5000 Years, by David Graeber. The book is long (about 400 pages plus endnotes), but such a compelling read that the hours melt away. “The First 5000 Years” is an incredibly ambitious subtitle, but Graeber actually manages to live up to it quite well; he really does tell us a story that is more or less continuous from 3000 BC to the present.

So who is this David Graeber fellow, anyway? None will be surprised that he is a founding member of Occupy Wall Street—he was in fact the man who coined “We are the 99%”. (As I’ve studied inequality more, I’ve learned he made a mistake; it really should be “We are the 99.99%”.) I had expected him to be a historian, or an economist; but in fact he is an anthropologist. He is looking at debt and its surrounding institutions in terms of a cultural ethnography—he takes a step outside our own cultural assumptions and tries to see them as he might if he were encountering them in a foreign society. This is what gives the book its freshest parts; Graeber recognizes, as few others seem willing to, that our institutions are not the inevitable product of impersonal deterministic forces, but decisions made by human beings.

(On a related note, I was pleasantly surprised to see in one of my economics textbooks yesterday a neoclassical economist acknowledging that the best explanation we have for why Botswana is doing so well—low corruption, low poverty by African standards, high growth—really has to come down to good leadership and good policy. For once they couldn’t remove all human agency and mark it down to grand impersonal ‘market forces’. It’s odd how strong the pressure is to do that, though; I even feel it in myself: Saying that civil rights progressed so much because Martin Luther King was a great leader isn’t very scientific, is it? Well, if that’s what the evidence points to… why not? At what point did ‘scientific’ come to mean ‘human beings are helplessly at the mercy of grand impersonal forces’? Honestly, doesn’t the link between science and technology make matters quite the opposite?)

Graeber provides a new perspective on many things we take for granted: in the introduction there is one particularly compelling passage where he starts talking—with a fellow left-wing activist—about the damage that has been done to the Third World by IMF policy, and she immediately interjects: “But surely one has to pay one’s debts.” The rest of the book is essentially an elaboration on why we say that—and why it is absolutely untrue.

Graeber has also made me think quite a bit differently about Medieval society and in particular Medieval Islam; this was certainly the society in which the writings of the Greek philosophers were preserved and algebra was invented, so it couldn’t have been all bad. But in fact, assuming that Graeber’s account is accurate, Muslim societies in the 14th century actually had something approaching the idyllic fair and free market to which all neoclassicists aspire. They did so, however, by rejecting one of the core assumptions of neoclassical economics, and you can probably guess which one: the assumption that human beings are infinite identical psychopaths. Instead, merchants in Medieval Muslim society were held to high moral standards, and their livelihood was largely based upon the reputation they could maintain as upstanding good citizens. Theoretically they couldn’t even lend at interest, though in practice they had workarounds (like payment in installments that total slightly higher than the original price) that amounted to low rates of interest. They did not, however, have anything approaching the levels of interest that we have today in credit cards at 29% or (it still makes me shudder every time I think about it) payday loans at 400%. Paying on installments to a Muslim merchant would make you end up paying about a 2% to 4% rate of interest—which sounds to me almost exactly like what it should be, maybe even a bit low because we’re not taking inflation into account. In any case, the moral standards of society kept people from getting too poor or too greedy, and as a result there was little need for enforcement by the state. In spite of myself I have to admit that may not have been possible without the theological enforcement provided by Islam.
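
As a quick illustration of the installment workaround (my own invented numbers, not figures from Graeber):

```python
# Buy goods with a spot price of 100 dinars by paying four installments
# of 26 over the course of a year (all numbers made up for illustration).
price, installment, n = 100.0, 26.0, 4
total_paid = installment * n
premium = (total_paid - price) / price
print(f"total paid: {total_paid}, premium over spot: {premium:.0%}")  # 104.0, 4%

# On a declining balance the effective annualized rate is somewhat higher
# than the headline premium, but still nothing like a 29% credit card,
# let alone a 400% payday loan.
```
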
Graeber also avoids one of the most common failings of anthropologists, the cultural relativism that makes them unwilling to criticize any cultural practice as immoral even when it obviously is (though they usually make an exception for modern Western capitalist imperialism). While at times I can see he was tempted to go that way, he generally avoids it; several times he goes out of his way to point out how women were sold into slavery in hunter-gatherer tribes and how that contributed to the institutions of chattel slavery that developed once Western powers invaded.

Anthropologists have another common failing that I don’t think he avoids as well, which is a primitivist bent in which anthropologists speak of ancient societies as idyllic and modern societies as horrific. That’s part of why I said ‘if Graeber’s account is accurate,’ because I’m honestly not sure it is. I’ll need to look more into the history of Medieval Islam to be sure. Graeber spends a great deal of time talking about how our current monetary system is fundamentally based on threats of violence—but I can tell you that I have honestly never been threatened with violence over money in my entire life. Not by the state, not by individuals, not by corporations. I haven’t even been mugged—and that’s the sort of thing the state exists to prevent. (Not that I’ve never been threatened with violence—but so far it’s always been either something personal, or, more often, bigotry against LGBT people.) If violence is the foundation of our monetary system, then it’s hiding itself extraordinarily well. Granted, the violence probably pops up more if you’re near the very bottom, but I think I speak for most of the American middle class when I say that I’ve been through a lot of financial troubles, but none of them have involved any guns pointed at my head. And you can’t counter this by saying that we theoretically have laws on the books that allow you to be arrested for financial insolvency—because that’s always been true, in fact it’s less true now than at any other point in history, and Graeber himself freely admits this. The important question is how many people actually get violence enforced upon them, and at least within the United States that number seems to be quite small.

Graeber describes the true story of the emergence of money historically, as the result of military conquest—a way to pay soldiers and buy supplies when in an occupied territory where nobody trusts you. He demolishes the (always fishy) argument that money emerged as a way of mediating a barter system: If I catch fish and he makes shoes and I want some shoes but he doesn’t want fish right now, why not just make a deal to pay later? This is of course exactly what they did. Indeed Graeber uses the intentionally provocative word communism to describe the way that resources are typically distributed within families and small villages—because it basically is “from each according to his ability, to each according to his need”. (I would probably use the less-charged word “community”, but I have to admit that those come from the same Latin root.) He also describes something I’ve tried to explain many times to neoclassical economists to no avail: There is equally a communism of the rich, a solidarity of deal-making and collusion that undermines the competitive market that is supposed to keep the rich in check. Graeber points out that wine, women and feasting have been common parts of deals between villages throughout history—and yet are still common parts of top-level business deals in modern capitalism. Even as we claim to be atomistic rational agents we still fall back on the community norms that guided our ancestors.

Another one of my favorite lines in the book is on this very subject: “Why, if I took a free-market economic theorist out to an expensive dinner, would that economist feel somewhat diminished—uncomfortably in my debt—until he had been able to return the favor? Why, if he were feeling competitive with me, would he be inclined to take me someplace even more expensive?” That doesn’t make any sense at all under the theory of neoclassical rational agents (an infinite identical psychopath would just enjoy the dinner—free dinner!—and might never speak to you again), but it makes perfect sense under the cultural norms of community in which gifts form bonds and generosity is a measure of moral character. I also got thinking about how introducing money directly into such exchanges can change them dramatically: For instance, suppose I took my professor out to a nice dinner with drinks in order to thank him for writing me recommendation letters. This seems entirely appropriate, right? But now suppose I just paid him $30 for writing the letters. All of a sudden it seems downright corrupt. But the dinner check said $30 on it! My bank account debit is the same! He might go out and buy a dinner with it! What’s the difference? I think the difference is that the dinner forms a relationship that ties the two of us together as individuals, while the cash creates a market transaction between two interchangeable economic agents. By giving my professor cash I would effectively be saying that we are infinite identical psychopaths.

While Graeber doesn’t get into it, a similar argument also applies to gift-giving on holidays and birthdays. There seriously is—I kid you not—a neoclassical economist who argues that Christmas is economically inefficient and should be abolished in favor of cash transfers. He wrote a book about it. He literally does not understand the concept of gift-giving as a way of sharing experiences and solidifying relationships. This man must be such a joy to have around! I can imagine it now: “Will you play catch with me, Daddy?” “Daddy has to work, but don’t worry dear, I hired a minor league catcher to play with you. Won’t that be much more efficient?”

This sort of thing is what makes Debt such a compelling read, and Graeber does make some good points and presents a wealth of historical information. So now it’s time to talk about what’s wrong with the book, the things Graeber gets wrong.

First of all, he’s clearly quite ignorant about the state-of-the-art in economics, and I’m not even talking about the sort of cutting-edge cognitive economics experiments I want to be doing. (When I read what Molly Crockett has been working on lately in the neuroscience of moral judgments, I began to wonder if I should apply to University College London after all.)

No, I mean Graeber is ignorant of really basic stuff, like the nature of government debt—almost nothing of what I said in that post is controversial among serious economists; the equations certainly aren’t, though some of the interpretation and application might be. (One particularly likely sticking point called “Ricardian equivalence” is something I hope to get into in a future post. You already know the refrain: Ricardian equivalence only happens if you live in a world of infinite identical psychopaths.) Graeber has internalized the Republican talking points about how this is money our grandchildren will owe to China; it’s nothing of the sort, and most of it we “owe” to ourselves. In a particularly baffling passage Graeber talks about how there are no protections for creditors of the US government, when creditors of the US government have literally never suffered a single late payment in the last 200 years. There are literally no creditors in the world who are more protected from default—and only a few others that reach the same level, such as creditors to the Bank of England.

In an equally bizarre aside he also says in one endnote that “mainstream economists” favor the use of the gold standard and are suspicious of fiat money; exactly the opposite is the case. Mainstream economists—even the neoclassicists with whom I have my quarrels—are in almost total agreement that a fiat monetary system managed by a central bank is the only way to have a stable money supply. The gold standard is the pet project of a bunch of cranks and quacks like Peter Schiff. Like most quacks, they are quite vocal; but they are by no means supported by academic research or respected by top policymakers. (I suppose the latter could change if enough Tea Party Republicans get into office, but so far even that hasn’t happened and Janet Yellen continues to manage our fiat money supply.) In fact, it’s basically a consensus among economists that the gold standard caused the Great Depression—that in addition to some triggering event (my money is on Minsky-style debt deflation—and so is Krugman’s), the inability of the money supply to adjust was the reason why the world economy remained in such terrible shape for such a long period. The gold standard has not been a mainstream position among economists since roughly the mid-1980s—before I was born.

He makes this really bizarre argument about how because Korea, Japan, Taiwan, and West Germany are major holders of US Treasury bonds and became so under US occupation—which is indisputably true—that means that their development was really just some kind of smokescreen to sell more Treasury bonds. First of all, we’ve never had trouble selling Treasury bonds; people are literally accepting negative interest rates in order to have them right now. More importantly, Korea, Japan, Taiwan, and West Germany—those exact four countries, in that order—are the greatest economic success stories in the history of the human race. West Germany was rebuilt literally from rubble to become once again a world power. The Asian Tigers were even more impressive, raised from the most abject Third World poverty to full First World high-tech economy status in a few generations. If this is what happens when you buy Treasury bonds, we should all buy as many Treasury bonds as we possibly can. And while that seems intuitively ridiculous, I have to admit, China’s meteoric rise also came with an enormous investment in Treasury bonds. Maybe the secret to economic development isn’t physical capital or exports or institutions; nope, it’s buying Treasury bonds. (I don’t actually believe this, but the correlation is there, and it totally undermines Graeber’s argument that buying Treasury bonds makes you some kind of debt peon.)

Speaking of correlations, Graeber is absolutely terrible at econometrics; he doesn’t even seem to grasp the most basic concepts. On page 366 he shows this graph of the US defense budget and the US federal debt side by side in order to argue that the military is the primary reason for our national debt. First of all, he doesn’t even correct for inflation—so most of the exponential rise in the two curves is simply the purchasing power of the dollar declining over time. Second, he doesn’t account for GDP growth, which is most of what’s left after you account for inflation. He has two nonstationary time-series with obvious exponential trends and doesn’t even formally correlate them, let alone actually perform the proper econometrics to show that they are cointegrated. I actually think they probably are cointegrated, and that a large portion of national debt is driven by military spending, but Graeber’s graph doesn’t even begin to make that argument. You could just as well graph the number of murders and the number of cheesecakes sold, each on an annual basis; both of them would rise exponentially with population, thus proving that cheesecakes cause murder (or murders cause cheesecakes?).
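
The trap is easy to demonstrate; here is a small simulation of mine in which two completely unrelated exponential trends correlate almost perfectly in levels:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(60)  # sixty years of annual data

# Two independent series, both growing ~5% a year with small random noise:
debt = 100 * np.exp(0.05 * t + rng.normal(0, 0.02, t.size))
cheesecakes = 7 * np.exp(0.05 * t + rng.normal(0, 0.02, t.size))

# Correlating the raw levels "proves" cheesecakes cause debt:
print(np.corrcoef(debt, cheesecakes)[0, 1])  # ~0.99

# The standard first step is to compare growth rates (log-differences),
# which removes the shared trend and reveals there is no relationship:
print(np.corrcoef(np.diff(np.log(debt)), np.diff(np.log(cheesecakes)))[0, 1])  # ~0
```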

And then where Graeber really loses me is when he develops his theory of how modern capitalism and the monetary and debt system that go with it are fundamentally corrupt to the core and must be abolished and replaced with something totally new. First of all, he never tells us what that new thing is supposed to be. You’d think in 400 pages he could at least give us some idea, but no; nothing. He apparently wants us to do “not capitalism”, which is an infinite space of possible systems, some of which might well be better, but none of which can actually be implemented without more specific ideas. Many have declared that Occupy has failed. I am convinced that those who say this appreciate neither how long it takes social movements to make change, nor how effective Occupy has already been at changing our discourse—so that Capital in the Twenty-First Century can be a bestseller and the President of the United States can mention income inequality and economic mobility in his speeches. But insofar as Occupy has failed to achieve its goals, it seems to me that this is because it was never clear just what Occupy’s goals were to begin with. Now that I’ve read Graeber’s work, I understand why: He wanted it that way. He didn’t want to go through the hard work (which is also risky: you could be wrong) of actually specifying what this new economic system would look like; instead he’d prefer to find flaws in the current system and then wait for someone else to figure out how to fix them. That has always been the easy part; any human system comes with flaws. The hard part is actually coming up with a better system—and Graeber doesn’t seem willing to even try.

I don’t know exactly how accurate Graeber’s historical account is, but it seems to check out, and even make sense of some things that were otherwise baffling about the sketchy account of the past I had previously learned. Why were African tribes so willing to sell their people into slavery? Well, because they didn’t think of it as their people—they were selling captives from other tribes taken in war, which is something they had done since time immemorial in the form of slaves for slaves rather than slaves for goods. Indeed, it appears that trade itself emerged originally as what Graeber calls a “human economy”, in which human beings are literally traded as a fungible commodity—but always humans for humans. When money was introduced, people continued selling other people, but now it was for goods—and apparently most of the people sold were young women. So much of the Bible makes more sense that way: Why would Job be all right with getting new kids after losing his old ones? Kids are fungible! Why would people sell their daughters for goats? We always sell women! How quickly do we flirt with the unconscionable, when first we say that all is fungible.

One of Graeber’s central points is that debt came long before money—you owed people apples or hours of labor long before you ever paid anybody in gold. Money only emerged when debt became impossible to enforce, usually because trade was occurring between soldiers and the villages they had just conquered, so nobody was going to trust anyone to pay anyone back. Immediate spot trades were the only way to ensure that trades were fair in the absence of trust or community. In other words, the first use of gold as money was really using it as collateral. All of this makes a good deal of sense, and I’m willing to believe that’s where money originally came from.

But then Graeber tries to use this horrific and violent origin of money—in war, rape, and slavery, literally some of the worst things human beings have ever done to one another—as an argument for why money itself is somehow corrupt and capitalism with it. This is nothing short of a genetic fallacy: I could agree completely that money had this terrible origin, and yet still say that money is a good thing and worth preserving. (Indeed, I’m rather strongly inclined to say exactly that.) The fact that it was born of violence does not mean that it is violence; we too were born of violence, literally millions of years of rape and murder. It is astronomically unlikely that any one of us does not have a murderer somewhere in our ancestry. (Supposedly I’m descended from Julius Caesar, hence my last name Julius—not sure I really believe that—but if so, there you go, a murderer and tyrant.) Are we therefore all irredeemably corrupt? No. Where you come from does not decide what you are or where you are going.

In fact, I could even turn the argument around: Perhaps money was born of violence because it is the only alternative to violence; without money we’d still be trading our daughters away because we had no other way of trading. I don’t think I believe that either; but it should show you how fragile an argument from origin really is.

This is why the whole book gives this strange feeling of non sequitur; all this history is very interesting and enlightening, but what does it have to do with our modern problems? Oh. Nothing, that’s what. The connection you saw doesn’t make any sense, so maybe there’s just no connection at all. Well all right then. This was an interesting little experience.

This is a shame, because I do think there are important things to be said about the nature of money culturally, philosophically, morally—but Graeber never gets around to saying them, seeming to think that merely pointing out money’s violent origins is a sufficient indictment. It’s worth talking about the fact that money is something we made, something we can redistribute or unmake if we choose. I had such high expectations after I read that little interchange about the IMF: Yes! Finally, someone gets it! No, you don’t have to repay debts if that means millions of people will suffer! But then he never really goes back to that. The closest he veers toward an actual policy recommendation is at the very end of the book, a short section entitled “Perhaps the world really does owe you a living” in which he very briefly suggests—doesn’t even argue for, just suggests—that perhaps people do deserve a certain basic standard of living even if they aren’t working. He could have filled 50 pages arguing the ins and outs of a basic income with graphs and charts and citations of experimental data—but no, he just spends a few paragraphs proposing the idea and then ends the book. (I guess I’ll have to write that chapter myself; I think it would go well in The End of Economics, which I hope to get back to writing in a few months—while I also hope to finally publish my already-written book The Mathematics of Tears and Joy.)

If you want to learn about the history of money and debt over the last 5000 years, this is a good book to do so—and that is, after all, what the title said it would be. But if you’re looking for advice on how to improve our current economic system for the benefit of all humanity, you’ll need to look elsewhere.

And so in the grand economic tradition of reducing complex systems into a single numeric utility value, I rate Debt: The First 5000 Years a 3 out of 5.

Who are the job creators?

JDN 2456956 PDT 11:30.

For about 20 years now, conservatives have opposed any economic measures that might redistribute wealth from the rich as hurting “job creators” and thereby damaging the economy. This has become so common that the phrase “job creator” has become a euphemism for “rich person”; indeed, when Paul Ryan was asked to define “rich” he stumbled over himself and ended up with “job creators”. A few years ago, John Boehner gave a speech saying that ‘the job creators are on strike’. During his presidential campaign, Mitt Romney said Obama was ‘waging war on job creators’.

If you get the impression that the “job creator” narrative is used more often now than ever, you’re not imagining things; the term was used almost as many times in a single month of Obama’s presidency as in George W. Bush’s entire second term.

This narrative is not just wrong; it’s utterly ludicrous. The vision seems to be something like this: Out there somewhere, beyond the view of ordinary mortals, there lives a race of beings known as Job Creators. Ours is not to judge them, not to influence them; ours is only to appease them so that they might look upon us with favor and bestow upon us our much-needed Jobs. Without these Jobs, we will surely die, and so all other concerns are secondary: We must appease the Job Creators.

Businesses don’t create jobs because they feel like it, or because they love us, or because we have gone through the appropriate appeasement rituals. They don’t create jobs because their taxes are low or because they have extra money lying around. They create jobs because they see profit in it. They create jobs because the marginal revenue of hiring an additional worker exceeds the marginal cost.

And of course they’ll gladly destroy jobs for the exact same reasons; if they think the marginal cost exceeds the marginal revenue, out come the pink slips. If demand for the product has fallen, if the raw materials have become more expensive, or if new technology has allowed some of the labor to be cheaply automated, workers will be laid off in the interests of the company. In fact, sometimes it won’t even be in the interests of the company; corporate executives are lately in the habit of using layoffs and stock buybacks to artificially boost the value of their stock options so they can exercise them, pocket the money, and run away as the company comes crashing to the ground. Because of market deregulation and the ridiculous theory of “shareholder value” (as though shareholders are the only ones who matter!), our stock market has changed from a system of value creation to a system of value extraction.

What actually creates jobs? Demand. If the demand for their product exceeds the company’s capacity to produce it, they will hire more people in order to produce more of the product. The marginal revenue has to go up, or companies will have no reason to hire new workers. (The marginal cost could also go down, but then you get low-paying jobs, which isn’t really what we’re aiming for.) They will continue hiring more people up until the point at which it costs more to hire someone than they’d make from selling the products that person could make for them.
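
If you like, here is that hiring logic as a toy sketch in Python (every number is an invented illustration, not an estimate of any real firm): the firm keeps hiring while the revenue from the next worker exceeds the wage, so a rise in demand, via a higher price, raises employment.

```python
# Toy hiring rule: hire while the marginal revenue product of the next
# worker is at least the cost of employing them. Illustrative numbers only.

def output(workers: int) -> float:
    """Total production, with diminishing returns to labor."""
    return 100.0 * workers ** 0.7

def optimal_hires(price: float, wage: float) -> int:
    workers = 0
    while True:
        marginal_product = output(workers + 1) - output(workers)
        if price * marginal_product < wage:  # the next worker loses money
            return workers
        workers += 1

print(optimal_hires(price=20.0, wage=300.0))  # baseline workforce
print(optimal_hires(price=24.0, wage=300.0))  # more demand -> more hiring
print(optimal_hires(price=20.0, wage=240.0))  # cheaper labor -> also more hiring
```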

What if they don’t have enough money? They’ll borrow it. As long as they know they are going to make a profit from that worker, they will gladly borrow money in order to hire them. Indeed, corporations do this sort of thing all the time. If banks stop lending, that’s a big problem—it’s called a credit crunch and it’s a major part of just about any financial crisis. But that isn’t because rich people don’t have enough money, it’s because our banking system is fundamentally defective and corrupt. Yes, fixing the banking system would create jobs in a number of different ways. (The three biggest I can think of: There would be more credit for real businesses to fund investment, more credit for individuals to increase demand, and labor effort that is currently wasted on useless financial speculation would be once again returned to real production.) But that’s not what Paul Ryan and his ilk are talking about—indeed, Paul Ryan seems to think that we should undo the meager reforms we’ve already made. Unless we fundamentally change the financial system, the way to create jobs would be to create demand.

And what decides demand? Well, a lot of things I suppose; preferences, technologies, cultural norms, fads, advertising, and so on. But when you’re looking at short-run changes like the business cycle, the driving factor in most cases is actually quite simple: How much money does the middle class have to spend? The middle class is where most of the consumer spending comes from, and if the middle class has money to spend we will buy products. If we don’t have money to spend—we’re out of work, or we have too much debt to pay—then we won’t buy products. It’s not that we suddenly stopped wanting products; the utility value of those products to us is unchanged. The problem is that we simply can’t afford them anymore. This is what happens in a recession: After some sort of shock to the economy, the middle class stops being able to spend, which reduces demand. That causes corporations to lay off workers, which creates unemployment, which reduces demand even further. To correct for the lost demand, prices are supposed to go down (deflation); but this doesn’t actually work, for two reasons.

First, people absolutely hate seeing their wages go down; even if there is a legitimate economic reason, people still have a sense that they are being exploited by their employers (and sometimes they are). This is called downward nominal wage rigidity.

Second, when prices go down, the real value of debt doesn’t go down; it goes up. Your loans are denominated in dollars, not apples; so reducing the price of apples means that you actually owe more apples than you did before. Since debt is usually one of the big things holding back spending by the middle class in the first place, deflation doesn’t correct the imbalance; it makes it worse. This is called debt deflation. Maybe we shouldn’t call it that, since the problem isn’t the prices, it’s the debt. In 2008, the first thing that happened wasn’t that prices in general went down, which is what we normally mean by “deflation”; it was that housing prices went down, and so suddenly people owed vastly more on their mortgages than they had before, and many of them couldn’t afford to pay. It wasn’t a drop in prices so much as a rise in the real value of debt. (I actually think one of the reasons there is no successful comprehensive theory of the cause of business cycles is that there isn’t a single comprehensive cause of business cycles. It’s usually some form of financial crisis followed by debt deflation—and these are the ones to be worried about, 1929 and 2008—but that isn’t always what happens. In 2001, we actually had an unanticipated negative real economic shock—the 9/11 attacks. In 1973 we had a different kind of real economic shock when OPEC raised oil prices at the same time as the US hit peak oil. We should probably be distinguishing between financial recession and real recession.)
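
Here is the debt-deflation mechanism in miniature (a sketch with illustrative numbers): hold the dollar debt fixed, let wages fall along with prices, and the burden of the debt measured in hours of work goes up.

```python
# Debt deflation in miniature: the debt is fixed in dollars, so when
# prices (and with them wages) fall, the REAL burden of the debt rises.
# All numbers are illustrative assumptions, not data.

debt = 200_000.0   # a mortgage, in dollars -- fixed in nominal terms
wage = 25.0        # dollars per hour

for deflation in (0.0, 0.10, 0.20):      # economy-wide price declines
    new_wage = wage * (1 - deflation)    # wages fall with prices
    hours_to_repay = debt / new_wage
    print(f"{deflation:4.0%} deflation: {hours_to_repay:6.0f} hours of work to repay")
```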

Notice how in this entire discussion of what drives aggregate demand, I have never mentioned rich people getting free money; I haven’t even mentioned tax rates. If you have the simplistic view “taxes are bad” (or the totally insane, yet still common, view “taxation is slavery”), then you’re going to look for excuses to lower taxes whenever you can. If you specifically love rich people more than poor people, you’re going to look for excuses to lower taxes on the rich and raise them on the poor (and there is really no other way to interpret Mitt Romney’s infamous “47%” comments). But none of this has anything to do with aggregate demand and job creation. It is pure ideology and has no basis in economics.

Indeed, there’s little reason to think that a tax on corporate profits or capital income would change hiring decisions at all. When we talk about the potential distortions of income taxes, we really have to be talking about labor income, because labor can actually be disincentivized. Say you’re making $15 an hour and not paying any taxes, but your tax rate is suddenly raised to 40%. You can see that after taxes your real wage is now only $9, and maybe you’ll decide that it’s just not worth it to work those hours. This is because you pay a real cost to work—it’s hard, it’s stressful, it’s frustrating, it takes up time.

Capital income can’t be disincentivized. You can have relative incentives, if you tax certain kinds of capital more than others. But if you tax all capital income at the same rate, the incentives remain exactly as they were before: Seek the highest return on investment. Your only costs were financial, and your only benefits are financial. Yes, you’ll be unhappy that your after-tax return on investment has gone down; but it won’t change your investment decisions. If you previously had the choice between investment A yielding a 5% return and investment B yielding a 10% return, you’d choose B. Now you pay a 40% tax on capital income; you now have a choice between a 3% real return on A and a 6% real return on B—you’re still going to choose B. That’s probably why high marginal tax rates on income don’t reduce job growth—because most high incomes are capital incomes of one form or another; even when a CEO reports ordinary income it’s really due to profits and stock options, it’s not like he was paid a wage for work he did.
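
Here are those two examples side by side as a sketch, using the same numbers as in the text: the labor tax changes the real payoff of working at all, while a uniform capital tax scales every return by the same factor and never changes which investment wins.

```python
# Labor income tax: lowers the real payoff of working at all.
wage = 15.0
labor_tax = 0.40
print(f"After-tax wage: ${wage * (1 - labor_tax):.2f}/hour")  # $9.00: maybe not worth it

# Uniform capital income tax: scales every return by the same factor,
# so the ranking of investments (and hence the choice) never changes.
returns = {"A": 0.05, "B": 0.10}
capital_tax = 0.40
after_tax = {name: round(r * (1 - capital_tax), 4) for name, r in returns.items()}
print(after_tax)                          # {'A': 0.03, 'B': 0.06}
print(max(returns, key=returns.get))      # B wins before the tax...
print(max(after_tax, key=after_tax.get))  # ...and B still wins after
```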

To be fair, it does get more complicated when you include borrowing and interest rates (now you have the option of lending your money at interest or borrowing more from someone else, which may be taxed differently), and because it’s so easy to move money across borders you can have a relative incentive even when tax rates within a given nation are all the same. Don’t take this literally as saying that you can do whatever you want with taxes on capital income. But in fact you can do quite a lot, because you can change the real rate of return and have no incentive effect as long as you don’t change the relative rate of return. That’s different from wages, for which the real value of the wage can have a direct effect on employers and employees. (The only way to have the same effect on workers would be to somehow lower the real cost of working—make working easier or more fun—which actually sounds like a great idea if you can do it.) The people who are constantly telling us that workers need to tighten their belts but we mustn’t dare tax the “job creators” have the whole situation exactly backwards.

There’s something else I should bring up as well. In everything I’ve said above, I have taken as given the assumption that we need jobs. For many people, probably most Americans in fact, this is an unquestioned assumption, seemingly so obvious as to be self-evident; of course we need jobs, right? But no, actually, we don’t; what we need is production and distribution of wealth. We need to make food and clothing and houses—those are truly basic needs. We could even say we “need” (or at least want) to make televisions and computers and cars. As individuals and as a society we benefit from having these goods. And in our present capitalist economy, the way that we produce and distribute goods is through a system of jobs—you are paid to make goods, and then you can use that money to buy other goods. Don’t get me wrong; this system works pretty well, and for the most part I want to make small adjustments and reforms around the edges rather than throw the whole thing out. Thus far, other systems have not worked as well; when we have attempted to centrally plan production and distribution, the best-case scenario has been inefficiency and the worst-case scenario has been mass starvation.

But we should also be open to the possibility of other systems that are better than capitalism. We should be open to the possibility of a culture like, well, The Culture (and if you haven’t read any Iain Banks novels you should; I’d probably start with Player of Games), in which artificial intelligence and automation allow central planning to finally achieve efficient production and distribution. We should be open to the possibility of a culture like the Federation (and don’t tell me you haven’t seen Star Trek!), in which resources are so plentiful that anyone can have whatever they want, and people work not because they have to, but because they want to—it gives them meaning and purpose in their lives. Fanciful? Perhaps. But lightspeed worldwide communication and landing robots on other planets would have seemed pretty fanciful a century ago.

Capitalism is really an Industrial Era system. It was designed in, and for, a world in which the most important determinants of production are machines, raw materials, and labor hours. But we don’t live in that world anymore. The most important determinants of production are now ideas: software, research, patents, copyrights. Microsoft, Google, and Amazon don’t make things at all; they make ideas. Sony, IBM, Apple, and Toshiba make things, but those things are primarily for the production and dissemination of ideas. Ideas are just as valuable as things—if not more so—but they obey different rules.

Capitalism was designed for a world of rival, excludable goods with increasing marginal cost. Rival, meaning that if one person has it, someone else can’t have it anymore. We speak of piracy as “stealing”, but that’s totally wrong; if you steal something I have, I don’t have it anymore. If you pirate something I have, I still have it. If I gave you my computer, I wouldn’t have it anymore; but I can give you the ideas in this blog post and then we’ll both have them. Excludable, meaning that there is a way to prevent someone else from getting it if you don’t want them to. And increasing marginal cost, meaning that the more you make, the more it costs to make each one. Under these conditions, you get a very nice equilibrium that is efficient under competition.

But ideas are nonrival, they have nearly zero marginal cost, and we are increasingly finding that they aren’t even very excludable; DRM is astonishingly ineffective. Under these conditions, your nice efficient equilibrium completely evaporates. There can be many different equilibria, or no equilibrium at all; and the results are almost always inefficient. We have shoehorned capitalism onto an economy that it was not designed to deal with. Capitalism was designed for the Industrial Era; but we are now in the Information Era.
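
A toy comparison makes the breakdown concrete (all parameters invented for illustration): with increasing marginal cost, “price equals marginal cost” picks out a sensible finite output, but with marginal cost near zero the same rule drives the price to zero, and nothing is left to pay for creating the idea in the first place.

```python
# Competitive benchmark: produce until marginal cost rises to the price.
# All parameters are invented for illustration.

def competitive_output(price: float, mc_slope: float) -> float:
    """Output where marginal cost (mc_slope * q) has risen to the price."""
    return price / mc_slope

# Industrial-era good: each extra unit costs more than the last.
print(competitive_output(price=10.0, mc_slope=0.5))  # 20.0 -- a finite, efficient scale

# Information-era good: marginal cost ~ 0, so competition drives the
# price toward 0. Revenue per copy is then 0, and the fixed cost of
# creating the idea (the first copy) can never be recovered.
fixed_creation_cost = 1_000_000.0
competitive_price = 0.0          # price is pushed down to marginal cost
copies = 10_000_000
print(competitive_price * copies - fixed_creation_cost)  # a loss, however many copies
```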

Indeed, you can see this in all our neoclassical growth models: K is physical capital—machines—and L is labor, and sometimes it is augmented with N—natural resources. But these typically only explain about 50% of the variation in economic output, so we add an extra term, A, which goes by many names: “productivity”, “efficiency”, “technology”; I think the most informative one is actually “the Solow residual”. It’s the residual; it’s the part we can’t explain, dare I say, the part capitalism isn’t designed to explain. It is, in short, made of ideas. One of my thesis papers is actually about this “total factor productivity”, and how a major component of it is made up of one class of ideas in particular: Corruption. Corruption isn’t a thing, some object in space. It’s a cultural norm, a systemic idea that permeates the thoughts and actions of the whole society. It affects what we do, whom we trust, how the rules are made, and how well we follow those rules. You can even think of capitalism as an idea, a system, a culture—and a good part of “productivity” can be accounted for by “market orientation”, which is to say how capitalist a nation is. I would like to see someday a new model that actually includes these factors as terms in the equation, instead of throwing them all together in the mysterious A that we don’t understand.
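
Here is what “the residual” means in practice, as a minimal sketch (it assumes the textbook Cobb-Douglas form Y = A·K^0.3·L^0.7, and the output, capital, and labor figures are made up): A is simply whatever output is left unexplained once capital and labor have been counted.

```python
# Solow residual: back out A from Y = A * K**alpha * L**(1 - alpha).
# Cobb-Douglas with alpha ~ 0.3 is the textbook assumption; the output,
# capital, and labor figures below are invented for illustration.

ALPHA = 0.3

def solow_residual(Y: float, K: float, L: float, alpha: float = ALPHA) -> float:
    """Total factor productivity as the unexplained leftover in output."""
    return Y / (K ** alpha * L ** (1 - alpha))

# Two hypothetical economies with IDENTICAL capital and labor:
print(solow_residual(Y=1000.0, K=3000.0, L=500.0))  # ~1.17
print(solow_residual(Y=1400.0, K=3000.0, L=500.0))  # ~1.64 -- 40% more output,
                                                    # all of it credited to A
```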

With this in mind, we should be asking ourselves whether we need jobs at all, because jobs are a system designed for the production of physical goods in the Industrial Era. Now that we live in the Information Era and most of our production is in the form of ideas, do we still need jobs? Does everyone need a job? If you’re trying to make cars for a million people, it may not take a million people to do it, but it’s going to take thousands. But if you’re trying to design a car for a million people, or make a computer game about cars for a million people to play, that can be done with a lot fewer people. Ideas can be made by a few and then disseminated to the world. General Motors has 200,000 employees (and used to have about twice as many in the 1970s); Blizzard Entertainment has fewer than 5,000. It’s not because they produce for fewer people; GM sells about 3 million cars a year, and Starcraft sold over 11 million copies. Starcraft came out in 1998, so I added up how many cars GM sold in the US since 1998: 61 million. That’s still 3.28 employees per thousand cars sold, but only 0.45 employees per thousand computer games sold.
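
For anyone who wants to check that arithmetic, it is just employees divided by thousands of units sold:

```python
# Checking the employees-per-unit figures quoted above.
gm_employees, gm_cars_sold = 200_000, 61_000_000        # US car sales since 1998
blizzard_employees, copies_sold = 5_000, 11_000_000     # Starcraft copies sold

print(f"{gm_employees / (gm_cars_sold / 1000):.2f} employees per thousand cars")
print(f"{blizzard_employees / (copies_sold / 1000):.2f} employees per thousand games")
```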

Still, I don’t have a detailed account of what this new jobless economic system might look like. For now, it’s probably best if people have jobs. But if we really want to create jobs, we need to increase aggregate demand. That most likely means either reducing debt or giving more money to consumers. It certainly doesn’t have anything to do with tax cuts for the rich.

And really, this is pretty obvious; if you stop and think for a minute about why businesses create jobs, you realize that it has to do with demand for products, not how nicely the government treats them or how much extra cash they have lying around. I have trouble believing that the people who say “job creators” unironically actually believe the words they are saying. Do they honestly think that rich people create jobs out of sheer brilliance and benevolence, but are constrained by how much money they have and “go on strike” if the government doesn’t kowtow to them?

The only way I can see that they could actually believe this sort of thing would be if they read so much Ayn Rand that it totally infested their brains and rendered them incapable of thinking outside that framework. Perhaps Krugman is right, and Rand Paul really does believe that he is John Galt. Maybe they really do honestly believe that this is how economics works—in which case it’s no wonder that our economy is in trouble. Indeed, the marvel is that it works at all.

What are the limits to growth?

JDN 2456941 PDT 12:25.

Paul Krugman recently wrote a column about the “limits to growth” community, and as usual, it’s good stuff; his example of how steamships substituted more ships for less fuel is quite compelling. But there’s a much stronger argument to be made against “limits to growth”, and I thought I’d make it here.

The basic idea, most famously propounded by Jay Forrester but still with many proponents today (and actually owing quite a bit to Thomas Malthus), is this: There’s only so much stuff in the world. If we keep adding more people and trying to give people higher standards of living, we’re going to exhaust all the stuff, and then we’ll be in big trouble.

This argument seems intuitively reasonable, but turns out to be economically naïve. It can take several specific forms, from the basically reasonable to the utterly ridiculous. On the former end is “peak oil”, the point at which we reach a maximum rate of oil extraction. We’re actually past that point in most places, and it won’t be long before the whole world crosses that line. So yes, we really are running out of oil, and we need to transition to other fuels as quickly as possible. On the latter end is the original Malthusian argument (we now have much more food per person worldwide than in Malthus’s time—that’s why ending world hunger is a realistic option now), and, sadly, the argument Mark Buchanan made a few days ago. No, you don’t always need more energy to produce more economic output—as Krugman’s example cleverly demonstrates. You can use other methods to improve your energy efficiency, and that doesn’t necessarily require new technology.

Here’s the part that Krugman missed: Even if we need more energy, there’s plenty of room at the top. Sunlight arrives at the top of the atmosphere at about 1.36 kW/m^2, and the Earth intercepts it over its cross-section, a disk of about 1.3e14 m^2 (using the disk rather than the full 5e14 m^2 surface automatically accounts for night and for sunlight striking at oblique angles). That means that if we could somehow capture all the sunlight that hits the Earth, we’d have about 1.7e17 W, which is about 1.5e18 kilowatt-hours per year. Total world energy consumption is about 140,000 terawatt-hours per year, which is 1.4e14 kilowatt-hours per year. That means we could increase energy consumption by a factor of ten thousand just using Earth-based solar power (Covering the oceans with synthetic algae? A fleet of high-altitude balloons covered in high-efficiency solar panels?). That’s not including fission power, which is already economically efficient, or fusion power, which has passed break-even and may soon become economically feasible as well. Fusion power is only limited by the size of your reactor and your quantity of deuterium, and deuterium is found in ocean water (about 33 milligrams per liter), not to mention permeating all of outer space. If we can figure out how to fuse ordinary hydrogen, well now our fuel is literally the most abundant substance in the universe.
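
That calculation is easy to check; here is the back-of-the-envelope version (using the textbook solar constant and Earth radius):

```python
import math

# Back-of-the-envelope check, using standard textbook values.
SOLAR_CONSTANT = 1.36   # kW per m^2 at the top of the atmosphere
EARTH_RADIUS = 6.37e6   # m
HOURS_PER_YEAR = 8766

# The Earth collects sunlight over its cross-section (a disk), which
# automatically accounts for night and for oblique angles.
intercepted_kw = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2
solar_kwh_per_year = intercepted_kw * HOURS_PER_YEAR    # ~1.5e18 kWh

world_consumption = 1.4e14                              # kWh per year
print(f"{solar_kwh_per_year / world_consumption:,.0f}x current consumption")  # ~10,000x
```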

And what if we move beyond the Earth? What if we somehow captured not just the solar energy that hits the Earth, but the totality of solar energy that the Sun itself releases? That figure is about 3e31 joules per day, which is about 3e27 kilowatt-hours per year, or over twenty trillion times as much energy as we currently consume. It is literally enough to annihilate entire planets, which the Sun would certainly do if you put a planet near enough to it. A theoretical construct to capture all this energy is called a Dyson Sphere, and the ability to construct one officially makes you a Type 2 Kardashev Civilization. (We currently stand at about Type 0.7. Building that worldwide solar network would raise us to Type 1.)
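
The same check works for the Dyson Sphere figure, using the Sun’s standard luminosity of about 3.8e26 W:

```python
# The Sun's entire output, compared to world energy consumption.
SUN_LUMINOSITY_KW = 3.8e23   # the Sun's luminosity, ~3.8e26 W
HOURS_PER_YEAR = 8766
world_consumption = 1.4e14   # kWh per year

sun_kwh_per_year = SUN_LUMINOSITY_KW * HOURS_PER_YEAR   # ~3.3e27 kWh
print(f"{sun_kwh_per_year / world_consumption:.1e}x current consumption")  # ~2.4e13x
```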

Can we actually capture all that energy with our current technology? Of course not. Indeed, we probably won’t have that technology for centuries if not millennia. But if your claim—as Mark Buchanan’s was—is about fundamental physical limits, then you should be talking about Dyson Spheres. If you’re not, then we are really talking about practical economic limits.

Are there practical economic limits to growth? Of course there are; indeed, they are what actually constrains growth in the real world. That’s why the US can’t grow above 2% and China won’t be growing at 7% much longer. (I am rather disturbed by the fact that many of the Chinese nationals I know don’t appreciate this; they seem to believe the propaganda that this rapid growth is something fundamentally better about the Chinese system, rather than the simple economic fact that it’s easier to grow rapidly when you are starting very small. I had a conversation with a man the other day who honestly seemed to think that Macau could sustain its 12% annual GDP growth—driven by gambling, no less! Zero real productivity!—into the indefinite future. Don’t get me wrong, I’m thrilled that China is growing so fast and lifting so many people out of poverty. But no remotely credible economist believes they can sustain this growth forever. The best-case scenario is to follow the pattern of Korea, rising from Third World to First World status in a few generations. Korea grew astonishingly fast from about 1950 to 1990, but now that they’ve made it, their growth rate is only 3%.)

There is also a reasonable argument to be made about the economic tradeoffs involved in fighting climate change and natural resource depletion. While the people of Brazil may like to have more firewood and space for farming, the fact is the rest of us need that Amazon in order to breathe. While any given fisherman may be rational in the amount of fish he catches, worldwide we are running out of fish. And while we Americans may love our low gas prices (and become furious when they rise even slightly), the fact is, our oil subsidies are costing hundreds of billions of dollars and endangering millions of lives.

We may in fact have to bear some short-term cost in economic output in order to ensure long-term environmental sustainability (though to return to Krugman, that cost may be a lot less than many people think!). Economic growth does slow down as you reach high standards of living, and it may even continue to slow down as technology begins to reach diminishing returns (though this is much harder to forecast). So yes, in that sense there are limits to growth. But the really fundamental limits aren’t something we have to worry about for at least a thousand years. Right now, it’s just a question of good economic policy.