What’s wrong with academic publishing?

JDN 2457257 EDT 14:23.

I just finished expanding my master’s thesis into a research paper that is, I hope, suitable for publication in an economics journal. As part of this I’ve been looking into how one actually submits articles for publication in academic journals… and what I’ve found is disgusting and horrifying. It is astonishingly bad, and my biggest question is why researchers put up with it.

Thus, the subject of this post is what’s wrong with the system—and what we might do instead.

Before I get into it, let me say that I don’t actually disagree with “publish or perish” in principle—as SMBC points out, it’s a lot like “do your job or get fired”. Researchers should publish in peer-reviewed journals; that’s a big part of what doing research means. The problem is how most peer-reviewed journals are currently operated.

First of all, in case you didn’t know, most scientific journals are owned by for-profit corporations. The largest, Elsevier, owns The Lancet and all of ScienceDirect, and has a net income of over 1 billion euros a year. Then there’s Springer and Wiley-Blackwell; between the three of them, these publishers account for over 40% of all scientific publications. These for-profit publishers retain the full copyright to most of the papers they publish, and tightly control access with paywalls; the cost to get through these paywalls is generally thousands of dollars a year for individuals and millions of dollars a year for universities. Their monopoly power is so great it “makes Rupert Murdoch look like a socialist.”

For-profit journals do often offer an “open-access” option in which you basically buy back your own copyright, but the price is high—the most common figures I’ve seen are $1800 or $3000 per paper—and very few researchers do this, for obvious financial reasons. Yet for a full-time tenured faculty researcher it’s probably worth it, given the alternatives. (Then again, full-time tenured faculty are becoming an endangered species lately; what might be worth it in the long run can still be very difficult for a cash-strapped adjunct to afford.) Open-access means people can actually read your paper and potentially cite it. Closed-access means it may languish in obscurity.

And of course it isn’t just about the benefits for the individual researcher. The scientific community as a whole depends upon the free flow of information; the reason we publish in the first place is that we want people to read papers, discuss them, replicate them, challenge them. Publication isn’t the finish line; it’s at best a checkpoint. Actually one thing that does seem to be wrong with “publish or perish” is that there is so much pressure for publication that we publish too many pointless papers and nobody has time to read the genuinely important ones.

These prices might be justifiable if the for-profit corporations actually did anything. But in fact they are basically just aggregators. They don’t do the peer-review, they farm it out to other academic researchers. They don’t even pay those other researchers; they just expect them to do it. (And they do! Like I said, why do they put up with this?) They don’t pay the authors who have their work published (on the contrary, they often charge submission fees—about $100 seems to be typical—simply to look at them). It’s been called “the world’s worst restaurant”, where you pay to get in, bring your own ingredients and recipes, cook your own food, serve other people’s food while they serve yours, and then have to pay again if you actually want to be allowed to eat.

So what do they actually pay for? The printing of paper copies of the journal, which basically no one reads; and the electronic servers that host the digital copies that everyone actually reads. They also provide some basic copyediting services (copyediting to APA style is a job people advertise on Craigslist—so you can guess how much they must be paying).

And even supposing that they actually provided some valuable and expensive service, the fact would remain that we are making for-profit corporations the gatekeepers of the scientific community. Entities that exist only to make money for their owners are given direct control over the future of human knowledge. If you look at Cracked’s “reasons why we can’t trust science anymore”, all of them have to do with the for-profit publishing system. p-hacking might still happen in a better system, but publishers that really had the best interests of science in mind would be more motivated to fight it than publishers that are simply trying to raise revenue by getting people to buy access to their papers.

Then there’s the fact that most journals do not allow authors to submit to multiple journals at once, yet take 30 to 90 days to respond and publish only a fraction of what is submitted—it’s almost impossible to find good figures on acceptance rates (which is itself a major problem!), but the highest figures I’ve seen are 30% acceptance, a more typical figure seems to be 10%, and some top journals go as low as 3%. In the worst-case scenario you are locked into a journal for 90 days with only a 3% chance that it will actually publish your work. At that rate, publishing a single article could take years.
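
To put some numbers on that, here is a quick back-of-the-envelope calculation, treating each submission as an independent draw at the quoted acceptance rates (a simplification, but a charitable one):

```python
# Expected wait when journals forbid simultaneous submissions, modeling
# each submission as an independent draw (a geometric distribution).
def expected_wait_days(accept_rate, response_days=90):
    expected_submissions = 1 / accept_rate  # mean of a geometric distribution
    return expected_submissions * response_days

for rate in (0.30, 0.10, 0.03):
    years = expected_wait_days(rate) / 365
    print(f"{rate:.0%} acceptance: about {years:.1f} years on average")
# 30%: ~0.8 years; 10%: ~2.5 years; 3%: ~8.2 years
```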

Is open-access the solution? Yes… well, part of it, anyway.

There are a large number of open-access journals, some of which do not charge submission fees, but very few of them are prestigious, and many are outright predatory. Predatory journals charge exorbitant fees, often after accepting papers for publication; many do little or no real peer review. There are almost seven hundred known predatory open-access journals; over one hundred have even been caught publishing hoax papers. These predatory journals are corrupting the process of science.

There are a few reputable open-access journals, such as BMC Biology and PLOS ONE. Though not actually a journal, arXiv serves a similar role. These will most definitely be part of the solution. Yet even legitimate open-access journals often charge each author over $1000 to publish an article. There is a small but significant positive correlation between publication fees and journal impact factor.

We need to found more open-access journals which are funded by either governments or universities, so that neither author nor reader ever pays a cent. Science is a public good and should be funded as such. Even if copyright makes sense for other forms of content (I’m not so sure about that), it most certainly does not make sense for scientific knowledge, which by its very nature is only doing its job if it is shared with the world.

These journals should be specifically structured to be method-sensitive but results-blind. (It’s a very good thing that medical trials are usually registered before they are completed, so that publication is assured even if the results are negative—the same should be done with other sciences. Unfortunately, even in medicine there is significant publication bias.) If you could sum up the scientific method in one phrase, it might just be that: Method-sensitive but results-blind. If you think you know what you’re going to find beforehand, you may not be doing science. If you are certain what you’re going to find beforehand, you’re definitely not doing science.

The process should still be highly selective, but it should be possible—indeed, expected—to submit to multiple journals at once. If journals want to start paying their authors to entice them to publish in that journal rather than take another offer, that’s fine with me. Researchers are the ones who produce the content; if anyone is getting paid for it, it should be us.

This is not some wild and fanciful idea; it’s already the way that book publishing works. Very few literary agents or book publishers would ever have the audacity to say you can’t submit your work elsewhere; those that try are rapidly outcompeted as authors stop submitting to them. It’s fundamentally unreasonable to expect anyone to hang all their hopes on a particular buyer months in advance—and that is what you are, publishers, you are buyers. You are not sellers, you did not create this content.

But new journals face a fundamental problem: Good researchers will naturally want to publish in journals that are prestigious—that is, journals that are already prestigious. When all of the prestige is in journals that are closed-access and owned by for-profit companies, the best research goes there, and the prestige becomes self-reinforcing. Journals are prestigious because they are prestigious; welcome to tautology club.

Somehow we need to get good researchers to start boycotting for-profit journals and start investing in high-quality open-access journals. If Elsevier and Springer can’t get good researchers to submit to them, they’ll change their ways or wither and die. Research should be funded and published by governments and nonprofit institutions, not by for-profit corporations.

This may in fact highlight a much deeper problem in academia: the very concept of “prestige”. I have no doubt that Harvard is a good university, a better university than most; but is it actually the best, as it is in most people’s minds? Might Stanford or UC Berkeley be better, or University College London, or even the University of Michigan? How would we tell? Are the students better? Even if they are, might that just be because all the better students went to the schools that had better reputations? Controlling for the quality of the students, university prestige is almost uncorrelated with better outcomes. Those who get accepted to Ivies but attend other schools do just as well in life as those who actually attend Ivies. (Good news for me, getting into Columbia but going to Michigan.) Yet once a university acquires such a high reputation, it can be very difficult for it to lose that reputation, and even more difficult for others to catch up.

Prestige is inherently zero-sum; for me to get more prestige you must lose some. For one university or research journal to rise in rankings, another must fall. Aside from simply feeding on other prestige, the prestige of a university is largely based upon the students it rejects—its “selectivity” score. What does it say about our society that we value educational institutions based upon the number of people they exclude?

Zero-sum ranking is always easier to do than nonzero-sum absolute scoring. That is actually a mathematical fact, and one of the few good arguments against range voting (still not nearly good enough, in my opinion): if you have a list of scores you can always turn them into ranks (potentially with ties), but from a list of ranks there is no way to turn them back into scores.
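
A minimal illustration of that asymmetry (the scores here are made up):

```python
# Scores can always be collapsed into ranks, but ranks say nothing about
# the gaps between entries, so the scores cannot be recovered.
def to_ranks(scores):
    ordered = sorted(scores, reverse=True)
    return [ordered.index(s) + 1 for s in scores]  # tied scores share a rank

print(to_ranks([98, 97, 60]))  # [1, 2, 3]
print(to_ranks([98, 61, 60]))  # [1, 2, 3] -- same ranks, very different scores
```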

Yet ultimately it is absolute scores that must drive humanity’s progress. If life were simply a matter of ranking, then progress would be by definition impossible. No matter what we do, there will always be top-ranked and bottom-ranked people.

There is simply no way mathematically for more than 1% of human beings to be in the top 1% of the income distribution. (If you’re curious where exactly that lies today, I highly recommend this interactive chart by the New York Times.) But we could raise the standard of living for the majority of people to a level that only the top 1% once had—and in fact, within the First World we have already done this. We could in fact raise the standard of living for everyone in the First World to a level that only the top 1%—or less—had as recently as the 16th century, by the simple change of implementing a basic income.

There is no way for more than 0.14% of people to have an IQ above 145, because IQ is defined to have a mean of 100 and a standard deviation of 15, regardless of how intelligent people are. People could get dramatically smarter over time (and in fact have), and yet it would still be the case that by definition, only 0.14% can be above 145.
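
That figure is just the upper tail of the normal distribution at three standard deviations (about 0.135%, which rounds to the 0.14% above), and you can verify it with Python’s standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
share_above_145 = 1 - iq.cdf(145)  # 145 is three standard deviations above the mean
print(f"{share_above_145:.2%}")    # 0.13% -- roughly 1 person in 740
```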

Similarly, there is no way for much more than 1% of people to go to the top 1% of colleges. There is no way for more than 1% of people to be in the highest 1% of their class. But we could increase the number of college degrees (which we have); we could dramatically increase literacy rates (which we have).

We need to find a way to think of science in the same way. I wouldn’t suggest simply using number of papers published or even number of drugs invented; both of those are skyrocketing, but I can’t say that most of the increase is actually meaningful. I don’t have a good idea of what an absolute scale for scientific quality would look like, even at an aggregate level; and it is likely to be much harder still to make one that applies on an individual level.

But I think that ultimately this is the only way, the only escape from the darkness of cutthroat competition. We must stop thinking in terms of zero-sum rankings and start thinking in terms of nonzero-sum absolute scales.

What makes a nation wealthy?

JDN 2457251 EDT 10:17

One of the central questions of economics—perhaps the central question, the primary reason why economics is necessary and worthwhile—is development: How do we raise a nation from poverty to prosperity?

We have done it before: France and Germany rose from the quite literal ashes of World War II to become two of the most prosperous societies in the world. Their per-capita GDP over the 20th century rose like this (all of these figures are from the World Bank World Development Indicators; France is green, Germany is blue):

[Figure: GDP per capita, France (green) and Germany (blue), at market exchange rates]

[Figure: GDP per capita at PPP, France (green) and Germany (blue)]

The top graph is at market exchange rates, the bottom is correcting for purchasing power parity (PPP). The PPP figures are more meaningful, but unfortunately they only began collecting good data on purchasing power around 1990.

Around the same time, but even more spectacularly, Japan and South Korea rose from poverty-stricken Third World backwaters to high-tech First World powers in only a couple of generations. Check out their per-capita GDP over the 20th century (Japan is green, South Korea is blue):

[Figure: GDP per capita, Japan (green) and South Korea (blue), at market exchange rates]
[Figure: GDP per capita at PPP, Japan (green) and South Korea (blue)]

This is why I am only half-joking when I define development economics as “the ongoing project to figure out what happened in South Korea and make it happen everywhere in the world”.

More recently China has been on a similar upward trajectory, which is particularly important since China comprises such a huge portion of the world’s population—but they are far from finished:

[Figure: GDP per capita, China, at market exchange rates]
[Figure: GDP per capita at PPP, China]

Compare these to societies that have not achieved economic development, such as Zimbabwe (green), India (black), Ghana (red), and Haiti (blue):

[Figure: GDP per capita, Zimbabwe (green), India (black), Ghana (red), and Haiti (blue), at market exchange rates]
[Figure: GDP per capita at PPP, same four countries]

They’re so poor that you can barely see them on the same scale, so I’ve rescaled so that the top is $5,000 per person per year instead of $50,000:

[Figure: GDP per capita, same four countries, rescaled to a $5,000 maximum]
[Figure: GDP per capita at PPP, same four countries, rescaled]

Only India actually manages to get above $5,000 per person per year at purchasing power parity, and then not by much, reaching $5,243 per person per year in 2013, the most recent year with data.

I had wanted to compare North Korea and South Korea, because the two countries were united as recently as 1945 and were not all that different to begin with, yet have taken completely different development trajectories. Unfortunately, North Korea is so impoverished, corrupt, and authoritarian that the World Bank doesn’t even report data on their per-capita GDP. Perhaps that is contrast enough?

And then of course there are the countries in between, which have made some gains but still have a long way to go, such as Uruguay (green) and Botswana (blue):

[Figure: GDP per capita, Uruguay (green) and Botswana (blue), at market exchange rates]
[Figure: GDP per capita at PPP, Uruguay (green) and Botswana (blue)]

But despite the fact that we have observed successful economic development, we still don’t really understand how it works. A number of theories have been proposed, involving a wide range of factors including exports, corruption, disease, institutions of government, liberalized financial markets, and natural resources (counter-intuitively, more natural resources tend to make development worse, the so-called resource curse).

I’m not going to resolve that whole debate in a single blog post. (I may not be able to resolve that whole debate in a single career, though I am definitely trying.) We may ultimately find that economic development is best conceived as like “health”; what factors determine your health? Well, a lot of things, and if any one thing goes badly enough wrong the whole system can break down. Economists may need to start thinking of ourselves as akin to doctors (or as Keynes famously said, dentists), diagnosing particular disorders in particular patients rather than seeking one unifying theory. On the other hand, doctors depend upon biologists, and it’s not clear that we yet understand development even at that level.

Instead I want to take a step back, and ask a more fundamental question: What do we mean by prosperity?

My hope is that if we can better understand what it is we are trying to achieve, we can also better understand the steps we need to take in order to get there.

Thus far it has sort of been “I know it when I see it”; we take it as more or less given that the United States and the United Kingdom are prosperous while Ghana and Haiti are not. I certainly don’t disagree with that particular conclusion; I’m just asking what we’re basing it on, so that we can hopefully better apply it to more marginal cases.


For example: Is France more or less prosperous than Saudi Arabia? If we go solely by GDP per capita at PPP, clearly Saudi Arabia is more prosperous at $53,100 per person per year than France is at $37,200 per person per year.

But people actually live longer in France, on average, than they do in Saudi Arabia. Overall reported happiness is higher in France than Saudi Arabia. I think France is actually more prosperous.


In fact, I think the United States is not as prosperous as we pretend ourselves to be. We are certainly more prosperous than most other countries; we are definitely still well within First World status. But we are not the most prosperous nation in the world.

Our total GDP is astonishingly high (highest in the world nominally, second only to China at PPP). Our GDP per capita is higher than any other country of comparable size; no nation with higher per-capita GDP at PPP than the US has a population larger than the Chicago metropolitan area. (You may be surprised to find that in order from largest to smallest population the countries with higher GDP per capita PPP are the United Arab Emirates, Switzerland, Hong Kong, Singapore, and then Norway, followed by Kuwait, Qatar, Luxembourg, Brunei, and finally San Marino—which is smaller than Ann Arbor.) Our per-capita GDP PPP of $51,300 is markedly higher than that of France ($37,200), Germany ($42,900), or Sweden ($43,500).

But at the same time, if you compare the US to other First World countries, we have nearly the highest rates of child poverty and infant mortality. We have shorter life expectancy and dramatically higher homicide rates. Our inequality is the highest in the First World. In France and Sweden, the top 0.01% receive about 1% of the income (i.e. 100 times as much as the average person), while in the United States they receive almost 4%, making someone in the top 0.01% nearly 400 times as rich as the average person.
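
The arithmetic behind those multiples is worth making explicit: if a group comprising fraction f of the population receives share s of all income, its members average s/f times the overall mean.

```python
def times_average(income_share, population_share):
    # A group receiving share s of income with share f of population
    # averages s / f times the overall mean income.
    return income_share / population_share

print(times_average(0.01, 0.0001))  # France/Sweden top 0.01%: 100x the average
print(times_average(0.04, 0.0001))  # US top 0.01%: 400x the average
```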

By ranking nations solely on GDP per capita, we are effectively rigging the game in our own favor. Or rather, the rich in the United States are rigging the game in their own favor (what else is new?), by convincing all the world’s economists to rank countries based on a measure that favors them.

Amartya Sen, one of the greats of development economics, helped develop a scale called the Human Development Index that attempts to take broader factors into account. It’s far from perfect, but it’s definitely a step in the right direction.

In particular, France’s HDI is higher than that of Saudi Arabia, fitting my intuition about which country is truly more prosperous. However, the US still does extremely well, with only Norway, Australia, Switzerland, and the Netherlands above us. I think we might still be biased toward high average incomes rather than overall happiness.

In practice, we still use GDP an awful lot, probably because it’s much easier to measure. It’s sort of like IQ tests and SAT scores; we know damn well it’s not measuring what we really care about, but because it’s so much easier to work with we keep using it anyway.

This is a problem, because the better you get at optimizing toward the wrong goal, the worse your overall outcomes are going to be. If you are just sort of vaguely pointed at several reasonable goals, you will probably be improving your situation overall. But when you start precisely optimizing to a specific wrong goal, it can drag you wildly off course.

This is what we mean when we talk about “gaming the system”. Consider test scores, for example. If you do things that will probably increase your test scores among other things, you are likely to engage in generally good behaviors like getting enough sleep, going to class, studying the content. But if your single goal is to maximize your test score at all costs, what will you do? Cheat, of course.

This is also related to the Friendly AI Problem: It is vitally important to know precisely what goals we want our artificial intelligences to have, because whatever goals we set, they will probably be very good at achieving them. Already computers can do many things that were previously impossible, and as they improve over time we will reach the point where in a meaningful sense our AIs are even smarter than we are. When that day comes, we will want to make very, very sure that we have designed them to want the same things that we do—because if our desires ever come into conflict, theirs are likely to win. The really scary part is that right now most of our AI research is done by for-profit corporations or the military, and “maximize my profit” and “kill that target” are most definitely not the ultimate goals we want in a superintelligent AI. It’s trivially easy to see what’s wrong with these goals: For the former, hack into the world banking system and transfer trillions of dollars to the company accounts. For the latter, hack into the nuclear launch system and launch a few ICBMs in the general vicinity of the target. Yet these are the goals we’ve been programming into the actual AIs we build!

If we set GDP per capita as our ultimate goal to the exclusion of all other goals, there are all sorts of bad policies we would implement: We’d ignore inequality until it reached staggering heights, ignore work stress even as it began to kill us, constantly try to maximize the pressure for everyone to work constantly, use poverty as a stick to force people to work even if people starve, inundate everyone with ads to get them to spend as much as possible, repeal regulations that protect the environment, workers, and public health… wait. This isn’t actually hypothetical, is it? We are doing those things.

At least we’re not trying to maximize nominal GDP, or we’d have long since ended up like Zimbabwe. No, our economists are at least smart enough to adjust for purchasing power. But they’re still designing an economic system that works us all to death to maximize the number of gadgets that come off assembly lines. The purchasing-power adjustment doesn’t include the value of our health or free time.

This is why the Human Development Index is a major step in the right direction; it reminds us that society has other goals besides maximizing the total amount of money that changes hands (because that’s actually all that GDP is measuring; if you get something for free, it isn’t counted in GDP). More recent refinements include things like “natural resource services” that include environmental degradation in estimates of investment. Unfortunately there is no accepted way of doing this, and surprisingly little research on how to improve our accounting methods. Many nations seem resistant to doing so precisely because they know it would make their economic policy look bad—this is almost certainly why China canceled its “green GDP” initiative. This is in fact all the more reason to do it; if it shows that our policy is bad, that means our policy is bad and should be fixed. But people have allowed themselves to value image over substance.

We can do better still, and in fact I think something like QALY is probably the way to go. Rather than some weird arbitrary scaling of GDP with lifespan and Gini index (which is what the HDI is), we need to put everything in the same units, and those units must be directly linked to human happiness. At the very least, we should make some sort of adjustment to our GDP calculation that includes the distribution of wealth and its marginal utility; adding $1,000 to the economy and handing it to someone in poverty should count for a great deal, but adding $1,000,000 and handing it to a billionaire should count for basically nothing. (It’s not bad to give a billionaire another million; but it’s hardly good either, as no one’s real standard of living will change.) Calculating that could be as simple as dividing by their current income; if your annual income is $10,000 and you receive $1,000, you’ve added about 0.1 QALY. If your annual income is $1 billion and you receive $1 million, you’ve added only 0.001 QALY. Maybe we should simply separate out all individual (or household, to be simpler?) incomes, take their logarithms, and then use that sum as our “utility-adjusted GDP”. The results would no doubt be quite different.
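
Here is a toy sketch of that log-income idea (the incomes are hypothetical, and a real measure would need far more care):

```python
import math

def utility_adjusted(incomes):
    # Score an economy by the sum of log incomes, so a marginal dollar
    # counts in proportion to how poor its recipient is.
    return sum(math.log(y) for y in incomes)

equal   = [50_000, 50_000, 50_000]
unequal = [5_000, 5_000, 140_000]  # same total "GDP" of $150,000

print(f"{utility_adjusted(equal):.1f}")    # ~32.5
print(f"{utility_adjusted(unequal):.1f}")  # ~28.9 -- lower score, same GDP

# The marginal gain from giving $1,000 to someone earning $10,000:
print(f"{math.log(11_000) - math.log(10_000):.3f}")  # ~0.095, close to the 0.1 above
```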

This would create a strong pressure for policy to be directed at reducing inequality even at the expense of some economic output—which is exactly what we should be willing to do. If it’s really true that a redistribution policy would hurt the overall economy so much that the harms would outweigh the benefits, then we shouldn’t do that policy; but that is what you need to show. Reducing total GDP is not a sufficient reason to reject a redistribution policy, because it’s quite possible—easy, in fact—to improve the overall prosperity of a society while still reducing its GDP. There are in fact redistribution policies so disastrous they make things worse: The Soviet Union had them. But a 90% tax on million-dollar incomes would not be such a policy—because we had that in 1960 with little or no ill effect.

Of course, even this has problems; one way to minimize poverty would be to exclude, relocate, or even murder all your poor people. (The Black Death increased per-capita GDP.) Open immigration generally increases poverty rates in the short term, because most of the immigrants are poor. Somehow we’d need to correct for that, only raising the score if you actually improve people’s lives, and not if you make them excluded from the calculation.

In any case it’s not enough to have the alternative measures; we must actually use them. We must get policymakers to stop talking about “economic growth” and start talking about “human development”; a policy that raises GDP but reduces lifespan should be immediately rejected, as should one that further enriches a few at the expense of many others. We must shift the discussion away from “creating jobs”—jobs are only a means—to “creating prosperity”.

The Warren Rule is a good start

JDN 2457243 EDT 10:40.

As far back as 2010, Elizabeth Warren proposed a simple regulation on the reporting of CEO compensation that was then built into Dodd-Frank—but the SEC has resisted actually applying that rule for five years; only now will it actually take effect (and by “now” I mean over the next two years). For simplicity I’ll refer to that rule as the Warren Rule, though I don’t see a lot of other people doing that (most people don’t give it a name at all).

Two things are important to understand about this rule, which both undercut its effectiveness and make all the right-wing whinging about it that much more ridiculous.

1. It doesn’t actually place any limits on CEO compensation or employee salaries; it merely requires corporations to consistently report the ratio between them. Specifically, the rule says that every publicly-traded corporation must report the ratio between the “total compensation” of their CEO and the median salary (with benefits) of their employees; wisely, it includes foreign workers (with a few minor exceptions—lobbyists fought for more but fortunately Warren stood firm), so corporations can’t simply outsource everything but management to make it look like they pay their employees more. Unfortunately, it does not include contractors, which is awful; expect to see corporations working even harder to outsource their work to “contractors” who are actually employees without benefits (not that they weren’t already). The greatest victory here will be for economists, who now will have more reliable data on CEO compensation; and for consumers, who will now find it more salient just how overpaid America’s CEOs really are.

2. While it does wisely cover “total compensation”, that isn’t actually all the money that CEOs receive for owning and operating corporations. It includes salaries, bonuses, benefits, and newly granted stock options—it does not include the value of stock options previously exercised or dividends received from stock the CEO already owns.

TIME screwed this up; they took it at face value when Larry Page reported a $1 “total compensation”, which technically is true by how “total compensation” is defined; he received a $1 token salary and no new stock awards. But Larry Page has a net wealth of over $38 billion; about half of that is Google stock, so even if we ignore all his other holdings, given Google’s P/E ratio of about 25, Larry Page received at least $700 million in Google retained earnings alone. (In my personal favorite unit of wealth, Page receives about 3 romneys a year in retained earnings.) No, TIME, he is not the lowest-paid CEO in the world; he has simply structured his income so that it comes entirely from owning shares instead of receiving a salary. Most top CEOs do this, so be wary when it says a Fortune 500 CEO received only $2 million, and completely ignore it when it says a CEO received only $1. Probably in the former case and definitely in the latter, their real money is coming from somewhere else.
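
The rough arithmetic, using the round numbers above (a price-to-earnings ratio of 25 means each $25 of stock corresponds to about $1 of annual earnings):

```python
google_stock_held = 19e9  # roughly half of a $38 billion net worth
pe_ratio = 25             # each $25 of stock ~ $1 of annual earnings

implied_earnings = google_stock_held / pe_ratio
print(f"${implied_earnings / 1e6:.0f} million per year")  # $760 million
```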

Of course, the complaints about how this is an unreasonable demand on businesses are totally absurd. Most of them keep track of all this data anyway; it’s simply a matter of porting it from one spreadsheet to another. (I also love the argument that only “idiosyncratic investors” will care; yeah, what sort of idiot would care about income inequality or be concerned how much of their investment money is going directly to line a single person’s pockets?) They aren’t complaining because it will be a large increase in bureaucracy or a serious hardship on their businesses; they’re complaining because they think it might work. Corporations are afraid that if they have to publicly admit how overpaid their CEOs are, they might actually be pressured to pay them less. I hope they’re right.

CEO pay is set in a very strange way; instead of being based on an estimate of how much they are adding to the company, a CEO’s pay is typically set as a certain margin above what the average CEO is receiving. But then as the process iterates and everyone tries to be above average, pay keeps rising, more or less indefinitely. Anyone with a basic understanding of statistics could have seen this coming, but somehow thousands of corporations didn’t—or else simply didn’t care.
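
You can watch the ratchet in a two-line simulation; the 10% premium is illustrative, but any premium above zero produces the same runaway growth:

```python
average_pay = 1.0  # normalized to the starting average
for year in range(25):
    # Every board sets its CEO's pay at 110% of last year's average,
    # so the average itself rises 10% per year.
    average_pay *= 1.10

print(f"After 25 years: {average_pay:.1f}x the original average")  # ~10.8x
```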

Most people around the world want the CEO-to-employee pay ratio to be dramatically lower than it is. Indeed, unrealistically lower, in my view. Most countries say only 6 to 1, while Scandinavia says only 2 to 1. I want you to think about that for a moment; if the average employee at a corporation makes $50,000, people in Scandinavia think the CEO should only make $100,000, and people elsewhere think the CEO should only make $300,000? I’m honestly not sure what would happen to our economy if we made such a rule. There would be very little incentive to want to become a CEO; why bear all that fierce competition and get blamed for everything to make only twice as much as you would as an average employee?

On the other hand, most CEOs don’t actually do all that much; CEO pay is basically uncorrelated with company performance. Maybe it would be better if they weren’t paid very much, or even if we didn’t have them at all. But under our current system, capping CEO pay also caps the pay of basically everyone else; the CEO is almost always the highest-paid individual in any corporation.

I guess that’s really the problem. We need to find ways to change the overall attitude of our society that higher authority necessarily comes with higher pay; that isn’t a rational assessment of marginal productivity, it’s a recapitulation of our primate instincts for a mating hierarchy. He’s the alpha male, of course he gets all the bananas.

The president of a university should make next to nothing compared to the top scientists at that university, because the president is a useless figurehead and scientists are the foundation of universities—and human knowledge in general. Scientists are actually the one example I can think of where one individual truly can be one million times as productive as another—though even then I don’t think that justifies paying them one million times as much.

Most corporations should be structured so that managers make moderate incomes and the highest incomes go to engineers and designers, the people who have the highest skills and do the most important work. A car company without managers seems like an interesting experiment in employee ownership. A car company without engineers seems like an oxymoron.

Finally, people who work in finance should make very low incomes, because they don’t actually do very much. Bank tellers are probably paid about what they should be; stock traders and hedge fund managers should be paid like bank tellers. (Or rather, there shouldn’t be stock traders and hedge funds as we know them; this is all pure waste. A really efficient financial system would be extremely simple, because finance actually is very simple—people who have money loan it to people who need it, and in return receive more money later. Everything else is just elaborations on that, and most of these elaborations are really designed to obscure, confuse, and manipulate.)

Oddly enough, the place where we do this best is the nation as a whole; the President of the United States would be astonishingly low-paid if we thought of him as a CEO. Only about $450,000 including expense accounts, for a “corporation” with revenue of nearly $3 trillion? (Suppose instead we gave the President 1% of tax revenue; that would be $30 billion per year. Think about how absurdly wealthy our leaders would be if we gave them stock options, and be glad that we don’t do that.)

But placing a hard cap at 2 or even 6 strikes me as unreasonable. Even during the 1950s the ratio was about 20 to 1, and it’s been rising ever since. I like Robert Reich’s proposal of a sliding scale of corporate taxes; I also wouldn’t mind a hard cap at a higher figure, like 50 or 100. Currently the average CEO makes about 350 times as much as the average employee, so even a cap of 100 would substantially reduce inequality.

A pay ratio cap could actually be a better alternative to a minimum wage, because it can adapt to market conditions. If the economy is really so bad that you must cut the pay of most of your workers, well, you’d better cut your own pay as well. If things are going well and you can afford to raise your own pay, your workers should get a share too. We never need to set some arbitrary amount as the minimum you are allowed to pay someone—but if you want to pay your employees that little, you won’t be paid very much yourself.
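
In code, the floor under worker pay becomes a function of the boss’s own pay rather than a fixed dollar amount (the cap of 100 here is hypothetical):

```python
def min_median_pay(ceo_pay, cap=100):
    # Under a pay-ratio cap, median employee pay must be at least
    # the CEO's pay divided by the cap.
    return ceo_pay / cap

print(min_median_pay(5_000_000))  # CEO makes $5M -> median must be $50,000
print(min_median_pay(2_000_000))  # CEO cuts own pay -> floor falls to $20,000
```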

The biggest reason to support the Warren Rule, however, is awareness. Most people simply have no idea of how much CEOs are actually paid. When asked to estimate the ratio between CEO and employee pay, most people around the world underestimate by a full order of magnitude.

Here are some graphs from a sampling of First World countries. I used data from this paper in Perspectives on Psychological Science; the fact that it’s published in a psychology journal tells you a lot about the academic turf wars involved in cognitive economics.

The first shows the absolute amount of average worker pay (not adjusted for purchasing power) in each country. Notice how the US is actually near the bottom, despite having one of the strongest overall economies and not particularly high purchasing power:

[Figure: average worker pay by country]

The second shows the absolute amount of average CEO pay in each country; I probably don’t even need to mention how the US is completely out of proportion with every other country.

[Figure: average CEO pay by country]

And finally, the ratio of the two. One of these things is not like the other ones…

[Figure: CEO-to-worker pay ratio by country]

So obviously the ratio in the US is far too high. But notice how even in Poland, the ratio is still 28 to 1. In order to drop to the 6 to 1 ratio that most people seem to think would be ideal, we would need to dramatically reform even the most equal nations in the world. Denmark and Norway should particularly think about whether they really believe that 2 to 1 is the proper ratio, since they are currently some of the most equal (not to mention happiest) nations in the world, but their current ratios are still 48 and 58 respectively. You can sustain a ratio that high and still have universal prosperity; every adult citizen in Norway is a millionaire in local currency. (Adjusting for purchasing power, it’s not quite as impressive; instead the guaranteed wealth of a Norwegian citizen is “only” about $100,000.)

Most of the world’s population simply has no grasp of how extreme economic inequality has become. Putting the numbers right there in people’s faces should help with this, though if the figures only need to be reported to investors that probably won’t make much difference. But hey, it’s a start.

I’m on vacation for a couple of weeks

I probably won’t have time to make my usual posts this week or next week due to vacation. This week I’m in Virginia Beach, and next week I’ll be in Indianapolis for Gen Con. Normal posts will resume in two weeks, on August 8.

Actually, I’m starting to run out of planned topics. I’m sure I can come up with more, but if you, dear readers, have any suggestions of topics you’d like to see me cover in the future I would like to hear them.

Nature via Nurture

JDN 2457222 EDT 16:33.

One of the most common “deep questions” human beings have asked ourselves over the centuries is also one of the most misguided, the question of “nature versus nurture”: Is it genetics or environment that makes us what we are?

Humans are probably the single entity in the universe for which this question makes the least sense. Artificial constructs have no prior existence, so they are “all nurture”, made what we choose to make them. Most other organisms on Earth behave according to fixed instinctual programming, acting out a specific series of responses that have been honed over millions of years, doing only one thing, but doing it exceedingly well. They are in this sense “all nature”. As the saying goes, the fox knows many things, but the hedgehog knows one very big thing. Most organisms on Earth are in this sense hedgehogs, but we Homo sapiens are the ultimate foxes. (Ironically, hedgehogs are not actually “hedgehogs” in this sense: Being mammals, they have an advanced brain capable of flexibly responding to environmental circumstances. Foxes are a good deal more intelligent still, however.)

But human beings are by far the most flexible, adaptable organism on Earth. We live on literally every continent; despite being savannah apes we even live deep underwater and in outer space. Unlike most other species, we do not fit into a well-defined ecological niche; instead, we carve our own. This certainly has downsides; human beings are ourselves a mass extinction event.

Does this mean, therefore, that we are tabula rasa, blank slates upon which anything can be written?

Hardly. We’re more like word processors. Staring (as I of course presently am) at the blinking cursor of a word processor on a computer screen, seeing that wide, open space where a virtual infinity of possible texts could be written, depending entirely upon a sequence of minuscule key vibrations, you could be forgiven for thinking that you are looking at a blank slate. But in fact you are looking at the pinnacle of thousands of years of technological advancement, a machine so advanced, so precisely engineered, that its individual components are one ten-thousandth the width of a human hair (Intel just announced that we can now do even better than that). At peak performance, it is capable of over 100 billion calculations per second. Its random-access memory stores as much information as all the books on a stacks floor of the Hatcher Graduate Library, and its hard drive stores as much as all the books in the US Library of Congress. (Of course, both libraries contain digital media as well, exceeding anything my humble hard drive could hold by a factor of a thousand.)

All of this, simply to process text? Of course not; word processing is an afterthought for a processor that is specifically designed for dealing with high-resolution 3D images. (Of course, nowadays even a low-end netbook that is designed only for word processing and web browsing can typically handle a billion calculations per second.) But there the analogy with humans is quite accurate as well: Written language is only about 5,000 years old, while the human visual mind is at least 100,000. We were 3D image analyzers long before we were word processors. This may be why we say “a picture is worth a thousand words”; we process each with about as much effort, even though the image necessarily contains thousands of times as many bits.

Why is the computer capable of so many different things? Why is the human mind capable of so many more? Not because they are simple and impinged upon by their environments, but because they are complex and precision-engineered to nonlinearly amplify tiny inputs into vast outputs—but only certain tiny inputs.

That is, it is because of our nature that we are capable of being nurtured. It is precisely the millions of years of genetic programming that have optimized the human brain that allow us to learn and adapt so flexibly to new environments and form a vast multitude of languages and cultures. It is precisely the genetically-programmed humanity we all share that makes our environmentally-acquired diversity possible.

In fact, causality also runs the other direction. Indeed, when I said other organisms were “all nature” that wasn’t right either; for even tightly-programmed instincts are evolved through millions of years of environmental pressure. Human beings have even been involved in cultural interactions long enough that it has begun to affect our genetic evolution; the reason I can digest lactose is that my ancestors about 10,000 years ago raised goats. We have our nature because of our ancestors’ nurture.

And then of course there’s the fact that we need a certain minimum level of environmental enrichment even to develop normally; a genetically-normal human raised into a deficient environment will suffer a kind of mental atrophy, as when children raised feral lose their ability to speak.

Thus, the question “nature or nurture?” seems a bit beside the point: We are extremely flexible and responsive to our environment, because of innate genetic hardware and software, which requires a certain environment to express itself, and which arose because of thousands of years of culture and millions of years of the struggle for survival—we are nurture because nature because nurture.

But perhaps we didn’t actually mean to ask about human traits in general; perhaps we meant to ask about some specific trait, like spatial intelligence, or eye color, or gender identity. This at least can be structured as a coherent question: How heritable is the trait? What proportion of the variance in this population is caused by genetic variation? Heritability analysis is a well-established methodology in behavioral genetics.

Yet that isn’t the same question at all. For while height is extremely heritable within a given population (usually about 80%), human height worldwide has been increasing dramatically over time due to environmental influences and can actually be used as a measure of a nation’s economic development. (Look at what happened to the height of men in Japan.) How heritable is height? You have to be very careful what you mean.

Meanwhile, the heritability of neurofibromatosis is actually quite low—as many people acquire the disease by new mutations as inherit it from their parents—but we know for a fact it is a genetic disorder, because we can point to the specific genes that mutate to cause the disease.

Heritability also depends on the population under consideration; speaking English is more heritable within the United States than it is across the world as a whole, because there are a larger proportion of non-native English speakers in other countries. In general, a more diverse environment will lead to lower heritability, because there are simply more environmental influences that could affect the trait.
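
The mechanics are visible in the definition itself: heritability is the ratio of genetic variance to total variance, h² = Vg / (Vg + Ve). Hold the genetics fixed and widen the environment, and heritability mechanically falls (the variance numbers here are arbitrary):

```python
def heritability(genetic_var, environmental_var):
    # Broad-sense heritability, ignoring gene-environment covariance.
    return genetic_var / (genetic_var + environmental_var)

print(heritability(1.0, 0.25))  # homogeneous environment: h^2 = 0.80
print(heritability(1.0, 4.00))  # diverse environment:     h^2 = 0.20
```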

As children get older, their behavior gets more heritable, a result which probably seems completely baffling until you understand what heritability really means. Your genes become a more important factor in your behavior as you grow up, because you become separated from the environment of your birth and immersed into the general environment of your whole society. Lower environmental diversity means higher heritability, by definition. There’s also an effect of choosing your own environment; people who are intelligent and conscientious are likely to choose to go to college, where they will be further trained in knowledge and self-control. This latter effect is called niche-picking.

This is why saying something like “intelligence is 80% genetic” is basically meaningless, and “intelligence is 80% heritable” isn’t much better until you specify the reference population. The heritability of intelligence depends very much on what you mean by “intelligence” and what population you’re looking at for heritability. But even if you do find a high heritability (as we do for, say, Spearman’s g within the United States), this doesn’t mean that intelligence is fixed at birth; it simply means that parents with high intelligence are likely to have children with high intelligence. In evolutionary terms that’s all that matters—natural selection doesn’t care where you got your traits, only that you have them and pass them to your offspring—but many people do care, and IQ being heritable because rich, educated parents raise rich, educated children is very different from IQ being heritable because innately intelligent parents give birth to innately intelligent children. If genetic variation is systematically related to environmental variation, you can measure a high heritability even though the genes are not directly causing the outcome.

We do use twin studies to try to sort this out, but because identical twins raised apart are exceedingly rare, two very serious problems emerge: One, there usually isn’t a large enough sample size to say anything useful; and two, more importantly, this is actually an inaccurate measure in terms of natural selection. The evolutionary pressure is based on the correlation with the genes—it actually doesn’t matter whether the genes are directly causal. All that matters is that organisms with allele X survive and organisms with allele Y do not. Usually that’s because allele X does something useful, but even if it’s simply because people with allele X happen to mostly come from a culture that makes better guns, that will work just as well.
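
For reference, the standard twin-study estimator is Falconer’s formula: twice the difference between identical-twin (MZ) and fraternal-twin (DZ) trait correlations. The correlations below are illustrative, and note that the formula simply assumes genes and environment are uncorrelated, which is exactly the problem:

```python
def falconer_h2(r_mz, r_dz):
    # Falconer's formula: h^2 ~ 2 * (r_MZ - r_DZ). Valid only under the
    # equal-environments assumption and no gene-environment correlation.
    return 2 * (r_mz - r_dz)

print(falconer_h2(r_mz=0.85, r_dz=0.45))  # 0.8 -- roughly what adult height yields
```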

We can see this quite directly: White skin spread across the world not because it was useful (it’s actually terrible in any latitude other than subarctic), but because the cultures that conquered the world happened to be comprised mostly of people with White skin. In the 15th century you’d find a very high heritability of “using gunpowder weapons”, and there was definitely a selection pressure in favor of that trait—but it obviously doesn’t take special genes to use a gun.

The kind of heritability you get from twin studies is answering a totally different, nonsensical question, something like: “If we reassigned all offspring to parents randomly, how much of the variation in this trait in the new population would be correlated with genetic variation?” And honestly, I think the only reason people think that this is the question to ask is precisely because even biologists don’t fully grasp the way that nature and nurture are fundamentally entwined. They are trying to answer the intuitive question, “How much of this trait is genetic?” rather than the biologically meaningful “How strongly could a selection pressure for this trait evolve this gene?”

And if right now you’re thinking, “I don’t care how strongly a selection pressure for the trait could evolve some particular gene”, that’s fine; there are plenty of meaningful scientific questions that I don’t find particularly interesting and are probably not particularly important. (I hesitate to provide a rigid ranking, but I think it’s safe to say that “How does consciousness arise?” is a more important question than “Why are male platypuses venomous?” and “How can poverty be eradicated?” is a more important question than “How did the aircraft manufacturing duopoly emerge?”) But that’s really the most meaningful question we can construct from the ill-formed question “How much of this trait is genetic?” The next step is to think about why you thought that you were asking something important.

What did you really mean to ask?

For a bald question like, “Is being gay genetic?” there is no meaningful answer. We could try to reformulate it as a meaningful biological question, like “What is the heritability of homosexual behavior among males in the United States?” or “Can we find genetic markers strongly linked to self-identification as ‘gay’?” but I don’t think those are the questions we really meant to ask. I think actually the question we meant to ask was more fundamental than that: Is it legitimate to discriminate against gay people? And here the answer is unequivocal: No, it isn’t. It is a grave mistake to think that this moral question has anything to do with genetics; discrimination is wrong even against traits that are totally environmental (like religion, for example), and there are morally legitimate actions to take based entirely on a person’s genes (the obvious examples all coming from medicine—you don’t treat someone for cystic fibrosis if they don’t actually have it).

Similarly, when we ask the question “Is intelligence genetic?” I don’t think most people are actually interested in the heritability of spatial working memory among young American males. I think the real question they want to ask is about equality of opportunity, and what it would look like if we had it. If success were entirely determined by intelligence and intelligence were entirely determined by genetics, then even a society with equality of opportunity would show significant inequality inherited across generations. Thus, inherited inequality is not necessarily evidence against equality of opportunity. But this is in fact a deeply disingenuous argument, used by people like Charles Murray to excuse systemic racism, sexism, and concentration of wealth.

We didn’t have to say that inherited inequality is necessarily or undeniably evidence against equality of opportunity—merely that it is, in fact, evidence of inequality of opportunity. Moreover, it is far from the only evidence against equality of opportunity; we also can observe the fact that college-educated Black people are no more likely to be employed than White people who didn’t even finish high school, for example, or the fact that otherwise identical resumes with predominantly Black names (like “Jamal”) are less likely to receive callbacks compared to predominantly White names (like “Greg”). We can observe that the same is true for resumes with obviously female names (like “Sarah”) versus obviously male names (like “David”), even when the hiring is done by social scientists. We can directly observe that one-third of the 400 richest Americans inherited their wealth (and if you look closer into the other two-thirds, all of them had some very unusual opportunities, usually due to their family connections—“self-made” is invariably a great exaggeration). The evidence for inequality of opportunity in our society is legion, regardless of how genetics and intelligence are related. In fact, I think that the high observed heritability of intelligence is largely due to the fact that educational opportunities are distributed in a genetically-biased fashion, but I could be wrong about that; maybe there really is a large genetic influence on human intelligence. Even so, that does not justify widespread and directly-measured discrimination. It does not justify a handful of billionaires luxuriating in almost unimaginable wealth as millions of people languish in poverty. Intelligence can be as heritable as you like and it is still wrong for Donald Trump to have billions of dollars while millions of children starve.

This is what I think we need to do when people try to bring up a “nature versus nurture” question. We can certainly talk about the real complexity of the relationship between genetics and environment, which I think are best summarized as “nature via nurture”; but in fact usually we should think about why we are asking that question, and try to find the real question we actually meant to ask.

How much should we save?

JDN 2457215 EDT 15:43.

One of the most basic questions in macroeconomics has oddly enough received very little attention: How much should we save? What is the optimal level of saving?

At the microeconomic level, how much you should save basically depends on what you think your income will be in the future. If you have more income now than you think you’ll have later, you should save now to spend later. If you have less income now than you think you’ll have later, you should spend now and dissave—save negatively, otherwise known as borrowing—and pay it back later. The life-cycle hypothesis says that people save when they are young in order to retire when they are old—in its strongest form, it says that we keep our level of spending constant across our lifetime at a value equal to our average income. The strongest form is utterly ridiculous and disproven by even the most basic empirical evidence, so usually the hypothesis is studied in a weaker form that basically just says that people save when they are young and spend when they are old—and even that runs into some serious problems.
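
The strong form fits in a few lines, which is part of why it is so implausible (the income path here is hypothetical):

```python
# Strong-form life-cycle hypothesis: consume your average lifetime
# income every single year, whatever this year's income happens to be.
income = [20_000] * 10 + [60_000] * 30 + [0] * 20  # youth, career, retirement

consumption = sum(income) / len(income)  # constant consumption at the average
print(f"Spend ${consumption:,.0f} every year")  # $33,333
print("Save while earning above that; borrow or dissave while below it")
```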

The biggest problem, I think, is that the interest rate you receive on savings is always vastly less than the interest rate you pay on borrowing, which in turn is related to the fact that people are credit-constrained: they generally would like to borrow more than they actually can. It also has a lot to do with the fact that our financial system is an oligopoly; banks make more profits if they can pay savers less and charge borrowers more, and by colluding with each other they can control enough of the market that no major competitors can seriously undercut them. (There is some competition, however, particularly from credit unions—and if you compare these two credit card offers from University of Michigan Credit Union at 8.99%/12.99% and Bank of America at 12.99%/22.99% respectively, you can see the oligopoly in action as the tiny competitor charges you a much fairer price than the oligopoly beast. 9% means doubling in about eight years, 13% means doubling in a little under six years, and 23% means doubling in a little over three years.) Another very big problem with the life-cycle theory is that human beings are astonishingly bad at predicting the future, and thus our expectations about our future income can vary wildly from the actual future income we end up receiving. People who are wise enough to know that they do not know generally save more than they think they’ll need, which is called precautionary saving. Combine that with our limited capacity for self-control, and I’m honestly not sure the life-cycle hypothesis is doing any work for us at all.
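
Those doubling times come straight from the compound-interest formula:

```python
import math

# Doubling time at compound annual rate r: log(2) / log(1 + r).
for r in (0.09, 0.13, 0.23):
    years = math.log(2) / math.log(1 + r)
    print(f"{r:.0%}: doubles in {years:.1f} years")
# 9%: 8.0 years; 13%: 5.7 years; 23%: 3.3 years
```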

But okay, let’s suppose we had a theory of optimal individual saving. That would still leave open a much larger question, namely optimal aggregate saving. The amount of saving that is best for each individual may not be best for society as a whole, and it becomes a difficult policy challenge to provide incentives to make people save the amount that is best for society.

Or it would be, if we had the faintest idea what the optimal amount of saving for society is. There’s a very simple rule of thumb that a lot of economists use, often called the golden rule (not to be confused with the actual Golden Rule, though I guess the idea is that a social optimum is a moral optimum): we should save exactly the same share of income that capital receives. If capital receives one third of income, then one third of income should be saved to make more capital for next year. (This figure of one third has been called a “law”, but as with most “laws” in economics it’s really more like the Pirate Code; labor’s share of income varies across countries and years. I doubt you’ll be surprised to learn that it is falling around the world, meaning more income is going to capital owners and less to workers.)
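
If you want to see where that rule of thumb comes from, here’s a minimal sketch in Python of the standard Solow model with Cobb-Douglas production (the depreciation and population-growth values are made up for illustration; the punchline doesn’t depend on them): the saving rate that maximizes steady-state consumption turns out to equal the capital share.

```python
# Solow model with Cobb-Douglas production y = k^alpha.
# Steady state: s * k^alpha = (delta + n) * k
#   =>  k* = (s / (delta + n)) ** (1 / (1 - alpha))
# Parameter values are illustrative only.
alpha = 1 / 3   # capital's share of income
delta = 0.05    # depreciation rate
n = 0.02        # population growth rate

def steady_state_consumption(s):
    k_star = (s / (delta + n)) ** (1 / (1 - alpha))
    return (1 - s) * k_star ** alpha

# Sweep saving rates and find the one that maximizes steady-state consumption.
rates = [i / 1000 for i in range(1, 1000)]
best = max(rates, key=steady_state_consumption)
print(f"consumption-maximizing saving rate: {best:.3f}")  # ~0.333 = alpha
```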

When you hear that, you should be thinking: “Wait. Saved to make more capital? You mean invested to make more capital.” And this is the great sleight of hand in the neoclassical theory of economic growth: Saving and investment are made to be the same by definition. It’s called the savings-investment identity. As I talked about in an earlier post, the model seems to be that there is only one kind of good in the world, and you either use it up or save it to make more.

But of course that’s not actually how the world works; there are different kinds of goods, and if people stop buying tennis shoes that doesn’t automatically lead to more factories built to make tennis shoes—indeed, quite the opposite. If people reduce their spending, the products they no longer buy will accumulate on shelves and the businesses that make those products will start downsizing their production. If people increase their spending, the products they now buy will fly off the shelves and the businesses that make them will expand their production to keep up.

In order to make the savings-investment identity true by definition, the definition of investment has to be changed. Inventory accumulation, products building up on shelves, is counted as “investment” when of course it is nothing of the sort. Inventory accumulation is a bad sign for an economy; indeed the time when we see the most inventory accumulation is right at the beginning of a recession.

As a result of this bizarre definition of “investment” and its equation with saving, we get the famous Paradox of Thrift, which does indeed sound paradoxical in its usual formulation: “A global increase in marginal propensity to save can result in a reduction in aggregate saving.” But if you strip out the jargon, it makes a lot more sense: “If people suddenly stop spending money, companies will stop investing, and the economy will grind to a halt.” There’s still a bit of a feeling of paradox in the fact that we tried to save more money and ended up with less, but that isn’t too hard to understand once you ask yourself: if everyone else stops spending, where are you going to get your money from?
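
Here’s the paradox in miniature: a toy Keynesian-cross model (all parameters invented for illustration) in which investment responds to income. Raising the marginal propensity to save lowers not just income but total saving.

```python
# Toy Keynesian cross with induced investment (illustrative parameters only):
#   consumption C = a + (1 - s) * Y,  investment I = I0 + b * Y
#   equilibrium Y = C + I  =>  Y = (a + I0) / (s - b),  requiring s > b
#   aggregate saving S = Y - C = s * Y - a
a, I0, b = 100.0, 200.0, 0.05

def equilibrium(s):
    Y = (a + I0) / (s - b)
    return Y, s * Y - a

for s in (0.20, 0.30):
    Y, S = equilibrium(s)
    print(f"propensity to save {s:.0%}: income = {Y:.0f}, saving = {S:.0f}")
# 20%: income = 2000, saving = 300
# 30%: income = 1200, saving = 260  <- more thrift, *less* total saving
```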

So what if something like this happens, and we all try to save more and end up having no money? The government could print a bunch of money and give it to people to spend, and then we’d have money, right? Right. Exactly right, in fact. You now understand monetary policy better than most policymakers. Like a basic income, for many people it seems too simple to be true; but in a nutshell, that is Keynesian monetary policy. When spending falls and the economy slows down as a result, the government should respond by expanding the money supply so that people start spending again. In practice they usually expand the money supply in a really bizarre, roundabout way: buying and selling bonds in open market operations in order to change the interest rate that banks charge each other for loans of reserves, the Fed funds rate, in the hopes that banks will change their actual lending interest rates and more people will be able to borrow, thus, ultimately, increasing the money supply (because, remember, banks don’t have the money they lend you—they create it).

We could actually just print some money and give it to people (or rather, change a bunch of numbers in an IRS database), but this is very unpopular, particularly among people like Ron Paul and other gold-bug Republicans who don’t understand how monetary policy works. So instead we try to obscure the printing of money behind a bizarre chain of activities, opening many more opportunities for failure: Chiefly, we can hit the zero lower bound where interest rates are zero and can’t go any lower (or can they?), or banks can be too stingy and decide not to lend, or people can be too risk-averse and decide not to borrow; and that’s not even to mention the redistribution of wealth that happens when all the money you print is given to banks. When that happens we turn to “unconventional monetary policy”, which basically just means that we get a little bit more honest about the fact that we’re printing money. (Even then you get articles like this one insisting that quantitative easing isn’t really printing money.)

I don’t know, maybe there’s actually some legitimate reason to do it this way—I do have to admit that when governments start openly printing money it often doesn’t end well. But really the question is why you’re printing money, whom you’re giving it to, and above all how much you are printing. Weimar Germany printed money to pay off odious war debts (because it totally makes sense to force a newly-established democratic government to pay the debts incurred by belligerent actions of the monarchy they replaced; surely one must repay one’s debts). Hungary printed money to pay for rebuilding after the devastation of World War II. Zimbabwe printed money to pay for a war (I’m sensing a pattern here) and compensate for failed land reform policies. In all three cases the amount of money they printed was literally billions of times their original money supply. Yes, billions. They found their inflation cascading out of control and instead of stopping the printing, they printed even more. The United States has so far printed only about three times our original monetary base, still only about a third of our total money supply. (Monetary base is the part that the Federal Reserve controls; the rest is created by banks. Typically 90% of our money is not monetary base.) Moreover, we did it for the right reasons—in response to deflation and depression. That is why, as Matthew O’Brien of The Atlantic put it so well, the US can never be Weimar.

I was supposed to be talking about saving and investment; why am I talking about money supply? Because investment is driven by the money supply. It’s not driven by saving, it’s driven by lending.

Now, part of the underlying theory was that lending and saving are supposed to be tied together, with money lent coming out of money saved; this is true if you assume that things are in a nice tidy equilibrium. But we never are, and frankly I’m not sure we’d want to be. In order to reach that equilibrium, we’d either need to have full-reserve banking, or banks would have to otherwise have their lending constrained by insufficient reserves; either way, we’d need to have a constant money supply. Any dollar that could be lent, would have to be lent, and the whole debt market would have to be entirely constrained by the availability of savings. You wouldn’t get denied for a loan because your credit rating is too low; you’d get denied for a loan because the bank would literally not have enough money available to lend you. Banking would have to be perfectly competitive, so if one bank can’t do it, no bank can. Interest rates would have to precisely match the supply and demand of money in the same way that prices are supposed to precisely match the supply and demand of products (and I think we all know how well that works out). This is why it’s such a big problem that most macroeconomic models literally do not include a financial sector. They simply assume that the financial sector is operating at such perfect efficiency that money in equals money out always and everywhere.

So, recognizing that saving and investment are in fact not equal, we now have two separate questions: What is the optimal rate of saving, and what is the optimal rate of investment? For saving, I think the question is almost meaningless; individuals should save according to their future income (since they’re so bad at predicting it, we might want to encourage people to save extra, as in programs like Save More Tomorrow), but the aggregate level of saving isn’t an important question. The important question is the aggregate level of investment, and for that, I think there are two ways of looking at it.

The first way is to go back to that original neoclassical growth model and realize that it makes a lot more sense when the s term we called “saving” is read as what it actually is: investment. In that case, perhaps we should indeed invest the same proportion of income as the income that goes to capital. An interesting, if draconian, way to do so would be to actually require this—all and only capital income may be used for business investment. Labor income must be used for other things, and capital income can’t be used for anything else. The days of yachts bought on stock options would be over forever—though so would the days of striking it rich by putting your paycheck into a tech stock. Due to the extreme restrictions on individual freedom, I don’t think we should actually do such a thing; but it’s an interesting thought that might lead to an actual policy worth considering.

But a second way that might actually be better—since even though the model makes more sense this way, it still has a number of serious flaws—is to think about what we might actually do in order to increase or decrease investment, and then consider the costs and benefits of each of those policies. The simplest case to analyze is when the government invests directly—and since the most important investments like infrastructure, education, and basic research are usually done this way, it’s definitely a useful example.

How is the government going to fund this investment in, say, a nuclear fusion project? It has four basic options: Cut spending somewhere else, raise taxes, print money, or issue debt. If you cut spending, the question is whether the spending you cut is more or less important than the investment you’re making. If you raise taxes, the question is whether the harm done by the tax (which generally comes in two flavors: first the direct effect of taking someone’s money so they can’t use it now, and second the distortions created in the market that may make it less efficient) is outweighed by the new project. If you print money or issue debt, it’s a subtler question, since you are no longer pulling from any individual person or project but rather from the economy as a whole. Actually, if your economy has unused capacity, as in a depression, you aren’t pulling from anywhere—you’re simply adding new value basically from thin air, which is why deficit spending in depressions is such a good idea. (More precisely, you’re putting resources to use that were otherwise going to lie fallow—to go back to my earlier example, the tennis shoes will no longer rest on the shelves.) But if you do not have sufficient unused capacity, you will get crowding-out; new debt will raise interest rates and make other investments more expensive, while printing money will cause inflation and make everything more expensive. So you need to weigh that cost against the benefit of your new investment and decide whether it’s worth it.

This second way is of course a lot more complicated, a lot messier, a lot more controversial. It would be a lot easier if we could just say: “The target investment rate should be 33% of GDP.” But even then the question would remain as to which investments to fund, and which consumption to pull from. The abstraction of simply dividing the economy into “consumption” versus “investment” leaves out matters of the utmost importance; Paul Allen’s 400-foot yacht and food stamps for children are both “consumption”, but taxing the former to pay for the latter seems not only justified but outright obligatory. The Bridge to Nowhere and the Human Genome Project are both “investment”, but I think we all know which one had a higher return for human society. The neoclassical model basically assumes that the optimal choices for consumption and investment are decided automatically (automagically?) by the inscrutable churnings of the free market, but clearly that simply isn’t true.

In fact, it’s not always clear what exactly constitutes “consumption” versus “investment”, and the particulars of answering that question may distract us from answering the questions that actually matter. Is a refrigerator investment because it’s a machine you buy that sticks around and does useful things for you? Or is it consumption because consumers buy it and you use it for food? Is a car an investment because it’s vital to getting a job? Or is it consumption because you enjoy driving it? Someone could probably argue that the appreciation on Paul Allen’s yacht makes it an investment, for instance. Feeding children really is an investment, in their so-called “human capital” that will make them more productive for the rest of their lives. Part of the money that went to the Human Genome Project surely paid some graduate student who then spent part of his paycheck on a keg of beer, which would make it consumption. And so on. The important question really isn’t “is this consumption or investment?” but “Is this worth doing?” And thus, the best answer to the question, “How much should we save?” may be: “Who cares?”

What are we celebrating today?

JDN 2457208 EDT 13:35 (July 4, 2015)

As all my American readers will know (and unsurprisingly 79% of my reader trackbacks come from the United States), today is Independence Day. I’m curious how my British readers feel about this day (and the United Kingdom is my second-largest source of reader trackbacks); we are in a sense celebrating the fact that we’re no longer ruled by you.

Every nation has some notion of patriotism; in the simplest sense we could say that patriotism is simply nationalism, yet another reflection of our innate tribal nature. As Obama said when asked about American exceptionalism, the British also believe in British exceptionalism. If that is all we are dealing with, then there is no particular reason to celebrate; Saudi Arabia or China could celebrate just as well (and very likely does). Independence Day then becomes something parochial, something that is at best a reflection of local community and culture, and at worst a reaffirmation of nationalistic divisiveness.

But in fact I think we are celebrating something more than that. The United States of America is not just any country. It is not just a richer Brazil or a more militaristic United Kingdom. There really is something exceptional about the United States, and it really did begin on July 4, 1776.

In fact we should probably celebrate June 21, 1788 and December 15, 1791, the dates the Constitution and the Bill of Rights were ratified, respectively. But neither of these would have been possible without that Declaration of Independence on July 4, 1776. (In fact, even that date isn’t as clear-cut as commonly imagined.)

What makes the United States unique?

From the dawn of civilization around 5000 BC up to the mid-18th century AD, there were basically two ways to found a nation. The most common was to grow the nation organically, formulate an ethnic identity over untold generations and then make up an appealing backstory later. The second way, not entirely exclusive of the first, was for a particular leader, usually a psychopathic king, to gather a superior army, conquer territory, and annex the people there, making them part of his nation whether they wanted it or not. Variations on these two themes were what happened in Rome, in Greece, in India, in China; they were done by the Sumerians, by the Egyptians, by the Aztecs, by the Maya. All the ancient civilizations have founding myths that are distorted so far from the real history that the real history has become basically unknowable. All the more recent powers were formed by warlords and usually ruled with iron fists.

The United States of America started with a war, make no mistake; and George Washington really was more a charismatic warlord than he ever was a competent statesman. But Washington was not a psychopath, and refused to rule with an iron fist. Instead he was instrumental in establishing a fundamentally new approach to the building of nations.

This is literally what happened—myths have grown up around it, but it is itself documented history. Washington and his compatriots gathered a group of some of the most intelligent and wise individuals they could find, sat them down in a room, and tasked them with answering the basic question: “What is the best possible country?” They argued and debated, considering absolutely the most cutting-edge economics (The Wealth of Nations was released in 1776) and political philosophy (Thomas Paine’s Common Sense also came out in 1776). And then, when they had reached some kind of consensus on what the best sort of country would be—they created that country. They were conscious of building a new tradition, of being the founders of the first nation built as part of the Enlightenment. Previously nations were built from immemorial tradition or the whims of warlords—the United States of America was the first nation in the world that was built on principle.

It would not be the last; in fact, with a terrible interlude that we call Napoleon, France would soon become the second nation of the Enlightenment. A slower process of reform would eventually bring the United Kingdom itself to a similar state (though the UK is still a monarchy and has no formal constitution, only an ever-growing mountain of common law). As the centuries passed and the United States became more and more powerful, its system of government attained global influence, with now almost every nation in the world nominally a “democracy” and about half actually recognizable as such. We now see it as unexceptional to have a democratically-elected government bound by a constitution, and even think of the United States as a relatively poor example compared to, say, Sweden or Norway (because #Scandinaviaisbetter), and this assessment is not entirely wrong; but it’s important to keep in mind that this was not always the case, and on July 4, 1776 the Founding Fathers truly were building something fundamentally new.

Of course, the Founding Fathers were not the demigods they are often imagined to be; Washington himself was a slaveholder, and not just any slaveholder, but in fact almost a billionaire in today’s terms—the wealthiest man in America by far and actually a rival to the King of England. Thomas Jefferson somehow managed to read Thomas Paine and write “all men are created equal” without thinking that this obligated him to release his own slaves. Benjamin Franklin was a misogynist and womanizer. James Madison’s concept of formalizing armed rebellion bordered on insanity (and ultimately resulted in our worst amendment, the Second). The system that they built disenfranchised women, enshrined the slavery of Black people into law, and consisted of dozens of awkward compromises (like the Senate) that would prove disastrous in the future. The Founding Fathers were human beings with human flaws and human hypocrisy, and they did many things wrong.

But they also did one thing very, very right: They created a new model for how nations should be built. In a very real sense they redefined what it means to be a nation. That is what we celebrate on Independence Day.


Externalities

JDN 2457202 EDT 17:52.

The 1992 Bill Clinton campaign had a slogan: “It’s the economy, stupid.” A snowclone I’ve used on occasion is “it’s the externalities, stupid.” (Though I’m actually not all that fond of calling people ‘stupid’; though occasionally true, it is never polite and rarely useful.) Externalities are one of the most important concepts in economics, and yet one that all too many economists frequently neglect.

Fortunately for this one, I really don’t need much math; the concept isn’t even that complicated, which makes it all the more mysterious how frequently it is ignored. An externality is simply an effect that an action has upon those who were not involved in choosing to perform that action.

All sorts of actions have externalities; indeed, much rarer are actions that don’t. An obvious example is that punching someone in the face has the externality of injuring that person. Pollution is an important externality of many forms of production, because the people harmed by pollution are typically not the same people who were responsible for creating it. Traffic jams are created because every car on the road causes a congestion externality on all the other cars.

All the aforementioned are negative externalities, but there are also positive externalities. When one individual becomes educated, they tend to improve the overall economic viability of the place in which they live. Building infrastructure benefits whole communities. New scientific discoveries enhance the well-being of all humanity.

Externalities are a fundamental problem for the functioning of markets. In the absence of externalities—if each person’s actions affected only that person and nobody else—rational self-interest would be optimal and anything else would make no sense. In arguing that rationality is equivalent to self-interest, generations of economists have been, tacitly or explicitly, assuming that there are no such things as externalities.

This is a necessary assumption to show that self-interest would lead to something I discussed in an earlier post: Pareto-efficiency, in which the only way to make one person better off is to make someone else worse off. As I already talked about in that other post, Pareto-efficiency is wildly overrated; a wide variety of Pareto-efficient systems would be intolerable to actually live in. But in the presence of externalities, markets can’t even guarantee Pareto-efficiency, because everyone acting in their rational self-interest can end up harming everyone at once.

This is called a tragedy of the commons; the basic idea is really quite simple. Suppose that when I burn a gallon of gasoline, that makes me gain 5 milliQALY by driving my car, but then makes everyone lose 1 milliQALY in increased pollution. On net, I gain 4 milliQALY, so if I am rational and self-interested I would do that. But now suppose that there are 10 people all given the same choice. If we all make that same choice, each of us will gain 4 milliQALY from our own driving—and then lose 9 milliQALY from everyone else’s pollution, for a net loss of 5 milliQALY. We would all have been better off if none of us had done it, even though it made sense to each of us at the time. Burning a gallon of gasoline to drive my car is beneficial to me, more so than the release of carbon dioxide into the atmosphere is harmful; but as a result of millions of people burning gasoline, the carbon dioxide in the atmosphere is destabilizing our planet’s climate. We’d all be better off if we could find some way to burn less gasoline.
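
The arithmetic is easy to check; here it is as a minimal sketch, using the invented milliQALY numbers from the example:

```python
# Tragedy of the commons with the toy numbers from the example above.
n = 10               # drivers facing the same choice
private_gain = 5     # milliQALY I gain by burning a gallon myself
pollution_cost = 1   # milliQALY *everyone* loses per gallon burned

# If I alone burn: I enjoy the driving and suffer only my own gallon's pollution.
print("only I burn:", private_gain - pollution_cost)          # 5 - 1 = +4

# If all 10 of us burn: same benefit each, but we each suffer all 10 gallons.
print("everyone burns:", private_gain - n * pollution_cost)   # 5 - 10 = -5
```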

In order for rational self-interest to be optimal, externalities have to somehow be removed from the system. Otherwise, there are actions we can take that benefit ourselves but harm other people—and thus, we would all be better off if we acted to some degree altruistically. (When I say things like this, most non-economists think I am saying something trivial and obvious, while most economists insist that I am making an assertion that is radical if not outright absurd.)

But of course a world without externalities is a world of complete isolation; it’s a world where everyone lives on their own deserted island and there is no way of communicating or interacting with any other human being in the world. The only reasonable question about this world is whether we would die first or go completely insane first; clearly those are the two things that would happen. Human beings are fundamentally social animals—I would argue that we are in fact more social even than eusocial animals like ants and bees. (Ants and bees are only altruistic toward their own kin; humans are altruistic to groups of millions of people we’ve never even met.) Humans without social interaction are like flowers without sunlight.

Indeed, externalities are so common that if markets only worked in their absence, markets would make no sense at all. Fortunately this isn’t true; there are some ways that markets can be adjusted to deal with at least some kinds of externalities.

One of the most well-known is the Coase theorem; this is odd because it is by far the worst solution. The Coase theorem basically says that if you can assign and enforce well-defined property rights and there is absolutely no cost in making any transaction, markets will automatically work out all externalities. The basic idea is that if someone is about to perform an action that would harm you, you can instead pay them not to do it. Then, the harm to you will be prevented and they will incur an additional benefit.

In the above example, we could all agree to pay $30 (which let’s say is worth 1 milliQALY) to each person who doesn’t burn a gallon of gasoline that would pollute our air. Then, if I were thinking about burning some gasoline, I wouldn’t want to do it, because I’d forfeit the $270 in payments from the other nine people, which costs me 9 milliQALY, while the benefits of burning the gasoline are only 5 milliQALY. We all reason the same way, and the result is that nobody burns gasoline; the money exchanged all balances out, so financially we end up where we were before. The result is that we are all better off.
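
And here is the same sketch with the side payments added (again, all numbers invented), showing why abstaining now wins:

```python
# Coasean side payments added to the commons example above.
n = 10
private_gain = 5     # milliQALY from burning a gallon yourself
pollution_cost = 1   # milliQALY everyone loses per gallon burned
payment = 1          # milliQALY ($30) each person pays to every non-burner
others = n - 1

# Suppose the other nine all abstain. If I burn, I still owe each of them $30
# but receive nothing; if I abstain, my receipts and payments cancel out.
burn = private_gain - pollution_cost - others * payment   # 5 - 1 - 9 = -5
abstain = others * payment - others * payment             # 0
print("burn:", burn, "| abstain:", abstain)  # abstaining is the better deal
```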

The first thought you probably have is: How do I pay everyone who doesn’t hurt me? How do I even find all those people? How do I ensure that they follow through and actually don’t hurt me? These are the problems of transaction costs and contract enforcement that are usually presented as the problem with the Coase theorem, and they certainly are very serious problems. You end up needing some sort of government simply to enforce all those contracts, and even then there’s the question of how we can possibly locate everyone who has ever polluted our air or our water.

But in fact there’s an even more fundamental problem: This is extortion. We are almost always in the condition of being able to harm other people, and a system in which the reason people don’t hurt each other is because they’re constantly paying each other not to is a system in which the most intimidating psychopath is the wealthiest person in the world. That system is in fact Pareto-efficient (the psychopath does quite well for himself indeed); but it’s exactly the sort of Pareto-efficient system that isn’t worth pursuing.

Another response to externalities is simply to accept them, which isn’t as awful as it sounds. There are many kinds of externalities that really aren’t that bad, and anything we might do to prevent them is likely to make the cure worse than the disease. Think about the externality of people standing in front of you in line, or the externality of people buying the last cereal box off the shelf before you can get there. The externality of taking the job you applied for may hurt at the time, but in the long run that’s how we maintain a thriving and competitive labor market. In fact, even the externality of ‘gentrifying’ your neighborhood so you can no longer afford it is not nearly as bad as most people seem to think—indeed, the much larger problem seems to be the poor neighborhoods that don’t have rising incomes, remaining poor for generations. (It also makes no sense to call this “gentrifying”; the only landed gentry we have in America is the landowners who claim a ludicrous proportion of our wealth, not the middle-class people who buy cheap homes and move in. If you really want to talk about a gentry, you should be thinking Waltons and Kochs—or Bushes and Clintons.) These sorts of minor externalities that are better left alone are sometimes characterized as pecuniary externalities because they usually are linked to prices, but I think that really misses the point; it’s quite possible for an externality to be entirely price-related and do enormous damage (read: the entire financial system) or to have little or nothing to do with prices and still be not that bad (like standing in line as I mentioned above).

But obviously we can’t leave all externalities alone in this way. We can’t just let people rob and murder one another arbitrarily, or ignore the destruction of the world’s climate that threatens hundreds of millions of lives. We can’t stand back and let forests burn and rivers run dry when we could easily have saved them.

The much more reasonable and realistic response to externalities is what we call government—there are rules you have to follow in society and punishments you face if you don’t. We can avoid most of the transaction problems involved in figuring out who polluted our water by simply making strict rules about polluting water in general. We can prevent people from stealing each other’s things or murdering each other by police who will investigate and punish such crimes.

This is why regulation—and a government strong enough to enforce that regulation—is necessary for the functioning of a society. This dichotomy we have been sold about “regulations versus the market” is totally nonsensical; the market depends upon regulations. This doesn’t justify any particular regulation—and indeed, an awful lot of regulations are astonishingly bad. But some sort of regulatory system is necessary for a market to function at all, and the question has never been whether we will have regulations but which regulations we will have. People who argue that all regulations must go and the market would somehow work on its own are either deeply ignorant of economics or operating from an ulterior motive; some truly horrendous policies have been made by arguing that “less government is always better” when the truth is nothing of the sort.

In fact, there is one real-world method I can think of that actually comes reasonably close to eliminating all externalities—and it is called social democracy. By involving everyone—democracy—in a system that regulates the economy—socialism—we can, in a sense, involve everyone in every transaction, and thus make it impossible to have externalities. In practice it’s never that simple, of course; but the basic concept of involving our whole society in making the rules that our society will follow is sound—and in fact I can think of no reasonable alternative.

We have to institute some sort of regulatory system, but then we need to decide what the regulations will be and who will control them. If we instead want to vest power in a technocratic elite, how do we decide whom to include in that elite? How do we ensure that the technocrats are actually better for the general population if that population has no say in selecting them? By involving as many people as we can in the decision-making process, we make it much less likely that one person’s selfish action will harm many others. Indeed, this is probably why democracy prevents famine and genocide—which are, after all, rather extreme examples of negative externalities.

What does it mean to “own” an idea?

JDN 2457195 EDT 11:29.

For a long time I’ve been suspicious of intellectual property as currently formulated, but I’m never quite sure what to replace it with. I recently finished reading a surprisingly compelling little book called Against Intellectual Monopoly, which offered some more direct empirical support for many of my more philosophical concerns. (Fitting their opposition to copyright law, the authors, Michele Boldrin and David Levine, offer the full text of the book for free online.)

Boldrin and Levine argue that they are not in fact opposed to intellectual property, but intellectual monopoly. I think this is a bit of a silly distinction myself, and in fact muddles the issue a little because most of what we currently call “intellectual property” is in fact what they call “intellectual monopoly”.

The problems with intellectual property are well-documented in the book, but I think it’s worth repeating at least the basic form of the argument. Intellectual property is supposed to incentivize innovation by rewarding innovators for their investment, and thereby increase the total amount of innovation.

This requires three conditions to hold: First, the intellectual property must actually reward the innovators. Second, innovation must be increased when innovators seek rewards. And third, the costs of implementing the policy must be exceeded by the benefits provided by it.

As it turns out, none of those three conditions holds. For intellectual property to make sense, all three would need to hold; in fact, none does.

First—and worst—of all, intellectual property does not actually reward innovators. It instead rewards those who manipulate the intellectual property system. Intellectual property is why Thomas Edison was wealthy and Nikola Tesla was poor. Intellectual property is why we keep getting new versions of the same pills for erectile dysfunction instead of an AIDS vaccine. Intellectual property is how we get patent troll corporations, submarine patents, and Samsung owing Apple $1 billion for making its smartphones the wrong shape. Intellectual property is how Worlds.com is proposing to sue an entire genre of video games.

Second, the best innovators are not motivated by individual rewards. This has always been true; the people who really contribute the most to the world in knowledge or creativity are those who do it out of an insatiable curiosity, or a direct desire to improve the world. People who are motivated primarily by profit only innovate as a last resort, instead preferring to manipulate laws, undermine competitors, or simply mass-produce safe, popular products.

I can think of no more vivid an example here than Hollywood. Why is it that every single new movie that comes out is basically a more expensive rehash of the exact same 5 movies that have been coming out for the last 50 years? Because big corporations don’t innovate. It’s too risky to try to make a movie that’s fundamentally new and different, because, odds are, that new movie would fail. It’s much safer to make an endless series of superhero movies and keep coming out with yet another movie about a heroic dog. It’s not even that these movies are bad—they’re often pretty good, and when done well (like Avengers) they can be quite enjoyable. But thousands of original screenplays are submitted to Hollywood every year, and virtually none of them are actually made into films. It’s impossible to know what great works of film we might have seen on the big screen if not for the stranglehold of media companies.

This is not how Hollywood began; it started out wildly innovative and new. But do you know why it started in Los Angeles and not somewhere else? It was to evade patent laws. Thomas Edison, the greatest patent troll in history, held a stranglehold on motion picture technology on the East Coast, so filmmakers fled to California to get as far away from there as possible, during a time when Federal enforcement was much more lax. The innovation that created Los Angeles as we know it not only was not incentivized by intellectual property protection—it was only possible in its absence.

And then of course there is the third condition, that the benefits be worth the costs—but it’s trivially obvious that this is not the case, since the benefits are in fact basically zero. We divert billions of dollars from consumers to huge corporations, monopolize the world’s ideas, create a system of surveillance and enforcement that makes basically everyone a criminal (I’ll admit it; I have pirated music, software, and most recently the film My Neighbor Totoro, and I often copy video games I own on CD or DVD to digital images so I don’t need the CD or DVD every time to play—which should be fair use but has been enforced as copyright violation). When everyone is a criminal, enforcement becomes capricious, a means of control that can be used and abused by those in power.

Intellectual property even allows corporations to undermine our more basic sense of property ownership—they can prevent us from making use of our own goods as we choose. They can punish us for modifying the software in our computers, our video game systems—or even our cars. They can install software on our computers that compromises our security in order to protect their copyright. This is a point that Boldrin and Levine repeat several times; in place of what we call “intellectual property” (and they call “intellectual monopoly”), they offer a system which would protect our ordinary property rights, our rights to do what we choose with the goods that we purchase—goods that include books, computers, and DVDs.

That brings me to where I think their argument is weakest—their policy proposal. Basically the policy they propose is that we eliminate all intellectual property rights (except trademarks, which they rightly point out are really more about honesty than they are about property—trademark violation typically amounts to fraudulently claiming that your product was made by someone it wasn’t), and then do nothing else. The only property rights would be ordinary property rights, which would now apply in full to products such as books and DVDs. When you buy a DVD, you would have the right to do whatever you please with it, up to and including copying it a hundred times and selling the copies. You bought the DVD, you bought the blank discs, you bought the burner; so (goes their argument), why shouldn’t you be able to do what you want with them?

For patents, I think their argument is basically correct. I’ve tried to make lists of the greatest innovations in science and technology, and virtually none of them were in any way supported by patents. We needn’t go as far back as fire, writing, and the wheel; think about penicillin, the smallpox vaccine, electricity, digital computing, superconductors, lasers, the Internet. Airplanes might seem like they were invented under patent, but in fact the Wright brothers made a relatively small contribution and most of the really important development in aircraft was done by the military. Important medicines are almost always funded by the NIH, while private pharmaceutical companies give us Viagra at best and Vioxx at worst. Private companies have an incentive to skew their trials in various ways, ranging from simply questionable (p-value hacking) to the outright fraudulent (tampering with data). We know they do, because meta-analyses have found clear biases in the literature. The NIH has much less incentive to bias results in this way, and as a result more of the drugs released will be safe and effective. Boldrin and Levine recommend that all drug trials be funded by the NIH instead of drug companies, and I couldn’t agree more. What basis would drug companies have for complaining? We’re giving them something they previously had to pay for. But of course they will complain, because now their drugs will be subject to unbiased scrutiny. Moreover, it undercuts much of the argument for their patents; without the initial cost of large-scale drug trials, it’s harder to see why they need patents to make a profit.

Major innovations have been the product of individuals working out of curiosity, or random chance, or university laboratories, or government research projects; but they are rarely motivated by patents and they are almost never created by corporations. Corporations do invent incremental advancements, but many of these they keep as trade secrets, or go ahead and share, knowing that reverse-engineering takes time and investment. The great innovations of the computer industry (like high-level programming languages, personal computers, Ethernet, USB ports, and windowed operating systems) were all invented before software could be patented—and since then, what have we really gotten? In fact, it can be reasonably argued that patents reduce innovation; most innovations are built on previous innovations, and patents hinder that process of assimilation and synthesis. Patent pools can mitigate this effect, but only for oligopolistic insiders, which almost by definition are less innovative than disruptive outsiders.

And of course, patents on software and biological systems should be invalidated yesterday. If we must have patents, they should be restricted only to entities that cannot self-replicate, which means no animals, no plants, no DNA, nothing alive, no software, and for good measure, no grey goo nanobots. (It also makes sense at a basic level: How can you stop people from copying it, when it can copy itself?)

It’s when we get to copyright that I’m not so convinced. I certainly agree that the current copyright system suffers from deep problems. When your photos can be taken without your permission and turned into works of art but you can’t make a copy of a video game onto your hard drive to play it more conveniently, clearly something is wrong with our copyright system. I also agree that there is something fundamentally problematic about saying that one “owns” a text in such a way that they can decide what others do with it. When you read my work, copies of the information I convey to you are stored inside your brain; do I now own a piece of your brain? If you print out my blog post on a piece of paper and then photocopy it, how can I own something you made with your paper on your printer?

I release all my blog posts under a “by-sa” copyleft—“attribution-share-alike”—which requires that my work be properly attributed to me and that any copies or derivative works be shared under the same license. You are otherwise free to sell them, modify them, or use them however you like. I think that something like this may be the best system for protecting authors against plagiarism without unduly restricting the rights of readers to copy, modify, and otherwise use the content they buy. Applied to software, the Free Software Foundation basically agrees.

Boldrin and Levine do not, however; they think that even copyleft is too much, because it imposes restrictions upon buyers. They do agree that plagiarism should be illegal (because it is fraudulent), but they disagree with the “share-alike” part, the requirement that content be licensed according to what the author demands. As far as they are concerned, you bought the book, and you can do whatever you damn well please with it. In practice there probably isn’t a whole lot of difference between these two views, since in the absence of copyright there isn’t nearly as much need for copyleft. I don’t really need to require you to impose a free license if you can’t impose any license at all. (When I say “free” I mean libre, not gratis; free as in speech, not as in beer—Red Hat Linux is free software you pay for, and Zynga games are horrifically predatory proprietary software you get for free.)

One major difference is that under copyleft we could impose requirements to release information under certain circumstances—I have in mind particularly scientific research papers and associated data. To maximize the availability of knowledge and facilitate peer review, it could be a condition of publication for scientific research that the paper and data be made publicly available under a free license—already this is how research done directly for the government works (at least the stuff that isn’t classified). But under a strict system of physical property only, this sort of licensing would be a violation of the publishers’ property rights to do as they please with their servers and hard drives.

But there are legitimate concerns to be had even about simply moving to a copyleft system. I am a fiction author, and I submit books for publication. (This is not hypothetical; I actually do this.) Under the current system, I own the copyright to those books, and if the publisher decides to use them (thus far, only JukePop Serials, a small online publisher, has ever done so), they must secure my permission, presumably by means of a royalty contract. They can’t simply take whatever manuscripts they like and publish them. But if I submitted under a copyleft, they absolutely could. As long as my name were on the cover, they wouldn’t have to pay me a dime. (Charles Darwin certainly didn’t get a dime from Ray Comfort’s edition of The Origin of Species—yes, that is a thing.)

Now the question becomes, would they? There might be a competitive equilibrium where publishers are honest and do in fact pay their authors. If they fail to do so, authors are likely to stop submitting to that publisher once it acquires its shady reputation. If we can reach the equilibrium where authors get paid, that’s almost certainly better than today; the only people I can see it hurting are major publishing houses like Pearson PLC and superstar authors like J.K. Rowling; and even then it wouldn’t hurt them all that much. (Rowling might only be a millionaire instead of a billionaire, and Pearson PLC might see its net income drop from over $500 million to say $10 million.) The average author would most likely benefit, because publishers would have more incentive to invest in their midlist when they can’t crank out hundreds of millions of dollars from their superstars. Books would proliferate at bargain prices, and we could all double the size of our libraries. The net effect on the book market would be to reduce the winner-takes-all effect, which can only be a good thing.

But that isn’t the only possibility. The incentive to steal authors’ work when they submit it could instead create an equilibrium where hardly anyone publishes fiction anymore; and that world is surely worse than the one we live in today. We would want to think about how we can ensure that authors are adequately paid for their work in a copyleft system. Maybe some can make their money from speaking tours and book signings, but I’m not confident that enough can.

I do have one idea, similar to what Thomas Pogge came up with in his “public goods system”, though he primarily intended that to apply to medicine. The basic concept is that there would be a fund, either gathered from donations or supported by taxes, that supports artists. (Actually we already have the National Endowment for the Arts, but it isn’t nearly big enough.) This support would be doled out based on some metric of the artists’ popularity or artistic importance. The details of that are quite tricky, but I think one could arrange some sort of voting system where people use range voting to decide how much to give to each author, musician, painter, or filmmaker. Potentially even research funding could be set this way, with people voting to decide how important they think a particular project is—though I fear that people may be too ignorant to accurately gauge the importance of certain lines of research, as when Sarah Palin mocked studies of “fruit flies in Paris”, otherwise known as literally the foundation of modern genetics. Maybe we could vote instead on research goals like “eliminate cancer” and “achieve interstellar travel” and then the scientific community could decide how to allocate funds toward those goals? The details are definitely still fuzzy in my mind.
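
To make that concrete, here’s a very rough sketch of how range-voting fund allocation might work; the project names, the scores, and the split-the-fund-in-proportion-to-total-score rule are all hypothetical choices of mine, not a worked-out proposal:

```python
# Hypothetical range-voting allocation of an arts/research fund.
# Each voter scores each project from 0 to 10; the fund is then divided
# in proportion to total scores. All names and numbers are invented.
fund = 1_000_000  # dollars to allocate

ballots = [
    {"novelist": 8, "fusion research": 10, "film collective": 3},
    {"novelist": 5, "fusion research": 9,  "film collective": 6},
    {"novelist": 9, "fusion research": 7,  "film collective": 2},
]

totals = {}
for ballot in ballots:
    for project, score in ballot.items():
        totals[project] = totals.get(project, 0) + score

grand_total = sum(totals.values())
for project, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{project}: ${fund * score / grand_total:,.0f}")
# fusion research: $440,678 | novelist: $372,881 | film collective: $186,441
```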

The general principle, however, would be that if we want to support investment in innovation, we do that—instead of devising this bizarre system of monopoly that gives corporations growing power over our lives. Subsidize investment by subsidizing investment. (I feel similarly about capital taxes; we could incentivize investment in this vague roundabout way by doing nothing to redistribute wealth and hoping that all the arbitrage and speculation somehow translates into real investment… or, you know, we could give tax credits to companies that build factories.) As Boldrin and Levine point out, intellectual property laws were not actually created to protect innovation; they were an outgrowth of the general power of kings and nobles to enforce monopolies on various products during the era of mercantilism. They were weakened to be turned into our current system, not strengthened. They are, in fact, fundamentally mercantilist—and nothing could make that clearer than the TRIPS accord, which literally allows millions of people to die from treatable diseases in order to increase the profits of pharmaceutical companies. Far from being the modern invention that brought about the scientific revolution, intellectual property is an atavistic policy born of the age of colonial kings. I think it’s time we try something new.

(Oh, and one last thing: “Piracy”? Really? I can’t believe the linguistic coup it was for copyright holders to declare that people who copy music might as well be slavers and murderers—somehow people went along with this ridiculous terminology. No, there is no such thing as “music piracy” or “software piracy”; there is music copyright violation and software copyright violation.)

What do we do about unemployment?

JDN 2457188 EDT 11:21.

Macroeconomics, particularly monetary policy, is primarily concerned with controlling two variables.

The first is inflation: We don’t want prices to rise too fast, or markets will become unstable. This is something we have managed fairly well; other than food and energy prices, which are known to be more volatile, prices have grown at a rate between 1.5% and 2.5% per year for the last 10 years; even with food and energy included, inflation has stayed between -1.5% and +5.0%. After recovering from its peak near 15% in 1980, US inflation has stayed between -1.5% and +6.0% ever since. While the optimal rate of inflation is probably between 2.0% and 4.0%, anything above 0.0% and below 10.0% is probably fine, so the only significant failure of US inflation policy was the deflation in 2009.

The second is unemployment: We want enough jobs for everyone who wants to work, and preferably we also wouldn’t have underemployment (people who are only working part-time even though they’d prefer full-time) or discouraged workers (people who give up looking for jobs because they can’t find any, and aren’t counted as unemployed because they’re no longer looking for work). There’s also a tendency among economists to want “work incentives” that maximize the number of people who want to work, but I think these are wildly overrated. Work isn’t an end in itself; work is supposed to be creating products and providing services that make human lives better. The benefits of production have to be weighed against the costs of stress, exhaustion, and lost leisure time from working. Given that stress-related illnesses are some of the leading causes of death and disability in the United States, I don’t think that our problem is insufficient work incentives.

Unemployment is a problem that we have definitely not solved. Unemployment has bounced up and down between peaks and valleys, dropping as low as 4.0% and rising as high as 11.0% over the last 60 years. If 2009’s -1.5% deflation concerns you, then its 9.9% unemployment should concern you far more. Indeed, I’m not convinced that 5.0% is an acceptable “natural” rate of unemployment—that’s still millions of people who want work and can’t find it—but most economists would say that it is.

In fact, matters are worse than most people realize. Our unemployment rate has fallen back to a relatively normal 5.5%, as you can see in this graph (the blue line is unemployment, the red line is underemployment):

[Graph: unemployment rate (blue) and underemployment rate (red)]

However, our employment rate never recovered from the Second Depression. As you can see in this graph, it fell from 63% to 58%, and has now only risen back to 59%:

[Graph: employment-population ratio]

How can unemployment fall without employment rising? The key is understanding how unemployment is calculated: It only counts people in the labor force. If people leave the labor force entirely, by retiring, going back to school, or simply giving up on finding work, they will no longer be counted as unemployed. The unemployment rate only counts people who want work but don’t have it, so as far as I’m concerned that figure should always be nearly zero. (Not quite zero since it takes some time to find a good fit; but maybe 1% at most. Any more than that and there is something wrong with our economic system.)
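
A stylized example makes the accounting clear (the population figures here are invented, though chosen to roughly mimic the US numbers above):

```python
# How unemployment can fall while employment goes nowhere:
# people simply leave the labor force. All figures invented (in millions).
def rates(employed, unemployed, adult_population):
    labor_force = employed + unemployed
    return unemployed / labor_force, employed / adult_population

# Before: 59 million employed, 6.5 million unemployed, 100 million adults.
u0, e0 = rates(employed=59.0, unemployed=6.5, adult_population=100.0)
# After: same 59 million employed, but 2.5 million give up and stop looking.
u1, e1 = rates(employed=59.0, unemployed=4.0, adult_population=100.0)

print(f"before: unemployment {u0:.1%}, employment ratio {e0:.0%}")  # 9.9%, 59%
print(f"after:  unemployment {u1:.1%}, employment ratio {e1:.0%}")  # 6.3%, 59%
```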

The optimal employment rate is not as obvious; it certainly isn’t 100%, as some people are too young, too old, or too disabled to be spending their time working. As automation improves, the number of workers necessary to produce any given product decreases, and eventually we may decide as a society that we are making enough products and most of us should be spending more of our time on other things, like spending time with family, creating works of art, or simply having fun. Maybe only a handful of people, the most driven or the most brilliant, will actually decide to work—and they will do so because they want to, not because they have to. Indeed, the truly optimal employment rate might well be zero; think of The Culture, where there is no such concept as a “job”; there are things you do because you want to do them, or because they seem worthwhile, but there is none of this “working for pay” nonsense.

We are not yet at the level of automation where this would be possible, but we are much closer than I think most people realize. Think about all of the various administrative and bureaucratic tasks that most people do the majority of the time, all the reports, all the meetings; why do they do that? Is it actually because the work is necessary, that the many levels of bureaucracy actually increase efficiency through specialization? Or is it simply because we’ve become so accustomed to the idea that people have to be working all the time in order to justify their existence? Is David Graeber (I reviewed one of his books previously) right that most jobs are actually (and this is a technical term) “bullshit jobs”? Once again, the problem doesn’t seem to be too few work incentives, but if anything too many.

Indeed, there is a basic fact about unemployment that has been hidden from most people. I’d normally say that this is accidental, that it’s too technical or obscure for most people to understand, but no, I think it has been actively concealed, or, since I guess the information has been publicly available, at least discussion of it has been actively avoided. It’s really not at all difficult to understand, yet it will fundamentally change the way you think about our unemployment problem. Here goes:

Since at least 2000 and probably since 1980 there have been more people looking for jobs than there have been jobs available.

The entire narrative of “people are lazy and don’t want to work” or “we need more work incentives” is just totally, totally wrong; people are desperate to find work, and there hasn’t been enough work for them to find for longer than I’ve been alive.

You can see this on the following graph, which shows what’s called the “Beveridge curve”; the horizontal axis is the unemployment rate, while the vertical axis is the rate of job vacancies. The red diagonal line marks where the two are even—where there are as many people looking for jobs as there are jobs to fill. Notice how the graph is always below the line. There have always been more unemployed people than jobs for them to fill, and at the worst of the Second Depression the ratio was 5 to 1.

[Graph: Beveridge curve—job vacancy rate plotted against unemployment rate]

Personally I believe that we should be substantially above the line, and in a truly thriving economy there should be employers desperately trying to find employees and willing to pay them whatever it takes. You shouldn’t have to send out 20 job applications to get hired; 20 companies should have to send offers to you. For the economy does not exist to serve corporations; it exists to serve people.

I can see two basic ways to solve this problem: You can either create more jobs, or you can get people to stop looking for work. That may be sort of obvious, but I think people usually forget the second option.

We definitely do talk a lot about “job creation”, though usually in a totally nonsensical way—somehow “Job Creator” has come to be a euphemism for “rich person”. In fact the best way to create jobs is to put money into the hands of people who will spend it. The more people spend their money, the more it flows through the economy and the more wealth we end up with overall. High rates of spending—a high marginal propensity to consume—can multiply the value of a dollar many times over.
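
The arithmetic behind that multiplication is a geometric series: if each dollar received gets re-spent at marginal propensity to consume m, a dollar of new spending generates 1 + m + m² + … = 1/(1 − m) dollars of total spending. A quick sketch:

```python
# Simple Keynesian spending multiplier: 1 + m + m^2 + ... = 1 / (1 - m),
# where m is the marginal propensity to consume.
for mpc in (0.5, 0.8, 0.9):
    print(f"MPC {mpc:.0%}: $1 of new spending -> ${1 / (1 - mpc):.2f} total")
# 50% -> $2.00, 80% -> $5.00, 90% -> $10.00
```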

But there’s also something to be said for getting people to stop looking for work—the key is to do it in the right way. They shouldn’t stop looking because they give up; they should stop looking because they don’t need to work. People should have their basic needs met even if they aren’t working for an employer; human beings have rights and dignity beyond their productivity in the market. Employers should have to make you a better offer than “you’ll be homeless if you don’t do this”.

Both of these goals can be accomplished simultaneously by one simple policy: Basic income.

It’s really amazing how many problems can be solved by a basic income; it’s more or less the amazing wonder policy that solves all the world’s economic problems simultaneously. Poverty? Gone. Unemployment? Decimated. Inequality? Contained. (The pilot studies of basic income in India have been successful beyond all but the wildest dreams; they eliminate poverty, improve health, increase entrepreneurial activity, even reduce gender inequality.) The one major problem basic income doesn’t solve is government debt (indeed it likely increases it, at least in the short run), but as I’ve already talked about, that problem is not nearly as bad as most people fear.

And once again I think I should head off accusations that advocating a basic income makes me some sort of far-left Communist radical; Friedrich Hayek supported a basic income.

Basic income would help with unemployment in a third way as well; one of the major reasons unemployment is so harmful is that people who are unemployed can’t provide for themselves or their families. So a basic income would reduce the number of people looking for jobs, increase the number of jobs available, and also make being unemployed less painful, all in one fell swoop. I doubt it would solve the problem of unemployment entirely, but I think it would make an enormous difference.