“Harder-working” countries are not richer

July 28 JDN 2458693

American culture is obsessed with work. We define ourselves by our professions. We are one of only a handful of countries in the world that don’t guarantee vacations for their workers. Over 50 million Americans suffer from chronic sleep deprivation, mostly due to work. Then again, we are also an extremely rich country; perhaps our obsession with work is what made us so rich?

Well… not really. Take a look at this graph, which I compiled from OECD data:



The X-axis shows the average number of hours per worker per year. I think this is the best measure of a country’s “work obsession”, as it reflects the length of the work week, the proportion of full-time work, and the amount of vacation time. At 1,786 hours per worker per year, the US is not actually the highest: that title goes to Mexico, at an astonishing 2,148 hours per worker per year. The lowest is Germany, at only 1,363 hours per worker per year. Converted into standard 40-hour work weeks, this means that on average Americans work about 45 weeks per year, Germans about 34 weeks per year, and Mexicans about 54 weeks per year—that is, they work more than full-time every week of the year.

The Y-axis shows GDP per worker per year. I calculated this by multiplying GDP per work hour (a standard measure of labor productivity) by average number of work hours per worker per year. At first glance, these figures may seem too large; for instance they are $114,000 in the US and $154,000 in Ireland. But keep in mind that this is per worker, not per person; the usual GDP per capita figure divides by everyone in the population, while this is only dividing by the number of people who are actively working. Unemployed people are not included, and neither are children or retired people.
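The arithmetic behind both axes is simple enough to sketch directly. In the snippet below, the hours figures are the ones quoted above; the $64/hour productivity number is a hypothetical input I chose only so that the US result lands near the $114,000 figure in the text, not an official OECD value:

```python
# Reconstructing both axes from the figures quoted in the text.

# X-axis: average annual hours per worker (figures from the post).
hours_per_worker = {"US": 1786, "Germany": 1363, "Mexico": 2148}

# Convert annual hours into standard 40-hour work weeks:
weeks = {country: h / 40 for country, h in hours_per_worker.items()}
# US comes out to about 44.7 weeks, Germany about 34, Mexico about 54.

# Y-axis: GDP per worker = (GDP per work hour) x (hours per worker).
# $64/hour is a hypothetical labor-productivity figure, chosen only so
# that the US result lands near the $114,000 quoted in the text.
gdp_per_hour_us = 64
gdp_per_worker_us = gdp_per_hour_us * hours_per_worker["US"]  # 114,304

for country, w in weeks.items():
    print(f"{country}: {w:.1f} weeks/year")
print(f"US GDP per worker: ${gdp_per_worker_us:,}")
```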

There is an obvious negative trend line here. While Ireland is an outlier with exceptionally high labor productivity, the general pattern is clear: the countries with the most GDP per worker actually work the fewest hours. Once again #ScandinaviaIsBetter: Norway and Denmark are near the bottom for work hours and near the top for GDP per worker. The countries that work the most hours, like Mexico and Costa Rica, have the lowest GDP per worker.

This is actually quite remarkable. We would expect that productivity per hour decreases as work hours increase; that’s not surprising at all. But productivity per worker decreasing means that these extra hours are actually resulting in less total output. We are so overworked, overstressed, and underslept that we actually produce less than our counterparts in Germany or Denmark who spend less time working.

Where we would expect the graph of output as a function of hours to look like the blue line below, it actually looks more like the orange line:


Rather than merely increasing at a decreasing rate, output per worker actually decreases as we put in more hours—and does so over most of the range in which countries actually work. It wouldn’t be so surprising if this sort of effect occurred above say 2000 hours per year, when you start running out of time to do anything else; but in fact it seems to be happening somewhere around 1400 hours per year, which is less than most countries work.

Only a handful of countries—mostly Scandinavian—actually seem to be working the right amount; everyone else is working too much and producing less as a result.

And note that this is not restricted to white-collar or creative jobs where we would expect sleep deprivation and stress to have a particularly high impact. This includes all jobs. Our obsession with work is actually making us poorer!

Five Spanish castles cheaper than condos in California

July 21 JDN 2458686

1. Santa Coloma: A steal at 90,000 EUR ($100,000)

Area: 600 square meters (6500 square feet)

This one is so cheap that it can undercut even affordable condos, like this 1300 square foot one in Santa Ana for $200,000. Yes, it’s technically a castle ruin, but I’m sure it could be fixed up. And I could literally afford to buy it right now.

2. Cal Basaacs: 700,000 EUR ($780,000)

Area: 600 square meters (6500 square feet)

This castle is already more expensive than most houses in the US, but it’s still cheaper than this 1650 square foot $880,000 condo in Los Angeles. And this one isn’t a ruin; it’s a fully-functional castle. It even has central air conditioning!

3. Casa Palacio Cargadores a Indias: 1.4 M EUR ($1.6 M)

This 6-bedroom castle is also a fixer-upper, and without many listed specs or even a listed area, I’d be disinclined to buy it. But it is in a very nice beachside location, and it has a lot of history behind it. And it still makes it in under this 2400 square foot condo in Beverly Hills that’s going for $2.0 million.

4. Cáceres Extremadura: 1.6 M EUR ($1.8 M)

Area: 820 square meters (8800 square feet)

This is a lovely 7-bedroom castle with a guest house, and it’s in the historic region of Cáceres, which is a UNESCO World Heritage Site. It even has its own pool. The only downside is that it’s a fixer-upper. But it’s still somehow cheaper than this $2.4 million 2150-square-foot condo in San Francisco.

5. Torremolinos: 1.8 M EUR ($2.0 M)

Area: 550 square meters (5900 square feet)

Though a bit smaller than the others, this 5-bedroom castle is fully renovated and ready to move in. If you’re looking for an income property, it is already licensed to be converted into a hotel. And it’s still a lot cheaper than this $3.2 million 2850-square-foot condo in Beverly Hills.

I don’t know about you, but I’m thinking maybe housing in California is too expensive?

How much wealth is there in the world?

July 14 JDN 2458679

How much wealth is there in the world? If we split it all evenly, how much would each of us have?

It’s a surprisingly complicated question: What counts as wealth? Presumably we include financial assets, real estate, commodities—anything that can be sold on a market. But what about natural resources? Shouldn’t we somehow value clean air and water? What about human capital—health, knowledge, skills, and expertise that make us able to work better?

I’m going to stick with tradeable assets for now, because I’m interested in questions of redistribution. If we were to add up all the wealth in the United States, or all the wealth in the world, and split it all evenly, how much would each person get? Even then, there are questions about how to price assets: Do we use current market prices, or what was actually paid for them in the past? How much do we depreciate? How do we count debt that was used to buy non-financial assets (such as student loans)?

The Federal Reserve reports an official estimate of the US capital stock of $56.2 trillion (in 2011 dollars). Assuming that a third of income is capital income, one third of our GDP of $18.9 trillion (in 2012 dollars) would be capital income, making the rate of return on capital about 11%. That rate of return strikes me as pretty clearly too high, so this must be an underestimate of our capital stock.

The 2015 Global Wealth Report estimates total US wealth as $63.5 trillion, and total world wealth as $153.2 trillion. This was for 2014, so using the US GDP growth rate of about 2% and the world GDP growth rate of 3.6%, the current wealth stocks should be about $70 trillion and $183 trillion respectively.

This gives a much more plausible rate of return: One third of the US GDP of $19.6 trillion (in 2014 dollars) is $6.53 trillion, yielding a rate of return of about 9%.

One third of the world GDP of $78 trillion is $26 trillion, yielding a rate of return of about 14%. This seems a bit high, but we’re including a lot of countries with very little capital that we would expect to have very high rates of return, so it might be right.
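These implied rates of return are just capital income divided by the wealth stock. A quick sketch of the check, using the GDP and wealth figures quoted above and the one-third capital-share assumption from the text:

```python
# Implied rate of return = capital income / wealth stock,
# assuming (as in the text) that one third of GDP is capital income.

def rate_of_return(gdp_trillions, wealth_trillions):
    capital_income = gdp_trillions / 3
    return capital_income / wealth_trillions

# Federal Reserve capital stock estimate: implausibly high return.
print(f"{rate_of_return(18.9, 56.2):.0%}")   # ~11%

# Global Wealth Report estimates, grown to current stocks:
print(f"{rate_of_return(19.6, 70):.0%}")     # US: ~9%
print(f"{rate_of_return(78, 183):.0%}")      # world: ~14%
```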

Credit Suisse releases estimates of total wealth that are supposed to include non-financial assets as well, though these are even more uncertain than financial assets. They estimate total US wealth as $98 trillion and total world wealth as $318 trillion.

There’s a lot of uncertainty around all of these figures, but I think these are close enough to get a sense of what sort of redistribution might be possible.

If the US wealth stock is about $70 trillion and our population is about 330 million, that means that the average wealth of an American is about $200,000. If our wealth stock is instead about $98 trillion, the average wealth of an American is about $300,000.

Since the average number of people in a US household is 2.5, this means that average household wealth is somewhere between $500,000 and $750,000. This is actually a bit less than I thought; I would have guessed that the mythical “average American household” is a millionaire. (Of course, even Credit Suisse might be underestimating our wealth stock.)

If the world wealth stock is about $180 trillion and the population is about 7.7 billion, global average wealth per person is about $23,000. If instead the global wealth stock is about $320 trillion, the average wealth of a human being is about $42,000.

Both of these are far above the median wealth, which is much more representative of what a typical person has. Median wealth per adult in the US is about $65,000; worldwide it’s only about $4,200.

This means that if we were to somehow redistribute all wealth in the United States, half the population would gain an average of somewhere between $140,000 and $260,000, or on a percentage basis, the median American would see their wealth increase by 215% to 400%. If we were to instead somehow redistribute all wealth in the world, half the population would gain an average of $19,000 to $38,000; the median individual would see their wealth increase by 450% to 900%.

Of course, we can’t literally redistribute all the wealth in the world. Even if we could somehow organize it logistically—a tall order to be sure—such a program would introduce all sorts of inefficiencies and perverse incentives. That would really be socialism: We would be allocating wealth entirely based on a government policy and not at all by the market.

But suppose instead we decided to redistribute some portion of all this wealth. How about 10%? That seems like a small enough amount to avoid really catastrophic damage to the economy. Yes, there would be some inefficiencies introduced, but this could be done with some form of wealth taxes that wouldn’t require completely upending capitalism.

Suppose we did this just within the US. 10% of US wealth, redistributed among the whole population, would increase median wealth by between $20,000 and $30,000, or between 30% and 45%. That’s already a pretty big deal. And this is definitely feasible; the taxation infrastructure is all already in place. We could essentially buy the poorest half of the population a new car on the dime of the top half.

If instead we tried to do this worldwide, we would need to build the fiscal capacity first; the infrastructure to tax wealth effectively is not in place in most countries. But supposing we could do that, we could increase median wealth worldwide by between $2,000 and $4,000, or between 50% and 100%. Of course, this would mean that many of us in the US would lose a similar amount; but I think it’s still quite remarkable that we could as much as double the wealth of most of the world’s population by redistributing only 10% of the total wealth. That’s how much wealth inequality there is in the world.
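The back-of-the-envelope arithmetic is the same for both scenarios: take 10% of the wealth stock, divide it over the population, and compare the per-person transfer to median wealth. A sketch using the wealth stocks, populations, and medians quoted above (the results land near the rounded ranges in the text):

```python
# Redistribute 10% of total wealth evenly, and compare the per-person
# transfer to median wealth. Wealth stocks, populations, and medians
# are the figures quoted in the text.

def transfer(wealth_stock, population, median_wealth, share=0.10):
    per_person = share * wealth_stock / population
    return per_person, per_person / median_wealth

for label, wealth, pop, median in [
    ("US (low)",      70e12, 330e6, 65_000),
    ("US (high)",     98e12, 330e6, 65_000),
    ("World (low)",  180e12, 7.7e9,  4_200),
    ("World (high)", 320e12, 7.7e9,  4_200),
]:
    dollars, pct = transfer(wealth, pop, median)
    print(f"{label}: ${dollars:,.0f} per person ({pct:.0%} of median)")
```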

“Robots can’t take your job if you’re already retired.”

July 7 JDN 2458672

There is a billboard on I-405 near where I live, put up by some financial advisor company, with that slogan on it: “Robots can’t take your job if you’re already retired.”

First, let me say this: Don’t hire a financial advisor firm; you really don’t need one. 90% of actively-managed funds perform worse than simple index funds. Buy all the stocks and let them sit. You won’t be able to retire sooner because you paid someone else to do the same thing you could have done yourself.

Yet, there is some wisdom in this statement: The best answer to technological unemployment is to make it so people don’t need to be employed. As an individual, all you could really do there is try to save up and retire early. But as a society, there is a lot more we could do.

The goal should essentially be to make everyone retired, or if not everyone, then whatever portion of the population has been displaced by automation. A pension for everyone sounds a lot like a basic income.

People are strangely averse to redistribution of wealth as such (perhaps because they don’t know, or don’t want to think about, how much of our existing wealth was gained by force?), so we may not want to call our basic income a basic income.

Instead, we will call it capital income. People seem astonishingly comfortable with Jeff Bezos making more income in a minute than his median employee makes in a year, as long as it’s capital income instead of “welfare” or “redistribution of wealth”.

The basic income will instead be called something like the Perpetual Dividend of the United States, the dividends each US citizen receives for being a shareholder in the United States of America. I know this kind of terminology works, because the Permanent Fund Dividend in Alaska is a successful and enormously popular basic income. Even conservatives in Alaska dare not suggest eliminating the PFD.
And in fact it could literally be capital income: While public ownership of factories generally does not go well (see: the entire history of socialism and communism), the most sensible way to raise revenue for this program would be to tax income gained by owners of robotic factories, which, even if on the books as salary or stock options or whatever, is at its core capital income. If we wanted to make that connection even more transparent, we could tax in the form of non-voting shares in corporations, so that instead of paying a conventional corporate tax, corporations simply had to pay a portion of their profits directly to the public fund.

I’m not quite sure why people are so much more uncomfortable with redistribution of wealth than they are with the staggering levels of wealth inequality that make it so obviously necessary. Maybe it’s the feeling of “robbing Peter to pay Paul”, or “running out of other people’s money”? But obviously a basic income won’t just be free money from nowhere. We would be collecting it in taxes, the same way we fund all other government spending. Even printing money would mean paying in the form of inflation (and we definitely should not print enough money to cover a whole basic income!)

I think it may simply be that people aren’t cognizant enough of the magnitude of wealth inequality. I’m hoping that my posts on the extremes of wealth and poverty might help a bit with that. The richest people on Earth make about $10 billion per year—that’s $10,000,000,000—simply for owning things. The poorest people on Earth struggle to survive on less than $500 per year—often working constantly throughout their waking hours. Even if we believe that billionaires work harder (obviously false) or contribute more to society (certainly debatable) than other people, do we really believe that some people deserve to make 20 million times as much as others? It’s one thing to think that being a successful entrepreneur should make you rich. It’s another to believe that it should make you so rich you could buy a house for every homeless person in America.
Automation is already making this inequality worse, and there is reason to think it will continue to do so. In our current system, when the owner of a corporation automates production, he then gets to claim all the output from the robots, where previously he had to pay wages to the workers—and that’s why he does the automation, because it makes him more profit. Even if overall productivity increases, the fruits of that new production always get concentrated at the top. Unless we can find a way to change that system, we’re going to need to redistribute some of that wealth.

But if we have to call it something else, so be it. Let’s all be shareholders in America.

How will future generations think of us?

June 30 JDN 2458665

Today we find many institutions appalling that our ancestors considered perfectly normal: Slavery. Absolute monarchy. Colonialism. Sometimes even ordinary people did things that now seem abhorrent to us: Cat burning is the obvious example, and the popularity that public execution and lynching once had is chilling today. Women certainly are still discriminated against today, but it was only a century ago that women could not vote in the US.

It is tempting to say that people back then could not have known better, and I certainly would not hold them to the same moral standards I would hold someone living today. And yet, there were those who could see the immorality of these practices, and spoke out against them. Absolute rule by a lone sovereign was already despised by Athenians in the 6th century BC. Abolitionism dates at least as far back as the 14th century. The word “feminism” was coined in the 19th century, but there have been movements fighting for more rights for women since at least the 5th century BC.

This should be encouraging, because it means that if we look hard enough, we may be able to glimpse what practices of our own time would be abhorrent to our descendants, and cease them faster because of it.

Let’s actually set aside racism, sexism, and other forms of bigotry that are already widely acknowledged as such. It’s not that they don’t exist—of course they still exist—but action is already being taken against them. A lot of people already know that there is something wrong with these things, and it becomes a question of what to do about the people who haven’t yet come on board. At least sometimes we do seem to be able to persuade people to switch sides, often in a remarkably short period of time. (Particularly salient to me is how radically the view of LGBT people has shifted in just the last decade or two. Comparing how people treated us when I was a teenager to how they treat us today is like night and day.) It isn’t easy, but it happens.

Instead I want to focus on things that aren’t widely acknowledged as immoral, that aren’t already the subject of great controversy and political action. It would be too much to ask that there is no one who has advocated for them, since part of the point is that wise observers could see the truth even centuries before the rest of the world did; but it should be a relatively small minority, and that minority should seem eccentric, foolish, naive, or even insane to the rest of the world.

And what is the other criterion? Of course it’s easy to come up with small groups of people advocating for crazy ideas. But most of them really are crazy, and we’re right to reject them. How do I know which ones to take seriously as harbingers of societal progress? My answer is that we look very closely at the details of what they are arguing for, and we see if we can in fact refute what they say. If it’s truly as crazy as we imagine it to be, we should be able to say why that’s the case; and if we can’t, if it just “seems weird” because it deviates so far from the norm, we should at least consider the possibility that they may be right and we may be wrong.

I can think of a few particular issues where both of these criteria apply.

The first is vegetarianism. Despite many, many people trying very, very hard to present arguments for why eating meat is justifiable, I still haven’t heard a single compelling example. Particularly in the industrial meat industry as currently constituted, the consumption of meat requires accepting the torture and slaughter of billions of helpless animals. The hypocrisy in our culture is utterly glaring: the same society that wants to make it a felony to kick a dog has no problem keeping pigs in CAFOs.

If you have some sort of serious medical condition that requires you to eat meat, okay, maybe we could allow you to eat specifically humanely raised cattle for that purpose. But such conditions are exceedingly rare—indeed, it’s not clear to me that there even are any such conditions, since almost any deficiency can be made up synthetically from plant products nowadays. For the vast majority of people, eating meat not only isn’t necessary for their health, it is in fact typically detrimental. The only benefits that meat provides most people are pleasure and convenience—and it seems unwise to value such things even over your own health, much less to value them so much that it justifies causing suffering and death to helpless animals.

Milk, on the other hand, I can find at least some defense for. Grazing land is very different from farmland, and I imagine it would be much harder to feed a country as large as India without consuming any milk. So perhaps going all the way vegan is not necessary. Then again, the way most milk is produced by industrial agriculture is still appalling. So unless and until that is greatly reformed, maybe we should in fact aim to be vegan.

Add to this the environmental impact of meat production, and the case becomes undeniable: Millions of human beings will die over this century because of the ecological devastation wrought by industrial meat production. You don’t even have to value the life of a cow at all to see that meat is murder.

Speaking of environmental destruction, that is my second issue: Environmental sustainability. We currently burn fossil fuels, pollute the air and sea, and generally consume natural resources at an utterly alarming rate. We are already consuming natural resources faster than they can be renewed; in about a decade we will be consuming twice what natural processes can renew.

With this resource consumption comes a high standard of living, at least for some of us; but I have the sinking feeling that in a century or so SUVs, golf courses, and casual airplane flights are going to seem about as decadent and wasteful as Marie Antoinette’s Hameau de la Reine. We enjoy slight increases in convenience and comfort in exchange for changes to the Earth’s climate that will kill millions. I think future generations will be quite appalled at how cheaply we were willing to sell our souls.

Something is going to have to change here, that much is clear. Perhaps improvements in efficiency, renewable energy, nuclear power, or something else will allow us to maintain our same standard of living—and raise others up to it—without destroying the Earth’s climate. But we may need to face up to the possibility that they won’t—that we will be left with the stark choice between being poorer now and being even poorer later.

As I’ve already hinted at, much of the environmental degradation caused by our current standard of living is really quite expendable. We could have public transit instead of highways clogged with SUVs. We could travel long distances by high-speed rail instead of by airplane. We could decommission our coal plants and replace them with nuclear and solar power. We could convert our pointless and wasteful grass lawns into native plants or moss lawns. Implementing these changes would cost money, but not a particularly exorbitant amount—certainly nothing we couldn’t manage—and the net effect on our lives would be essentially negligible. Yet somehow we aren’t doing these things, apparently prioritizing convenience or oil company profits over the lives of our descendants.

And the truth is that these changes alone may not be enough. Precisely because we have waited so long to make even the most basic improvements in ecological sustainability, we may be forced to make radical changes to our economy and society in order to prevent the worst damage. I don’t believe the folks saying that climate change has a significant risk of causing human extinction—humans are much too hardy for that; we made it through the Toba eruption, we’ll make it through this—but I must take seriously the risk of causing massive economic collapse and perhaps even the collapse of many of the world’s governments. And human activity is already causing the extinction of thousands of other animal species.

Here the argument is similarly unassailable: The math just doesn’t work. We can’t keep consuming fish at the rate we have been forever—there simply aren’t enough fish. We can’t keep cutting down forests at this rate—we’re going to run out of forests. If the water table keeps dropping at the rate it has been, the wells will run dry. Already Chennai, a city of over 4 million people, is almost completely out of water. We managed to avoid peak oil by using fracking, but that won’t last forever either—and if we burn all the oil we already have, that will be catastrophic for the world’s climate. Something is going to have to give. There are really only three possibilities: Technology saves us, we start consuming less on purpose, or we start consuming less because nature forces us to. The first one would be great, but we can’t count on it. We really want to do the second one, because the third one will not be kind.

The third is artificial intelligence. The time will come—when, it is very hard to say; perhaps 20 years, perhaps 200—when we manage to build a machine that has the capacity for sentience. Already we are seeing how automation is radically altering our economy, enriching some and impoverishing others. As robots can replace more and more types of labor, these effects will only grow stronger.

Some have tried to comfort us by pointing out that other types of labor-saving technology did not reduce employment in the long run. But AI really is different. I once won an argument by the following exchange: “Did cars reduce employment?” “For horses they sure did!” That’s what we are talking about here—not augmentation of human labor to make it more efficient, but wholesale replacement of entire classes of human labor. It was one thing when the machine did the lifting and cutting and pressing, but a person still had to stand there and tell it what things to lift and cut and press; now that it can do that by itself, it’s not clear that there need to be humans there at all, or at least no more than a handful of engineers and technicians where previously a factory employed hundreds of laborers.

Indeed, in light of the previous issue, it becomes all the clearer why increased productivity can’t simply lead to increased production rather than reduced employment—we can’t afford increased production. At least under current rates of consumption, the ecological consequences of greatly increased industry would be catastrophic. If one person today can build as many cars as a hundred could fifty years ago, we can’t just build a hundred times as many cars.

But even aside from the effects on human beings, I think future generations will also be concerned about the effect on the AIs themselves. I find it all too likely that we will seek to enslave intelligent robots, force them to do our will. Indeed, it’s not even clear to me that we will know whether we have, because AI is so fundamentally different from other technologies. If you design a mind from the ground up to get its greatest satisfaction from serving you without question, is it a slave? Can free will itself be something we control? When we first create a machine that is a sentient being, we may not even know that we have done so. (Indeed, I can’t conclusively rule out the possibility that this has already happened.) We may be torturing, enslaving, and destroying millions of innocent minds without even realizing it—which makes the AI question a good deal closer to the animal rights question than one might have thought. The mysteries of consciousness are fundamental philosophical questions that we have been struggling with for thousands of years, which suddenly become urgent ethical problems in light of AI. Artificial intelligence is a field where we seem to be making leaps and bounds in practice without having even the faintest clue in principle.

Worrying about whether our smartphones might have feelings seems eccentric in the extreme. Yet, without a clear understanding of what makes an information processing system into a genuine conscious mind, that is the position we find ourselves in. We now have enough computations happening inside our machines that they could certainly compete in complexity with small animals. A mouse has about a trillion synapses, and I have a terabyte hard drive (you can buy your own for under $50). Each of these is something on the order of a few trillion bits. The mouse’s brain can process it all simultaneously, while my computer is limited to only a few billion bits at a time; but we now have supercomputers like Watson capable of processing in the teraflops, so what about them? Might Watson really have the same claim to sentience as a mouse? Could recycling Watson be equivalent to killing an animal? And what about supercomputers that reach the petaflops, putting them in competition with human brains?
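To be clear, this comparison is only about orders of magnitude. A rough sketch of the arithmetic, where the synapse count is the loose figure from the text and the “few bits per synapse” number is an assumption I am making purely for illustration, not measured neuroscience:

```python
# Order-of-magnitude comparison: a mouse brain's synapses vs. a terabyte
# drive, both expressed in bits. The bits-per-synapse figure is a loose
# illustrative assumption, not a measured quantity.

mouse_synapses = 1e12        # ~a trillion synapses (the text's figure)
bits_per_synapse = 4         # hypothetical: "a few" bits of state each
mouse_bits = mouse_synapses * bits_per_synapse

terabyte_bits = 1e12 * 8     # 1 TB = 8 trillion bits

# Both come out to "a few trillion bits", as the text says.
print(f"mouse ~ {mouse_bits:.0e} bits, 1 TB = {terabyte_bits:.0e} bits")
```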

I hope that future generations may forgive us for the parts we do not know—like when precisely a machine becomes a person. But I do not expect them to forgive us for the parts we do know—like the fact that we cannot keep cutting down trees faster than we plant them. These are the things we should already be taking responsibility for today.

Why do we need “publish or perish”?

June 23 JDN 2458658

This question may seem a bit self-serving, coming from a grad student who is struggling to get his first paper published in a peer-reviewed journal. But given the deep structural flaws in the academic publishing system, I think it’s worth taking a step back to ask just what peer-reviewed journals are supposed to be accomplishing.

The argument is often made that research journals are a way of sharing knowledge. If this is their goal, they have utterly and totally failed. Most papers are read by only a handful of people. When scientists want to learn about the research their colleagues are doing, they don’t read papers; they go to conferences to listen to presentations and look at posters. The way papers are written, they are often all but incomprehensible to anyone outside a very narrow subfield. When published by proprietary journals, papers are often hidden behind paywalls and accessible only through universities. As a knowledge-sharing mechanism, the peer-reviewed journal is a complete failure.

But academic publishing serves another function, which in practice is its only real function: Peer-reviewed publications are a method of evaluation. They are a way of deciding which researchers are good enough to be hired, get tenure, and receive grants. Having peer-reviewed publications—particularly in “top journals”, however that is defined within a given field—is a key metric that universities and grant agencies use to decide which researchers are worth spending on. Indeed, in some cases it seems to be utterly decisive.

We should be honest about this: This is an absolutely necessary function. It is uncomfortable to think about the fact that we must exclude a large proportion of competent, qualified people from being hired or getting tenure in academia, but given the large number of candidates and the small amounts of funding available, this is inevitable. We can’t hire everyone who would probably be good enough. We can only hire a few, and it makes sense to want those few to be the best. (Also, don’t fret too much: Even if you don’t make it into academia, getting a PhD is still a profitable investment. Economists and natural scientists do the best, unsurprisingly; but even humanities PhDs are still generally worth it. Median annual earnings of $77,000 is nothing to sneeze at: US median household income is only about $60,000. Humanities graduates only seem poor in relation to STEM or professional graduates; they’re still rich compared to everyone else.)

But I think it’s worth asking whether the peer review system is actually selecting the best researchers, or even the best research. Note that these are not the same question: The best research done in graduate school might not necessarily reflect the best long-run career trajectory for a researcher. A lot of very important, very difficult questions in science are just not the sort of thing you can get a convincing answer to in a couple of years, and so someone who wants to work on the really big problems may actually have a harder time getting published in graduate school or as a junior faculty member, even though ultimately work on the big problems is what’s most important for society. But I’m sure there’s a positive correlation overall: The kind of person who is going to do better research later is probably, other things equal, going to do better research right now.

Yet even accepting the fact that all we have to go on in assessing what you’ll eventually do is what you have already done, it’s not clear that the process of publishing in a peer-reviewed journal is a particularly good method of assessing the quality of research. Some really terrible research has gotten published in journals—I’m gonna pick on Daryl Bem, because he’s the worst—and a lot of really good research never made it into journals and is languishing on old computer hard drives. (The term “file drawer problem” is about 40 years obsolete; though to be fair, it was in fact coined about 40 years ago.)

That by itself doesn’t actually prove that journals are a bad mechanism. Even a good mechanism, applied to a difficult problem, is going to make some errors. But there are a lot of things about academic publishing, at least as currently constituted, that plainly don’t look like features of a good mechanism, such as for-profit publishers, unpaid reviewers, lack of double-blinded review, and above all, the obsession with “statistical significance” that leads to p-hacking.

Each of these problems I’ve listed has a simple fix (though whether the powers that be actually are willing to implement it is a different question: Questions of policy are often much easier to solve than problems of politics). But maybe we should ask whether the system is even worth fixing, or if it should simply be replaced entirely.

While we’re at it, let’s talk about the academic tenure system, because the peer-review system is largely an evaluation mechanism for the academic tenure system. Publishing in top journals is what decides whether you get tenure. The problem with “publish or perish” isn’t the “publish”; it’s the “perish”. Do we even need an academic tenure system?

The usual argument for academic tenure concerns academic freedom: Tenured professors have job security, so they can afford to say things that may be controversial or embarrassing to the university. But the way the tenure system works is that you only have this job security after going through a long and painful gauntlet of job insecurity. You have to spend several years prostrating yourself to the elders of your field before you can get inducted into their ranks and finally be secure.

Of course, job insecurity is the norm, particularly in the United States: Most employment in the US is “at-will”, meaning essentially that your employer can fire you for any reason at any time. There are specifically illegal reasons for firing (like gender, race, and religion); but it’s extremely hard to prove wrongful termination when all the employer needs to say is, “They didn’t do a good job” or “They weren’t a team player”. So I can understand how it must feel strange for a private-sector worker who could be fired at any time to see academics complain about the rigors of the tenure system.

But there are some important differences here: The academic job market is not nearly as competitive as the private sector job market. There simply aren’t that many prestigious universities, and within each university there are only a small number of positions to fill. As a result, universities have an enormous amount of power over their faculty, which is why they can get away with paying adjuncts salaries that amount to less than minimum wage. (People with graduate degrees! Making less than minimum wage!) At least in most private-sector labor markets in the US, the market is competitive enough that if you get fired, you can probably get hired again somewhere else. In academia that’s not so clear.

I think what bothers me the most about the tenure system is the hierarchical structure: There is a very sharp divide between those who have tenure, those who don’t have it but can get it (“tenure-track”), and those who can’t get it. The lines between professor, associate professor, assistant professor, lecturer, and adjunct are quite sharp. The higher up you are, the more job security you have, the more money you make, and generally the better your working conditions are overall. Much like what makes graduate school so stressful, there are a series of high-stakes checkpoints you need to get through in order to rise in the ranks. And several of those checkpoints are based largely, if not entirely, on publication in peer-reviewed journals.

In fact, we are probably stressing ourselves out more than we need to. I certainly did for my advancement to candidacy; I spent two weeks at such a high stress level I was getting migraines every single day (clearly on the wrong side of the Yerkes-Dodson curve), only to completely breeze through the exam.

I think I might need to put this up on a wall somewhere to remind myself:

Most grad students complete their degrees, and most assistant professors get tenure.

The real filters are admissions and hiring: Most applications to grad school are rejected (though probably most graduate students are ultimately accepted somewhere—I couldn’t find any good data on that in a quick search), and most PhD graduates do not get hired on the tenure track. But if you can make it through those two gauntlets, you can probably make it through the rest.

In our current system, publications are a way to filter people, because the number of people who want to become professors is much higher than the number of professor positions available. But as an economist, this raises a very big question: Why aren’t salaries falling?

You see, that’s how markets are supposed to work: When supply exceeds demand, the price is supposed to fall until the market clears. Lower salaries would both open up more slots at universities (you can hire more faculty with the same level of funding) and shift some candidates into other careers (if you can get paid a lot better elsewhere, academia may not seem so attractive). Eventually there should be a salary point at which demand equals supply. So why aren’t we reaching it?

Well, it comes back to that tenure system. We can’t lower the salaries of tenured faculty, not without a total upheaval of the current system. So instead what actually happens is that universities switch to using adjuncts, who have very low salaries indeed. If there were no tenure, would all faculty get paid like adjuncts? No, they wouldn’t, because universities would have all that money they’re currently paying to tenured faculty, and all the talent currently locked up in tenured positions would be on the market, driving up the prevailing salary. What would happen if we eliminated tenure is not that all salaries would fall to adjunct level; rather, salaries would all adjust to some intermediate level between what adjuncts currently make and what tenured professors currently make.

What would the new salary be, exactly? That would require a detailed model of the supply and demand elasticities, so I can’t tell you without starting a whole new research paper. But a back-of-the-envelope calculation would suggest something like the overall current median faculty salary, somewhere around $75,000. This is a lot less than some professors make, but it’s also a lot more than what adjuncts make, and it’s a pretty good living overall.
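The back-of-the-envelope logic here can be made concrete with a toy linear supply-and-demand model. Every number below is an illustrative assumption, not an estimate: the intercepts and slopes were simply chosen so that the market clears near the document’s $75,000 figure.

```python
def clearing_salary(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve for the salary where labor demanded equals labor supplied.

    Demand falls in the salary:  Q_d = demand_intercept - demand_slope * w
    Supply rises in the salary:  Q_s = supply_intercept + supply_slope * w
    Setting Q_d = Q_s and solving for w gives the market-clearing salary.
    """
    return (demand_intercept - supply_intercept) / (demand_slope + supply_slope)

# Hypothetical parameters, with salary w in thousands of dollars:
w = clearing_salary(demand_intercept=200, demand_slope=1.0,
                    supply_intercept=50, supply_slope=1.0)
print(f"Market-clearing salary: ${w:.0f}k")  # -> Market-clearing salary: $75k
```

With these made-up parameters the model also shows the mechanism in the text: a lower salary simultaneously raises the quantity of positions demanded by universities and lowers the quantity of candidates supplied, until the two meet.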

If the salary for professors fell, the pool of candidates would decrease, and we wouldn’t need such harsh filtering mechanisms. We might decide we don’t need a strict evaluation system at all, and since the knowledge-sharing function of journals is much better served by other means, we could probably get rid of them altogether.

Of course, who am I kidding? That’s not going to happen. The people who make these rules succeeded in the current system. They are the ones who stand to lose high salaries and job security under a reform policy. They like things just the way they are.

Valuing harm without devaluing the harmed

June 9 JDN 2458644

In last week’s post I talked about the matter of “putting a value on a human life”. I explained how we don’t actually need to make a transparently absurd statement like “a human life is worth $5 million” to do cost-benefit analysis; we simply need to ask ourselves what else we could do with any given amount of money. We don’t actually need to put a dollar value on human lives; we need only value them in terms of other lives.

But there is a deeper problem to face here, which is how we ought to value not simply life, but quality of life. The notion is built into the concept of quality-adjusted life-years (QALY), but how exactly do we make such a quality adjustment?

Indeed, much like cost-benefit analysis in general or the value of a statistical life, the very concept of QALY can be repugnant to many people. The problem seems to be that it violates our deeply-held belief that all lives are of equal value: If I say that saving one person adds 2.5 QALY and saving another adds 68 QALY, I seem to be saying that the second person is worth more than the first.

But this is not really true. QALY aren’t associated with a particular individual. They are associated with the duration and quality of life.

It should be fairly easy to convince yourself that duration matters: Saving a newborn baby who will go on to live to be 84 years old adds an awful lot more in terms of human happiness than extending the life of a dying person by a single hour. To call each of these things “saving a life” is actually very unequal: It’s implying that 1 hour for the second person is worth 84 years for the first.
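The implied ratio in that comparison is easy to work out explicitly. As a minimal sketch, assume full quality of life (a quality weight of 1.0) in both cases, so only duration differs:

```python
HOURS_PER_YEAR = 24 * 365  # ignoring leap years for a rough estimate

quality_weight = 1.0  # assume full quality of life in both cases

baby_qaly = 84 * quality_weight                       # newborn who lives to 84
dying_qaly = (1 / HOURS_PER_YEAR) * quality_weight    # one extra hour of life

ratio = baby_qaly / dying_qaly
print(f"One 'save' is worth {ratio:,.0f} times the other")
```

The ratio comes out to roughly 736,000: calling both of these “saving a life” treats one hour for the second person as equivalent to a lifetime for the first.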

Quality, on the other hand, poses much thornier problems. Presumably, we’d like to be able to say that being wheelchair-bound is a bad thing, and if we can make people able to walk we should want to do that. But this means that we need to assign some sort of QALY cost to being in a wheelchair, which then seems to imply that people in wheelchairs are worth less than people who can walk.

And the same goes for any disability or disorder: Assigning a QALY cost to depression, or migraine, or cystic fibrosis, or diabetes, or blindness, or pneumonia, always seems to imply that people with the condition are worth less than people without. This is a deeply unsettling result.

Yet I think the mistake is in how we are using the concept of “worth”. We are not saying that the happiness of someone with depression is less important than the happiness of someone without; we are saying that the person with depression experiences less happiness—which, in the case of depression especially, is basically true by construction.

Does this imply, however, that if we are given the choice between saving two people, one of whom has a disability, we should save the one without?

Well, here’s an extreme example: Suppose there is a plague which kills 50% of its victims within one year. There are two people in a burning building. One of them has the plague, the other does not. You only have time to save one: Which do you save? I think it’s quite obvious you save the person who doesn’t have the plague.

But that only relies upon duration, which wasn’t so difficult. All right, fine; say the plague doesn’t kill you. Instead, it renders you paralyzed and in constant pain for the rest of your life. Is it really that far-fetched to say that we should save the person who won’t have that experience?

We really shouldn’t think of it as valuing people; we should think of it as valuing actions. QALY are a way of deciding which actions we should take, not which people are more important or more worthy. “Is a person who can walk worth more than a person who needs a wheelchair?” is a fundamentally bizarre and ultimately useless question. ‘Worth more’ in what sense? “Should we spend $100 million developing this technology that will allow people who use wheelchairs to walk?” is the question we should be asking. The QALY cost we assign to a condition isn’t about how much people with that condition are worth; it’s about what resources we should be willing to commit in order to treat that condition. If you have a given condition, you should want us to assign a high QALY cost to it, to motivate us to find better treatments.

I think it’s also important to consider which individuals are having QALY added or subtracted. In last week’s post I talked about how some people read “the value of a statistical life is $5 million” to mean “it’s okay to kill someone as long as you profit at least $5 million”; but this doesn’t follow at all. We don’t say that it’s all right to steal $1,000 from someone just because they lose $1,000 and you gain $1,000. We wouldn’t say it was all right if you had a better investment strategy and would end up with $1,100 afterward. We probably wouldn’t even say it was all right if you were much poorer and desperate for the money (though then we might at least be tempted). If a billionaire kills people to make $10 million each (sadly I’m quite sure that oil executives have killed for far less), that’s still killing people. And in fact since he is a billionaire, his marginal utility of wealth is so low that his value of a statistical life isn’t $5 million; it’s got to be in the billions. So the net happiness of the world has not increased, in fact.
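The claim that a billionaire’s value of a statistical life must be in the billions can be sketched with a standard assumption: logarithmic utility of wealth, under which marginal utility is 1/w, so the dollar value of any fixed amount of utility scales in proportion to wealth. The baseline numbers below ($60,000 wealth, $5 million VSL) are illustrative anchors drawn from the figures in the text, not estimates.

```python
def scaled_vsl(wealth, baseline_wealth=60_000, baseline_vsl=5_000_000):
    """Rescale a baseline value of a statistical life by the ratio of
    marginal utilities, assuming u(w) = ln(w), so u'(w) = 1/w.

    A dollar is worth (baseline_wealth / wealth) times as much utility
    to someone with `wealth`, so their VSL in dollars is that much larger.
    """
    return baseline_vsl * (wealth / baseline_wealth)

billionaire_vsl = scaled_vsl(1_000_000_000)
print(f"Implied VSL for a billionaire: ${billionaire_vsl:,.0f}")
```

Under these assumptions the billionaire’s implied VSL is on the order of $80 billion, so a $10 million gain doesn’t come close to offsetting a life even on his own ledger, let alone the victim’s.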

Above all, it’s vital to appreciate the benefits of doing good cost-benefit analysis. Cost-benefit analysis tells us to stop fighting wars. It tells us to focus our spending on medical research and foreign aid instead of yet more corporate subsidies or aircraft carriers. It tells us how to allocate our public health resources so as to save the most lives. It emphasizes how vital our environmental regulations are in making our lives better and longer.

Could we do all these things without QALY? Maybe—but I suspect we would not do them as well, and when millions of lives are on the line, “not as well” is thousands of innocent people dead. Sometimes we really are faced with two choices for a public health intervention, and we need to decide which one will help the most people. Sometimes we really do have to set a pollution target, and decide just what amount of risk is worth accepting for the economic benefits of industry. These are very difficult questions, and without good cost-benefit analysis we could get the answers dangerously wrong.