Where is the money going in academia?

Feb 19 JDN 2459995

A quandary for you:

My salary is £41,000.

Annual tuition for a full-time full-fee student in my department is £23,000.

I teach roughly the equivalent of one full-time course (about 1/2 of one and 1/4 of two others; this is typically counted as “teaching 3 courses”, but if I used that figure, it would underestimate the number of faculty needed).

Each student takes about 5 or 6 courses at a time.

Why do I have 200 students?

If you multiply this out, the 200 students I teach, divided by the 6 instructors they have at one time, times the £23,000 they are paying… I should be bringing in over £760,000 for the university. Why am I paid only 5% of that?

Granted, there are other costs a university must bear aside from paying instructors. There are facilities, and administration, and services. And most of my students are not full-fee paying; that £23,000 figure really only applies to international students.

Students from Scotland pay only £1,820, but there aren’t very many of them, and public funding is supposed to make up that difference. Even students from the rest of the UK pay £9,250. And surely the average tuition paid has got to be close to that? Yet if we multiply that out, £9,000 times 200 divided by 6, we’re still looking at £300,000. So I’m still getting only 14%.

Where is the rest going?

This isn’t specific to my university by any means. It seems to be a global phenomenon. The best data on this seems to be from the US.

According to salary.com, the median salary for an adjunct professor in the US is about $63,000. This actually sounds high, given what I’ve heard from other entry-level faculty. But okay, let’s take that as our figure. (My pay is below this average, though how much depends upon the strength of the pound against the dollar. Currently the pound is weak, so quite a bit.)

Yet average tuition for out-of-state students at public college is $23,000 per year.

This means that an adjunct professor in the US with 200 students takes in $760,000 but receives $63,000. Where does that other $700,000 go?
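
Here is that back-of-the-envelope arithmetic as a quick sketch; the even split of each student’s fees among their instructors is of course my simplifying assumption:

```python
# Tuition revenue attributable to one instructor, assuming each student's
# fees are split evenly among the ~6 instructors they have at one time.
def revenue_per_instructor(students, instructors_per_student, tuition):
    return students * tuition / instructors_per_student

uk_international = revenue_per_instructor(200, 6, 23_000)
print(uk_international, 41_000 / uk_international)   # ~766,667; salary is ~5% of it

uk_home = revenue_per_instructor(200, 6, 9_000)
print(uk_home, 41_000 / uk_home)                     # 300,000; salary is ~14% of it

us_adjunct = revenue_per_instructor(200, 6, 23_000)
print(us_adjunct, 63_000 / us_adjunct)               # ~766,667; salary is ~8% of it
```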

If you think that it’s just a matter of paying for buildings, service staff, and other costs of running a university, consider this: It wasn’t always this way.

Since 1970, inflation-adjusted salaries for US academic faculty at public universities have risen a paltry 3.1%. In other words, basically not at all.

This is considerably slower than the growth of real median household income, which has risen almost 40% in that same time.

Over the same interval, nominal tuition has risen by over 2000%; adjusted for inflation, this is a still-staggering increase of 250%.

In other words, over the last 50 years, college has gotten three times as expensive, but faculty are still paid basically the same. Where is all this extra money going?

Part of the explanation is that public funding for colleges has fallen over time, and higher tuition partly makes up the difference. But private school tuition has risen just as fast, and their faculty salaries haven’t kept up either.

In their annual budget report, the University of Edinburgh proudly declares that their income increased by 9% last year. Let me assure you, my salary did not. (In fact, inflation-adjusted, my salary went down.) And their EBITDA—earnings before interest, taxes, depreciation, and amortization—was £168 million. Of that, £92 million was lost to interest and depreciation, but they don’t pay taxes at all, so their real net income was about £76 million. In the report, they include price changes of their endowment and pension funds to try to make this number look smaller, ending up with only £37 million, but that’s basically fiction; these are just stock market price drops, and they will bounce back.

Using similar financial alchemy, they’ve been trying to cut our pensions lately, because they say the pensions “are too expensive” (because the stock market went down—never mind that it’ll bounce back in a year or two). Fortunately, the unions are fighting this pretty hard. I wish they’d also fight harder to make them put people like me on the tenure track.

Had that £76 million been distributed evenly between all 5,000 of us faculty, we’d each get an extra £15,200.
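
Spelled out as a quick sketch, using the figures from the report:

```python
ebitda = 168e6                     # £168 million
interest_and_depreciation = 92e6   # £92 million; they pay no taxes at all
net_income = ebitda - interest_and_depreciation
print(net_income)                  # -> £76 million

faculty = 5_000
print(net_income / faculty)        # -> £15,200 per faculty member
```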

Well, then, that solves part of the mystery in perhaps the most obvious, corrupt way possible: They’re literally just hoarding it.

And Edinburgh is far from the worst offender here. No, that would be Harvard, who are sitting on over $50 billion in assets. Since they have 21,000 students, that is over $2 million per student. With even a moderate return on its endowment, Harvard wouldn’t need to charge tuition at all.
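
To illustrate that last claim: the 5% return and the $55,000 sticker tuition below are my own rough assumptions, not figures from Harvard.

```python
endowment = 50e9        # over $50 billion in assets
students = 21_000
print(endowment / students)                 # ~$2.4 million per student

endowment_income = endowment * 0.05         # assumed 5% annual return: $2.5 billion
tuition_revenue = students * 55_000         # assumed sticker price: ~$1.2 billion
print(endowment_income > tuition_revenue)   # True: returns alone could cover tuition
```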

But even then, raising my salary to £56,200 wouldn’t explain why I need to teach 200 students. Even that is still only about 19% of the £300,000 those students are bringing in. But hey, at least then the primary service those students are here for might actually account for nearly one-fifth of what they’re paying!

Now let’s consider administrators. Median salary for a university administrator in the US is about $138,000—twice what adjunct professors make.


Since 1970, that same time interval when faculty salaries were rising a pitiful 3% and tuition was rising a staggering 250%, how much did chancellors’ salaries increase? Over 60%.

Of course, the number of administrators is not fixed. You might imagine that with technology allowing us to automate a lot of administrative tasks, the number of administrators could be reduced over time. If that’s what you thought happened, you would be very, very wrong. The number of university administrators in the US has more than doubled since the 1980s. This is far faster growth than the number of students—and quite frankly, why should the number of administrators even grow with the number of students? There is a clear economy of scale here, yet it doesn’t seem to matter.

Combine those two facts: 60% higher pay times twice as many administrators means that universities now spend at least 3 times as much on administration as they did 50 years ago. (Why, that’s just about the proportional increase in tuition! Coincidence? I think not.)

Edinburgh isn’t even so bad in this regard. They have 6,000 administrative staff versus 5,000 faculty. If that already sounds crazy—more admins than instructors?—consider that the University of Michigan has 7,000 faculty but 19,000 administrators.

Michigan is hardly exceptional in this regard: Illinois UC has 2,500 faculty but nearly 8,000 administrators, while Ohio State has 7,300 faculty and 27,000 administrators. UCLA is even worse, with only 4,000 faculty but 26,000 administrators—a ratio of 6 to 1. It’s not the UC system in general, though: My (other?) alma mater of UC Irvine somehow supports 5,600 faculty with only 6,400 administrators. Yes, that’s right; compared to UCLA, UCI has 40% more faculty but 76% fewer administrators. (As far as students? UCLA has 47,000 while UCI has 36,000.)

At last, I think we’ve solved the mystery! Where is all the money in academia going? Administrators.

They keep hiring more and more of them, and paying them higher and higher salaries. Meanwhile, they stop hiring tenure-track faculty and replace them with adjuncts that they can get away with paying less. And then, whatever they manage to save that way, they just squirrel away into the endowment.

A common right-wing talking point is that more institutions should be “run like a business”. Well, universities seem to have taken that to heart. Overpay your managers, underpay your actual workers, and pocket the savings.

Maybe we should forgive student debt after all.

May 8 JDN 2459708

President Biden has been promising some form of student debt relief since the start of his campaign, though so far all he has actually implemented is a series of no-interest deferments and some improvements to the existing forgiveness programs. (This is still significant—it has definitely helped a lot of people with cashflow during the pandemic.) Actual forgiveness for a large segment of the population remains elusive, and if it does happen, it’s unclear how extensive it will be in either intensity (amount forgiven) or scope (who is eligible).

I personally had been fine with this; while I have a substantial loan balance myself, I also have a PhD in economics, which—theoretically—should at some point entitle me to sufficient income to repay those loans.

Moreover, until recently I had been one of the few left-wing people I know to not be terribly enthusiastic about loan forgiveness. It struck me as a poor use of those government funds, because $1.75 trillion is an awful lot of money, and college graduates are a relatively privileged population. (And yes, it is valid to consider this a question of “spending”, because the US government is the least liquidity-constrained entity on Earth. In lieu of forgiving $1.75 trillion in debt, they could borrow $1.75 trillion and use it to pay for whatever they want, and their ultimate budget balance would be basically the same in each case.)

But I say all this in the past tense because Krugman’s recent column has caused me to reconsider. He gives two strong reasons why debt forgiveness may actually be a good idea.

The first is that Congress is useless. Thanks to gerrymandering and the 40% or so of our population who keep electing Republicans no matter how crazy they get, it’s all but impossible to pass useful legislation. The pandemic relief programs were the exception that proves the rule: Somehow those managed to get through, even though in any other context it’s clear that Congress would never have approved any kind of (non-military) program that spent that much money or helped that many poor people.

Student loans are the purview of the Department of Education, which is entirely under control of the Executive Branch, and therefore, ultimately, the President of the United States. So Biden could forgive student loans by executive order and there’s very little Congress could do to stop him. Even if that $1.75 trillion could be better spent, if it wasn’t going to be anyway, we may as well use it for this.

The second is that “college graduates” is too broad a category. Usually I’m on guard for this sort of thing, but in this case I faltered, and did not notice the fallacy of composition so many labor economists were making by lumping all college grads into the same economic category. Yes, some of us are doing well, but many are not. Within-group inequality matters.

A key insight here comes from carefully analyzing the college wage premium, which is the median income of college graduates, divided by the median income of high school graduates. This is an estimate of the overall value of a college education. It’s pretty large, as a matter of fact: It amounts to something like a doubling of your income, or about $1 million over one’s whole lifespan.
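
As a formula, with made-up round numbers purely for illustration:

```python
median_income_college = 80_000       # hypothetical
median_income_high_school = 40_000   # hypothetical

college_wage_premium = median_income_college / median_income_high_school
print(college_wage_premium)   # 2.0 -- "college doubles your income"
```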

From about 1980 to 2000, wage inequality grew about as fast as it does today, and the college wage premium grew even faster. So it was plausible—if not necessarily correct—to believe that the wage inequality reflected the higher income and higher productivity of college grads. But since 2000, wage inequality has continued to grow, while the college wage premium has been utterly stagnant. Thus, higher inequality can no longer (if it ever could) be explained by the effects of college education.

Now some college graduates are definitely making a lot more money—such as those who went into finance. But it turns out that most are not. As Krugman points out, the 95th percentile of male college grads has seen a 25% increase in real (inflation-adjusted) income in the last 20 years, while the median male college grad has actually seen a slight decrease. (I’m not sure why Krugman restricted to males, so I’m curious how it looks if you include women. But probably not radically different?)

I still don’t think student loan forgiveness would be the best use of that (enormous sum of) money. But if it’s what’s politically feasible, it definitely could help a lot of people. And it would be easy enough to make it more progressive, by phasing out forgiveness for graduates with higher incomes.

And hey, it would certainly help me, so maybe I shouldn’t argue too strongly against it?

Sheepskin effect doesn’t prove much

Sep 20 JDN 2459113

The sheepskin effect is the observation that the increase in income from graduating from college after four years, relative to attending college for only three years, is much higher than the increase in income from attending college for three years rather than two.

It has been suggested that this provides strong evidence that education is primarily signaling, and doesn’t provide any actual value. In this post I’m going to show why this view is mistaken. The sheepskin effect in fact tells us very little about the true value of college. (Noah Smith actually made a pretty decent argument that it provides evidence against signaling!)

To see this, consider two very simple models.

In both models, we’ll assume that markets are competitive but productivity is not directly observable, so employers sort you based on your education level and then pay a wage equal to the average productivity of people at your education level, compensated for the cost of getting that education.

Model 1:

In this model, people all start with the same productivity, and are randomly assigned by their life circumstances to go to either 0, 1, 2, 3, or 4 years of college. College itself has no long-term cost.

The first year of college you learn a lot, the next couple of years you don’t learn much because you’re trying to find your way, and then in the last year of college you learn a lot of specialized skills that directly increase your productivity.

So this is your productivity after x years of college:

Years of college | Productivity
0 | 10
1 | 17
2 | 22
3 | 25
4 | 31

We assumed that you’d get paid your productivity, so these are also your wages.

The increase in income each year goes from +7, to +5, to +3, then jumps up to +6. So if you compare the 4-year-minus-3-year gap (+6) with the 3-year-minus-2-year gap (+3), you get a sheepskin effect.
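
Here is that same computation as a tiny sketch:

```python
productivity = [10, 17, 22, 25, 31]   # after 0..4 years of college (Model 1)
gaps = [b - a for a, b in zip(productivity, productivity[1:])]
print(gaps)   # [7, 5, 3, 6]: the 4th-year gap (+6) exceeds the 3rd-year gap (+3)
```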

Model 2:

In this model, college is useless and provides no actual benefits. People vary in their intrinsic productivity, which is also directly correlated with the difficulty of making it through college.

In particular, there are five types of people:

Type | Productivity | Cost per year of college
0 | 10 | 8
1 | 11 | 6
2 | 14 | 4
3 | 19 | 3
4 | 31 | 0

The wages for different levels of college education are as follows:

Years of college | Wage
0 | 10
1 | 17
2 | 22
3 | 25
4 | 31

Notice that these are exactly the same wages as in Model 1. This is of course entirely intentional. In a moment I’ll show why this is a Nash equilibrium.

Consider the choice of how many years of college to attend. You know your type, so you know the cost of college to you. You want to maximize your net benefit, which is the wage you’ll get minus the total cost of going to college.

Let’s assume that if a given year of college isn’t worth it, you won’t try to continue past it and see if more would be.

For a type-0 person, they could get 10 by not going to college at all, or 17-(1)(8) = 9 by going for 1 year, so they stop.

For a type-1 person, they could get 10 by not going to college at all, or 17-(1)(6) = 11 by going for 1 year, or 22-(2)(6) = 10 by going for 2 years, so they stop.
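
In code, those first two walks look like this—a small sketch using the wages and costs above, where net_benefit is just my name for the wage-minus-total-cost calculation:

```python
wages = [10, 17, 22, 25, 31]   # wage after 0..4 years of college (same as Model 1)

def net_benefit(years, cost_per_year):
    # Wage at that education level, minus the total cost of getting there.
    return wages[years] - years * cost_per_year

# Type 0 (cost 8 per year): one year of college makes them worse off, so they stop.
print(net_benefit(0, 8), net_benefit(1, 8))                     # 10 9
# Type 1 (cost 6 per year): one year helps, but a second year doesn't.
print(net_benefit(0, 6), net_benefit(1, 6), net_benefit(2, 6))  # 10 11 10
```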

Filling out all the possibilities yields this table:

Years \ Type | 0 | 1 | 2 | 3 | 4
0 | 10 | 10 | 10 | 10 | 10
1 | 9 | 11 | 13 | 14 | 17
2 | – | 10 | 14 | 16 | 22
3 | – | – | 13 | 19 | 25
4 | – | – | – | 19 | 30

I’d actually like to point out that it was much harder to find numbers that allowed me to make the sheepskin effect work in the second model, where education was all signaling. In the model where education provides genuine benefit, all I need to do is posit that the last year of college is particularly valuable (perhaps because high-level specialized courses are more beneficial to productivity). I could pretty much vary that parameter however I wanted, and get whatever magnitude of sheepskin effect I chose.

For the signaling model, I had to carefully calibrate the parameters so that the costs and benefits lined up just right to make sure that each type chose exactly the amount of college I wanted them to choose while still getting the desired sheepskin effect. It took me about two hours of very frustrating fiddling just to get numbers that worked. And that’s with the assumption that someone who finds 2 years of college not worth it won’t consider trying for 4 years of college (which, given the numbers above, they actually might want to), as well as the assumption that when type-3 individuals are indifferent between staying and dropping out they drop out.

And yet the sheepskin effect is supposed to be evidence that the world works like the signaling model?

I’m sure a more sophisticated model could make the signaling explanation a little more robust. The biggest limitation of these models is that once you observe someone’s education level, you immediately know their true productivity, whether it came from college or not. Realistically we should be allowing for unobserved variation that can’t be sorted out by years of college.

Maybe it seems implausible that the last year of college is actually more beneficial to your productivity than the previous years. This is probably the intuition behind the idea that sheepskin effects are evidence of signaling rather than genuine learning.

So how about this model?

Model 3:

As in the second model, there are five types of people: types 0, 1, 2, 3, and 4. They all start with the same level of productivity, and they have the same cost of going to college; but they get different benefits from going to college.

The problem is, people don’t start out knowing what type they are. Nor can they observe their productivity directly. All they can do is observe their experience of going to college and then try to figure out what type they must be.

Type 0s don’t benefit from college at all, and they know they are type 0; so they don’t go to college.

Type 1s benefit a tiny amount from college (+1 productivity per year), but don’t realize they are type 1s until after one year of college.

Type 2s benefit a little from college (+2 productivity per year), but don’t realize they are type 2s until after two years of college.

Type 3s benefit a moderate amount from college (+3 productivity per year), but don’t realize they are type 3s until after three years of college.

Type 4s benefit a great deal from college (+5 productivity per year), but don’t realize they are type 4s until after three years of college.

What then will happen? Type 0s will not go to college. Type 1s will go one year and then drop out. Type 2s will go two years and then drop out. Type 3s will go three years and then drop out. And type 4s will actually graduate.

That results in the following before-and-after productivity:

Type | Productivity before college | Years of college | Productivity after college
0 | 10 | 0 | 10
1 | 10 | 1 | 11
2 | 10 | 2 | 14
3 | 10 | 3 | 19
4 | 10 | 4 | 30

If each person is paid a wage equal to their productivity, there will be a huge sheepskin effect; wages only go up +1 for 1 year, +3 for 2 years, +5 for 3 years, but then they jump up to +11 for graduation. It appears that the benefit of that last year of college is more than the other three combined. But in fact it’s not; for any given individual, the benefits of college are the same each year. It’s just that college is more beneficial to the people who decided to stay longer.
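
A minimal sketch of that arithmetic:

```python
# Everyone starts at productivity 10; type t gains gain[t] per year of
# college and ends up attending t years (only type 4 graduates).
gain = {0: 0, 1: 1, 2: 2, 3: 3, 4: 5}

wage = {t: 10 + gain[t] * t for t in gain}   # wage = productivity after college
print(wage)   # {0: 10, 1: 11, 2: 14, 3: 19, 4: 30}

# Observed wage gain from each additional year of schooling (type t has t years):
print([wage[t] - wage[t - 1] for t in range(1, 5)])   # [1, 3, 5, 11]
```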

And I could of course change that assumption too, making the early years more beneficial, or varying the distribution of types, or adding more uncertainty—and so on. But it’s really not hard at all to make a model where college is beneficial and you observe a large sheepskin effect.

In reality, I am confident that some of the observed benefit of college is due to sorting—not the same thing as signaling—rather than the direct benefits of education. The earnings advantage of going to a top-tier school may be as much about the selection of students as about the actual quality of the education, since once you control for measures of student ability like GPA and test scores, those benefits drop dramatically.

Moreover, I agree that it’s worth looking at this: Insofar as college is about sorting or signaling, it’s wasteful from a societal perspective, and we should be trying to find more efficient sorting mechanisms.

But I highly doubt that all the benefits of college are due to sorting or signaling; there definitely are a lot of important things that people learn in college, not just conventional academic knowledge like how to do calculus, but also broader skills like how to manage time, how to work in groups, and how to present ideas to others. Colleges also cultivate friendships and provide opportunities for networking and exposure to a diverse community. Judging by voting patterns, I’m going to go out on a limb and say that college also makes you a better citizen, which would be well worth it by itself.

The truth is, we don’t know exactly why college is beneficial. We certainly know that it is beneficial: Unemployment rates and median earnings are directly sorted by education level. Yes, even PhDs in philosophy and sociology have lower unemployment and higher incomes (on average) than the general population. (And of course PhDs in economics do better still.)

Caught between nepotism and credentialism

Feb 19, JDN 2457804

One of the more legitimate criticisms out there of us “urban elites” is our credentialism—our tendency to decide a person’s value as an employee or even as a human being based solely upon their formal credentials. Randall Collins, an American sociologist, wrote a book called The Credential Society arguing that much of the class stratification in the United States is traceable to this credentialism—upper-middle-class White Anglo-Saxon Protestants go to the good high schools to get into the good colleges to get the good careers, and all along the way maintain subtle but significant barriers to keep everyone else out.

A related concern is that of credential inflation, where more and more people get a given credential (such as a high school diploma or a college degree), and it begins to lose value as a signal of status. It is often noted that a bachelor’s degree today “gets” you the same jobs that a high school diploma did two generations ago, and two generations hence you may need a master’s or even a PhD.

I consider this concern wildly overblown, however. First of all, they’re not actually the same jobs at all. Even our “menial” jobs of today require skills that most people didn’t have two generations ago—not simply those involving electronics and computers, but even quite basic literacy and numeracy. Yes, you could be a banker in the 1920s with a high school diploma, but plenty of bankers in the 1920s didn’t know algebra. What, you think they were arbitraging derivatives based on the Black-Scholes model?

The primary purpose of education should be to actually improve students’ abilities, not to signal their superior status. More people getting educated is good, not bad. If we really do need signals, we can devise better ones than making people pay tens of thousands of dollars in tuition and spend years taking classes. An expenditure of that magnitude should be accomplishing something, not just signaling. (And given the overwhelming positive correlation between a country’s educational attainment and its economic development, clearly education is actually accomplishing something.)

Our higher educational standards are directly tied to higher technology and higher productivity. If indeed you need a PhD to be a janitor in 2050, it will be because in 2050 a “janitor” is actually the expert artificial intelligence engineer who commands an army of cleaning robots, not because credentials have “inflated”. Thinking that credentials “inflate” requires thinking that business managers must be very stupid, that they would exclude whole swaths of qualified candidates whom they could pay less to do the same work. Only a complete moron would require a PhD to hire you for wielding a mop.

No, what concerns me is an over-emphasis on prestigious credentials over genuine competence. This is definitely a real issue in our society: Almost every US President went to an Ivy League university, yet several of them (George W. Bush, anyone?) clearly would not actually have been selected by such a university if their families had not been wealthy and well-connected. (Harvard’s application literally contains a question asking whether you are a “lineal or collateral descendant” of one of a handful of super-wealthy families.) Papers that contain errors so basic that I would probably get a failing grade as a grad student for them become internationally influential because they were written by famous economists with fancy degrees.

Ironically, it may be precisely because elite universities try not to give grades or special honors that so many of their students try so desperately to latch onto any bits of social status they can get their hands on. In this blog post, a former Yale law student comments on how, without grades or cum laude to define themselves, Yale students became fiercely competitive in the pettiest ways imaginable. Or it might just be a selection effect; to get into Yale you’ve probably got to be pretty competitive, so even if they don’t give out grades once you get there, you can take the student out of the honors track, but you can’t take the honors track out of the student.

But perhaps the biggest problem with credentialism is… I don’t see any viable alternatives!

We have to decide who is going to be hired for technical and professional positions somehow. It almost certainly can’t be everyone. And the most sensible way to do it would be to have a process people go through to get trained and evaluated on their skills in that profession—that is, a credential.

What else would we do? We could decide randomly, I suppose; well, good luck with that. Or we could try to pick people who don’t have qualifications (“anti-credentialism” I suppose), which would be systematically wrong. Or individual employers could hire individuals they know and trust on a personal level, which doesn’t seem quite so ridiculous—but we have a name for that too, and it’s nepotism.

Even anti-credentialism does exist, bafflingly enough. Many people voted for George W. Bush because they said he was “the kind of guy you can have a beer with”. That wasn’t true, of course; he was the spoiled child of a billionaire, a man who had never really worked a day in his life. But even if it had been true, so what? How is that a qualification to be the leader of the free world? And how many people voted for Trump precisely because he had no experience in government? This made sense to them somehow. (And, shockingly, he has no idea what he’s doing. Actually what is shocking is that he admits that.)

Nepotism of course happens all the time. In fact, nepotism is probably the default state for humans. The continual re-emergence of hereditary monarchy and feudalism around the world suggests that this is some sort of attractor state for human societies, that in the absence of strong institutional pressures toward some other system this is what people will generally settle into. And feudalism is nothing if not nepotistic; your position in life is almost entirely determined by your father’s position, and his father’s before that.

Formal credentials can put a stop to that. Of course, your ability to obtain the credential often depends upon your income and social status. But if you can get past those barriers and actually get the credential, you now have a way of pushing past at least some of the competitors who would have otherwise been hired on their family connections alone. The rise in college enrollments—and women actually now exceeding men in college enrollment rates—is one of the biggest reasons why the gender pay gap is rapidly closing among young workers. Nepotism and sexism that would otherwise have hired unqualified men is now overtaken by the superior credentials of qualified women.

Credentialism does still seem suboptimal… but from where I’m sitting, it seems like a second-best solution. We can’t actually observe people’s competence and ability directly, so we need credentials to provide an approximate measurement. We can certainly work to improve credentials—and for example, I am fiercely opposed to multiple-choice testing because it produces such meaningless credentials—but ultimately I don’t see any alternative to credentials.

The real crisis in education is access, not debt

Jan 8, JDN 2457762

A few weeks ago I tried to provide assurances that the “student debt crisis” is really not much of a crisis; there is a lot of debt, but it is being spent on a very good investment both for individuals and for society. Student debt is not that large in the scheme of things, and it more than pays for itself in the long run.

But this does not mean we are not in the midst of an education crisis. It’s simply not about debt.

The crisis I’m worried about involves access.

As you may recall, there are a substantial number of people with very small amounts of student debt, and they tend to be the most likely to default. The highest default rates are among the group of people with student debt greater than $0 but less than $5,000.

So how is it that there are people with only $5,000 in student debt anyway? You can’t buy much college for $5,000 these days, as tuition prices have risen at an enormous rate: From 1983 to 2013, in inflation-adjusted dollars, average annual tuition rose from $7,286 at public institutions and $17,333 at private institutions to $15,640 at public institutions and $35,987 at private institutions—more than doubling in each case.

Enrollments are much higher, but this by itself should not raise tuition per student. So where is all the extra money going? Some of it reflects public funding that has failed to keep up with higher enrollments; but a lot of it just seems to be going to higher pay for administrators and athletic coaches. This is definitely a problem; students should not be forced to subsidize the millions of dollars most universities lose on funding athletics—the NCAA, who if anything are surely biased in favor of athletics, found that the total net loss due to athletics spending at FBS universities was $17 million per year. Only a handful of schools actually turn a profit on athletics, all of them Division I.

So it might be fair to speak of an “irresponsible college administration crisis”, in which administrators heap wealth upon themselves and their beloved athletic programs while students struggle to pay their bills, or even a “college tuition crisis” in which tuition keeps rising far beyond what is sustainable. But that’s not the same thing as a “student debt crisis”—just as the mortgage crisis we had in 2008 is distinct from the slow-burning housing price crisis we’ve been in since the 1980s. Making restrictions on mortgages tighter might prevent banks from being as predatory as they have been lately, but it won’t suddenly allow people to better afford houses.

And likewise, I’m much more worried about students who don’t go to college because they are afraid of this so-called “debt crisis”; they’re going to end up much worse off. As Eduardo Porter put it in the New York Times:

And yet Mr. Beltrán says he probably wouldn’t have gone to college full time if he hadn’t received a Pell grant and financial aid from New York State to defray the costs. He has also heard too many stories about people struggling under an unbearable burden of student loans to even consider going into debt. “Honestly, I don’t think I would have gone,” he said. “I couldn’t have done four years.”

And that would have been the wrong decision.

His reasoning is not unusual. The rising cost of college looms like an insurmountable obstacle for many low-income Americans hoping to get a higher education. The notion of a college education becoming a financial albatross around the neck of the nation’s youth is a growing meme across the culture. Some education experts now advise high school graduates that a college education may not be such a good investment after all. “Sticker price matters a lot,” said Lawrence Katz, a professor of Harvard University. “It is a deterrent.”

[…]

And the most perplexing part of this accounting is that regardless of cost, getting a degree is the best financial decision a young American can make.

According to the O.E.C.D.’s report, a college degree is worth $365,000 for the average American man after subtracting all its direct and indirect costs over a lifetime. For women — who still tend to earn less than men — it’s worth $185,000.

College graduates have higher employment rates and make more money. According to the O.E.C.D., a typical graduate from a four-year college earns 84 percent more than a high school graduate. A graduate from a community college makes 16 percent more.

A college education is more profitable in the United States than in pretty much every other advanced nation. Only Irish women get more for the investment: $185,960 net.

So, these students who have $5,000 or less in student debt; what does that mean? That amount couldn’t even pay for a single year at most universities, so how did that happen?

Well, they almost certainly went to community college; only a community college could provide you with a nontrivial amount of education for less than $5,000. But community colleges vary tremendously in their quality, and some have truly terrible completion rates. While most students who start at a four-year school do eventually get a bachelor’s degree (57% at public schools, 78% at private schools), only 17% of students who start at community college do. And once students drop out, they very rarely actually return to complete a degree.

Indeed, the only way to really have that little student debt is to drop out quickly. Most students who drop out do so chiefly for reasons that really aren’t all that surprising: Mostly, they can’t afford to pay their bills. “Unable to balance school and work” is the number 1 reported reason why students drop out of college.

In the American system, student loans are only designed to pay the direct expenses of education; they often don’t cover the real costs of housing, food, transportation and healthcare, and even when they do, they basically never cover the opportunity cost of education—the money you could be making if you were working full-time instead of going to college. For many poor students, simply breaking even on their own expenses isn’t good enough; they have families that need to be taken care of, and that means working full-time. Many of them even need to provide for their parents or grandparents who may be poor or disabled. Yet in the US system it is tacitly assumed that your parents will help you—so when you need to help them, what are you supposed to do? You give up on college and you get a job.

The most successful reforms for solving this problem have been comprehensive; they involved working to support students directly and intensively in all aspects of their lives, not just the direct financial costs of school itself.

Another option would be to do something more like what they do in Sweden, where there is also a lot of student debt, but for a very different reason. The direct cost of college is paid automatically by the government. Yet essentially all Swedish students have student debt, and total student debt in Sweden is much larger than in other European countries and comparable to the United States; why? Because Sweden understands that you should also provide for the opportunity cost. In Sweden, students live fully self-sufficiently on student loans, just as if they were working full-time. They are not expected to be supported by their parents.

The problem with American student loans, then, is not that they are too large—but that they are too small. They don’t provide for what students actually need, and thus don’t allow them to make the large investment in their education that would have paid off in the long run. Panic over student loans being too large could make the problem worse, if it causes us to reduce the amount of loanable funds available for students.

The lack of support for poor students isn’t the only problem. There are also huge barriers to education in the US based upon race. While Asian students do as well (if not better) than White students, Black and Latino students have substantially lower levels of educational attainment. Affirmative action programs can reduce these disparities, but they are unpopular and widely regarded as unfair, and not entirely without reason.

A better option—indeed one that should be a no-brainer in my opinion—is not to create counter-biases in favor of Black and Latino students (which is what affirmative action is), but to eliminate biases in favor of White students that we know exist. Chief among these are so-called “legacy admissions”, in which elite universities attract wealthy alumni donors by granting their children admission and funding regardless of whether they even remotely deserve it or would contribute anything academically to the university.

These “legacy admissions” are frankly un-American. They go against everything our nation supposedly stands for; in fact, they reek of feudalism. And unsurprisingly, they bias heavily in favor of White students—indeed, over 90 percent of legacy admits are White and Protestant. Athletic admissions are also contrary to the stated mission of the university, though their racial biases are more complicated (Black students are highly overrepresented in football and basketball admits, for example) and it is at least not inherently un-American to select students based upon their athletic talent as opposed to their academic talent.

But this by itself would not be enough; the gaps are clearly too large to close that way. Getting into college is only the start, and graduation rates are much worse for Black students than White students. Moreover, the education gap begins well before college—high school dropout rates are much higher among Black and Latino students as well.

In fact, even closing the education gap by itself would not be enough; racial biases permeate our whole society. Black individuals with college degrees are substantially more likely to be unemployed and have substantially lower wages on average than White individuals with college degrees—indeed, a bachelor’s degree gets a Black man a lower mean wage than a White man would get with only an associate’s degree.

Fortunately, the barriers against women in college education have largely been conquered. In fact, there are now more women in US undergraduate institutions than men. This is not to say that there are not barriers against women in society at large; women still make about 75% as much income as men on average, and even once you adjust for factors such as education and career choice they still only make about 95% as much. Moreover, these factors we’re controlling for are endogenous. Women don’t choose their careers in a vacuum, they choose them based upon a variety of social and cultural pressures. The fact that 93% of auto mechanics are men and 79% of clerical workers are women might reflect innate differences in preferences—but it could just as well reflect a variety of cultural biases or even outright discrimination. Quite likely, it’s some combination of these. So it is not obvious to me that the “adjusted” wage gap is actually a more accurate reflection of the treatment of women in our society than the “unadjusted” wage gap; the true level of bias is most likely somewhere in between the two figures.

Gender wage gaps vary substantially across age groups and between even quite similar countries: Middle-aged women in Germany make 28% less than middle-aged men, while in France that gap is only 19%. Young women in Latvia make 14% less than young men, but in Romania they make 1.1% more. This variation clearly shows that this is not purely the effect of some innate genetic difference in skills or preferences; it must be at least in large part the product of cultural pressures or policy choices.

Even within academia, women are less likely to be hired full-time instead of part-time, awarded tenure, or promoted to administrative positions. Moreover, this must be active discrimination in some form, because gaps in hiring and wage offers between men and women persist in randomized controlled experiments. You can literally present the exact same resume and get a different result depending on whether you attached a male name or a female name.

But at least when it comes to the particular question of getting bachelor’s degrees, we have achieved something approaching equality across gender, and that is no minor accomplishment. Most countries in the world still have more men than women graduating from college, and in some countries the difference is terrifyingly large. I found from World Bank data that in the Democratic Republic of Congo, only 3% of men go to college—and less than 1% of women do. Even in Germany, 29% of men graduate from college but only 19% of women do. Getting both of these figures over 30% and actually having women higher than men is a substantial achievement for which the United States should be proud.

Yet it still remains the case that Americans who are poor, Black, Native American, or Latino are substantially less likely to ever make it through college. Panic about student debt might well be making this problem worse, as someone whose family makes $15,000 per year is bound to hear $50,000 in debt as an overwhelming burden, even as you try to explain that it will eventually pay for itself seven times over.

We need to instead be talking about the barriers that are keeping people from attending college, and pressuring them to drop out once they do. Debt is not the problem. Even tuition is not really the problem. Access is the problem. College is an astonishingly good investment—but most people never get the chance to make it. That is what we need to change.

Student debt crisis? What student debt crisis?

Dec 18, JDN 2457741

As of this writing, I have over $99,000 in student loans. This is a good thing. It means that I was able to pay for my four years of college, and two years of a master’s program, in order to be able to start this coming five years of a PhD. When I have concluded these eleven years of higher education and incurred six times the world per-capita income in debt, what then will become of me? Will I be left to live on the streets, destitute and overwhelmed by debt?

No. I’ll have a PhD. The average lifetime income of individuals with PhDs in the United States is $3.4 million. Indeed, the median annual income for economists in the US is almost exactly what I currently owe in debt—so if I save well, I could very well pay it off in just a few years. With an advanced degree in economics like mine, or in similarly high-paying fields such as physics, medicine, and law, one can expect the higher end of that scale, $4 million or more; with a degree in a less lucrative field such as art, literature, history, or philosophy, one would have to settle for “only”, say, $3 million. The average lifetime income in the US for someone without any college education is only $1.2 million. So even in literature or history, a PhD is worth about $2 million in future income.

On average, an additional year of college results in a gain in lifetime future earnings of about 15% to 20%. Even when you adjust for interest rates and temporal discounting, this is a rate of return that would make any stock trader envious.

Fitting the law of diminishing returns, the rates of return on education in poor countries are even larger, often mind-bogglingly huge; the increase in lifetime income from a year of college education in Botswana was estimated at 38%. This implies that someone who completes four years of college in Botswana earns nearly four times as much money as someone who only finished high school (1.38^4 ≈ 3.6).

We who pay $100,000 to receive an additional $2 to $3 million can hardly be called unfortunate.

Indeed, we are mind-bogglingly fortunate; we have been given an opportunity to better ourselves and the society we live in that is all but unprecedented in human history, and granted only to a privileged few even today. Right now, only about half of adults in the most educated countries in the world (Canada, Russia, Israel, Japan, Luxembourg, South Korea, and the United States) ever go to college. Only 30% of Americans ever earn a bachelor’s degree, and as recently as 1975 that figure was only 20%. Worldwide, the majority of people never graduate from high school. The average length of schooling in developing countries today is six years—that is, sixth grade—and this is an enormous improvement from the two years of average schooling found in developing countries in 1950.

If we look a bit further back in history, the improvements in education are even more staggering. In the United States in 1910, only 13.5% of people graduated high school, and only 2.7% completed a bachelor’s degree. There was no student debt crisis then, to be sure—because there were no college students.

Indeed, I have been underestimating the benefits of education thus far, because education is both a public and private good. The figures I’ve just given have been only the private financial return on education—the additional income received by an individual because they went to college. But there is also a non-financial return, such as the benefits of working in a more appealing or exciting career and the benefits of learning for its own sake. The reason so many people do go into history and literature instead of economics and physics very likely has to do with valuing these other aspects of education as highly as or even more highly than financial income, and it is entirely rational for people to do so. (An interesting survey question I’ve alas never seen asked: “How much money would we have to give you right now to convince you to quit working in philosophy for the rest of your life?”)

Yet even more important is the public return on education, the increased productivity and prosperity of our society as a result of greater education—and these returns are enormous. For every $1 spent on education in the US, the economy grows by an estimated $1.50. Public returns on college education worldwide are on the order of 10%-20% per year of education. This is over and above the 15-20% return already being made by the individuals going to school. This means that raising the average level of education in a country by just one year raises that country’s income by between 25% and 40%.

Indeed, perhaps the simplest way to understand the enormous social benefits of education is to note the strong correlation between education level and income level. This graph comes from the UN Human Development Report Data Explorer; it plots the HDI education index (which ranges from 0, least educated, to 1, most educated) and the per-capita GDP at purchasing power parity (on a log scale, so that each increment corresponds to a proportional increase in GDP); as you can see, educated countries tend to be rich countries, and vice-versa.

[Figure: HDI education index vs. per-capita GDP at purchasing power parity (log scale), by country]

Of course, income drives education just as education drives income. But more detailed econometric studies generally (though not without some controversy) show the same basic result: The more educated a country’s people become, the richer that country becomes.

And indeed, the United States is a spectacularly rich country. The figure of “$1 trillion in college debt” sounds alarming (and has been used to such effect in many a news article, ranging from the New York Daily News, Slate, and POLITICO to USA Today and CNN all the way to Bloomberg, MarketWatch, and Business Insider, and even getting support from the Consumer Financial Protection Bureau and The Federal Reserve Bank of New York).

But the United States has a total GDP of over $18.6 trillion, and total net wealth somewhere around $84 trillion. Is it really so alarming that our nation’s most important investment would result in debt of less than two percent of our nation’s total wealth? Democracy Now asks: who is getting rich off of $1.3 trillion in student debt? All of us—the students especially.

In fact, the probability of defaulting on student loans is inversely proportional to the amount of loans a student has. Students with over $100,000 in student debt default only 18% of the time, while students with less than $5,000 in student debt default 34% of the time. This should be shocking to those who think that we have a crisis of too much student debt; if student debt were an excess burden that is imposed upon us for little gain, default rates should rise as borrowing amounts increase, as we observe, for example, with credit cards: there is a positive correlation between carrying higher balances and being more likely to default. (This also raises doubts about the argument that higher debt loads should carry higher interest rates—why, if the default rate doesn’t go up?) But it makes perfect sense if you realize that college is an investment—indeed, almost certainly both the most profitable and the most socially responsible investment most people will ever have the opportunity to make. More debt means you had access to more credit to make a larger investment—and therefore your payoff was greater and you were more likely to be able to repay the debt.

Yes, job prospects were bad for college graduates right after the Great Recession—because it was right after the Great Recession, and job prospects were bad for everyone. Indeed, the unemployment rate for people with college degrees was substantially lower than for those without college degrees, all the way through the Second Depression. The New York Times has a nice little gadget where you can estimate the unemployment rate for college graduates; my hint for you is that I just said it’s lower, and I still guessed too high. There was variation across fields, of course; unsurprisingly, computer science majors did extremely well and humanities majors did rather poorly. Underemployment was a big problem, but again, clearly because of the recession, not because going to college was a mistake. In fact, unemployment for college graduates has always been so much lower than unemployment for high school graduates that the maximum unemployment rate for young college graduates (about 9%) over the entire period since the year 2000 is less than the minimum unemployment rate for young high school graduates (10%) over that same period. Young high school dropouts have fared even worse; their minimum unemployment rate since 2000 was 18%, while their maximum was a terrifying, Great Depression-level 32%. Education isn’t just a good investment—it’s an astonishingly good investment.

There are a lot of things worth panicking about, now that Trump has been elected President. But student debt isn’t one of them. This is a very smart investment, made with a reasonable portion of our nation’s wealth. If you have student debt like I do, make sure you have enough—or otherwise you might not be able to pay it back.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US inflation rate over time]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US real GDP growth and trend]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work or people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, it is likely that broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest; assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
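
That heuristic is simple enough to state in one line; the 25% figure for narrow unemployment at the depths of the Great Depression is the standard historical estimate, not something from the BLS series discussed above:

```python
def broad_unemployment(narrow):
    # Rough rule of thumb described above: broad ~ 1.8 x narrow.
    return 1.8 * narrow

print(broad_unemployment(0.25))   # 0.45 -- in the ballpark of the ~50% noted above
```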

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: US job openings, 2005-2015]

This graph shows hires from 2005 to 2015:

[Figure: US hires, 2005-2015]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: US job separations, 2005-2015]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.

As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.

It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of on the many long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity: these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read them off a resume, and you can’t see them in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for a while.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the quality of something can only be determined by paying the cost of acquiring it, there is basically no way to assess that quality before we purchase it.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

How (not) to talk about the defense budget

JDN 2457927 EDT 20:20.

This week on Facebook I ran into a couple of memes about the defense budget that I thought were worth addressing. While the core message that the United States spends too much on the military is sound, these particular memes are so massively misleading that I think it would be irresponsible to let them go unanswered.

[Image: Tax_dollars_meme]

First of all, this graph is outdated; it appears to be from about five years ago. If you use nominal figures for just direct military spending, the budget has been cut from just under $700 billion in 2010 (which appears to be the figure the meme uses) to only about $600 billion today. If you include veterans’ benefits, again in nominal terms, we haven’t been below $700 billion since 2007, and today we are above $800 billion. I think the most meaningful measure is actually military spending as a percent of GDP, by which we’ve cut military spending from its peak of 4.7% of GDP in 2010 to 3.5% of GDP today.

It’s also a terrible way to draw a graph; using images instead of bars may be visually appealing, but it undermines the most important aspect of a bar graph, which is that you can easily visually compare relative magnitudes.

But the most important reason why this graph is misleading is that it uses only the so-called “discretionary budget”, which includes almost all military spending but only a small fraction of spending on healthcare and social services. This creates a wildly inflated sense of how much we spend on the military relative to other priorities.

In particular, we’re excluding Medicare and Social Security, which are on the “mandatory budget”; each of these alone is comparable to total military spending. Here’s a very nice table of all US government spending broken down by category.

Let’s just look at federal spending for now. Including veterans’ benefits, we currently spend $814 billion per year on defense. On Social Security, we spend $959 billion. On healthcare, we spend $1,018 billion per year, of which $536 billion is Medicare.

We also spend $376 billion on social welfare programs and unemployment, along with $149 billion on education, $229 billion servicing the national debt, and $214 billion on everything else (such as police, transportation, and administration).

I’ve made you a graph that accurately reflects these relative quantities:

[Image: US_federal_spending]

As you can see, the military is one of our major budget items, but the largest categories are actually pensions (i.e. Social Security) and healthcare (i.e. Medicare and Medicaid).
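
If you’d like to check the proportions yourself, here’s a minimal matplotlib sketch using the federal figures quoted above (in billions of dollars per year):

```python
import matplotlib.pyplot as plt

# Annual US federal spending, in billions of dollars (figures quoted above).
spending = {
    "Healthcare": 1018,           # of which $536B is Medicare
    "Social Security": 959,
    "Defense": 814,               # including veterans' benefits
    "Welfare & unemployment": 376,
    "Debt interest": 229,
    "Everything else": 214,       # police, transportation, administration...
    "Education": 149,
}

plt.barh(list(spending.keys()), list(spending.values()))
plt.gca().invert_yaxis()          # largest category on top
plt.xlabel("Billions of dollars per year")
plt.title("US federal spending by category")
plt.tight_layout()
plt.show()
```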

Given the right year and properly adjusted bars on the graph, the meme may strictly be accurate about the discretionary budget, but it gives an extremely distorted sense of our overall government spending.

The next meme is even worse:

[Image: Lee_Camp_meme]

Again the figures aren’t strictly wrong if you use the right year, but we’re only looking at the federal discretionary budget. Since basically all military spending is federal and discretionary, but most education spending is mandatory and done at the state and local level, this is an even more misleading picture.

Total annual US military spending (including veterans’ benefits) is about $815 billion.
Total US education spending (at all levels) is about $922 billion.

Here’s an accurate graph of total US government spending at all levels:

[Image: US_total_spending]

That is, we spend more on education than we do on the military, and dramatically more on healthcare.

However, the United States clearly does spend far too much on the military and probably too little on education; the proper comparison to make is to other countries.

Most other First World countries spend dramatically more on education than they do on the military.

France, for example, spends about $160 billion per year on education, but only about $53 billion per year on the military—and France is actually a relatively militaristic country, with the 6th-highest total military spending in the world.

Germany spends about $172 billion per year on education, but only about $44 billion on the military.

In absolute figures, the United States overwhelms all other countries in the world—we spend at least as much as the next 10 countries combined.

Using figures from the Stockholm International Peace Research Institute (SIPRI), the US spends $610 billion of the world’s total $1,776 billion, meaning that over a third of the world’s military spending is by the United States.

This is a graph of the top 15 largest military budgets in the world.

[Image: world_military_spending]

One of these things is not like the other ones…

It probably makes the most sense to compare military spending as a portion of GDP, which makes the US no longer an outlier worldwide, but still very high by First World standards:

[Image: world_military_spending_GDP]

If we do want to compare military spending to other forms of spending, I think we should do that in international perspective as well. Here is a graph of education spending versus military spending as a portion of GDP, in several First World countries (military from SIPRI and the CIA, and education from the UNDP):

[Image: world_military_education]

Our education spending is about average (though somehow we do it so inefficiently that we don’t provide college for free, unlike Germany, France, Finland, Sweden, or Norway), but our military spending is by far the highest.

How about a meme about that?

What’s wrong with academic publishing?

JDN 2457257 EDT 14:23.

I just finished expanding my master’s thesis into a research paper that is, I hope, suitable for publication in an economics journal. As part of this process I’ve been looking into how one submits articles for publication in academic journals… and what I’ve found is disgusting and horrifying. It is astonishingly bad, and my biggest question is why researchers put up with it.

Thus, the subject of this post is what’s wrong with the system—and what we might do instead.

Before I get into it, let me say that I don’t actually disagree with “publish or perish” in principle—as SMBC points out, it’s a lot like “do your job or get fired”. Researchers should publish in peer-reviewed journals; that’s a big part of what doing research means. The problem is how most peer-reviewed journals are currently operated.

First of all, in case you didn’t know, most scientific journals are owned by for-profit corporations. The largest, Elsevier, owns The Lancet and all of ScienceDirect, and has a net income of over 1 billion euros a year. Then there are Springer and Wiley-Blackwell; between them, these three publishers account for over 40% of all scientific publications. These for-profit publishers retain the full copyright to most of the papers they publish, and tightly control access with paywalls; the cost to get through these paywalls is generally thousands of dollars a year for individuals and millions of dollars a year for universities. Their monopoly power is so great it “makes Rupert Murdoch look like a socialist.”

For-profit journals do often offer an “open-access” option in which you basically buy back your own copyright, but the price is high—the most common I’ve seen are $1800 or $3000 per paper—and very few researchers do this, for obvious financial reasons. In fact I think for a full-time tenured faculty researcher it’s probably worth it, given the alternatives. (Then again, full-time tenured faculty are becoming an endangered species lately; what might be worth it in the long run can still be very difficult for a cash-strapped adjunct to afford.) Open-access means people can actually read your paper and potentially cite your paper. Closed-access means it may languish in obscurity.

And of course it isn’t just about the benefits for the individual researcher. The scientific community as a whole depends upon the free flow of information; the reason we publish in the first place is that we want people to read papers, discuss them, replicate them, challenge them. Publication isn’t the finish line; it’s at best a checkpoint. Actually one thing that does seem to be wrong with “publish or perish” is that there is so much pressure for publication that we publish too many pointless papers and nobody has time to read the genuinely important ones.

These prices might be justifiable if the for-profit corporations actually did anything. But in fact they are basically just aggregators. They don’t do the peer-review, they farm it out to other academic researchers. They don’t even pay those other researchers; they just expect them to do it. (And they do! Like I said, why do they put up with this?) They don’t pay the authors who have their work published (on the contrary, they often charge submission fees—about $100 seems to be typical—simply to look at them). It’s been called “the world’s worst restaurant”, where you pay to get in, bring your own ingredients and recipes, cook your own food, serve other people’s food while they serve yours, and then have to pay again if you actually want to be allowed to eat.

They pay for the printing of paper copies of the journal, which basically no one reads; and they pay for the electronic servers that host the digital copies that everyone actually reads. They also provide some basic copyediting services (copyediting APA style is a job people advertise on Craigslist—so you can guess how much they must be paying).

And even supposing that they actually provided some valuable and expensive service, the fact would remain that we are making for-profit corporations the gatekeepers of the scientific community. Entities that exist only to make money for their owners are given direct control over the future of human knowledge. If you look at Cracked’s “reasons why we can’t trust science anymore”, all of them have to do with the for-profit publishing system. p-hacking might still happen in a better system, but publishers that really had the best interests of science in mind would be more motivated to fight it than publishers that are simply trying to raise revenue by getting people to buy access to their papers.

Then there’s the fact that most journals do not allow authors to submit to multiple journals at once, yet take 30 to 90 days to respond and only publish a fraction of what is submitted—it’s almost impossible to find good figures on acceptance rates (which is itself a major problem!), but the highest figures I’ve seen are 30% acceptance, a more typical figure seems to be 10%, and some top journals go as low as 3%. In the worst-case scenario you are locked into a journal for 90 days with only a 3% chance of it actually publishing your work. At that rate publishing an article could take years.
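
To see why “years” is no exaggeration, here’s a quick back-of-the-envelope calculation. Assume each round takes the full 90 days and each decision is an independent draw, so the number of submissions needed is geometric:

```python
def expected_wait_years(acceptance_rate, days_per_round=90):
    """Expected time to first acceptance under exclusive, sequential submission.

    If each submission is an independent Bernoulli trial, the number of
    rounds needed is geometric with mean 1 / acceptance_rate.
    """
    expected_rounds = 1 / acceptance_rate
    return expected_rounds * days_per_round / 365

for p in (0.30, 0.10, 0.03):
    print(f"{p:.0%} acceptance: ~{expected_wait_years(p):.1f} years on average")
# 30% acceptance: ~0.8 years on average
# 10% acceptance: ~2.5 years on average
# 3% acceptance: ~8.2 years on average
```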

Is open-access the solution? Yes… well, part of it, anyway.

There are a large number of open-access journals, some of which do not charge submission fees, but very few of them are prestigious, and many are outright predatory. Predatory journals charge exorbitant fees, often after accepting papers for publication; many do little or no real peer review. There are almost seven hundred known predatory open-access journals; over one hundred have even been caught publishing hoax papers. These predatory journals are corrupting the process of science.

There are a few reputable open-access journals, such as BMC Biology and PLOS ONE. Though not actually a journal, arXiv serves a similar role. These will be part of the solution, most definitely. Yet even legitimate open-access journals often charge each author over $1000 to publish an article. There is a small but significant positive correlation between publication fees and journal impact factor.

We need to found more open-access journals which are funded by either governments or universities, so that neither author nor reader ever pays a cent. Science is a public good and should be funded as such. Even if copyright makes sense for other forms of content (I’m not so sure about that), it most certainly does not make sense for scientific knowledge, which by its very nature is only doing its job if it is shared with the world.

These journals should be specifically structured to be method-sensitive but results-blind. (It’s a very good thing that medical trials are usually registered before they are completed, so that publication is assured even if the results are negative—the same should be done with other sciences. Unfortunately, even in medicine there is significant publication bias.) If you could sum up the scientific method in one phrase, it might just be that: Method-sensitive but results-blind. If you think you know what you’re going to find beforehand, you may not be doing science. If you are certain what you’re going to find beforehand, you’re definitely not doing science.

The process should still be highly selective, but it should be possible—indeed, expected—to submit to multiple journals at once. If journals want to start paying their authors to entice them to publish in that journal rather than take another offer, that’s fine with me. Researchers are the ones who produce the content; if anyone is getting paid for it, it should be us.

This is not some wild and fanciful idea; it’s already the way that book publishing works. Very few literary agents or book publishers would ever have the audacity to say you can’t submit your work elsewhere; those that try are rapidly outcompeted as authors stop submitting to them. It’s fundamentally unreasonable to expect anyone to hang all their hopes on a particular buyer months in advance—and that is what you are, publishers, you are buyers. You are not sellers, you did not create this content.

But new journals face a fundamental problem: Good researchers will naturally want to publish in journals that are prestigious—that is, journals that are already prestigious. When all of the prestige is in journals that are closed-access and owned by for-profit companies, the best research goes there, and the prestige becomes self-reinforcing. Journals are prestigious because they are prestigious; welcome to tautology club.

Somehow we need to get good researchers to start boycotting for-profit journals and start investing in high-quality open-access journals. If Elsevier and Springer can’t get good researchers to submit to them, they’ll change their ways or wither and die. Research should be funded and published by governments and nonprofit institutions, not by for-profit corporations.

This may in fact highlight a much deeper problem in academia: the very concept of “prestige”. I have no doubt that Harvard is a good university, better than most; but is it actually the best, as it is in most people’s minds? Might Stanford or UC Berkeley be better, or University College London, or even the University of Michigan? How would we tell? Are the students better? Even if they are, might that just be because all the better students went to the schools that had better reputations? Controlling for the quality of the student, attending a more prestigious university is almost uncorrelated with better outcomes. Those who get accepted to Ivies but attend other schools do just as well in life as those who actually attend Ivies. (Good news for me, having gotten into Columbia but gone to Michigan.) Yet once a university acquires such a high reputation, it can be very difficult for it to lose that reputation, and even more difficult for others to catch up.

Prestige is inherently zero-sum; for me to get more prestige you must lose some. For one university or research journal to rise in rankings, another must fall. Aside from simply feeding on other prestige, the prestige of a university is largely based upon the students it rejects—its “selectivity” score. What does it say about our society that we value educational institutions based upon the number of people they exclude?

Zero-sum ranking is always easier to do than nonzero-sum absolute scoring. Actually that’s a mathematical theorem, and one of the few good arguments against range voting (though still not nearly good enough, in my opinion): if you have a list of scores you can always turn them into ranks (potentially with ties), but from a list of ranks there is no way to turn them back into scores.
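
To see why, here’s a quick sketch (with a deliberately simple, tie-free ranking function): two very different score lists collapse to the same ranking, so the map from scores to ranks destroys information that no inverse can recover.

```python
def to_ranks(scores):
    """Convert a list of scores to ranks (1 = highest). Ties ignored for simplicity."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

print(to_ranks([9.9, 5.0, 1.2]))    # [1, 2, 3]
print(to_ranks([100.0, 2.0, 1.0]))  # [1, 2, 3] -- very different scores, same ranks
```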

Yet ultimately it is absolute scores that must drive humanity’s progress. If life were simply a matter of ranking, then progress would be by definition impossible. No matter what we do, there will always be top-ranked and bottom-ranked people.

There is simply no way mathematically for more than 1% of human beings to be in the top 1% of the income distribution. (If you’re curious where exactly that lies today, I highly recommend this interactive chart by the New York Times.) But we could raise the standard of living for the majority of people to a level that only the top 1% once had—and in fact, within the First World we have already done this. We could in fact raise the standard of living for everyone in the First World to a level that only the top 1%—or less—had as recently as the 16th century, by the simple change of implementing a basic income.

There is no way for more than 0.14% of people to have an IQ above 145, because IQ is defined to have a mean of 100 and a standard deviation of 15, regardless of how intelligent people are. People could get dramatically smarter over time (and in fact have), and yet it would still be the case that, by definition, only 0.14% can be above 145.
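
That 0.14% is just the upper tail of the normal distribution three standard deviations out; you can check it with nothing but Python’s standard library:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable Z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# An IQ of 145 is (145 - 100) / 15 = 3 standard deviations above the mean.
print(f"{normal_tail(3):.4%}")  # 0.1350% -- i.e., about 0.14% of people
```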

Similarly, there is no way for much more than 1% of people to go to the top 1% of colleges. There is no way for more than 1% of people to be in the highest 1% of their class. But we could increase the number of college degrees (which we have); we could dramatically increase literacy rates (which we have).

We need to find a way to think of science in the same way. I wouldn’t suggest simply using number of papers published or even number of drugs invented; both of those are skyrocketing, but I can’t say that most of the increase is actually meaningful. I don’t have a good idea of what an absolute scale for scientific quality would look like, even at an aggregate level; and it is likely to be much harder still to make one that applies on an individual level.

But I think that ultimately this is the only way, the only escape from the darkness of cutthroat competition. We must stop thinking in terms of zero-sum rankings and start thinking in terms of nonzero-sum absolute scales.

How is the economy doing?

JDN 2457033 EST 12:22.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, despite not yet having very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

Why are our forecasts so bad? Some argue that the task is just inherently too difficult because the system is chaotic; but they used to say that about weather forecasts, and yet with satellites and computer models weather forecasts are now far more accurate than they were 20 years ago. Others have argued that “politics always dominates over economics”, as though politics were somehow a fundamentally separate thing, forever exogenous, a parameter in our models that cannot be predicted. I have a number of economic aphorisms I’m trying to popularize; the one for this occasion is “Nothing is exogenous.” (Maybe fundamental constants of physics? But actually many physicists think that even those constants can be derived from more fundamental laws.) My most common is “It’s the externalities, stupid”; next is “It’s not the incentives, it’s the opportunities”; and the last is “Human beings are 90% rational. But woe betide that other 10%.” In fact, it’s not quite true that all our macroeconomic forecasters are bad; a few, such as Krugman, are actually quite good. The Klein Award is given each year to the best macroeconomic forecasters, and the same names pop up too often for it to be completely random. (Sadly, one of the most common is Citigroup, meaning that our banksters know perfectly well what they’re doing when they destroy our economy—they just don’t care.) So in fact I think our failures of forecasting are neither inevitable nor permanent.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I’m not just interested in how human irrationality affects individuals or corporations, I’m also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn’t that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we’re going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality (I sketch one version below). Frankly maybe we should just use poverty, since I’d be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This is of course yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn’t actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn’t something ridiculous like $100,000 per year.
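
Back to that misery index for a moment; here is the sketch I promised. The 10× weights are my own illustrative assumptions, not any established formula:

```python
def misery_index(unemployment, inflation, poverty,
                 w_u=10.0, w_i=1.0, w_p=10.0):
    """A modified misery index, weighting unemployment and poverty far more
    heavily than inflation. Inputs are percentages; the weights are
    illustrative assumptions, not an official index.
    """
    return w_u * unemployment + w_i * inflation + w_p * poverty

# The classic misery index (unemployment + inflation) treats these two
# situations as equally bad; this version does not:
print(misery_index(unemployment=10, inflation=2, poverty=15))  # 252.0
print(misery_index(unemployment=2, inflation=10, poverty=15))  # 180.0
```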

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of the 1% with the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat whatever I’ve last read from mainstream macroeconomic forecasts, which is rather banal—but maybe that’s the idea? Most small talk is pretty banal, I suppose (I never was very good at that sort of thing). It sounds a bit like this:

No, we’re not on the verge of horrible inflation—actually, inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and job growth is actually looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t, because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: much of our reduction in unemployment has come not from more people being employed but from more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and it is still trending downward. Most of these people aren’t getting jobs; they’re giving up. At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly.

And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will replace it efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take their request literally and set out to mathematically prove that if allele distributions in a population change according to a stochastic trend then the alleles with highest expected fitness have, on average, the highest fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist, after all), they usually just respond with something like “Wow, you must be really smart,” which I suppose is true enough, but which always strikes me as an odd response. I think they just don’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do.

Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working, the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept, but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion, but the Pope has long since rendered that one inert. No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrödinger’s Cat, but they are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps they’ll ask “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would be either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “the swirliness added up inside an object is equal to the swirliness added up around the edges.”
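
For the record, that first version really does compress to a single line of symbols:

\[ \int_{\partial M} \omega = \int_M \mathrm{d}\omega \]

where M is the orientable manifold, ∂M its boundary, and ω the differential form. Whether that helps at a dinner party is another question.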

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”