My first AEA conference

Jan 13 JDN 2458497

The last couple of weeks have been a bit of a whirlwind for me. I submitted a grant proposal, I have another, much more complicated proposal due next week, I submitted a paper to a journal, and somewhere in there I went to the AEA conference for the first time.

Going to the conference made it quite clear that the race and gender disparities in economics are quite real: The vast majority of the attendees were middle-aged White males, all wearing one of two outfits: sportcoat and khakis, or suit and tie. (And almost all of the suits were grey or black, and almost all of the shirts were white or pastel. Had you photographed the scene in greyscale, you’d only have noticed because the hotel carpets looked wrong.) In an upcoming post I’ll go into more detail about this problem, what seems to be causing it, and what might be done to fix it.

But for now I just want to talk about the conference itself, and moreover, the idea of having conferences—is this really the best way to organize ourselves as a profession?

One thing I really do like about the AEA conference is actually something that separates it from other professions: The job market for economics PhDs is a very formalized matching system designed to be efficient and minimize opportunities for bias. It should be a model for other job markets. All the interviews are conducted in rapid succession, at the conference itself, so that candidates can interview for positions all over the country or even abroad.

I wasn’t on the job market yet, but I will be in a few years. I wanted to see what it’s like before I have to run that gauntlet myself.

But then again, why do we need face-to-face interviews at all? What do they actually tell us?

It honestly seems like a face-to-face interview is optimized to maximize opportunities for discrimination. Do you know them personally? Nepotism opportunity. Are they male or female? Sexism opportunity. Are they in good health? Ableism opportunity. Do they seem gay, or mention a same-sex partner? Homophobia opportunity. Is their gender expression normative? Transphobia opportunity. How old are they? Ageism opportunity. Are they White? Racism opportunity. Do they have an accent? Nationalism opportunity. Do they wear fancy clothes? Classism opportunity. There are other forms of bias we don’t even have simple names for: Do they look pregnant? Do they wear a wedding band? Are they physically attractive? Are they tall?

You can construct your resume review system to not include any of this information, by excluding names, pictures, and personal information. But you literally can’t exclude all of this information from a face-to-face interview, and this is the only hiring mechanism that suffers from this fundamental flaw.

If it were really about proving your ability to do the job, they could send you a take-home exam (a lot of tech companies actually do this): Here’s a small sample project similar to what we want you to do, and a reasonable deadline in which to do it. Do it, and we’ll see if it’s good enough.

If they want to offer an opportunity for you to ask or answer specific questions, that could be done via text chat—which could be on the one hand end-to-end encrypted against eavesdropping and on the other hand leave a clear paper trail in case they try to ask you anything they shouldn’t. If they start asking about your sexual interests in the digital interview, you don’t just feel awkward and wonder if you should take the job: You have something to show in court.

Even if they’re interested in things like your social skills and presentation style, those aren’t measured well by interviews anyway. And they probably shouldn’t even be as relevant to hiring as they are.

With that in mind, maybe bringing all the PhD graduates in economics in the entire United States into one hotel for three days isn’t actually necessary. Maybe all these face-to-face interviews aren’t actually all that great, because their small potential benefits are outweighed by their enormous potential biases.

The rest of the conference is more like other academic conferences, which seems even less useful.

The conference format seems like a strange sort of formality, a ritual that we go through. It’s clearly not the optimal way to present ongoing research—though perhaps it’s better than publishing papers in journals, which is our current gold standard. A whole bunch of different people give you brief, superficial presentations of their research, which may be only tangentially related to anything you’re interested in, and you barely even have time to think about it before they go on to the next one. Also, seven of these sessions are going on simultaneously, so unless you have a Time Turner, you have to choose which one to go to. And they are often changed at the last minute, so you may not even end up going to the one you thought you were going to.

I was really struck by how little experimental work was presented. I was under the impression that experimental economics was catching on, but despite specifically trying to go to experiment-related sessions (excluding the 8:00 AM session for migraine reasons), I only counted a handful of experiments, most of them in the field rather than the lab. There was a huge amount of theory and applied econometrics. I guess this isn’t too surprising, as those are the two main kinds of research that only cost a researcher’s time. I guess in some sense this is good news for me: It means I don’t have as much competition as I thought.

Instead of gathering papers into sessions where five different people present vaguely-related papers in far too little time, we could use working papers, or better yet a more sophisticated online forum where research could be discussed in real-time before it even gets written into a paper. We could post results as soon as we get them, and instead of conducting one high-stakes anonymous peer review at the time of publication, conduct dozens of little low-stakes peer reviews as the research is ongoing. Discussants could be turned into collaborators.

The most valuable parts of conferences always seem to be the parts that aren’t official sessions: Luncheons, receptions, mixers. There you get to meet other people in the field. And this can be valuable, to be sure. But I fear that the individual gain is far larger than the social gain: Most of the real benefits of networking get dissipated by the competition to be better-connected than the other candidates. The kind of working relationships that seem to be genuinely valuable are the kind formed by working at the same school for several years, not the kind that can be forged by meeting once at a conference reception.

I guess every relationship has to start somewhere, and perhaps more collaborations have started that way than I realize. But it’s also worth asking: Should we really be putting so much weight on relationships? Is that the best way to organize an academic discipline?

“It’s not what you know, it’s who you know” is an accurate adage in many professions, but it seems like research should be where we would want it least to apply. This is supposed to be about advancing human knowledge, not making friends—and certainly not maintaining the old boys’ club.

What’s wrong with academic publishing?

JDN 2457257 EDT 14:23.

I just finished expanding my master’s thesis into a research paper that is, I hope, suitable for publication in an economics journal. As part of this process I’ve been looking into the process of submitting articles for publication in academic journals… and what I’ve found has been disgusting and horrifying. It is astonishingly bad, and my biggest question is why researchers put up with it.

Thus, the subject of this post is what’s wrong with the system—and what we might do instead.

Before I get into it, let me say that I don’t actually disagree with “publish or perish” in principle—as SMBC points out, it’s a lot like “do your job or get fired”. Researchers should publish in peer-reviewed journals; that’s a big part of what doing research means. The problem is how most peer-reviewed journals are currently operated.

First of all, in case you didn’t know, most scientific journals are owned by for-profit corporations. The largest, Elsevier, owns The Lancet and all of ScienceDirect, and has a net income of over 1 billion euros a year. Then there’s Springer and Wiley-Blackwell; between the three of them, these publishers account for over 40% of all scientific publications. These for-profit publishers retain the full copyright to most of the papers they publish, and tightly control access with paywalls; the cost to get through these paywalls is generally thousands of dollars a year for individuals and millions of dollars a year for universities. Their monopoly power is so great it “makes Rupert Murdoch look like a socialist.”

For-profit journals do often offer an “open-access” option in which you basically buy back your own copyright, but the price is high—the most common I’ve seen are $1800 or $3000 per paper—and very few researchers do this, for obvious financial reasons. In fact I think for a full-time tenured faculty researcher it’s probably worth it, given the alternatives. (Then again, full-time tenured faculty are becoming an endangered species lately; what might be worth it in the long run can still be very difficult for a cash-strapped adjunct to afford.) Open-access means people can actually read your paper and potentially cite your paper. Closed-access means it may languish in obscurity.

And of course it isn’t just about the benefits for the individual researcher. The scientific community as a whole depends upon the free flow of information; the reason we publish in the first place is that we want people to read papers, discuss them, replicate them, challenge them. Publication isn’t the finish line; it’s at best a checkpoint. Actually one thing that does seem to be wrong with “publish or perish” is that there is so much pressure for publication that we publish too many pointless papers and nobody has time to read the genuinely important ones.

These prices might be justifiable if the for-profit corporations actually did anything. But in fact they are basically just aggregators. They don’t do the peer-review, they farm it out to other academic researchers. They don’t even pay those other researchers; they just expect them to do it. (And they do! Like I said, why do they put up with this?) They don’t pay the authors who have their work published (on the contrary, they often charge submission fees—about $100 seems to be typical—simply to look at them). It’s been called “the world’s worst restaurant”, where you pay to get in, bring your own ingredients and recipes, cook your own food, serve other people’s food while they serve yours, and then have to pay again if you actually want to be allowed to eat.

They pay for the printing of paper copies of the journal, which basically no one reads; and they pay for the electronic servers that host the digital copies that everyone actually reads. They also provide some basic copyediting services (copyediting APA style is a job people advertise on Craigslist—so you can guess how much they must be paying).

And even supposing that they actually provided some valuable and expensive service, the fact would remain that we are making for-profit corporations the gatekeepers of the scientific community. Entities that exist only to make money for their owners are given direct control over the future of human knowledge. If you look at Cracked’s “reasons why we can’t trust science anymore”, all of them have to do with the for-profit publishing system. p-hacking might still happen in a better system, but publishers that really had the best interests of science in mind would be more motivated to fight it than publishers that are simply trying to raise revenue by getting people to buy access to their papers.

Then there’s the fact that most journals do not allow authors to submit to multiple journals at once, yet take 30 to 90 days to respond and only publish a fraction of what is submitted—it’s almost impossible to find good figures on acceptance rates (which is itself a major problem!), but the highest figures I’ve seen are 30% acceptance, a more typical figure seems to be 10%, and some top journals go as low as 3%. In the worst-case scenario you are locked into a journal for 90 days with only a 3% chance of it actually publishing your work. At that rate publishing an article could take years.

Is open-access the solution? Yes… well, part of it, anyway.

There are a large number of open-access journals, some of which do not charge submission fees, but very few of them are prestigious, and many are outright predatory. Predatory journals charge exorbitant fees, often after accepting papers for publication; many do little or no real peer review. There are almost seven hundred known predatory open-access journals; over one hundred have even been caught publishing hoax papers. These predatory journals are corrupting the process of science.

There are a few reputable open-access journals, such as BMC Biology and PLOS ONE. Though not actually a journal, arXiv serves a similar role. These will be part of the solution, most definitely. Yet even legitimate open-access journals often charge each author over $1000 to publish an article. There is a small but significant positive correlation between publication fees and journal impact factor.

We need to found more open-access journals which are funded by either governments or universities, so that neither author nor reader ever pays a cent. Science is a public good and should be funded as such. Even if copyright makes sense for other forms of content (I’m not so sure about that), it most certainly does not make sense for scientific knowledge, which by its very nature is only doing its job if it is shared with the world.

These journals should be specifically structured to be method-sensitive but results-blind. (It’s a very good thing that medical trials are usually registered before they are completed, so that publication is assured even if the results are negative—the same should be done with other sciences. Unfortunately, even in medicine there is significant publication bias.) If you could sum up the scientific method in one phrase, it might just be that: Method-sensitive but results-blind. If you think you know what you’re going to find beforehand, you may not be doing science. If you are certain what you’re going to find beforehand, you’re definitely not doing science.

The process should still be highly selective, but it should be possible—indeed, expected—to submit to multiple journals at once. If journals want to start paying their authors to entice them to publish in that journal rather than take another offer, that’s fine with me. Researchers are the ones who produce the content; if anyone is getting paid for it, it should be us.

This is not some wild and fanciful idea; it’s already the way that book publishing works. Very few literary agents or book publishers would ever have the audacity to say you can’t submit your work elsewhere; those that try are rapidly outcompeted as authors stop submitting to them. It’s fundamentally unreasonable to expect anyone to hang all their hopes on a particular buyer months in advance—and that is what you are, publishers, you are buyers. You are not sellers, you did not create this content.

But new journals face a fundamental problem: Good researchers will naturally want to publish in journals that are prestigious—that is, journals that are already prestigious. When all of the prestige is in journals that are closed-access and owned by for-profit companies, the best research goes there, and the prestige becomes self-reinforcing. Journals are prestigious because they are prestigious; welcome to tautology club.

Somehow we need to get good researchers to start boycotting for-profit journals and start investing in high-quality open-access journals. If Elsevier and Springer can’t get good researchers to submit to them, they’ll change their ways or wither and die. Research should be funded and published by governments and nonprofit institutions, not by for-profit corporations.

This may in fact highlight a much deeper problem in academia, the very concept of “prestige”. I have no doubt that Harvard is a good university, a better university than most; but is it actually the best, as it is in most people’s minds? Might Stanford or UC Berkeley be better, or University College London, or even the University of Michigan? How would we tell? Are the students better? Even if they are, might that just be because all the better students went to the schools that had better reputations? Controlling for the quality of the student, attending a more prestigious university is almost uncorrelated with better outcomes. Those who get accepted to Ivies but attend other schools do just as well in life as those who actually attend Ivies. (Good news for me, getting into Columbia but going to Michigan.) Yet once a university acquires such a high reputation, it can be very difficult for it to lose that reputation, and even more difficult for others to catch up.

Prestige is inherently zero-sum; for me to get more prestige you must lose some. For one university or research journal to rise in rankings, another must fall. Aside from simply feeding on other prestige, the prestige of a university is largely based upon the students it rejects—its “selectivity” score. What does it say about our society that we value educational institutions based upon the number of people they exclude?

Zero-sum ranking is always easier to do than nonzero-sum absolute scoring. Actually that’s a mathematical theorem, and one of the few good arguments against range voting (still not nearly good enough, in my opinion): if you have a list of scores, you can always turn them into ranks (potentially with ties); but from a list of ranks, there is no way to turn them back into scores.
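The asymmetry is easy to see in a toy sketch (illustrative Python, not from the original post): converting scores to ranks is a straightforward function, but two very different score lists can collapse to the exact same ranking, so no inverse function can exist.

```python
def ranks(scores):
    """Rank items by score, highest first; tied scores share a rank (1-indexed)."""
    ordered = sorted(set(scores), reverse=True)
    return [ordered.index(s) + 1 for s in scores]

# Two very different score lists...
print(ranks([9.5, 7.0, 7.0, 2.1]))  # [1, 2, 2, 3]
print(ranks([100, 50, 50, 1]))      # [1, 2, 2, 3] -- identical ranking
```

Since distinct score lists map to the same ranks, the score information is irretrievably lost: ranks tell you who beat whom, but not by how much.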

Yet ultimately it is absolute scores that must drive humanity’s progress. If life were simply a matter of ranking, then progress would be by definition impossible. No matter what we do, there will always be top-ranked and bottom-ranked people.

There is simply no way mathematically for more than 1% of human beings to be in the top 1% of the income distribution. (If you’re curious where exactly that lies today, I highly recommend this interactive chart by the New York Times.) But we could raise the standard of living for the majority of people to a level that only the top 1% once had—and in fact, within the First World we have already done this. We could in fact raise the standard of living for everyone in the First World to a level that only the top 1%—or less—had as recently as the 16th century, by the simple change of implementing a basic income.

There is no way for more than 0.14% of people to have an IQ above 145, because IQ is defined to have a mean of 100 and a standard deviation of 15, regardless of how intelligent people are. People could get dramatically smarter over time (and in fact have), and yet it would still be the case that, by definition, only 0.14% can be above 145.
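That 0.14% figure is just the upper tail of the normal curve three standard deviations above the mean; a quick check (illustrative Python, standard library only):

```python
import math

def normal_tail(x, mean=100.0, sd=15.0):
    """P(X > x) for a normal distribution, via the complementary error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

p = normal_tail(145)  # 145 is 3 standard deviations above the mean of 100
print(f"{p:.4%}")     # about 0.135%, i.e. roughly 0.14% of people
```

Because IQ is renormed to keep this shape, the fraction above 145 stays fixed no matter how much smarter the population actually gets.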

Similarly, there is no way for much more than 1% of people to go to the top 1% of colleges. There is no way for more than 1% of people to be in the highest 1% of their class. But we could increase the number of college degrees (which we have); we could dramatically increase literacy rates (which we have).

We need to find a way to think of science in the same way. I wouldn’t suggest simply using number of papers published or even number of drugs invented; both of those are skyrocketing, but I can’t say that most of the increase is actually meaningful. I don’t have a good idea of what an absolute scale for scientific quality would look like, even at an aggregate level; and it is likely to be much harder still to make one that applies on an individual level.

But I think that ultimately this is the only way, the only escape from the darkness of cutthroat competition. We must stop thinking in terms of zero-sum rankings and start thinking in terms of nonzero-sum absolute scales.