I finally have a published paper.

Jun 12 JDN 2459773

Here it is, my first peer-reviewed publication: “Imperfect Tactic Collusion and Asymmetric Price Transmission”, in the Journal of Economic Behavior and Organization.

Due to the convention in economics that authors are listed alphabetically, I am listed third of four, and will typically be collapsed into “Bulutay et al.”. I don’t actually think it should be “Julius et al.”; I think Dave Hales did the most important work, and I wanted it to be “Hales et al.”; but anything non-alphabetical is unusual in economics, and it would have taken a strong justification to convince the others to go along with it. This is a very stupid norm (and I attribute approximately 20% of Daron Acemoglu’s superstar status to it), but like any norm, it is difficult to dislodge.

I thought I would feel different when this day finally came. I thought I would feel joy, or at least satisfaction. I had been hoping that satisfaction would finally spur me forward in resubmitting my single-author paper, “Experimental Public Goods Games with Progressive Taxation”, so I could finally get a publication that actually does have “Julius (2022)” (or, at this rate, 2023, 2024…?). But that motivating satisfaction never came.

I did feel some vague sense of relief: Thank goodness, this ordeal is finally over and I can move on. But that doesn’t have the same motivating force; it doesn’t make me want to go back to the other papers I can now hardly bear to look at.

This reaction (or lack thereof?) could be attributed to circumstances: I have been through a lot lately. I was already overwhelmed by finishing my dissertation and going on the job market, and then there was the pandemic, and I had to postpone my wedding, and then when I finally got a job we had to suddenly move abroad, and then it was awful finding a place to live, and then we actually got married (which was lovely, but still stressful), and it took months to get my medications sorted with the NHS, and then I had a sudden resurgence of migraines which kept me from doing most of my work for weeks, and then I actually caught COVID and had to deal with that for a few weeks too. So it really isn’t too surprising that I’d be exhausted and depressed after all that.

Then again, it could be something deeper. I didn’t feel this way about my wedding. That genuinely gave me the joy and satisfaction that I had been expecting; I think it really was the best day of my life so far. So it isn’t as if I’m incapable of these feelings under my current state.

Rather, I fear that I am becoming more permanently disillusioned with academia. Now that I see how the sausage is made, I am no longer so sure I want to be one of the people making it. Publishing that paper didn’t feel like I had accomplished something, or even made some significant contribution to human knowledge. In fact, the actual work of publication was mostly done by my co-authors, because I was too overwhelmed by the job market at the time. But what I did have to do—and what I’ve tried to do with my own paper—felt like a miserable, exhausting ordeal.

More and more, I’m becoming convinced that a single experiment tells us very little, and we are being asked to present each one as if it were a major achievement when it’s more like a single brick in a wall.

But whatever new knowledge our experiments may have gleaned, that part was done years ago. We could have simply posted the draft as a working paper on the web and moved on, and the world would know just as much and our lives would have been a lot easier.

Oh, but then it would not have the imprimatur of peer review! And for our careers, that means absolutely everything. (Literally, when they’re deciding tenure, nothing else seems to matter.) But for human knowledge, does it really mean much? The more referee reports I’ve read, the more arbitrary they feel to me. This isn’t an objective assessment of scientific merit; it’s the half-baked opinion of a single randomly chosen researcher who may know next to nothing about the topic—or worse, have a vested interest in defending a contrary paradigm.

Yes, of course, what gets through peer review is of considerably higher quality than any randomly-selected content on the Internet. (The latter can be horrifically bad.) But is this not also true of what gets submitted for peer review? In fact, aren’t many blogs written by esteemed economists (say, Krugman? Romer? Nate Silver?) of considerably higher quality as well, despite having virtually none of the gatekeepers? I think Krugman’s blog is nominally edited by the New York Times, and Silver has a whole staff at FiveThirtyEight (they’re hiring, in fact!), but I’m fairly certain Romer just posts whatever he wants like I do. Of course, they had to establish their reputations (Krugman and Romer each won a Nobel). But still, it seems like maybe peer-review isn’t doing the most important work here.

Even blogs by far less famous economists (e.g. Miles Kimball, Brad DeLong) are also very good, and probably contribute more to advancing the knowledge of the average person than any given peer-reviewed paper, simply because they are more readable and more widely read. What we call “research” means going from zero people knowing a thing to maybe a dozen people knowing it; “publishing” means going from a dozen to at most a thousand; to go from a thousand to a billion, we call that “education”.

They all matter, of course; but I think we tend to overvalue research relative to education. A world where a few people know something is really not much better than a world where nobody does, while a world where almost everyone knows something can be radically superior. And the more I see just how far behind the cutting edge of research most economists are—let alone most average people—the more apparent it becomes to me that we are investing far too much in expanding that cutting edge (and far, far too much in gatekeeping who gets to do that!) and not nearly enough in disseminating that knowledge to humanity.

I think maybe that’s why finally publishing a paper felt so anticlimactic for me. I know that hardly anyone will ever actually read the damn thing. Just getting to this point took far more effort than it should have; dozens if not hundreds of hours of work, months of stress and frustration, all to satisfy whatever arbitrary criteria the particular reviewers happened to use so that we could all clear this stupid hurdle and finally get that line on our CVs. (And we wonder why academics are so depressed?) Far from being inspired to do the whole process again, I feel as if I have finally emerged from the torture chamber and may at last get some chance for my wounds to heal.

Even publishing fiction was not this miserable. Don’t get me wrong; it was miserable, especially for me, as I hate and fear rejection to the very core of my being in a way most people do not seem to understand. But there at least the subjectivity and arbitrariness of the process is almost universally acknowledged. Agents and editors don’t speak of your work being “flawed” or “wrong”; they don’t even say it’s “unimportant” or “uninteresting”. They say it’s “not a good fit” or “not what we’re looking for right now”. (Journal editors sometimes make noises like that too, but there’s always a subtext of “If this were better science, we’d have taken it.”) Unlike peer reviewers, they don’t come back with suggestions for “improvements” that are often pointless or utterly infeasible.

And unlike peer reviewers, fiction publishers acknowledge their own subjectivity and that of the market they serve. Nobody really thinks that Fifty Shades of Grey was good in any deep sense; but it was popular and successful, and that’s all the publisher really cares about. As a result, failing to be the next Fifty Shades of Grey ends up stinging a lot less than failing to be the next article in American Economic Review. Indeed, I’ve never had any illusions that my work would be popular among mainstream economists. But I once labored under the belief that it mattered more whether it was true; and I guess I now consider that an illusion.

Moreover, fiction writers understand that rejection hurts; I’ve been shocked how few academics actually seem to. Nearly every writing conference I’ve ever been to has at least one seminar on dealing with rejection, often several; at academic conferences, I’ve literally never seen one. There seems to be a completely different mindset among academics—at least, the successful, tenured ones—about the process of peer review, what it means, even how it feels. When I try to talk with my mentors about the pain of getting rejected, they just… don’t get it. They offer me guidance on how to deal with anger at rejection, when that is not at all what I feel—what I feel is utter, hopeless, crushing despair.

There is a type of person who reacts to rejection with anger: Narcissists. (Look no further than the textbook example, Donald Trump.) I am coming to fear that I’m just not narcissistic enough to be a successful academic. I’m not even utterly lacking in narcissism: I am almost exactly average for a Millennial on the Narcissistic Personality Inventory. I score fairly high on Authority and Superiority (I consider myself a good leader and a highly competent individual) but very low on Exploitativeness and Self-Sufficiency (I don’t like hurting people and I know no man is an island). Then again, maybe I’m just narcissistic in the wrong way: I score quite low on “grandiose narcissism”, but relatively high on “vulnerable narcissism”. I hate to promote myself, but I find rejection devastating. This combination seems to be exactly what doesn’t work in academia. But it seems to be par for the course among writers and poets. Perhaps I have the mind of a scientist, but I have the soul of a poet. (Send me through the wormhole! Please? Please!?)

What’s wrong with academic publishing?

JDN 2457257 EDT 14:23.

I just finished expanding my master’s thesis into a research paper that is, I hope, suitable for publication in an economics journal. As part of this process I’ve been looking into how articles get submitted for publication in academic journals… and what I’ve found has been disgusting and horrifying. It is astonishingly bad, and my biggest question is why researchers put up with it.

Thus, the subject of this post is what’s wrong with the system—and what we might do instead.

Before I get into it, let me say that I don’t actually disagree with “publish or perish” in principle—as SMBC points out, it’s a lot like “do your job or get fired”. Researchers should publish in peer-reviewed journals; that’s a big part of what doing research means. The problem is how most peer-reviewed journals are currently operated.

First of all, in case you didn’t know, most scientific journals are owned by for-profit corporations. The largest, Elsevier, owns The Lancet and all of ScienceDirect, and has a net income of over 1 billion euros a year. Then there’s Springer and Wiley-Blackwell; together, these three publishers account for over 40% of all scientific publications. These for-profit publishers retain the full copyright to most of the papers they publish, and tightly control access with paywalls; the cost to get through these paywalls is generally thousands of dollars a year for individuals and millions of dollars a year for universities. Their monopoly power is so great it “makes Rupert Murdoch look like a socialist.”

For-profit journals do often offer an “open-access” option in which you basically buy back your own copyright, but the price is high—the most common I’ve seen are $1800 or $3000 per paper—and very few researchers do this, for obvious financial reasons. In fact I think for a full-time tenured faculty researcher it’s probably worth it, given the alternatives. (Then again, full-time tenured faculty are becoming an endangered species lately; what might be worth it in the long run can still be very difficult for a cash-strapped adjunct to afford.) Open-access means people can actually read your paper and potentially cite your paper. Closed-access means it may languish in obscurity.

And of course it isn’t just about the benefits for the individual researcher. The scientific community as a whole depends upon the free flow of information; the reason we publish in the first place is that we want people to read papers, discuss them, replicate them, challenge them. Publication isn’t the finish line; it’s at best a checkpoint. Actually one thing that does seem to be wrong with “publish or perish” is that there is so much pressure for publication that we publish too many pointless papers and nobody has time to read the genuinely important ones.

These prices might be justifiable if the for-profit corporations actually did anything. But in fact they are basically just aggregators. They don’t do the peer-review, they farm it out to other academic researchers. They don’t even pay those other researchers; they just expect them to do it. (And they do! Like I said, why do they put up with this?) They don’t pay the authors who have their work published (on the contrary, they often charge submission fees—about $100 seems to be typical—simply to look at them). It’s been called “the world’s worst restaurant”, where you pay to get in, bring your own ingredients and recipes, cook your own food, serve other people’s food while they serve yours, and then have to pay again if you actually want to be allowed to eat.

They pay for the printing of paper copies of the journal, which basically no one reads; and they pay for the electronic servers that host the digital copies that everyone actually reads. They also provide some basic copyediting services (copyediting APA style is a job people advertise on Craigslist—so you can guess how much they must be paying).

And even supposing that they actually provided some valuable and expensive service, the fact would remain that we are making for-profit corporations the gatekeepers of the scientific community. Entities that exist only to make money for their owners are given direct control over the future of human knowledge. If you look at Cracked’s “reasons why we can’t trust science anymore”, all of them have to do with the for-profit publishing system. p-hacking might still happen in a better system, but publishers that really had the best interests of science in mind would be more motivated to fight it than publishers that are simply trying to raise revenue by getting people to buy access to their papers.

Then there’s the fact that most journals do not allow authors to submit to multiple journals at once, yet take 30 to 90 days to respond and only publish a fraction of what is submitted—it’s almost impossible to find good figures on acceptance rates (which is itself a major problem!), but the highest figures I’ve seen are 30% acceptance, a more typical figure seems to be 10%, and some top journals go as low as 3%. In the worst-case scenario you are locked into a journal for 90 days with only a 3% chance of it actually publishing your work. At that rate publishing an article could take years.
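To put rough numbers on that, here’s a back-of-envelope sketch. The acceptance rates and the 90-day response time are the illustrative figures from above, not data about any particular journal, and the model is deliberately crude: each submission round is treated as an independent coin flip, so the number of rounds until acceptance follows a geometric distribution with mean 1/p.

```python
# Back-of-envelope: expected time to acceptance under one-journal-at-a-time
# submission. Each round is modeled as an independent Bernoulli trial, so
# the expected number of rounds is 1/p (geometric distribution). The rates
# and the 90-day response time are illustrative, not measured.

def expected_days_to_acceptance(acceptance_rate, days_per_round):
    """Expected days until acceptance: (1 / acceptance_rate) rounds,
    each taking days_per_round to get a response."""
    expected_rounds = 1 / acceptance_rate
    return expected_rounds * days_per_round

for rate in (0.30, 0.10, 0.03):
    days = expected_days_to_acceptance(rate, 90)
    print(f"{rate:.0%} acceptance: ~{days / 365:.1f} years on average")
```

Even at the optimistic 30% rate this averages nearly a year; at 3%, the expected wait runs to roughly eight years of sequential submissions.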

Is open-access the solution? Yes… well, part of it, anyway.

There are a large number of open-access journals, some of which do not charge submission fees, but very few of them are prestigious, and many are outright predatory. Predatory journals charge exorbitant fees, often after accepting papers for publication; many do little or no real peer review. There are almost seven hundred known predatory open-access journals; over one hundred have even been caught publishing hoax papers. These predatory journals are corrupting the process of science.

There are a few reputable open-access journals, such as BMC Biology and PLOS ONE. Though not actually a journal, arXiv serves a similar role. These will be part of the solution, most definitely. Yet even legitimate open-access journals often charge each author over $1000 to publish an article. There is a small but significant positive correlation between publication fees and journal impact factor.

We need to found more open-access journals which are funded by either governments or universities, so that neither author nor reader ever pays a cent. Science is a public good and should be funded as such. Even if copyright makes sense for other forms of content (I’m not so sure about that), it most certainly does not make sense for scientific knowledge, which by its very nature is only doing its job if it is shared with the world.

These journals should be specifically structured to be method-sensitive but results-blind. (It’s a very good thing that medical trials are usually registered before they are completed, so that publication is assured even if the results are negative—the same should be done with other sciences. Unfortunately, even in medicine there is significant publication bias.) If you could sum up the scientific method in one phrase, it might just be that: Method-sensitive but results-blind. If you think you know what you’re going to find beforehand, you may not be doing science. If you are certain what you’re going to find beforehand, you’re definitely not doing science.

The process should still be highly selective, but it should be possible—indeed, expected—to submit to multiple journals at once. If journals want to start paying their authors to entice them to publish in that journal rather than take another offer, that’s fine with me. Researchers are the ones who produce the content; if anyone is getting paid for it, it should be us.

This is not some wild and fanciful idea; it’s already the way that book publishing works. Very few literary agents or book publishers would ever have the audacity to say you can’t submit your work elsewhere; those that try are rapidly outcompeted as authors stop submitting to them. It’s fundamentally unreasonable to expect anyone to hang all their hopes on a particular buyer months in advance—and that is what you are, publishers, you are buyers. You are not sellers, you did not create this content.

But new journals face a fundamental problem: Good researchers will naturally want to publish in journals that are prestigious—that is, journals that are already prestigious. When all of the prestige is in journals that are closed-access and owned by for-profit companies, the best research goes there, and the prestige becomes self-reinforcing. Journals are prestigious because they are prestigious; welcome to tautology club.

Somehow we need to get good researchers to start boycotting for-profit journals and start investing in high-quality open-access journals. If Elsevier and Springer can’t get good researchers to submit to them, they’ll change their ways or wither and die. Research should be funded and published by governments and nonprofit institutions, not by for-profit corporations.

This may in fact highlight a much deeper problem in academia, the very concept of “prestige”. I have no doubt that Harvard is a good university, a better university than most; but is it actually the best, as it is in most people’s minds? Might Stanford or UC Berkeley be better, or University College London, or even the University of Michigan? How would we tell? Are the students better? Even if they are, might that just be because all the better students went to the schools that had better reputations? Controlling for the quality of the student, attending a more prestigious university is almost uncorrelated with better outcomes. Those who get accepted to Ivies but attend other schools do just as well in life as those who actually attend Ivies. (Good news for me, getting into Columbia but going to Michigan.) Yet once a university acquires such a high reputation, it can be very difficult for it to lose that reputation, and even more difficult for others to catch up.

Prestige is inherently zero-sum; for me to get more prestige you must lose some. For one university or research journal to rise in rankings, another must fall. Aside from simply feeding on other prestige, the prestige of a university is largely based upon the students it rejects—its “selectivity” score. What does it say about our society that we value educational institutions based upon the number of people they exclude?

Zero-sum ranking is always easier to do than nonzero-sum absolute scoring. In fact that’s a mathematical theorem, and one of the few good arguments against range voting (still not nearly good enough, in my opinion): if you have a list of scores, you can always turn them into ranks (potentially with ties); but from a list of ranks alone, there is no way to turn them back into scores.
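A tiny sketch makes the asymmetry concrete. The ranking scheme here is the simple “dense” one (tied scores share a rank), chosen purely for illustration; the point is that two very different score lists collapse to identical rankings, so no function on ranks can recover the scores.

```python
# Scores can always be collapsed into ranks, but ranks cannot be inverted
# back into scores: the collapse discards the magnitudes.

def scores_to_ranks(scores):
    """Dense ranking: rank 1 for the highest score, tied scores share a rank."""
    ordered = sorted(set(scores), reverse=True)
    rank_of = {s: i + 1 for i, s in enumerate(ordered)}
    return [rank_of[s] for s in scores]

close = [90.1, 90.0, 89.9]   # a near-tie
spread = [90.1, 50.0, 10.0]  # a blowout

print(scores_to_ranks(close))   # [1, 2, 3]
print(scores_to_ranks(spread))  # [1, 2, 3] -- same ranks, information lost
```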

Yet ultimately it is absolute scores that must drive humanity’s progress. If life were simply a matter of ranking, then progress would be by definition impossible. No matter what we do, there will always be top-ranked and bottom-ranked people.

There is simply no way mathematically for more than 1% of human beings to be in the top 1% of the income distribution. (If you’re curious where exactly that lies today, I highly recommend this interactive chart by the New York Times.) But we could raise the standard of living for the majority of people to a level that only the top 1% once had—and in fact, within the First World we have already done this. We could in fact raise the standard of living for everyone in the First World to a level that only the top 1%—or less—had as recently as the 16th century, by the simple change of implementing a basic income.

There is no way for more than 0.14% of people to have an IQ above 145, because IQ is defined to have a mean of 100 and a standard deviation of 15, regardless of how intelligent people are. People could get dramatically smarter over time—and in fact have—and yet it would still be the case that by definition, only 0.14% can be above 145.
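The arithmetic behind that figure is just the tail of the normal distribution that defines IQ: 145 is three standard deviations above the mean, and the share of a normal distribution above three standard deviations works out to about 0.135%. A quick check using only Python’s standard library:

```python
from statistics import NormalDist

# IQ is *defined* as normally distributed with mean 100 and SD 15, so the
# share above any cutoff is fixed by construction, whatever people's
# actual abilities are. 145 is three standard deviations above the mean.
iq = NormalDist(mu=100, sigma=15)

share_above_145 = 1 - iq.cdf(145)
print(f"Share above IQ 145: {share_above_145:.4%}")  # about 0.135%
```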

Similarly, there is no way for much more than 1% of people to go to the top 1% of colleges. There is no way for more than 1% of people to be in the highest 1% of their class. But we could increase the number of college degrees (which we have); we could dramatically increase literacy rates (which we have).

We need to find a way to think of science in the same way. I wouldn’t suggest simply using number of papers published or even number of drugs invented; both of those are skyrocketing, but I can’t say that most of the increase is actually meaningful. I don’t have a good idea of what an absolute scale for scientific quality would look like, even at an aggregate level; and it is likely to be much harder still to make one that applies on an individual level.

But I think that ultimately this is the only way, the only escape from the darkness of cutthroat competition. We must stop thinking in terms of zero-sum rankings and start thinking in terms of nonzero-sum absolute scales.