Selectivity is a terrible measure of quality

May 23 JDN 2459358

How do we decide which universities and research journals are the best? There are a vast number of ways we could go about this—and there are in fact many different ranking systems out there, though only a handful are widely used. But one criterion seems to be among the most frequently used: selectivity.

Selectivity is a very simple measure: What proportion of people who try to get in, actually get in? For universities this is admission rates for applicants; for journals it is acceptance rates for submitted papers.

The top-rated journals in economics have acceptance rates of 1-7%. The most prestigious universities have acceptance rates of 4-10%. So a reasonable ballpark is to assume a 95% chance of not getting accepted in either case. Of course, some applicants are more or less qualified, and some papers are more or less publishable; but my guess is that most applicants are qualified and most submitted papers are publishable. So these low acceptance rates mean refusing huge numbers of qualified people.
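To put that 95% in perspective, here is a minimal back-of-the-envelope sketch, assuming (quite unrealistically) that every application is an independent draw with the same 5% acceptance probability:

```python
# Back-of-the-envelope: if each application independently has a 5% chance
# of acceptance (a strong simplifying assumption), how long does acceptance
# take on average, and how likely is a long losing streak?
p = 0.05

expected_tries = 1 / p  # mean of a geometric distribution
print(f"Expected applications before an acceptance: {expected_tries:.0f}")

for k in (5, 10, 20):
    print(f"P(rejected by all of {k} places) = {(1 - p) ** k:.2f}")
```

Even under these generous assumptions, a qualified applicant expects twenty tries, and still has better than a one-in-three chance of striking out twenty times in a row.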

Selectivity is an objective, numeric score that can be easily generated and compared, and is relatively difficult to fake. This may account for its widespread appeal. And it surely has some correlation with genuine quality: Lots of people are likely to apply to a school because it is good, and lots of people are likely to submit to a journal because it is good.

But look a little bit closer, and it becomes clear that selectivity is really a terrible measure of quality.

One, it is extremely self-fulfilling. Once a school or a journal becomes prestigious, more people will try to get in there, and that will inflate its selectivity rating. Harvard is extremely selective because Harvard is famous and high-rated. Why is Harvard so high-rated? Well, in part because Harvard is extremely selective.

Two, it incentivizes restricting the number of applicants accepted.

Ivy League schools have vast endowments, and could easily afford to expand their capacity, thus employing more faculty and educating more students. But that would require raising their acceptance rates and hence jeopardizing their precious selectivity ratings. If the goal is to give as many people as possible the highest quality education, then selectivity is a deeply perverse incentive: It specifically incentivizes not educating too many students.

Similarly, most journals include something in their rejection letters about “limited space”, which in the age of all-digital journals is utter nonsense. Journals could choose to publish ten, twenty, fifty times as many papers as they currently do—or half, or a tenth. They could publish everything that gets submitted, or only publish one paper a year. It’s an entirely arbitrary decision with no real constraints. They choose what proportion of papers to publish based primarily on three factors that have absolutely nothing to do with limited space: One, they want to publish enough papers to make it seem like they are putting out regular content; two, they want to make sure they publish anything that will turn out to be a major discovery (though they honestly seem systematically bad at predicting that); and three, they want to publish as few papers as possible within those constraints to maximize their selectivity.

To be clear, I’m not saying that journals should publish everything that gets submitted. Actually I think too many papers already get published—indeed, too many get written. The incentives in academia are to publish as many papers in top journals as possible, rather than to actually do the most rigorous and ground-breaking research. The best research often involves spending long periods of time making very little visible progress, and it does not lend itself to putting out regular publications to impress tenure committees and grant agencies.

The number of scientific papers published each year has grown at about 5% per year since 1900. The number of peer-reviewed journals has grown at an increasing rate, from about 3% per year for most of the 20th century to over 6% now. These rates far exceed population growth, technological advancement, or even GDP growth; growth that fast is obviously unsustainable. There are now 300 times as many scientific papers published per year as there were in 1900—while the world population has only increased about five-fold in that time. Yes, the number of scientists has also increased—but not that fast. About 8 million people are scientists, publishing an average of 2 million articles per year—one per scientist every four years. But the number of scientist jobs grows at just over 1% per year—basically tracking population growth and the job market in general. If published papers continue to grow at 5% while the number of scientists increases at 1%, then in 100 years each scientist will have to publish 48 times as many papers as today, or about one every month.
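The compound arithmetic in that last sentence is easy to check. A quick sketch, using the rough growth figures quoted above (ballpark rates, not precise estimates):

```python
# Reproducing the compound-growth arithmetic from the paragraph above.
paper_growth = 1.05      # papers published grow ~5% per year
scientist_growth = 1.01  # scientist jobs grow ~1% per year
years = 100

# Papers per scientist grow at the ratio of the two rates.
multiplier = (paper_growth / scientist_growth) ** years
print(f"Papers per scientist after {years} years: x{multiplier:.1f}")

# Today: ~2 million papers from ~8 million scientists = 0.25 papers/year each.
papers_per_scientist_today = 2e6 / 8e6
print(f"Implied future output: {papers_per_scientist_today * multiplier:.1f} papers per scientist per year")
```

That comes out to roughly 48 times today’s output, or about twelve papers per scientist per year—one a month.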

So the problem with research journals isn’t so much that journals aren’t accepting enough papers, as that too many people are submitting papers. Of course the real problem is that universities have outsourced their hiring decisions to journal editors. Rather than actually evaluating whether someone is a good teacher or a good researcher (or accepting that they can’t and hiring randomly), universities have trusted in the arbitrary decisions of research journals to decide whom they should hire.

But selectivity as a measure of quality means that journals have no reason not to support this system; they get their prestige precisely from the fact that scientists are so pressured to publish papers. The more papers get submitted, the better the journals look for rejecting them.

Another way of looking at all this is to think about what the process of acceptance or rejection entails. It is inherently a process of asymmetric information.

If we had perfect information, what would the acceptance rate of any school or journal be? 100%, regardless of quality. Only the applicants who knew they would get accepted would apply. So the total number of admitted students and accepted papers would be exactly the same, but all the acceptance rates would rise to 100%.

Perhaps that’s not realistic; but what if the application criteria were stricter? For instance, instead of asking you your GPA and SAT score, Harvard’s form could simply say: “Anyone with a GPA less than 4.0 or an SAT score less than 1500 need not apply.” That’s practically true anyway. But Harvard doesn’t have an incentive to say it out loud, because then applicants who know they can’t meet that standard won’t bother applying, and Harvard’s precious selectivity number will go down. (These are far from sufficient, by the way; I was valedictorian and had a 1590 on my SAT and still didn’t get in.)

There are other criteria they’d probably be even less willing to emphasize, but are no less significant: “If your family income is $20,000 or less, there is a 95% chance we won’t accept you.” “Other things equal, your odds of getting in are much better if you’re Black than if you’re Asian.”

For journals it might be more difficult to express the criteria clearly, but they could certainly do more than they do. Journals could more strictly delineate what kind of papers they publish: This one only for pure theory, that one only for empirical data, this one only for experimental results. They could choose more specific content niches rather than literally dozens of journals all being ostensibly about “economics in general” (the American Economic Review, the Quarterly Journal of Economics, the Journal of Political Economy, the Review of Economic Studies, the European Economic Review, the International Economic Review, Economic Inquiry… these are just the most prestigious). No doubt there would still have to be some sort of submission process and some rejections—but if they really wanted to reduce the number of submissions they could easily do so. The fact is, they want to have a large number of submissions that they can reject.

What this means is that rather than being a measure of quality, selectivity is primarily a measure of opaque criteria. It’s possible to imagine a world where nearly every school and every journal accepts less than 1% of applicants; this would occur if the criteria for acceptance were utterly unknown and everyone had to try hundreds of places before getting accepted.
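A toy simulation makes this concrete. Every number here is invented purely for illustration: the same fifty candidates get in under both scenarios, and only the visibility of the criteria changes the selectivity figure.

```python
import random

# Toy model: 1000 qualified candidates competing for 50 seats, ranked by
# some latent "fit" score. (All numbers are made up for illustration.)
random.seed(0)
candidates = [random.random() for _ in range(1000)]
seats = 50
cutoff = sorted(candidates, reverse=True)[seats - 1]  # 50th-best score

# Transparent criteria: only those who know they clear the bar apply.
transparent_applicants = [c for c in candidates if c >= cutoff]
print(f"Transparent: acceptance rate = {seats / len(transparent_applicants):.0%}")

# Opaque criteria: nobody knows where the bar is, so everyone applies.
print(f"Opaque: acceptance rate = {seats / len(candidates):.0%}")
```

The school is identical in both cases (same seats, same standards, same students admitted), but the opaque version looks twenty times more “selective”.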

Indeed, that’s not too dissimilar to how things currently work in the job market or the fiction publishing market. The average job opening receives a staggering 250 applications. In a given year, a typical literary agent receives 5000 submissions and accepts 10 clients—so about one in every 500.

For fiction writing I find this somewhat forgivable, if regrettable; the quality of a novel is a very difficult thing to assess, and to a large degree inherently subjective. I honestly have no idea what sort of submission guidelines one could put on an agency page to explain to authors what distinguishes a good novel from a bad one (or, not quite the same thing, a successful one from an unsuccessful one).

Indeed, it’s all the worse because a substantial proportion of authors don’t even follow the guidelines that agents do provide! The most common complaint I hear from agents and editors at writing conferences is authors not following their submission guidelines—such basic problems as submitting content from the wrong genre, not formatting it correctly, or having really egregious grammatical errors. Quite frankly I wish they’d shut up about it, because I wanted to hear what would actually improve my chances of getting published, not listen to them rant about the thousands of people who can’t be bothered to follow directions. (And I’m pretty sure that those people aren’t likely to go to writing conferences and listen to agents give panel discussions.)

But for the job market? It’s really not that hard to tell who is qualified for most jobs. If it isn’t something highly specialized, most people could probably do it, perhaps with a bit of training. If it is something highly specialized, you can restrict your search to people who already have the relevant education or training. In any case, having experience in that industry is obviously a plus. Beyond that, it gets much harder to assess quality—but also much less necessary. Basically anyone with an advanced degree in the relevant subject or a few years of experience at that job will probably do fine, and you’re wasting effort by trying to narrow the field further. If it is very hard to tell which candidate is better, that usually means that the candidates really aren’t that different.

To my knowledge, not a lot of employers or fiction publishers pride themselves on their selectivity. Indeed, many fiction publishers have a policy of simply refusing unsolicited submissions, relying upon literary agents to pre-filter their submissions for them. (Indeed, even many agents refuse unsolicited submissions—which raises the question: What is a debut author supposed to do?) It’s good that they don’t take pride in selectivity, for if they did—if Penguin Random House (or whatever that ludicrous all-absorbing conglomerate is calling itself these days; ah, what was it like in that bygone era, when anti-trust enforcement was actually a thing?) decided to start priding itself on its selectivity of 0.05% or whatever—then the already massively congested fiction industry would probably grind to a complete halt.

This means that by ranking schools and journals based on their selectivity, we are partly incentivizing quality, but mostly incentivizing opacity. The primary incentive is for them to attract as many applicants as possible, even knowing full well that they will reject most of these applicants. They don’t want to be too clear about what they will accept or reject, because that might discourage unqualified applicants from trying and thus reduce their selectivity rate. In terms of overall welfare, every rejected application is wasted human effort—but in terms of the institution’s selectivity rating, it’s a point in their favor.

The Asymmetry that Rules the World

JDN 2456921 PDT 13:30.

One single asymmetry underlies millions of problems and challenges the world has always faced. No, it’s not Christianity versus Islam (or atheism). No, it’s not the enormous disparities in wealth between the rich and the poor, though you’re getting warmer.

It is the asymmetry of information—the fundamental fact that what you know and what I know are not the same. If this seems so obvious as to be unworthy of comment, maybe you should tell that to the generations of economists who have assumed perfect information in all of their models.

It’s not clear that information asymmetry could ever go away—even in the utopian post-scarcity economy of the Culture, one of the few sacred rules is the sanctity of individual thought. The closest to an information-symmetric world I can think of is the Borg, and with that in mind we may ask whether we want such a thing after all. It could even be argued that total information symmetry is logically impossible, because once you make two individuals know and believe exactly the same things, you don’t have two individuals anymore, you just have one. (And then where do we draw the line? It’s that damn Ship of Theseus again—except of course the problem was never the ship, but defining the boundaries of Theseus himself.)

Right now you may be thinking: So what? Why is asymmetric information so important? Well, as I mentioned in an earlier post, the Myerson-Satterthwaite Theorem proves—mathematically proves, as certain as 2+2=4—that in the presence of asymmetric information, there is no market mechanism that guarantees Pareto-efficiency.

You can’t square that circle; because information is asymmetric, there’s just no way to make a free market that ensures Pareto-efficiency. This result is so strong that it actually makes you begin to wonder if we should just give up on economics entirely! If there’s no way we can possibly make a market that works, why bother at all?
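To get a feel for why, here is a sketch in the textbook bilateral-trade setting the theorem is usually stated in. This illustrates the flavor of the result, not its proof; the uniform distributions and the posted-price mechanism are my own simplifying choices:

```python
import random

# Textbook setup: buyer value v and seller cost c each drawn uniformly
# from [0, 1], each known only to its owner. Efficient trade means
# trading exactly when v > c.
random.seed(0)
N = 100_000
pairs = [(random.random(), random.random()) for _ in range(N)]

# Full-information benchmark: all gains from trade are realized.
ideal = sum(v - c for v, c in pairs if v > c)

# A simple honest (incentive-compatible) mechanism: a fixed posted price.
# Trade happens only when v >= p >= c, so some efficient trades are lost.
p = 0.5
realized = sum(v - c for v, c in pairs if v >= p >= c)

print(f"Share of possible gains from trade realized: {realized / ideal:.0%}")
```

A posted price of 0.5 captures only about three-quarters of the possible gains, and no fixed price does better in this setup; the theorem says that no incentive-compatible, individually rational, budget-balanced mechanism of any kind can capture all of them.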

But this is not the appropriate response. First of all, Pareto-efficiency is overrated; there are plenty of bad systems that are Pareto-efficient, and even some good systems that aren’t quite Pareto-efficient.

More importantly, even if there is no perfect market system, there clearly are better and worse market systems. Life is better here in the US than it is in Venezuela. Life in Sweden is arguably a bit better still (though not in every dimension). Life in Zambia and North Korea is absolutely horrific. Clearly there are better and worse ways to run a society, and the market system is a big part of that. The quality—and sometimes quantity—of life of billions of people can be made better or worse by the decisions we make in managing our economic system. Asymmetric information cannot be conquered, but it can be tamed.

This is actually a major subject for cognitive economics: How can we devise systems of regulation that minimize the damage done by asymmetric information? Akerlof’s Nobel was for his work on this subject, especially his famous paper “The Market for Lemons”, in which he used the market for used cars (the “lemons” of the title) to show how product quality regulations could increase efficiency. What he showed was, in short, that libertarian deregulation is stupid; removing regulations on product safety and quality doesn’t increase efficiency, it reduces it. (This is of course only true if the regulations are good ones; but despite protests from the supplement industry I really don’t see how “this bottle of pills must contain what it claims to contain” is an illegitimate regulation.)
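For anyone who hasn’t seen it, the unraveling argument at the heart of that paper can be run in a few lines. The parametrization below is a standard textbook simplification (the specific numbers are mine, not Akerlof’s):

```python
# "Market for Lemons" unraveling, in a standard textbook parametrization:
# car quality q is uniform on [0, 1]; a seller values a car at q, a buyer
# at 1.5 * q. With full information, every car would profitably trade.

# With asymmetric information, at any price p only sellers with q <= p
# are willing to sell, so the average quality on offer is p / 2, which is
# worth 1.5 * (p / 2) = 0.75 * p to buyers -- always less than p.
p = 1.0
for step in range(6):
    avg_quality = p / 2    # only cars with q <= p are offered
    p = 1.5 * avg_quality  # the most a rational buyer will now pay
    print(f"round {step + 1}: willingness to pay falls to {p:.3f}")
```

Each round, the low price drives the better cars out of the market, which lowers average quality, which lowers the price further, until essentially nothing trades; a credible quality rule (or enforceable warranty) is what breaks the cycle.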

Unfortunately, the way we currently write regulations leaves much to be desired: Basically, lobbyists pay hundreds of staffers to write hundreds of pages that no human being can be expected to read, then hand them to Congress with a wink and a reminder of last year’s campaign contributions, and Congress passes them without question. (Can you believe the US is one of the least corrupt governments in the world? Yup, that’s how bad it is out there.) As a result, we have a huge morass of regulations that nobody really understands, and there is a whole “industry” of people whose job it is to decode those regulations and use them to the advantage of whoever is paying them—lawyers. The amount of deadweight loss introduced into our economy is almost incalculable; if I had to guess, I’d have to put it somewhere in the trillions of dollars per year. At the very least, I can tell you that the $200 billion per year spent by corporations on litigation is all deadweight loss due to bad regulation. That is an industry that should not exist—I cannot stress this enough. We’ve become so accustomed to the idea that regulations are this complicated that people have to be paid six-figure salaries to understand them that we never stopped to think whether this made any sense. The US Constitution was originally printed on 6 pages.

The tax code should contain one formula for setting tax brackets with one or two parameters to adjust to circumstances, and then a list of maybe two dozen goods with special excise taxes for their externalities (like gasoline and tobacco). In reality it is over 70,000 pages.
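What might such a formula look like? One candidate from the economics literature is a two-parameter log-linear schedule (sometimes called the HSV tax function, after Heathcote, Storesletten, and Violante). The parameter values below are illustrative assumptions on my part, not a calibrated proposal:

```python
# A two-parameter progressive tax: net = scale * gross ** (1 - progressivity),
# with income measured in multiples of average income. Higher "progressivity"
# means more redistribution; "scale" sets the overall level of taxation.

def net_income(gross: float, scale: float = 0.9, progressivity: float = 0.15) -> float:
    """Net income under a log-linear tax schedule (illustrative parameters)."""
    return scale * gross ** (1 - progressivity)

for gross in (0.5, 1.0, 4.0, 20.0):
    net = net_income(gross)
    print(f"gross {gross:>4.1f}x average income: average tax rate {1 - net / gross:.0%}")
```

One line of math and two adjustable parameters produce a smoothly progressive schedule, no bracket tables required.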

Laws should be written with a clear and general intent, and then any weird cases can be resolved in court—because there will always be cases you couldn’t anticipate. Shakespeare was onto something when he wrote, “The first thing we do, let’s kill all the lawyers.” (I wouldn’t kill them; I’d fire them and make them find a job doing something genuinely useful, like engineering or management.)

All told, I think you could run an entire country with less than 100 pages of regulations. Furthermore, these should be 100 pages that are taught to every high school student, because after all, we’re supposed to be following them. How are we supposed to follow them if we don’t even know them? There’s a principle called ignorantia non excusat—“ignorance does not excuse”—which is frankly Kafkaesque. If you can be arrested for breaking a law you didn’t even know existed, in what sense can we call this a free society? (People make up strawman counterexamples: “Gee, officer, I didn’t know it was illegal to murder people!” But all you need is a standard of reasonable knowledge and due diligence, which courts already use to make decisions.)

So, in that sense, I absolutely favor deregulation. But my reasons are totally different from libertarians: I don’t want regulations to stop constraining businesses, I want regulations to be so simple and clear that no one can get around them. In the system I envision, you wouldn’t be able to sell fraudulent derivatives, because on page 3 it would clearly say that fraud is illegal and punishable in proportion to the amount of money involved.

But until that happens—and let’s face it, it’s gonna be a while—we’re stuck with these ridiculous regulations, and that introduces a whole new type of asymmetric information. This is the way that regulations can make our economy less efficient; they distort what we can do not just by making things illegal, but by making it so we don’t know what is illegal.

The wealthy and powerful can hire people to explain—or evade—the regulations, while the rest of us are forced to live with them. You’ve felt this in a small way if you’ve ever gotten a parking ticket and didn’t know why. Asymmetric information strikes again.