Why do we need “publish or perish”?

June 23 JDN 2458658

This question may seem a bit self-serving, coming from a grad student who is struggling to get his first paper published in a peer-reviewed journal. But given the deep structural flaws in the academic publishing system, I think it’s worth taking a step back to ask just what peer-reviewed journals are supposed to be accomplishing.

The argument is often made that research journals are a way of sharing knowledge. If this is their goal, they have utterly and totally failed. Most papers are read by only a handful of people. When scientists want to learn about the research their colleagues are doing, they don’t read papers; they go to conferences to listen to presentations and look at posters. The way papers are written, they are often all but incomprehensible to anyone outside a very narrow subfield. When published by proprietary journals, papers are often hidden behind paywalls and accessible only through universities. As a knowledge-sharing mechanism, the peer-reviewed journal is a complete failure.

But academic publishing serves another function, which in practice is its only real function: Peer-reviewed publications are a method of evaluation. They are a way of deciding which researchers are good enough to be hired, get tenure, and receive grants. Having peer-reviewed publications—particularly in “top journals”, however that is defined within a given field—is a key metric that universities and grant agencies use to decide which researchers are worth spending on. Indeed, in some cases it seems to be utterly decisive.

We should be honest about this: This is an absolutely necessary function. It is uncomfortable to think about the fact that we must exclude a large proportion of competent, qualified people from being hired or getting tenure in academia, but given the large number of candidates and the small amounts of funding available, this is inevitable. We can’t hire everyone who would probably be good enough. We can only hire a few, and it makes sense to want those few to be the best. (Also, don’t fret too much: Even if you don’t make it into academia, getting a PhD is still a profitable investment. Economists and natural scientists do the best, unsurprisingly; but even humanities PhDs are still generally worth it. Median annual earnings of $77,000 is nothing to sneeze at: US median household income is only about $60,000. Humanities graduates only seem poor in relation to STEM or professional graduates; they’re still rich compared to everyone else.)

But I think it’s worth asking whether the peer review system is actually selecting the best researchers, or even the best research. Note that these are not the same question: The best research done in graduate school might not necessarily reflect the best long-run career trajectory for a researcher. A lot of very important, very difficult questions in science are just not the sort of thing you can get a convincing answer to in a couple of years, and so someone who wants to work on the really big problems may actually have a harder time getting published in graduate school or as a junior faculty member, even though ultimately work on the big problems is what’s most important for society. But I’m sure there’s a positive correlation overall: The kind of person who is going to do better research later is probably, other things equal, going to do better research right now.

Yet even accepting the fact that all we have to go on in assessing what you’ll eventually do is what you have already done, it’s not clear that the process of publishing in a peer-reviewed journal is a particularly good method of assessing the quality of research. Some really terrible research has gotten published in journals—I’m gonna pick on Daryl Bem, because he’s the worst—and a lot of really good research never made it into journals and is languishing on old computer hard drives. (The term “file drawer problem” is about 40 years obsolete; though to be fair, it was in fact coined about 40 years ago.)

That by itself doesn’t actually prove that journals are a bad mechanism. Even a good mechanism, applied to a difficult problem, is going to make some errors. But there are a lot of things about academic publishing, at least as currently constituted, that plainly aren’t features of a good mechanism: for-profit publishers, unpaid reviewers, the lack of double-blind review, and above all, the obsession with “statistical significance” that leads to p-hacking.

Each of these problems I’ve listed has a simple fix (though whether the powers that be actually are willing to implement it is a different question: Questions of policy are often much easier to solve than problems of politics). But maybe we should ask whether the system is even worth fixing, or if it should simply be replaced entirely.

While we’re at it, let’s talk about the academic tenure system, because the peer-review system is largely an evaluation mechanism for the academic tenure system. Publishing in top journals is what decides whether you get tenure. The problem with “publish or perish” isn’t the “publish”; it’s the “perish”. Do we even need an academic tenure system?

The usual argument for academic tenure concerns academic freedom: Tenured professors have job security, so they can afford to say things that may be controversial or embarrassing to the university. But the way the tenure system works is that you only have this job security after going through a long and painful gauntlet of job insecurity. You have to spend several years prostrating yourself to the elders of your field before you can get inducted into their ranks and finally be secure.

Of course, job insecurity is the norm, particularly in the United States: Most employment in the US is “at-will”, meaning essentially that your employer can fire you for any reason at any time. There are specifically illegal reasons for firing (like gender, race, and religion); but it’s extremely hard to prove wrongful termination when all the employer needs to say is, “They didn’t do a good job” or “They weren’t a team player”. So I can understand how it must feel strange for a private-sector worker who could be fired at any time to see academics complain about the rigors of the tenure system.

But there are some important differences here: The academic job market is not nearly as competitive as the private sector job market. There simply aren’t that many prestigious universities, and within each university there are only a small number of positions to fill. As a result, universities have an enormous amount of power over their faculty, which is why they can get away with paying adjuncts salaries that amount to less than minimum wage. (People with graduate degrees! Making less than minimum wage!) At least in most private-sector labor markets in the US, the market is competitive enough that if you get fired, you can probably get hired again somewhere else. In academia that’s not so clear.

I think what bothers me the most about the tenure system is the hierarchical structure: There is a very sharp divide between those who have tenure, those who don’t have it but can get it (“tenure-track”), and those who can’t get it. The lines between professor, associate professor, assistant professor, lecturer, and adjunct are quite sharp. The higher up you are, the more job security you have, the more money you make, and generally the better your working conditions are overall. Much like what makes graduate school so stressful, there are a series of high-stakes checkpoints you need to get through in order to rise in the ranks. And several of those checkpoints are based largely, if not entirely, on publication in peer-reviewed journals.

In fact, we are probably stressing ourselves out more than we need to. I certainly did for my advancement to candidacy; I spent two weeks at such a high stress level I was getting migraines every single day (clearly on the wrong side of the Yerkes-Dodson curve), only to completely breeze through the exam.

I think I might need to put this up on a wall somewhere to remind myself:

Most grad students complete their degrees, and most assistant professors get tenure.

The real filters are admissions and hiring: Most applications to grad school are rejected (though probably most graduate students are ultimately accepted somewhere—I couldn’t find any good data on that in a quick search), and most PhD graduates do not get hired on the tenure track. But if you can make it through those two gauntlets, you can probably make it through the rest.

In our current system, publications are a way to filter people, because the number of people who want to become professors is much higher than the number of professor positions available. But as an economist, this raises a very big question: Why aren’t salaries falling?

You see, that’s how markets are supposed to work: When supply exceeds demand, the price is supposed to fall until the market clears. Lower salaries would both open up more slots at universities (you can hire more faculty with the same level of funding) and shift some candidates into other careers (if you can get paid a lot better elsewhere, academia may not seem so attractive). Eventually there should be a salary point at which demand equals supply. So why aren’t we reaching it?

Well, it comes back to that tenure system. We can’t lower the salaries of tenured faculty, not without a total upheaval of the current system. So instead what actually happens is that universities switch to using adjuncts, who have very low salaries indeed. If there were no tenure, would all faculty get paid like adjuncts? No, they wouldn’t, because universities would have all that money they’re currently paying to tenured faculty, and all the talent currently locked up in tenured positions would be on the market, driving up the prevailing salary. What would happen if we eliminated tenure is not that all salaries would fall to adjunct level; rather, salaries would all adjust to some intermediate level between what adjuncts currently make and what tenured professors currently make.

What would the new salary be, exactly? Answering that would require a detailed model of the supply and demand elasticities, so I can’t tell you without starting a whole new research paper. But a back-of-the-envelope calculation suggests it would land near the overall current median faculty salary, somewhere around $75,000. This is a lot less than some professors make, but it’s also a lot more than what adjuncts make, and it’s a pretty good living overall.
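
To make the logic concrete, here is a minimal sketch of that back-of-the-envelope reasoning, assuming made-up linear supply and demand curves; every number below is a hypothetical illustration chosen so the clearing salary lands between adjunct and tenured pay, not an estimate of the real elasticities.

```python
# Minimal sketch of a market-clearing faculty salary under linear
# supply and demand. All numbers are hypothetical illustrations,
# not estimates of the actual elasticities.

def candidates_willing(w):
    """Hypothetical supply: candidates willing to work at salary w ($k/year)."""
    return 1.0 * w

def positions_funded(w):
    """Hypothetical demand: positions a fixed budget funds at salary w ($k/year)."""
    return 150.0 - 1.0 * w

# The clearing salary solves supply = demand: w = 150 - w, so w* = 75.
w_star = 150.0 / 2.0
print(f"Clearing salary: ${w_star:.0f}k per year")
assert abs(candidates_willing(w_star) - positions_funded(w_star)) < 1e-9
```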

If the salary for professors fell, the pool of candidates would decrease, and we wouldn’t need such harsh filtering mechanisms. We might decide we don’t need a strict evaluation system at all, and since the knowledge-sharing function of journals is much better served by other means, we could probably get rid of them altogether.

Of course, who am I kidding? That’s not going to happen. The people who make these rules succeeded in the current system. They are the ones who stand to lose high salaries and job security under a reform policy. They like things just the way they are.

Green New Deal Part 3: Guaranteeing education and healthcare is easy—why aren’t we doing it?

Apr 21 JDN 2458595

Last week was one of the “hard parts” of the Green New Deal. Today it’s back to one of the “easy parts”: Guaranteed education and healthcare.

“Providing all people of the United States with – (i) high-quality health care; […]

“Providing resources, training, and high-quality education, including higher education, to all people of the United States.”

Many Americans seem to think that providing universal healthcare would be prohibitively expensive. In fact, it would have literally negative net cost.

The US currently has the most bloated, expensive, inefficient healthcare system in the entire world. We spend almost $10,000 per person per year on healthcare, and get outcomes no better than France or the UK, where they spend less than $5,000.

In fact, our public healthcare expenditures alone are currently higher than almost any other country’s total. Our private expenditures are therefore pure waste; all they are doing is providing returns for the shareholders of corporations. If we were to simply copy the UK National Health Service and spend money in exactly the same way as they do, we would spend the same amount in public funds and almost nothing in private funds—and the UK has a higher mean lifespan than the US.

This is absolutely a no-brainer. Burn the whole system of private insurance down. Copy a healthcare system that actually works, like they use in every other First World country.

It wouldn’t even be that complicated to implement: We already have a single-payer healthcare system in the US; it’s called Medicare. Currently only old people get it; but old people use the most healthcare anyway. Hence, Medicare for All: Just lower the eligibility age for Medicare to 18 (if not zero). In the short run there would be additional costs for the transition, but in the long run we would save mind-boggling amounts of money, all while improving healthcare outcomes and extending our lifespans. Current estimates say that the net savings of Medicare for All would be about $5 trillion over the next 10 years. We can afford this. Indeed, the question is, as it was for infrastructure: How can we afford not to do this?

Isn’t this socialism? Yeah, I suppose it is. But healthcare is one of the few things that socialist countries consistently do extremely well. Cuba is a socialist country—a real socialist country, not a social democratic welfare state like Norway but a genuinely authoritarian centrally-planned economy. Cuba’s per-capita GDP PPP is a third of ours. Yet their life expectancy is actually higher than ours, because their healthcare system is just that good. Their per-capita healthcare spending is one-fourth of ours, and their health outcomes are better. So yeah, let’s be socialist in our healthcare. Socialists seem really good at healthcare.

And this makes sense, if you think about it. Doctors can do their jobs a lot better when they’re focused on just treating everyone who needs help, rather than arguing with insurance companies over what should and shouldn’t be covered. Preventative medicine is extremely cost-effective, yet it’s usually the first thing that people skimp on when trying to save money on health insurance. A variety of public health measures (such as vaccination and air quality regulation) are extremely cost-effective, but they are public goods that the private sector would not pay for by itself.

It’s not as if healthcare was ever really a competitive market anyway: When you get sick or injured, do you shop around for the best or cheapest hospital? How would you even go about that, when hospitals don’t even post most of their prices, and the prices they do post are often wildly different from what you’ll actually pay?

The only serious argument I’ve heard against single-payer healthcare is a moral one: “Why should I have to pay for other people’s healthcare?” Well, I guess, because… you’re a human being? You should care about other human beings, and not want them to suffer and die from easily treatable diseases?

I don’t know how to explain to you that you should care about other people.

Single-payer healthcare is not only affordable: It would be cheaper and better than what we are currently doing. (In fact, almost anything would be cheaper and better than what we are currently doing—Obamacare was an improvement over the previous mess, but it’s still a mess.)

What about public education? Well, we already have that up to the high school level, and it works quite well.

Contrary to popular belief, the average public high school has better outcomes in terms of test scores and college placements than the average private high school. There are some elite private schools that do better, but they are extraordinarily expensive and they self-select only the best students. Public schools have to take all students, and they have a limited budget; but they have high quality standards and they require their teachers to be certified.

The flaws in our public school system are largely from it being not public enough, which is to say that schools are funded by their local property taxes instead of having their costs equally shared across whole states. This gives them the same basic problem as private schools: Rich kids get better schools.

If we removed that inequality, our educational outcomes would probably be among the best in the world—indeed, in our most well-funded school districts, they are. The state of Massachusetts, which actually funds its public schools equally and well, gets international test scores just as good as the supposedly “superior” educational systems of Asian countries. In fact, this comparison is probably even unfair to Massachusetts, as we know that China specifically selects the regions with the best students to take these international tests. Massachusetts is the best the US has to offer, but Shanghai is also the best China has to offer, so it’s only fair to compare apples to apples.

Public education has benefits for our whole society. We want to have a population of citizens, workers, and consumers who are well-educated. There are enormous benefits of primary and secondary education in terms of reducing poverty, improving public health, and increasing economic growth.

So there’s my impassioned argument for why we should continue to support free, universal public education up to high school.

When it comes to college, I can’t be quite so enthusiastic. While there are societal benefits of college education, most of the benefits of college accrue to the individuals who go to college themselves.

The median weekly income of someone with a high school diploma is about $730; with a bachelor’s degree it rises to about $1,200; and with a doctoral or professional degree it exceeds $1,800. Higher education also greatly reduces your risk of being unemployed: While about 4% of the general labor force is unemployed, only 1.5% of people with doctorates or professional degrees are. Add that up over all the weeks of your life, and it’s a lot of money.

The net present value of a college education has been estimated at approximately $1 million. This result is quite sensitive to the choice of discount rate; at a higher discount rate you can get the net present value as “low” as $250,000.

With this in mind, the fact that the median student loan debt for a college graduate is about $30,000 doesn’t sound so terrible, does it? You’re taking out a loan for $30,000 to get something that will earn you between $250,000 and $1 million over the course of your life.
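
To see how sensitive that net present value is to the discount rate, here is a rough sketch (my own arithmetic, not the methodology behind the cited estimates): it discounts the bachelor’s-versus-high-school weekly premium from above over a hypothetical 45-year career.

```python
# Rough NPV of the college earnings premium. The weekly incomes come
# from the text; the 45-year career and the discount rates are
# hypothetical assumptions for illustration.

weekly_premium = 1200 - 730           # bachelor's vs. high school, $/week
annual_premium = weekly_premium * 52  # about $24,440/year

def npv(annual, years, r):
    """Present value of a constant annual payment for `years` years at rate r."""
    if r == 0:
        return annual * years
    return annual * (1 - (1 + r) ** -years) / r

for r in (0.00, 0.02, 0.05, 0.08):
    print(f"discount rate {r:.0%}: NPV = ${npv(annual_premium, 45, r):,.0f}")

# The output spans roughly $1.1 million (at 0%) down to about $300,000
# (at 8%), bracketing the $1 million and $250,000 figures quoted above.
```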

There is some evidence that having student loans delays homeownership; but this is a problem with our mortgage system, not our education system. It’s mainly the inability to finance a down payment that prevents people from buying homes. We should implement a system of block grants for first-time homebuyers that gives them a chunk of money to make a down payment, perhaps $50,000. This would cost about as much as the mortgage interest tax deduction, which mainly benefits the upper-middle class.

Higher education does have societal benefits as well. Perhaps the starkest I’ve noticed is how categorically higher education decided people’s votes on Donald Trump: Counties with high rates of college education almost all voted for Clinton, and counties with low rates of college education almost all voted for Trump. This was true even controlling for income and a lot of other demographic factors. Only authoritarianism, sexism, and racism were better predictors of voting for Trump—and those could very well be mediating variables, if education reduces such attitudes.

If indeed it’s true that higher education makes people less sexist, less racist, less authoritarian, and overall better citizens, then it would be worth every penny to provide universal free college.

But it’s worth noting that even countries like Germany and Sweden, which ostensibly do that, don’t really do that: While college tuition is free for Swedish citizens, and Germany provides free college for students of any nationality, the proportion of people with bachelor’s degrees in Sweden and Germany is actually lower than in the United States. In Sweden the gap largely disappears if you restrict to younger cohorts—but in Germany it’s still there.

Indeed, from where I’m sitting, “universal free college” looks an awful lot like “the lower-middle class pays for the upper-middle class to go to college”. Social class is still a strong predictor of education level in Sweden. Among OECD countries, education seems to be best at promoting upward mobility in Australia, and average college tuition in Australia is actually higher than average college tuition in the US (yes, even adjusting for currency exchange: Australian dollars are worth only slightly less than US dollars).

What does Australia do? They have a really good student loan system. You don’t owe any payments until your annual income reaches about $40,000, and the loans are subsidized to be interest-free. Once you do owe payments, the debt is repaid at a rate proportional to your income—so effectively it’s not a debt at all but an equity stake.
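
Here is a sketch of that income-contingent structure. The $40,000 threshold comes from the text, but the flat 5% repayment rate is a made-up stand-in; Australia’s actual schedule is graduated by income bracket.

```python
# Income-contingent loan repayment, sketched. The $40,000 threshold comes
# from the text; the flat 5% rate is hypothetical (Australia's real
# repayment schedule is graduated by income bracket).

THRESHOLD = 40_000
RATE = 0.05  # hypothetical flat repayment rate

def annual_repayment(income, balance):
    """Pay nothing below the threshold; above it, pay a share of income,
    but never more than the remaining balance."""
    if income < THRESHOLD:
        return 0.0
    return min(RATE * income, balance)

for income in (25_000, 45_000, 80_000):
    print(f"income ${income:,}: repay ${annual_repayment(income, 30_000):,.0f}")
```

The key property is that payments scale with income rather than with the size of the debt, which is what makes it behave like an equity stake.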

In the US, students have been taking the desperate (and very cyberpunk) route of selling literal equity stakes in their education to Wall Street banks; this is a terrible idea for a hundred reasons. But having the government hold something like an equity stake in students makes a lot of sense.

Because of the subsidies and generous repayment plans, the Australian government loses money on their student loan system, but so what? In order to implement universal free college, they would have spent an awful lot more than they are losing now. This way, the losses fall specifically on students who got a lot of education but never managed to raise their income high enough—which means the government is actually incentivized to improve the quality of education or job-matching.

The cost of universal free college is considerable: That $1.3 trillion currently owed as student loans would be additional government debt or tax liability instead. Is this utterly unaffordable? No. But it’s not trivial either. We’re talking about roughly $60 billion per year in additional government spending, a bit less than what we currently spend on food stamps. An expenditure like that should have a large public benefit (as food stamps absolutely, definitely do!); I’m not convinced that free college would have such a benefit.

It would benefit me personally enormously: I currently owe over $100,000 in debt (about half from my undergrad and half from my first master’s). But I’m fairly privileged. Once I finally make it through this PhD, I can expect to make something like $100,000 per year until I retire. I’m not sure that benefiting people like me should be a major goal of public policy.

That said, I don’t think universal free college is a terrible policy. Done well, it could be a good thing. But it isn’t the no-brainer that single-payer healthcare is. We can still make sure that students are not overburdened by debt without making college tuition actually free.

How do you change a paradigm?

Mar 3 JDN 2458546

I recently attended the Institute for New Economic Thinking (INET) Young Scholars Initiative (YSI) North American Regional Convening (what a mouthful!). I didn’t present, so I couldn’t get funding for a hotel; instead I commuted to LA each day. That was miserable; if I ever go again, it will be with funding.

The highlight of the conference was George Akerlof’s keynote, as I knew it would be from the start. The swag bag labeled “Rebel Without a Paradigm” was also pretty great (though not as great as the “Totes Bi” totes at the Human Rights Campaign’s Time to THRIVE conference).

The rest of the conference was… a bit strange, to be honest. They had a lot of slightly cheesy interactive activities and exhibits; the conference was targeted at grad students, but some of these would have drawn groans from my more jaded undergrads (and “jaded grad student” is a redundancy). The poster session was pathetically small; I think there were literally only three posters. (Had I known in time for the deadline, I could surely have submitted a poster.)

The theme of the conference was challenging the neoclassical paradigm. This was really the only unifying principle. So we had quite an eclectic mix of presenters: There were a few behavioral economists (like Akerlof himself), and some econophysicists and complexity theorists, but mostly the conference was filled with a wide variety of heterodox theorists, ranging all the way from Austrian to Marxist. Also sprinkled in were a few outright cranks, whose ideas were just total nonsense; fortunately these were relatively rare.

And what really struck me about listening to the heterodox theorists was how mainstream it made me feel. I went to a session on development economics, expecting randomized controlled trials of basic income and maybe some political economy game theory, and instead saw several presentations of neo-Marxist postcolonial theory. At the AEA conference I felt like a radical firebrand; at the YSI conference I felt like a holdout of the ancien régime. Is this what it feels like to push the envelope without leaping outside it?

The whole atmosphere of the conference was one of “Why won’t they listen to us!?” and I couldn’t help but feel like I kind of knew why. All this heterodox theory isn’t testable. It isn’t useful. It doesn’t solve the problem. Even if you are entirely correct that Latin America is poor because of colonial and neocolonial exploitation by the West (and I’m fairly certain that you’re not; standard of living under the Mexica wasn’t so great, you know), that doesn’t tell me how to feed starving children in Nicaragua.

Indeed, I think it’s notable that the one Nobel Laureate they could find to speak for us was a behavioral economist. Behavioral economics has actually managed to penetrate into the mainstream somewhat. Not enough, not nearly quickly enough, to be sure—but it’s happening. Why is it happening? Because behavioral economics is testable, it’s useful, and it solves problems.

Indeed, behavioral economics is more testable than most neoclassical economics: We run lab experiments while they’re adding yet another friction or shock to the never-ending DSGE quagmire.

And we’ve already managed to solve some real policy problems this way, like Alvin Roth’s kidney matching system and Richard Thaler’s “Save More Tomorrow” program.

The (limited) success of behavioral economics came not because we continued to batter at the gates of the old paradigm demanding to be let in, but because we tied ourselves to the methodology of hard science and gathered irrefutable empirical data. We didn’t get as far as we have by complaining that economics is too much like physics; we actually made it more like physics. Physicists do experiments. They make sharp, testable predictions. They refute their hypotheses. And now, so do we.

That said, Akerlof was right when he pointed out that the insistence upon empirical precision has limited the scope of questions we are able to ask, and kept us from addressing some of the really vital economic problems in the world. And neoclassical theory is too narrow; in particular, the ongoing insistence that behavior must be modeled as perfectly rational and completely selfish is infuriating. That model has clearly failed at this point, and it’s time for something new.

So I do think there is some space for heterodox theory in economics. But there actually seems to be no shortage of heterodox theory; it’s easy to come up with ideas that are different from the mainstream. What we actually need is more ways to constrain theory with empirical evidence. The goal must be to have theory that actually predicts and explains the world better than neoclassical theory does—and that’s a higher bar than you might imagine. Neoclassical theory isn’t an abject failure; in fact, if we’d just followed the standard Keynesian models in the Great Recession, we would have recovered much faster. Most of this neo-Marxist theory struck me as not even wrong: the ideas were flexible enough that almost any observed outcome could be fit into them.

Galileo and Einstein didn’t just come up with new ideas and complain that no one listened to them. They developed detailed, mathematically precise models that could be experimentally tested—and when they were tested, they worked better than the old theory. That is the way to change a paradigm: Replace it with one that you can prove is better.

Impostor Syndrome

Feb 24 JDN 2458539

You probably have experienced Impostor Syndrome, even if you didn’t know the word for it. (Studies estimate that over 70% of the general population, and virtually 100% of graduate students, have experienced it at least once.)

Impostor Syndrome feels like this:

All your life you’ve been building up accomplishments, and people kept praising you for them, but those things were easy, or you’ve just gotten lucky so far. Everyone seems to think you are highly competent, but you know better: Now that you are faced with something that’s actually hard, you can’t do it. You’re not sure you’ll ever be able to do it. You’re scared to try because you know you’ll fail. And now you fear that at any moment, your whole house of cards is going to come crashing down, and everyone will see what a fraud and a failure you truly are.

The magnitude of that feeling varies: For most people it can be a fleeting experience, quickly overcome. But for some it is chronic, overwhelming, and debilitating.

It may surprise you that I am in the latter category. A few years ago, I went to a seminar on Impostor Syndrome, and they played a “Bingo” game where you collect spaces by exhibiting symptoms: I won.

In a group of about two dozen students who were there specifically because they were worried about Impostor Syndrome, I exhibited the most symptoms. On the Clance Impostor Phenomenon Scale, I score 90%. Anything above 60% is considered diagnostic, though there is no DSM disorder specifically for Impostor Syndrome.

One major cause of Impostor Syndrome is being an underrepresented minority. Women, people of color, and queer people are at particularly high risk. While men are less likely to experience Impostor Syndrome, we tend to experience it more intensely when we do.

Aside from being a graduate student, which is basically coextensive with Impostor Syndrome, being a writer seems to be one of the strongest predictors of Impostor Syndrome. Megan McArdle of The Atlantic theorizes that it’s because we were too good in English class, or, more precisely, that English class was much too easy for us. We came to associate our feelings of competence and accomplishment with tasks simply coming so easily we barely even had to try.

But I think there’s a bigger reason, which is that writers face rejection letters. So many rejection letters. 90% of novels are rejected at the query stage; then a further 80% are rejected at the manuscript review stage; this means that a given query letter has about a 2% chance of acceptance. So even if you are doing everything right and will eventually get published, you can expect on average 50 rejection letters. I collected a little over 20 and ran out of steam, my will and self-confidence utterly crushed. But statistically I should have continued for at least 30 more. In fact, it’s worse than that; you should always expect 50 more, up until you finally get accepted—this is a memoryless distribution. And if always having to expect 50 more rejection letters sounds utterly soul-crushing, that’s because it is.
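
The arithmetic here is just a geometric distribution; a small sketch, assuming each submission is an independent draw at the rates quoted above:

```python
# Expected rejections under the quoted rates, assuming each submission
# is an independent draw (a simplification, not a claim about publishing).

p_query = 0.10              # 90% of queries rejected
p_manuscript = 0.20         # a further 80% rejected at manuscript review
p = p_query * p_manuscript  # ~2% chance a given query is ultimately accepted

expected_submissions = 1 / p  # mean of a geometric distribution: 50
print(f"P(accept) = {p:.0%}; expected submissions = {expected_submissions:.0f}")

# Memorylessness: after k rejections, the expected number of FURTHER
# submissions is still 1/p = 50, no matter how large k has gotten.
```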

And that’s something fiction writing has in common with academic research. Top journals in economics have acceptance rates between 3% and 8%. I’d say this means you need to submit between 13 and 34 times to get into a top journal, but that’s nonsense; there are only 5 top journals in economics. So it’s more accurate to say that with any given paper, no matter how many times you submit, you only have about a 30% chance of getting into a top journal. After that, your submissions will necessarily not be to top journals. There are enough good second-tier journals that you can probably get into one eventually—after submitting about a dozen times. And maybe a hiring or tenure committee will care about a second-tier publication. It might count for something. But it’s those top 5 journals that really matter. If for every paper you have in JEBO or JPubE, another candidate has a paper in AER or JPE, they’re going to hire the other candidate. Your paper could use better methodology on a more important question, and be better written—but if for whatever reason AER didn’t like it, that’s what will decide the direction of your career.
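
And the top-five arithmetic, under the same independence assumption (an illustration, not a model of how editors actually decide):

```python
# Chance of getting a given paper into at least one of the 5 top journals,
# assuming independent decisions at the quoted per-journal acceptance rates.

def p_at_least_one(rate, n_journals=5):
    return 1 - (1 - rate) ** n_journals

print(f"at 3% each: {p_at_least_one(0.03):.0%}")  # about 14%
print(f"at 8% each: {p_at_least_one(0.08):.0%}")  # about 34%, the 'about 30%' above
```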

If I were trying to design a system that would inflict maximal Impostor Syndrome, I’m not sure I could do much better than this. I guess I’d probably have just one top journal instead of five, and I’d make the acceptance rate 1% instead of 3%. But this whole process of high-stakes checkpoints and low chances of getting on a tenure track that will by no means guarantee actually getting tenure? That’s already quite well-optimized. It’s really a brilliant design, if that’s the objective. You select a bunch of people who have experienced nothing but high achievement their whole lives. If they ever did have low achievement, for whatever reason (could be no fault of their own, you don’t care), you’d exclude them from the start. You give them a series of intensely difficult tasks—tasks literally no one else has ever done that may not even be possible—with minimal support and utterly irrelevant and useless “training”, and evaluate them constantly at extremely high stakes. And then at the end you give them an almost negligible chance of success, and force even those who do eventually succeed to go through multiple steps of failure and rejection beforehand. You really maximize the contrast between how long a streak of uninterrupted successes they must have had in order to be selected in the first place, and how many rejections they have to go through in order to make it to the next level.

(By the way, it’s not that there isn’t enough teaching and research for all these PhD graduates; that’s what universities want you to think. It’s that universities are refusing to open up tenure-track positions and instead relying upon adjuncts and lecturers. And the obvious reason for that is to save money.)

The real question is why we let them put us through this. I’m wondering that more and more every day.

I believe in science. I believe I could make a real contribution to human knowledge—at least, I think I still believe that. But I don’t know how much longer I can stand this gauntlet of constant evaluation and rejection.

I am going through a particularly severe episode of Impostor Syndrome at the moment. I am at an impasse in my third-year research paper, which is supposed to be done by the end of the summer. My dissertation committee wants me to revise my second-year paper to submit to journals, and I just… can’t do it. I have asked for help from multiple sources, and received conflicting opinions. At this point I can’t even bring myself to work on it.

I’ve been aiming for a career as an academic research scientist for as long as I can remember, and everyone tells me that this is what I should do and where I belong—but I don’t really feel like I belong anymore. I don’t know if I have a thick enough skin to get through all these layers of evaluation and rejection. Everyone tells me I’m good at this, but I don’t feel like I am. It doesn’t come easily the way I had come to expect things to come easily. And after I’ve done the research, written the paper—the stuff that I was told was the real work—there are all these extra steps that are actually so much harder, so much more painful—submitting to journals and being rejected over, and over, and over again, practically watching the graph of my career prospects plummet before my eyes.

I think that what really triggered my Impostor Syndrome was finally encountering things I’m not actually good at. It sounds arrogant when I say it, but the truth is, I had never had anything in my entire academic experience that felt genuinely difficult. There were things that were tedious, or time-consuming; there were other barriers I had to deal with, like migraines, depression, and the influenza pandemic. But there was never any actual educational content I had difficulty absorbing and understanding. Maybe if I had, I would be more prepared for this. But of course, if that were the case, they’d never let me into grad school at all. Just to be here, I had to have an uninterrupted streak of easy success after easy success—so now that it’s finally hard, I feel completely blindsided. I’m finally genuinely challenged by something academic, and I can’t handle it. There’s math I don’t know how to do; I’ve never felt this way before.

I know that part of the problem is internal: This is my own mental illness talking. But that isn’t much comfort. Knowing that the problem is me doesn’t exactly reduce the feeling of being a fraud and a failure. And even a problem that is 100% inside my own brain isn’t necessarily a problem I can fix. (I’ve had migraines in my brain for the last 18 years; I still haven’t fixed them.)

There is so much that the academic community could do so easily to make this problem better. Stop using the top 5 journals as a metric, and just look at overall publication rates. Referee publications double-blind, so that grad students know their papers will actually be read and taken seriously, rather than thrown out as soon as the referee sees they don’t already have tenure. Or stop obsessing over publications altogether, and look at the detailed content of people’s work instead of maximizing the incentive to keep putting out papers that nobody will ever actually read. Open up more tenure-track faculty positions, and stop hiring lecturers and adjuncts. If you have to save money, do it by cutting salaries for administrators and athletic coaches. And stop evaluating constantly. Get rid of qualifying exams. Get rid of advancement exams. Start from the very beginning of grad school by assigning a mentor to each student and getting directly into working on a dissertation. Don’t make the applied econometrics researchers take exams in macro theory. Don’t make the empirical macroeconomists study game theory. Focus and customize coursework specifically on what grad students will actually need for the research they want to do, and don’t use grades at all. Remove the evaluative element completely. We should feel as though we are allowed to not know things. We should feel as though we are allowed to get things wrong. You are supposed to be teaching us, and you don’t seem to know how to do that; you just evaluate us constantly and expect us to learn on our own.

But none of those changes are going to happen. Certainly not in time for me, and probably not ever, because people like me who want the system to change are precisely the people the current system seems designed to weed out. It’s the ones who make it through the gauntlet, and convince themselves that it was their own brilliance and hard work that carried them through (not luck, not being a White straight upper-middle-class cis male, not even perseverance and resilience in the face of rejection), who end up making the policies for the next generation.

Because those who should be fixing the problem refuse to do so, that leaves the rest of us. What can we do to relieve Impostor Syndrome in ourselves or those around us?

You’d be right to take any advice I give now with a grain of salt; it’s obviously not working that well on me. But maybe it can help someone else. (And again I realize that “Don’t listen to me, I have no idea what I’m talking about” is exactly what someone with Impostor Syndrome would say.)

One of the standard techniques for dealing with Impostor Syndrome is called self-compassion. The idea is to be as forgiving to yourself as you would be to someone you love. I’ve never been good at this. I always hold myself to a much higher standard than I would hold anyone else—higher even than I would allow anyone to impose on someone else. After being told my whole life how brilliant and special I am, I internalized it in perhaps the most toxic way possible: I set my bar higher. Things that other people would count as great success I count as catastrophic failure. “Good enough” is never good enough.

Another good suggestion is to change your comparison set: Don’t compare yourself just to faculty or other grad students, compare yourself to the population as a whole. Others will tell you to stop comparing altogether, but I don’t know if that’s even possible in a capitalist labor market.

I’ve also had people encourage me to focus on my core motivations, remind myself what really matters and why I want to be a scientist in the first place. But it can be hard to keep my eye on that prize. Sometimes I wonder if I’ll ever be able to do the things I originally set out to do, or if the rest of my life will just be trying to fit other people’s molds and being rejected over and over again.

I think the best advice I’ve ever received on dealing with Impostor Syndrome was actually this: “Realize that nobody knows what they’re doing.” The people who are the very best at things… really aren’t all that good at them. If you look around carefully, the evidence of incompetence is everywhere. Look at all the books that get published that weren’t worth writing, all the songs that get recorded that weren’t worth singing. Think about the easily-broken electronic gadgets, the glitchy operating systems, the zero-day exploits, the data breaches, the traffic lights that are timed so badly they make the traffic jams worse. Remember that the leading cause of airplane crashes is pilot error, that medical mistakes are the third-leading cause of death in the United States. Think about every vending machine that ate your dollar, every time your cable went out in a storm. All those people around you who look like they are competent and successful? They aren’t. They are just as confused and ignorant and clumsy as you are. Most of them also feel like frauds, at least some of the time.

My first AEA conference

Jan 13 JDN 2458497

The last couple of weeks have been a bit of a whirlwind for me. I submitted a grant proposal, I have another, much more complicated proposal due next week, I submitted a paper to a journal, and somewhere in there I went to the AEA conference for the first time.

Going to the conference made it quite clear that the race and gender disparities in economics are real: The vast majority of the attendees were middle-aged White males, all wearing one of two outfits: Sportcoat and khakis, or suit and tie. (And almost all of the suits were grey or black, and almost all of the shirts were white or pastel. Had you photographed the scene in greyscale, the only way you’d notice would be that the hotel carpets looked wrong.) In an upcoming post I’ll go into more detail about this problem, what seems to be causing it, and what might be done to fix it.

But for now I just want to talk about the conference itself, and moreover, the idea of having conferences—is this really the best way to organize ourselves as a profession?

One thing I really do like about the AEA conference is actually something that separates it from other professions: The job market for economics PhDs is a very formalized matching system designed to be efficient and minimize opportunities for bias. It should be a model for other job markets. All the interviews are conducted in rapid succession, at the conference itself, so that candidates can interview for positions all over the country or even abroad.

I wasn’t on the job market yet, but I will be in a few years. I wanted to see what it’s like before I have to run that gauntlet myself.

But then again, why do we need face-to-face interviews at all? What do they actually tell us?

It honestly seems like a face-to-face interview is optimized to maximize opportunities for discrimination. Do you know them personally? Nepotism opportunity. Are they male or female? Sexism opportunity. Are they in good health? Ableism opportunity. Do they seem gay, or mention a same-sex partner? Homophobia opportunity. Is their gender expression normative? Transphobia opportunity. How old are they? Ageism opportunity. Are they White? Racism opportunity. Do they have an accent? Nationalism opportunity. Do they wear fancy clothes? Classism opportunity. There are other forms of bias we don’t even have simple names for: Do they look pregnant? Do they wear a wedding band? Are they physically attractive? Are they tall?

You can construct your resume review system to exclude all of this information, by removing names, pictures, and personal details. But you literally can’t exclude it from a face-to-face interview; the face-to-face interview is the one hiring mechanism that suffers from this fundamental flaw.

If it were really about proving your ability to do the job, they could send you a take-home exam (a lot of tech companies actually do this): Here’s a small sample project similar to what we want you to do, and a reasonable deadline in which to do it. Do it, and we’ll see if it’s good enough.

If they want to offer an opportunity for you to ask or answer specific questions, that could be done via text chat—which could be on the one hand end-to-end encrypted against eavesdropping and on the other hand leave a clear paper trail in case they try to ask you anything they shouldn’t. If they start asking about your sexual interests in the digital interview, you don’t just feel awkward and wonder if you should take the job: You have something to show in court.

Even if they’re interested in things like your social skills and presentation style, those aren’t measured well by interviews anyway. And they probably shouldn’t even be as relevant to hiring as they are.

With that in mind, maybe bringing all the PhD graduates in economics in the entire United States into one hotel for three days isn’t actually necessary. Maybe all these face-to-face interviews aren’t actually all that great, because their small potential benefits are outweighed by their enormous potential biases.

The rest of the conference is more like other academic conferences, which seems even less useful.

The conference format seems like a strange sort of formality, a ritual that we go through. It’s clearly not the optimal way to present ongoing research—though perhaps it’s better than publishing papers in journals, which is our current gold standard. A whole bunch of different people give you brief, superficial presentations of their research, which may be only tangentially related to anything you’re interested in, and you barely even have time to think about it before they go on to the next one. Also, seven of these sessions are going on simultaneously, so unless you have a Time Turner, you have to choose which one to go to. And sessions are often changed at the last minute, so you may not even end up at the one you thought you were going to.

I was really struck by how little experimental work was presented. I was under the impression that experimental economics was catching on, but despite specifically trying to go to experiment-related sessions (excluding the 8:00 AM session for migraine reasons), I only counted a handful of experiments, most of them in the field rather than the lab. There was a huge amount of theory and applied econometrics. I guess this isn’t too surprising, as those are the two main kinds of research that only cost a researcher’s time. I guess in some sense this is good news for me: It means I don’t have as much competition as I thought.

Instead of gathering papers into sessions where five different people present vaguely-related papers in far too little time, we could use working papers, or better yet a more sophisticated online forum where research could be discussed in real-time before it even gets written into a paper. We could post results as soon as we get them, and instead of conducting one high-stakes anonymous peer review at the time of publication, conduct dozens of little low-stakes peer reviews as the research is ongoing. Discussants could be turned into collaborators.

The most valuable parts of conferences always seem to be the parts that aren’t official sessions: Luncheons, receptions, mixers. There you get to meet other people in the field. And this can be valuable, to be sure. But I fear that the individual gain is far larger than the social gain: Most of the real benefits of networking get dissipated by the competition to be better-connected than the other candidates. The kind of working relationships that seem to be genuinely valuable are the kind formed by working at the same school for several years, not the kind that can be forged by meeting once at a conference reception.

I guess every relationship has to start somewhere, and perhaps more collaborations have started that way than I realize. But it’s also worth asking: Should we really be putting so much weight on relationships? Is that the best way to organize an academic discipline?

“It’s not what you know, it’s who you know” is an accurate adage in many professions, but it seems like research should be where we would want it least to apply. This is supposed to be about advancing human knowledge, not making friends—and certainly not maintaining the old boys’ club.

If you really want grad students to have better mental health, remove all the high-stakes checkpoints

Post 260: Oct 14 JDN 2458406

A study was recently published in Nature Biotechnology showing clear evidence of a mental health crisis among graduate students (no, I don’t know why they picked the biotechnology imprint—I guess it wasn’t good enough for Nature proper?). This is only the most recent of several studies showing exceptionally high rates of mental health issues among graduate students.

I’ve seen universities do a lot of public hand-wringing and lip service about this issue—but I haven’t seen any that were seriously willing to do what it takes to actually solve the problem.

I think this fact became clearest to me when I was required to fill out an official “Individual Development Plan” form as a prerequisite for my advancement to candidacy, which included one question about “What are you doing to support your own mental health and work/life balance?”

The irony here is absolutely excruciating, because advancement to candidacy has been overwhelmingly my leading source of mental health stress for at least the last six months. And it is only one of several different high-stakes checkpoints that grad students are expected to complete, always threatened with defunding or outright expulsion from the graduate program if the checkpoint is not met by a certain arbitrary deadline.

The first of these was the qualifying exams. Then comes advancement to candidacy. Then I have to complete and defend a second-year paper, then a third-year paper. Finally I have to complete and defend a dissertation, and then go onto the job market and go through a gauntlet of applications and interviews. I can’t think of any other time in my life when I was under this much academic and career pressure this consistently—even finishing high school and applying to college wasn’t like this.

If universities really wanted to improve my mental health, they would find a way to get rid of all that.

Granted, a single university does not have total control over all this: There are coordination problems between universities regarding qualifying exams, advancement, and dissertation requirements. One university that unilaterally tried to remove all these would rapidly lose prestige, as it would not be regarded as “rigorous” to reduce the pressure on your grad students. But that itself is precisely the problem—we have equated “rigor” with pressuring grad students until they are on the verge of emotional collapse. Universities don’t seem to know how to make graduate school difficult in the ways that would actually encourage excellence in research and teaching; they simply know how to make it difficult in ways that destroy their students psychologically.

The job market is even more complicated; in the current funding environment, it would be prohibitively expensive to open up enough faculty positions to actually accept even half of all graduating PhDs to tenure-track jobs. Probably the best answer here is to refocus graduate programs on supporting employment outside academia, recognizing both that PhD-level skills are valuable in many workplaces and that not every grad student really wants to become a professor.

But there are clearly ways that universities could mitigate these effects, and they don’t seem genuinely interested in doing so. They could remove the advancement exam, for example; you could simply advance to candidacy as a formality when your advisor decides you are ready, never needing to actually perform a high-stakes presentation before a committee—because what the hell does that accomplish anyway? Speaking of advisors, they could have a formalized matching process that starts with interviewing several different professors and being matched to the one that best fits your goals and interests, instead of expecting you to reach out on your own and hope for the best. They could have you write a dissertation, but not perform a “dissertation defense”—because, again, what can they possibly learn from forcing you to present in a high-stakes environment that they couldn’t have learned from reading your paper and talking with you about it over several months?

They could adjust or even remove funding deadlines—especially for international students. Here at UCI at least, once you are accepted to the program, you are ostensibly guaranteed funding for as long as you maintain reasonable academic progress—but then they define “reasonable progress” in such a way that you have to form an advancement committee, fill out forms, write a paper, and present before a committee all by a certain date or your funding is in jeopardy. Residents of California (which includes all US students who successfully established residency after a full year) are given more time if we need it—but international students aren’t. How is that fair?

The unwillingness of universities to take such actions clearly shows that their commitment to improving students’ mental health is paper-thin. They are only willing to help their students improve their work-life balance as long as it doesn’t require changing anything about the graduate program. They will provide us with counseling services and free yoga classes, but they won’t seriously reduce the pressure they put on us at every step of the way.

I understand that universities are concerned about protecting their prestige, but I ask them this: Does this really improve the quality of your research or teaching output? Do you actually graduate better students by selecting only the ones who can survive being emotionally crushed? Do all these arbitrary high-stakes performances actually result in greater advancement of human knowledge?

Or is it perhaps that you yourselves were put through such hazing rituals years ago, and now your cognitive dissonance won’t let you admit that it was all for naught? “This must be worth doing, or else they wouldn’t have put me through so much suffering!” Are you trying to transfer your own psychological pain onto your students, lest you be forced to face it yourself?

What really works against bigotry

Sep 30 JDN 2458392

With Donald Trump in office, I think we all need to be thinking carefully about what got us to this point, how we have apparently failed in our response to bigotry. It’s good to see that Kavanaugh’s nomination vote has been delayed pending investigations, but we can’t hope to rely on individual criminal accusations to derail every potentially catastrophic candidate. The damage that someone like Kavanaugh would do to the rights of women, racial minorities, and LGBT people is too severe to risk. We need to attack this problem at its roots: Why are there so many bigoted leaders, and so many bigoted voters willing to vote for them?

The problem is hardly limited to the United States; we are witnessing a global crisis of far-right ideology, as even the UN has publicly recognized.

I think the left made a very dangerous wrong turn with the notion of “call-out culture”. There is now empirical data to support me on this. Publicly calling people racist doesn’t make them less racist—in fact, it usually makes them more racist. Angrily denouncing people doesn’t change their minds—it just makes you feel righteous. Our own accusatory, divisive rhetoric is part of the problem: By accusing anyone who even slightly deviates from our party line (say, by opposing abortion in some circumstances, as 75% of Americans do) of being a fascist, we slowly but surely push more people toward actual fascism.

Call-out culture encourages a black-and-white view of the world, where there are “good guys” (us) and “bad guys” (them), and our only job is to fight as hard as possible against the “bad guys”. It frees us from the pain of nuance, complexity, and self-reflection—at only the cost of giving up any hope of actually understanding the real causes or solving the problem. Bigotry is not something that “other” people have, which you, fine upstanding individual, could never suffer from. We are all Judy Hopps.

This is not to say we should do nothing—indeed, that would be just as bad, if not worse. The rise of neofascism has been possible largely because so many people did nothing. Knowing that there is bigotry in all of us shouldn’t stop us from recognizing that some people are far worse than others, or paralyze us against constructively improving ourselves and our society. See the shades of gray without succumbing to the Fallacy of Gray.

The most effective interventions at reducing bigotry are done in early childhood; obviously, it’s far too late for that when it comes to people like Trump and Kavanaugh.

But there are interventions that can work at reducing bigotry among adults. We need to first understand where the bigotry comes from—and it doesn’t always come from the same source. We need to be willing to look carefully—yes, even sympathetically—at people with bigoted views so that we can understand them.

There are deep, innate systems in the human brain that make bigotry come naturally to us. Even people on the left who devote their lives to combating discrimination against women, racial minorities, and LGBT people can still harbor bigoted attitudes toward other groups—such as rural people or Republicans. If you think that all Republicans are necessarily racist, that’s not a serious understanding of what motivates Republicans—that’s just bigotry on your part. Trump is racist. Pence is racist. One could argue that voting for them constitutes, in itself, a racist act. But that does not mean that every single Republican voter is fundamentally and irredeemably racist.

It’s also important to have conversations face-to-face. I must admit that I am personally terrible at this; despite training myself extensively in etiquette and public speaking to the point where most people perceive me as charismatic, even charming, deep down I am still a strong introvert. I dislike talking in person, and dread talking over the phone. I would much prefer to communicate entirely in written electronic communication—but the data is quite clear on this: Face-to-face conversations work better at changing people’s minds. It may be awkward and uncomfortable, but by being there in person, you limit their ability to ignore you or dismiss you; you aren’t a tweet from the void, but an actual person, sitting there in front of them.

Speak with friends and family members. This, I know, can be especially awkward and painful. In the last few years I have lost connections with friends who were once quite close to me as a result of difficult political conversations. But we must speak up, for silence becomes complicity. And speaking up really can work.

Don’t expect people to change their entire worldview overnight. Focus on small, concrete policy ideas. Don’t ask them to change who they are; ask them to change what they believe. Ask them to justify and explain their beliefs—and really listen to them when they do. Be open to the possibility that you, too, might be wrong about something.

If they say “We should deport all illegal immigrants!”, point out that whenever we try this, a lot of fields go unharvested for lack of workers, and ask them why they are so concerned about illegal immigrants. If they say “Illegal immigrants come here and commit crimes!”, point them to the statistical data showing that illegal immigrants actually commit fewer crimes on average than native-born citizens (probably because they are more afraid of what happens if they get caught).

If they are concerned about Muslim immigrants influencing our culture in harmful ways, first, acknowledge that there are legitimate concerns about Islamic cultural values (particularly toward women and LGBT people), but then point out that over 90% of Muslim-Americans are proud to be American, and that welcoming people is much more effective at getting them to assimilate into our culture than keeping them out and treating them as outsiders.

If they are concerned about “White people getting outnumbered”, first point out that White people are still over 70% of the US population, and that in most rural areas non-White people make up only a tiny fraction of the population. Point out that Census projections showing the US will be majority non-White by 2045 are based on naively extrapolating current trends, and we really have no idea what the world will look like almost 30 years from now. Next, ask them why they worry about being “outnumbered”; get them to consider that perhaps racial demographics don’t have to be a matter of zero-sum conflict.
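
To see how fragile that kind of projection is, here is a minimal sketch in Python of what “naively extrapolating current trends” amounts to. The starting share (roughly 39% non-White around 2018) and the yearly growth rates are stylized assumptions of mine, not Census figures:

    # Naive linear extrapolation: when would the US become majority non-White?
    # The 2018 share (~39%) and the growth rates below are stylized assumptions,
    # not Census figures.

    start_year, start_share = 2018, 0.39

    for growth_per_year in (0.003, 0.005, 0.007):  # share points gained per year
        years_needed = round((0.50 - start_share) / growth_per_year)
        print(f"At {growth_per_year:.1%} per year: majority non-White around "
              f"{start_year + years_needed}")

The middle assumption lands in the same ballpark as the 2045 headline, but nudging the assumed rate by a fraction of a percentage point per year moves the crossover by a decade or more in either direction. That is how little it takes to break a 30-year extrapolation.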

After you’ve done this, you will feel frustrated and exhausted, and the relationship between you and the person you’re trying to convince will be strained. You will probably feel like you have accomplished absolutely nothing to change their mind—but you are wrong. Even if they don’t acknowledge any change in their beliefs, the mere fact that you sat down, asked them to justify what they believe, and presented calm, reasonable, cogent arguments against those beliefs will have an effect. It will be a small effect, difficult for you to observe in that moment. But it will still be an effect.

Think about the last time you changed your mind about something important. (I hope you can remember such a time; none of us were born being right about everything!) Did it happen all at once? Was there just one, single knock-down argument that convinced you? Probably not. (On some mathematical and scientific questions I’ve had that experience: Oh, wow, yeah, that proof totally demolishes what I believed. Well, I guess I was wrong. But most beliefs aren’t susceptible to such direct proof.) More likely, you were presented with arguments from a variety of sources over a long span of time, gradually chipping away at what you thought you knew. In the moment, you might not even have admitted that you thought any differently—even to yourself. But as the months or years went by, you believed something quite different at the end than you had at the beginning.

Your goal should be to catalyze that process in other people. Don’t take someone who is currently a frothing neo-Nazi and expect them to start marching with Black Lives Matter. Take someone who is currently a little bit uncomfortable about immigration, and calm their fears. Don’t take someone who thinks all poor people are subhuman filth and try to get them to support a basic income. Take someone who is worried about food stamps adding to our national debt, and show them that food stamps are actually a small portion of our budget (on the order of 2% of federal spending). Don’t take someone who thinks global warming was made up by the Chinese and try to get them to support a ban on fossil fuels. Take someone who is worried about gas prices going up as a result of carbon taxes, and show them that carbon offsets would add only about $100 per person per year while saving millions of lives.
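
If you want those two numbers at your fingertips, the arithmetic is genuinely back-of-envelope. Here is a minimal sketch in Python; the budget figures are rough FY2018 approximations, and the carbon-offset price is an assumption of mine, since offset prices vary widely:

    # Back-of-envelope arithmetic for the two claims above.
    # Budget figures are rough FY2018 approximations; the offset price is an
    # assumption, since offset prices vary widely.

    snap_spending = 68e9      # SNAP ("food stamps"): roughly $68 billion per year
    federal_outlays = 4.1e12  # total federal outlays: roughly $4.1 trillion per year
    print(f"SNAP share of the federal budget: {snap_spending / federal_outlays:.1%}")

    co2_per_person = 16       # rough US CO2 emissions, tonnes per person per year
    offset_price = 6          # assumed offset price, dollars per tonne
    print(f"Offsetting one person's emissions: ~${co2_per_person * offset_price} per year")

That works out to about 1.7% of the budget and about $96 per person per year, consistent with the “small portion” and “about $100” claims above.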

And if you’re ever on the other side, and someone has just changed your mind, even a little bit—say so. Thank them for opening your eyes. I think a big part of why we don’t spend more time trying to honestly persuade people is that so few people acknowledge us when we do.