I’m old enough to be President now.

Jan 22 JDN 2459967

When this post goes live, I will have passed my 35th birthday. This is old enough to be President of the United States, at least by law. (In practice, no POTUS has been less than 42.)

Not that I will ever be President. I have neither the wealth nor the charisma to run any kind of national political campaign. I might be able to get elected to some kind of local office at some point, like a school board or a city water authority. But I’ve been eligible to run for such offices for quite a while now, and haven’t done so; nor do I feel particularly inclined at the moment.

No, the reason this birthday feels so significant is the milestone it represents. By this age, most people have spouses, children, careers. I have a spouse. I don’t have kids. I sort of have a career.

I have a job, certainly. I work for relatively decent pay. Not excellent, not what I was hoping for with a PhD in economics, but enough to live on (anywhere but an overpriced coastal metropolis). But I can’t really call that job a career, because I find large portions of it unbearable and I have absolutely no job security. In fact, I have the exact opposite: My job came with an explicit termination date from the start. (Do the people who come up with these short-term postdoc positions understand how that feels? It doesn’t seem like they do.)

I missed the window to apply for academic jobs that start next year. If I were happy here, this would be fine; I still have another year left on my contract. But I’m not happy here, and that is a grievous understatement. Working here is clearly the most important situational factor contributing to my ongoing depression. So I really ought to be applying to every alternative opportunity I can find—but I can’t find the will to try it, or the self-confidence to believe that my attempts could succeed if I did.

Then again, I’m not sure I should be applying to academic positions at all. If I did apply to academic positions, they’d probably be teaching-focused ones, since that’s the one part of my job I’m actually any good at. I’ve more or less written off applying to major research institutions; I don’t think I would get hired anyway, and even if I did, the pressure to publish is so unbearable that I think I’d be just as miserable there as I am here.

On the other hand, I can’t be sure that I would be so miserable even at another research institution; maybe with better mentoring and better administration I could be happy and successful in academic research after all.

The truth is, I really don’t know how much of my misery is due to academia in general, versus the British academic system, versus Edinburgh as an institution, versus starting work during the pandemic, versus the experience of being untenured faculty, versus simply my own particular situation. I don’t know if working at another school would be dramatically better, a little better, or just the same. (If it were somehow worse—which frankly seems hard to arrange—I would literally just quit immediately.)

I guess if the University of Michigan offered me an assistant professor job right now, I would take it. But I’m confident enough that they wouldn’t offer it to me that I can’t see the point in applying. (Besides, I missed the application windows this year.) And I’m not even sure that I would be happy there, despite the fact that just a few years ago I would have called it a dream job.

That’s really what I feel most acutely about turning 35: The shattering of dreams.

I thought I had some idea of how my life would go. I thought I knew what I wanted. I thought I knew what would make me happy.

The weirdest part is that it isn’t even that different from how I’d imagined it. If you’d asked me 10 or even 20 years ago what my career would be like at 35, I probably would have correctly predicted that I would have a PhD and be working at a major research university. 10 years ago I would have correctly expected it to be a PhD in economics; 20, I probably would have guessed physics. In both cases I probably would have thought I’d be tenured by now, or at least on the tenure track. But a postdoc or adjunct position (this is sort of both?) wouldn’t have been utterly shocking, just vaguely disappointing.

The biggest error by my past self was thinking that I’d be happy and successful in this career, instead of barely, desperately hanging on. I thought I’d have published multiple successful papers by now, and be excited to work on a new one. I imagined I’d also have published a book or two. (The fact that I self-published a nonfiction book at 16 but haven’t published any nonfiction ever since would be particularly baffling to my 15-year-old self, and is particularly depressing to me now.) I imagined myself becoming gradually recognized as an authority in my field, not languishing in obscurity; I imagined myself feeling successful and satisfied, not hopeless and depressed.

It’s like the dark Mirror Universe version of my dream job. It’s so close to what I thought I wanted, but it’s also all wrong. I finally get to touch my dreams, and they shatter in my hands.

When you are young, birthdays are a sincere cause for celebration; you look forward to the new opportunities the future will bring you. I seem to be now at the age where it no longer feels that way.

What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important, in the way that so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give poor people money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs; it’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich, and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently, the thing that most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially its AI wing, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

How to fix economics publishing

Aug 7 JDN 2459806

The current system of academic publishing in economics is absolutely horrible. It seems practically designed to undermine the mental health of junior faculty.

1. Tenure decisions, and even most hiring decisions, are almost entirely based upon publication in five (5) specific journals.

2. One of those “top five” journals is owned by Elsevier, a corrupt monopoly that has no basis for its legitimacy yet somehow controls nearly one-fifth of all scientific publishing.

3. Acceptance rates in all of these journals are between 5% and 10%—greatly decreased from what they were a generation or two ago. Given a typical career span, the senior faculty evaluating you on whether you were published in these journals had roughly three times your chance of getting their own papers published there.

4. Submissions are only single-blinded, so while you have no idea who is reading your papers, they know exactly who you are and can base their decision on whether you are well-known in the profession—or simply whether they like you.

5. Simultaneous submissions are forbidden, so when submitting to journals you must go one at a time, waiting to hear back from one before trying the next.

6. Peer reviewers are typically unpaid and generally uninterested, and so procrastinate as long as possible on doing their reviews.

7. As a result, review times for a paper are often measured in months, for every single cycle.

So, a highly successful paper goes like this: You submit it to a top journal, wait three months, it gets rejected. You submit it to another one, wait another four months, it gets rejected. You submit it to a third one, wait another two months, and you are told to revise and resubmit. You revise and resubmit, wait another three months, and then finally get accepted.

You have now spent an entire year getting one paper published. And this was a success.

Now consider a paper that doesn’t make it into a top journal. You submit, wait three months, rejected; you submit again, wait four months, rejected; you submit again, wait two months, rejected. You submit again, wait another five months, rejected; you submit to the fifth and final top-five, wait another four months, and get rejected again.

Now, after a year and a half, you can turn to other journals. You submit to a sixth journal, wait three months, rejected. You submit to a seventh journal, wait four months, get told to revise and resubmit. You revise and resubmit, wait another two months, and finally—finally, after two years—actually get accepted, but not to a top-five journal. So it may not even help you get tenure, unless maybe a lot of people cite it or something.

And what if you submit to a seventh, an eighth, a ninth journal, and still keep getting rejected? At what point do you simply give up on that paper and try to move on with your life?

That’s a trick question: Because what really happens, at least to me, is I can’t move on with my life. I get so disheartened from all the rejections of that paper that I can’t bear to look at it anymore, much less go through the work of submitting it to yet another journal that will no doubt reject it again. But worse than that, I become so depressed about my academic work in general that I become unable to move on to any other research either. And maybe it’s me, but it isn’t just me: 28% of academic faculty suffer from severe depression, and 38% from severe anxiety. And that’s across all faculty—if you look just at junior faculty it’s even worse: 43% of junior academic faculty suffer from severe depression. When a problem is that prevalent, at some point we have to look at the system that’s making us this way.

I can blame the challenges of moving across the Atlantic during a pandemic, and the fact that my chronic migraines have been the most frequent and severe they have been in years, but the fact remains: I have accomplished basically nothing towards the goal of producing publishable research in the past year. I have two years left at this job; if I started right now, I might be able to get something published before my contract is done. That assumes the project went smoothly, that I could start submitting it as soon as it was done, and that it didn’t get rejected as many times as the last one.

I just can’t find the motivation to do it. When the pain is so immediate and so intense, and the rewards are so distant and so uncertain, I just can’t bring myself to do the work. I had hoped that talking about this with my colleagues would help me cope, but it hasn’t; in fact it only seems to make me feel worse, because so few of them seem to understand how I feel. Maybe I’m talking to the wrong people; maybe the ones who understand are themselves suffering too much to reach out to help me. I don’t know.

But it doesn’t have to be this way. Here are some simple changes that could make the entire process of academic publishing in economics go better:

1. Boycott Elsevier and all for-profit scientific journal publishers. Stop reading their journals. Stop submitting to their journals. Stop basing tenure decisions on their journals. Act as though they don’t exist, because they shouldn’t—and then hopefully soon they won’t.

2. Peer reviewers should be paid for their time, and in return required to respond promptly—no more than a few weeks. A lack of response should be considered a positive vote on that paper.

3. Allow simultaneous submissions; if multiple journals accept, let the author choose between them. This is already how it works in fiction publishing, which you’ll note has not collapsed.

4. Increase acceptance rates. You are not actually limited by print space anymore; everything is digital now. Most of the work—even in the publishing process—already has to be done just to go through peer review, so you may as well publish it. Moreover, most papers that are submitted are actually worthy of publishing, and this whole process is really just an idiotic status hierarchy. If the prestige of your journal decreases because you accept more papers, we are measuring prestige wrong. Papers should be accepted something like 50% of the time, not 5-10%.

5. Double-blind submissions, and insist on ethical standards that maintain that blinding. No reviewer should know whether they are reading the work of a grad student or a Nobel Laureate. Reputation should mean nothing; scientific rigor should mean everything.

And, most radical of all, what I really need in my life right now:

6. Faculty should not have to submit their own papers. Each university department should have administrative staff whose job it is to receive papers from their faculty, format them appropriately, and submit them to journals. They should deal with all rejections, and only report to the faculty member when they have received an acceptance or a request to revise and resubmit. Faculty should simply do the research, write the papers, and then fire and forget them. We have highly specialized skills, and our valuable time is being wasted on the clerical tasks of formatting and submitting papers, which many other people could do as well or better. Worse, we are uniquely vulnerable to the emotional impact of the rejection—seeing someone else’s paper rejected is an entirely different feeling from having your own rejected.

Do all that, and I think I could be happy to work in academia. As it is, I am seriously considering leaving and never coming back.

I finally have a published paper.

Jun 12 JDN 2459773

Here it is, my first peer-reviewed publication: “Imperfect Tacit Collusion and Asymmetric Price Transmission”, in the Journal of Economic Behavior and Organization.

Due to the convention in economics that authors are displayed alphabetically, I am listed third of four, and will typically be collapsed into “Bulutay et al.”. I don’t actually think it should be “Julius et al.”; I think Dave Hales did the most important work, and I wanted it to be “Hales et al.”; but anything non-alphabetical is unusual in economics, and it would have taken a strong justification to convince the others to go along with it. This is a very stupid norm (and I attribute approximately 20% of Daron Acemoglu’s superstar status to it), but like any norm, it is difficult to dislodge.

I thought I would feel different when this day finally came. I thought I would feel joy, or at least satisfaction. I had been hoping that satisfaction would finally spur me forward in resubmitting my single-author paper, “Experimental Public Goods Games with Progressive Taxation”, so I could finally get a publication that actually does have “Julius (2022)” (or, at this rate, 2023, 2024…?). But that motivating satisfaction never came.

I did feel some vague sense of relief: Thank goodness, this ordeal is finally over and I can move on. But that doesn’t have the same motivating force; it doesn’t make me want to go back to the other papers I can now hardly bear to look at.

This reaction (or lack thereof?) could be attributed to circumstances: I have been through a lot lately. I was already overwhelmed by finishing my dissertation and going on the job market, and then there was the pandemic, and I had to postpone my wedding, and then when I finally got a job we had to suddenly move abroad, and then it was awful finding a place to live, and then we actually got married (which was lovely, but still stressful), and it took months to get my medications sorted with the NHS, and then I had a sudden resurgence of migraines which kept me from doing most of my work for weeks, and then I actually caught COVID and had to deal with that for a few weeks too. So it really isn’t too surprising that I’d be exhausted and depressed after all that.

Then again, it could be something deeper. I didn’t feel this way about my wedding. That genuinely gave me the joy and satisfaction that I had been expecting; I think it really was the best day of my life so far. So it isn’t as if I’m incapable of these feelings in my current state.

Rather, I fear that I am becoming more permanently disillusioned with academia. Now that I see how the sausage is made, I am no longer so sure I want to be one of the people making it. Publishing that paper didn’t feel like I had accomplished something, or even made some significant contribution to human knowledge. In fact, the actual work of publication was mostly done by my co-authors, because I was too overwhelmed by the job market at the time. But what I did have to do—and what I’ve tried to do with my own paper—felt like a miserable, exhausting ordeal.

More and more, I’m becoming convinced that a single experiment tells us very little, and we are being asked to present each one as if it were a major achievement when it’s more like a single brick in a wall.

But whatever new knowledge our experiments may have gleaned, that part was done years ago. We could have simply posted the draft as a working paper on the web and moved on, and the world would know just as much and our lives would have been a lot easier.

Oh, but then it would not have the imprimatur of peer review! And for our careers, that means absolutely everything. (Literally, when they’re deciding tenure, nothing else seems to matter.) But for human knowledge, does it really mean much? The more referee reports I’ve read, the more arbitrary they feel to me. This isn’t an objective assessment of scientific merit; it’s the half-baked opinion of a single randomly chosen researcher who may know next to nothing about the topic—or worse, have a vested interest in defending a contrary paradigm.

Yes, of course, what gets through peer review is of considerably higher quality than any randomly-selected content on the Internet. (The latter can be horrifically bad.) But is this not also true of what gets submitted for peer review? In fact, aren’t many blogs written by esteemed economists (say, Krugman? Romer? Nate Silver?) of considerably higher quality as well, despite having virtually none of the gatekeepers? I think Krugman’s blog is nominally edited by the New York Times, and Silver has a whole staff at FiveThirtyEight (they’re hiring, in fact!), but I’m fairly certain Romer just posts whatever he wants like I do. Of course, they had to establish their reputations (Krugman and Romer each won a Nobel). But still, it seems like maybe peer-review isn’t doing the most important work here.

Even blogs by far less famous economists (e.g. Miles Kimball, Brad DeLong) are also very good, and probably contribute more to advancing the knowledge of the average person than any given peer-reviewed paper, simply because they are more readable and more widely read. What we call “research” means going from zero people knowing a thing to maybe a dozen people knowing it; “publishing” means going from a dozen to at most a thousand; to go from a thousand to a billion, we call that “education”.

They all matter, of course; but I think we tend to overvalue research relative to education. A world where a few people know something is really not much better than a world where nobody does, while a world where almost everyone knows something can be radically superior. And the more I see just how far behind the cutting edge of research most economists are—let alone most average people—the more apparent it becomes to me that we are investing far too much in expanding that cutting edge (and far, far too much in gatekeeping who gets to do that!) and not nearly enough in disseminating that knowledge to humanity.

I think maybe that’s why finally publishing a paper felt so anticlimactic for me. I know that hardly anyone will ever actually read the damn thing. Just getting to this point took far more effort than it should have; dozens if not hundreds of hours of work, months of stress and frustration, all to satisfy whatever arbitrary criteria the particular reviewers happened to use so that we could all clear this stupid hurdle and finally get that line on our CVs. (And we wonder why academics are so depressed?) Far from being inspired to do the whole process again, I feel as if I have finally emerged from the torture chamber and may at last get some chance for my wounds to heal.

Even publishing fiction was not this miserable. Don’t get me wrong; it was miserable, especially for me, as I hate and fear rejection to the very core of my being in a way most people do not seem to understand. But there at least the subjectivity and arbitrariness of the process is almost universally acknowledged. Agents and editors don’t speak of your work being “flawed” or “wrong”; they don’t even say it’s “unimportant” or “uninteresting”. They say it’s “not a good fit” or “not what we’re looking for right now”. (Journal editors sometimes make noises like that too, but there’s always a subtext of “If this were better science, we’d have taken it.”) Unlike peer reviewers, they don’t come back with suggestions for “improvements” that are often pointless or utterly infeasible.

And unlike peer reviewers, fiction publishers acknowledge their own subjectivity and that of the market they serve. Nobody really thinks that Fifty Shades of Grey was good in any deep sense; but it was popular and successful, and that’s all the publisher really cares about. As a result, failing to be the next Fifty Shades of Grey ends up stinging a lot less than failing to be the next article in American Economic Review. Indeed, I’ve never had any illusions that my work would be popular among mainstream economists. But I once labored under the belief that it would matter more that my work was true; and I guess I now consider that an illusion.

Moreover, fiction writers understand that rejection hurts; I’ve been shocked how few academics actually seem to. Nearly every writing conference I’ve ever been to has at least one seminar on dealing with rejection, often several; at academic conferences, I’ve literally never seen one. There seems to be a completely different mindset among academics—at least, the successful, tenured ones—about the process of peer review, what it means, even how it feels. When I try to talk with my mentors about the pain of getting rejected, they just… don’t get it. They offer me guidance on how to deal with anger at rejection, when that is not at all what I feel—what I feel is utter, hopeless, crushing despair.

There is a type of person who reacts to rejection with anger: Narcissists. (Look no further than the textbook example, Donald Trump.) I am coming to fear that I’m just not narcissistic enough to be a successful academic. I’m not even utterly lacking in narcissism: I am almost exactly average for a Millennial on the Narcissistic Personality Inventory. I score fairly high on Authority and Superiority (I consider myself a good leader and a highly competent individual) but very low on Exploitativeness and Self-Sufficiency (I don’t like hurting people and I know no man is an island). Then again, maybe I’m just narcissistic in the wrong way: I score quite low on “grandiose narcissism”, but relatively high on “vulnerable narcissism”. I hate to promote myself, but I find rejection devastating. This combination seems to be exactly what doesn’t work in academia. But it seems to be par for the course among writers and poets. Perhaps I have the mind of a scientist, but I have the soul of a poet. (Send me through the wormhole! Please? Please!?)

Will we ever have the space opera future?

May 22 JDN 2459722

Space opera has long been a staple of science fiction. Like many natural categories, it’s not that easy to define; it has something to do with interstellar travel, a variety of alien species, grand events, and a big, complicated world that stretches far beyond any particular story we might tell about it.

Star Trek is the paradigmatic example, and Star Wars also largely fits, but there are numerous other examples, including most of my favorite science fiction worlds: Dune, the Culture, Mass Effect, Revelation Space, the Liaden, Farscape, Babylon 5, the Zones of Thought.

I think space opera is really the sort of science fiction I most enjoy. Even when it is dark, there is still something aspirational about it. Even a corrupt feudal transplanetary empire or a terrible interstellar war still means a universe where people get to travel the stars.

How likely is it that we—and I mean ‘we’ in the broad sense, humanity and its descendants—will actually get the chance to live in such a universe?

First, let’s consider the most traditional kind of space opera, the Star Trek world, where FTL is commonplace and humans interact as equals with a wide variety of alien species that are different enough to be interesting, but similar enough to be relatable.

This, sad to say, is extremely unlikely. FTL is probably impossible, or if not literally impossible then utterly infeasible by any foreseeable technology. Yes, the Alcubierre drive works in theory… all you need is tons of something that has negative mass.

And while, by sheer probability, there almost have to be other sapient lifeforms somewhere out there in this vast universe, our failure to contact or even find clear evidence of any of them for such a long period suggests that they are either short-lived or few and far between. Moreover, any who do exist are likely to be radically different from us and difficult to interact with at all, much less relate to on a personal level. Maybe they don’t have eyes or ears; maybe they live only in liquid hydrogen or molten lead; maybe they communicate entirely by pheromones that are toxic to us.

Does this mean that the aspirations of space opera are ultimately illusory? Is it just a pure fantasy that will forever be beyond us? Not necessarily.

I can see two other ways to create a very space-opera-like world, one of which is definitely feasible, and the other is very likely to be. Let’s start with the one that’s definitely feasible—indeed so feasible we will very likely get to experience it in our own lifetimes.

That is to make it a simulation. An MMO video game, in a way, but something much grander than any MMO that has yet been made. Not just EVE and No Man’s Sky, not just World of Warcraft and Minecraft and Second Life, but also Facebook and Instagram and Zoom and so much more. Oz from Summer Wars; OASIS from Ready Player One. A complete, multifaceted virtual reality in which we can spend most if not all of our lives. One complete with not just sight and sound, but also touch, smell, even taste.

Since it’s a simulation, we can make our own rules. If we want FTL and teleportation, we can have them. (And I would like to note that in fact teleportation is available in EVE, No Man’s Sky, World of Warcraft, Minecraft, and even Second Life. It’s easy to implement in a simulation, and it really seems to be something people want to have.) If we want to meet—or even be—people from a hundred different sapient species, some more humanoid than others, we can. Each of us could rule entire planets, command entire starfleets.

And we could do this, if not right now, today, then very, very soon—the VR hardware is finally maturing and the software capability already exists if there is a development team with the will and the skills (and the budget) to do it. We almost certainly will do this—in fact, we’ll do it hundreds or thousands of different ways. You need not be content with any particular space opera world, when you can choose from a cornucopia of them; and fantasy worlds too, and plenty of other kinds of worlds besides.

Yet, I admit, there is something missing from that future. While such a virtual-reality simulation might reach the point where it would be fair to say it’s no longer simply a “video game”, it still won’t be real. We won’t actually be Vulcans or Delvians or Gek or Asari. We will merely pretend to be. When we take off the VR suit at the end of the day, we will still be humans, and still be stuck here on Earth. And even if most of the toil of maintaining this society and economy can be automated, there will still be some time we have to spend living ordinary lives in ordinary human bodies.

So, is there some chance that we might really live in a space-opera future? Where we will meet actual, flesh-and-blood people who have blue skin, antennae, or six limbs? Where we will actually, physically move between planets, feeling the different gravity beneath our feet and looking up at the alien sky?

Yes. There is a way this could happen. Not now, not for a while yet. We ourselves probably won’t live to see it. But if humanity manages to continue thriving for a few more centuries, and technology continues to improve at anything like its current pace, then that day may come.

We won’t have FTL, so we’ll be bounded by the speed of light. But the speed of light is still quite fast. It can get you to Mars in minutes, to Jupiter in under an hour, and even to Alpha Centauri in a voyage that wouldn’t shock Magellan or Zheng He. Leaving this arm of the Milky Way, let alone traveling to another galaxy, is out of the question (at least if you ever want to come back while anyone you know is still alive—actually as a one-way trip it’s surprisingly feasible thanks to time dilation).
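As a rough sanity check on those numbers, here is a minimal Python sketch; the distances are round-number averages I am supplying for illustration, not anything precise:

```python
# Light-travel times at c, using rough average distances.
C = 299_792_458            # speed of light, m/s
AU = 1.496e11              # one astronomical unit, m
LIGHT_YEAR = 9.461e15      # one light-year, m

destinations = {
    "Mars (average)": 1.5 * AU,
    "Jupiter (average)": 5.2 * AU,
    "Alpha Centauri": 4.37 * LIGHT_YEAR,
}

for name, distance_m in destinations.items():
    seconds = distance_m / C
    if seconds < 3600:
        print(f"{name}: about {seconds / 60:.0f} minutes at light speed")
    else:
        print(f"{name}: about {seconds / (3600 * 24 * 365.25):.1f} years at light speed")
```

Mars comes out around 12 light-minutes, Jupiter around 43 light-minutes, and Alpha Centauri around 4.4 light-years; a real spacecraft would of course take longer, since it can only approach c.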

This means that if we manage to invent a truly superior kind of spacecraft engine, one which combines the high thrust of a hydrolox rocket with the high specific impulse of an ion thruster—and that is physically possible, because it’s well within what nuclear rockets ought to be capable of—then we could travel between planets in our solar system, and maybe even to nearby solar systems, in reasonable amounts of time. The world of The Expanse could therefore be in reach (well, the early seasons anyway), where human colonies have settled on Mars and Ceres and Ganymede and formed their own new societies with their own unique cultures.

We may yet run into some kind of extraterrestrial life—bacteria probably, insects maybe, jellyfish if we’re lucky—but we probably won’t ever actually encounter any alien sapients. If there are any, they are probably too primitive to interact with us, or they died out millennia ago, or they’re simply too far away to reach.

But if we cannot find Vulcans and Delvians and Asari, then we can become them. We can modify ourselves with cybernetics, biotechnology, or even nanotechnology, until we remake ourselves into whatever sort of beings we want to be. We may never find a whole interplanetary empire ruled by a race of sapient felinoids, but if furry conventions are any indication, there are plenty of people who would make themselves into sapient felinoids if given the opportunity.

Such a universe would actually be more diverse than a typical space opera. There would be no “planets of hats”, no entire societies of people acting—or perhaps even looking—the same. The hybridization of different species is almost by definition impossible, but when the ‘species’ are cosmetic body mods, we can combine them however we like. A Klingon and a human could have a child—and for that matter the child could grow up and decide to be a Turian.

Honestly there are only two reasons I’m not certain we’ll go this route:

One, we’re still far too able and willing to kill each other, so who knows if we’ll even make it that long. There’s also still plenty of room for some sort of ecological catastrophe to wipe us out.

And two, most people are remarkably boring. We already live in a world where one could, in principle, go to work every day wearing a cape, a fursuit, a pirate outfit, or a Starfleet uniform; and yet social norms don’t let you. There’s nothing infeasible about me delivering a lecture dressed as a Kzin Starfleet science officer, nor would it even particularly impair my ability to deliver the lecture well; and yet I’m quite certain it would be greatly frowned upon if I were to do so, and could even jeopardize my career (especially since I don’t have tenure).

Would it be distracting to the students if I were to do something like that? Probably, at least at first. But once they got used to it, it might actually make them feel at ease. If it were a social norm that lecturers—and students—can dress however they like (perhaps limited by local decency regulations, though those, too, often seem overly strict), students might show up to class in bunny pajamas or pirate outfits or full-body fursuits, but would that really be a bad thing? It could in fact be a good thing, if it helps them express their own identity and makes them more comfortable in their own skin.

But no, we live in a world where the mainstream view is that every man should wear exactly the same thing at every formal occasion. I felt awkward at the AEA conference because my shirt had color.

This means that there is really one major obstacle to building the space opera future: Social norms. If we don’t get to live in this world one day, it will be because the world is ruled by the sort of person who thinks that everyone should be the same.

The alienation of labor

Apr 10 JDN 2459680

Marx famously wrote that capitalism “alienates labor”. Much ink has been spilled over interpreting exactly what he meant by that, but I think the most useful and charitable reading goes something like the following:

When you make something for yourself, it feels fully yours. The effort you put into it feels valuable and meaningful. Whether you’re building a house to live in or just cooking an omelet to eat, your labor is directly reflected in your rewards, and you have a clear sense of purpose and value in what you are doing.

But when you make something for an employer, it feels like theirs, not yours. You have been instructed by your superiors to make a certain thing a certain way, for reasons you may or may not understand (and may or may not even agree with). Once you deliver the product—which may be as concrete as a carburetor or as abstract as an accounting report—you will likely never see it again; it will be used or not by someone else somewhere else whom you may not even ever get the chance to meet. Such labor feels tedious, effortful, exhausting—and also often empty, pointless, and meaningless.

On that reading, Marx isn’t wrong. There really is something to this. (I don’t know if this is really Marx’s intended meaning or not, and really I don’t much care—this is a valid thing and we should be addressing it, whether Marx meant to or not.)

There is a little parable about this, though I can’t quite remember where I first heard it:

Three men are moving heavy stones from one place to another. A traveler passes by and asks them, “What are you doing?”

The first man sighs and says, “We do whatever the boss tells us to do.”

The second man shrugs and says, “We pick up the rocks here, we move them over there.”

The third man smiles and says, “We’re building a cathedral.”

The three answers are quite different—yet all three men may be telling the truth as they see it.

The first man is fully alienated from his labor: he does whatever the boss says, following instructions that he considers arbitrary and mechanical. The second man is partially alienated: he knows the mechanics of what he is trying to accomplish, which may allow him to improve efficiency in some way (e.g. devise better ways to transport the rocks faster or with less effort), but he doesn’t understand the purpose behind it all, so ultimately his work still feels meaningless. But the third man is not alienated: he understands the purpose of his work, and he values that purpose. He sees that what he is doing is contributing to a greater whole that he considers worthwhile. It’s not hard to imagine that the third man will be the happiest, and the first will be the unhappiest.

There really is something about the capitalist wage-labor structure that can easily feed into this sort of alienation. You get a job because you need money to live, not because you necessarily value whatever the job does. You do as you are told so that you can keep your job and continue to get paid.

Some jobs are much more alienating than others. Most teachers and nurses see their work as a vocation, even a calling—their work has deep meaning for them and they value its purpose. At the other extreme there are corporate lawyers and derivatives traders, who must on some level understand that their work contributes almost nothing to the world (may in fact actively cause harm), but they continue to do the work because it pays them very well.

But there are many jobs in between which can be experienced both ways. Working in retail can be an agonizing grind where you must face a grueling gauntlet of ungrateful customers day in and day out—or it can be a way to participate in your local community and help your neighbors get the things they need. Working in manufacturing can be a mechanical process of inserting tab A into slot B and screwing it into place over, and over, and over again—or it can be a chance to create something, convert raw materials into something useful and valuable that other people can cherish.

And while individual perspective and framing surely matter here—those three men were all working in the same quarry, building the same cathedral—there is also an important objective component as well. Working as an artisan is not as alienating as working on an assembly line. Hosting a tent at a farmer’s market is not as alienating as working the register at Walmart. Tutoring an individual student is more purposeful than recording video lectures for a MOOC. Running a quirky local book store is more fulfilling than stocking shelves at Barnes & Noble.

Moreover, capitalism really does seem to push us more toward the alienating side of the spectrum. Assembly lines are far more efficient than artisans, so we make most of our products on assembly lines. Buying food at Walmart is cheaper and more convenient than at farmer’s markets, so more people shop there. Hiring one video lecturer for 10,000 students is a lot cheaper than paying 100 in-person lecturers, let alone 1,000 private tutors. And Barnes & Noble doesn’t drive out local book stores by some nefarious means: It just provides better service at lower prices. If you want a specific book for a good price right now, you’re much more likely to find it at Barnes & Noble. (And even more likely to find it on Amazon.)

Finding meaning in your work is very important for human happiness. Indeed, along with health and social relationships, it’s one of the biggest determinants of happiness. For most people in First World countries, it seems to be more important than income (though income certainly does matter).

Yet the increased efficiency and productivity upon which our modern standard of living depends seems to be based upon a system of production—in a word, capitalism—that systematically alienates us from meaning in our work.

This puts us in a dilemma: Do we keep things as they are, accepting that we will feel an increasing sense of alienation and ennui as our wealth continues to grow and we get ever-fancier toys to occupy our meaningless lives? Or do we turn back the clock, returning to a world where work once again has meaning, but at the cost of making everyone poorer—and some people desperately so?

Well, first of all, to some extent this is a false dichotomy. There are jobs that are highly meaningful but also highly productive, such as teaching and engineering. (Even recording a video lecture is a lot more fulfilling than plenty of jobs out there.) We could try to direct more people into jobs like these. There are jobs that are neither particularly fulfilling nor especially productive, like driving trucks, washing floors and waiting tables. We could redouble our efforts into automating such jobs out of existence. There are meaningless jobs that are lucrative only by rent-seeking, producing little or no genuine value, like the aforementioned corporate lawyers and derivatives traders. These, quite frankly, could simply be banned—or if there is some need for them in particular circumstances (I guess someone should defend corporations when they get sued; but they far more often go unjustly unpunished than unjustly punished!), strictly regulated and their numbers and pay rates curtailed.

Nevertheless, we still have decisions to make, as a society, about what we value most. Do we want a world of cheap, mostly adequate education, that feels alienating even to the people producing it? Then MOOCs are clearly the way to go; pennies on the dollar for education that could well be half as good! Or do we want a world of high-quality, personalized teaching, by highly-qualified academics, that will help students learn better and feel more fulfilling for the teachers? More pointedly—are we willing to pay for that higher-quality education, knowing it will be more expensive?

Moreover, in the First World at least, our standard of living is… pretty high already? Like seriously, what do we really need that we don’t already have? We could always imagine more, of course—a bigger house, a nicer car, dining at fancier restaurants, and so on. But most of us have roofs over our heads, clothes on our backs, and food on our tables.

Economic growth has done amazing things for us—but maybe we’re kind of… done? Maybe we don’t need to keep growing like this, and should start redirecting our efforts away from greater efficiency and toward greater fulfillment. Maybe there are economic possibilities we haven’t been considering.

Note that I specifically mean First World countries here. In Third World countries it’s totally different—they need growth, lots of it, as fast as possible. Fulfillment at work ends up being a pretty low priority when your children are starving and dying of malaria.

But then, you may wonder: If we stop buying cheap plastic toys to fill the emptiness in our hearts, won’t that throw all those Chinese factory workers back into poverty?

In the system as it stands? Yes, that’s a real concern. A sudden drop in consumption spending in general, or even imports in particular, in First World countries could be economically devastating for millions of people in Third World countries.

But there’s nothing inherent about this arrangement. There are less-alienating ways of working that can still provide a decent standard of living, and there’s no fundamental reason why people around the world couldn’t all be doing them. If they aren’t, it’s in the short run because they don’t have the education or the physical machinery—and in the long run it’s usually because their government is corrupt and authoritarian. A functional democratic government can get you capital and education remarkably fast—it certainly did in South Korea, Taiwan, and Japan.

Automation is clearly a big part of the answer here. Many people in the First World seem to suspect that our way of life depends upon the exploited labor of impoverished people in Third World countries, but this is largely untrue. Most of that work could be done by robots and highly-skilled technicians and engineers; it just isn’t because that would cost more. Yes, that higher cost would mean some reduction in standard of living—but it wouldn’t be nearly as dramatic as many people seem to think. We would have slightly smaller houses and slightly older cars and slightly slower laptops, but we’d still have houses and cars and laptops.

So I don’t think we should all cast off our worldly possessions just yet. Whether or not it would make us better off, it would cause great harm to countries that depend on their exports to us. But in the long run, I do think we should be working to achieve a future for humanity that isn’t so obsessed with efficiency and growth, and instead tries to provide both a decent standard of living and a life of meaning and purpose.

How can we fix medical residency?

Nov 21 JDN 2459540

Most medical residents work 60 or more hours per week, and nearly 20% work 80 or more hours. 66% of medical residents report sleeping 6 hours or less each night, and 20% report sleeping 5 hours or less.

It’s not as if sleep deprivation is a minor thing: Worldwide, across all jobs, nearly 750,000 deaths annually are attributable to long working hours, most of these due to sleep deprivation.


By some estimates, medical errors account for as many as 250,000 deaths per year in the US alone. Even the most conservative estimates say that at least 25,000 deaths per year in the US are attributable to medical errors. It seems quite likely that long working hours increase the rate of dangerous errors (though it has been difficult to determine precisely how much).

Indeed, the more we study stress and sleep deprivation, the more we learn how incredibly damaging they are to health and well-being. Yet we seem to have set up a system almost intentionally designed to maximize the stress and sleep deprivation of our medical professionals. Some of them simply burn out and leave the profession (about 18% of surgical residents quit); surely an even larger number of people never enter medicine in the first place because they know they would burn out.

Even once a doctor makes it through residency and has learned to cope with absurd hours, this most likely distorts their whole attitude toward stress and sleep deprivation. They are likely to not consider them “real problems”, because they were able to “tough it out”—and they are likely to assume that their patients can do the same. One of the primary functions of a doctor is to reduce pain and suffering, and by putting doctors through unnecessary pain and suffering as part of their training, we are teaching them that pain and suffering aren’t really so bad and you should just grin and bear it.

We are also systematically selecting against doctors who have disabilities that would make it difficult to work these double-time hours—which means that the doctors who are most likely to sympathize with disabled patients are being systematically excluded from the profession.

There have been some attempts to regulate the working hours of residents, but they have generally not been effective. I think this is for three reasons:

1. They weren’t actually trying hard enough. A cap of 80 hours per week is still 40 hours too high, and looks to me more like an attempt at better PR than a fix for the actual problem.

2. Their enforcement mechanisms left too much opportunity to cheat the system, and in fact most medical residents were simply pressured to keep over-working and under-report their hours.

3. They don’t seem to have considered how to effect the transition in a way that won’t reduce the total number of resident-hours, and so residents end up with less training and hospitals end up short-staffed.

The solution to problem 1 is obvious: The cap needs to be lower. Much lower.

The solution to problem 2 is trickier: What sort of enforcement mechanism would prevent hospitals from gaming the system?

I believe the answer is very steep overtime pay requirements, coupled with regular and intensive auditing. Every hour a medical resident goes over their cap, they should have to be paid triple time. Audits should be performed frequently, randomly and without notice. And if a hospital is caught falsifying their records, they should be required to pay all missing hours to all medical residents at quintuple time. And Medicare and Medicaid should not be allowed to reimburse these additional payments—they must come directly out of the hospital’s budget.
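To make the arithmetic of that enforcement scheme concrete, here is a minimal Python sketch. The cap, base pay rate, and hours below are hypothetical numbers chosen purely for illustration, and I am simplifying by treating every overtime hour as an unreported hour in the falsification case:

```python
def required_weekly_pay(hours_worked, base_hourly_rate, cap=40, falsified=False):
    """Pay owed to one resident for one week under the proposed scheme.

    Hours up to the cap are paid at the base rate; every hour over the cap
    is owed at triple time. If the hospital is caught falsifying its records,
    those hours are instead owed at quintuple time (here, all overtime hours
    are treated as the falsified ones).
    """
    regular_hours = min(hours_worked, cap)
    overtime_hours = max(hours_worked - cap, 0)
    multiplier = 5 if falsified else 3
    return regular_hours * base_hourly_rate + overtime_hours * multiplier * base_hourly_rate

# Hypothetical example: 60 hours worked against a 40-hour cap at $30/hour.
print(required_weekly_pay(60, 30))                  # 40*30 + 20*90  = 3000
print(required_weekly_pay(60, 30, falsified=True))  # 40*30 + 20*150 = 4200
```

The point of the steep multiplier is that the expected cost of violating the cap, once random audits are factored in, should exceed whatever the hospital saves by overworking its residents.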

Under the current system, the “punishment” is usually a threat of losing accreditation, which is too extreme and too harmful to the residents. Precisely because this is such a drastic measure, it almost never happens. The punishment needs to be small enough that we will actually enforce it; and it needs to hurt the hospital, not the residents—overtime pay would do precisely that.

That brings me to problem 3: How can we ensure that we don’t reduce the total number of resident-hours?

This is important for two reasons: Each resident needs a certain number of hours of training to become a skilled doctor, and residents provide a significant proportion of hospital services. Of the roughly 1 million doctors in the US, about 140,000 are medical residents.

The answer is threefold:

1. Increase the number of residency slots (we have a global doctor shortage anyway).

2. Extend the duration of residency so that each resident gets the same number of total work hours.

3. Phase the changes in gradually, so that neither increase needs to be too fast.

Currently a typical residency is about 4 years. 4 years of 80-hour weeks is equivalent to 8 years of 40-hour weeks. The goal is for each resident to get 320 hour-years of training, where an “hour-year” means one hour per week sustained for a year (so 80 hours per week for 4 years is 320 hour-years).

With 140,000 current residents averaging 4 years, a typical cohort is about 35,000. So the goal is to have, in any given year, at least (35,000 residents per cohort)(4 cohorts)(80 hours per week) = 11.2 million resident-hours per week.

In cohort 1, we reduce the cap to 70 hours, and increase the number of accepted residents to 40,000. Residents in cohort 1 will continue their residency for 4 years, 7 months. This gives each one 321 hour-years of training.

In cohort 2, we reduce the cap to 60 hours, and increase the number of accepted residents to 46,000. Residents in cohort 2 will continue their residency for 5 years, 4 months. This gives each one 320 hour-years of training.

In cohort 3, we reduce the cap to 55 hours, and increase the number of accepted residents to 50,000. Residents in cohort 3 will continue their residency for 6 years. This gives each one 330 hour-years of training.

In cohort 4, we reduce the cap to 50 hours, and increase the number of accepted residents to 56,000. Residents in cohort 4 will continue their residency for 6 years, 6 months. This gives each one 325 hour-years of training.

In cohort 5, we reduce the cap to 45 hours, and increase the number of accepted residents to 60,000. Residents in cohort 5 will continue their residency for 7 years, 2 months. This gives each one 322 hour-years of training.

In cohort 6, we reduce the cap to 40 hours, and increase the number of accepted residents to 65,000. Residents in cohort 6 will continue their residency for 8 years. This gives each one 320 hour-years of training.

In cohort 7, we keep the cap at 40 hours, and increase the number of accepted residents to 70,000. This is now the new standard: 8-year residencies with 40-hour weeks.

I’ve made a graph here of what this does to the available number of resident-hours each year. There is a brief 5% dip in year 4, but by the time we reach year 14 we’ve actually doubled the total number of available resident-hours at any given time—without increasing the total amount of work each resident does, simply keeping them longer and working them less intensively each year. Given that quality of work is reduced by working longer hours, it’s likely that even this brief reduction in hours would not result in any reduced quality of care for patients.

[residency_hours.png]

I have thus managed to increase the number of available resident-hours, ensure that each resident gets the same amount of training as before, and still radically reduce the work hours from 80 per week to 40 per week. The additional recruitment each year is never more than 6,000 new residents or 15% of the current number of residents.
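For anyone who wants to check these numbers, here is a rough back-of-the-envelope sketch in Python. It is my own reconstruction, not the calculation behind the graph above; it recomputes the hour-years for each reform cohort and estimates total weekly resident-hours during the transition, under the stated assumptions (35,000-resident pre-reform cohorts working 80-hour weeks for 4 years, one new cohort starting each year). The exact yearly totals depend on simplifying assumptions about when each cohort starts and finishes.

    # Rough check of the phase-in arithmetic. Assumptions: pre-reform cohorts are
    # 35,000 residents working 80 hours/week for 4 years; reform cohorts follow
    # the caps, sizes, and durations given above; one new cohort starts each year.
    PRE_REFORM = [(35_000, 80, 4.0)] * 8
    REFORM = [
        (40_000, 70, 4 + 7/12),   # cohort 1
        (46_000, 60, 5 + 4/12),   # cohort 2
        (50_000, 55, 6.0),        # cohort 3
        (56_000, 50, 6 + 6/12),   # cohort 4
        (60_000, 45, 7 + 2/12),   # cohort 5
        (65_000, 40, 8.0),        # cohort 6
    ]
    STEADY_STATE = [(70_000, 40, 8.0)] * 12   # cohort 7 onward
    cohorts = PRE_REFORM + REFORM + STEADY_STATE

    def weekly_hours(year):
        """Average resident-hours per week over the given year
        (cohort i starts at the beginning of year i)."""
        total = 0.0
        for start, (size, cap, length) in enumerate(cohorts):
            overlap = max(0.0, min(start + length, year + 1) - max(start, year))
            total += size * cap * overlap
        return total

    # Training per resident in each reform cohort (target: about 320 hour-years).
    for i, (_, cap, length) in enumerate(REFORM, start=1):
        print(f"cohort {i}: {cap * length:.1f} hour-years")

    # Total weekly resident-hours from the last pre-reform year onward.
    first_reform_year = len(PRE_REFORM)
    for year in range(first_reform_year - 1, first_reform_year + 15):
        print(f"year {year - first_reform_year + 1:3d}: "
              f"{weekly_hours(year) / 1e6:.1f} million resident-hours/week")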

It takes several years to effect this transition. This is unavoidable if we are trying to avoid massive increases in recruitment, though if we were prepared to simply double the number of admitted residents each year we could immediately transition to 40-hour work weeks in a single cohort and the available resident-hours would then strictly increase every year.

This plan is likely not the optimal one; I don’t know enough about the details of how costly it would be to admit more residents, and it’s possible that some residents might actually prefer a briefer, more intense residency rather than a longer, less stressful one. (Though it’s worth noting that most people greatly underestimate the harms of stress and sleep deprivation, and doctors don’t seem to be any better in this regard.)

But this plan does prove one thing: There are solutions to this problem. It can be done. If our medical system isn’t solving this problem, it is not because solutions do not exist—it is because the people running it are choosing not to take them.

How to change minds

Aug 29 JDN 2459456

Think for a moment about the last time you changed your mind on something important. If you can’t think of any examples, that’s not a good sign. Think harder; look back further. If you still can’t find any examples, you need to take a deep, hard look at yourself and how you are forming your beliefs. The path to wisdom is not found by starting with the right beliefs, but by starting with the wrong ones and recognizing them as wrong. No one was born getting everything right.

If you remember changing your mind about something, but don’t remember exactly when, that’s not a problem. Indeed, this is the typical case, and I’ll get to why in a moment. Try to remember as much as you can about the whole process, however long it took.

If you still can’t specifically remember changing your mind, try to imagine a situation in which you would change your mind—and if you can’t do that, you should be deeply ashamed and I have nothing further to say to you.

Thinking back to that time: Why did you change your mind?

It’s possible that it was something you did entirely on your own, through diligent research of primary sources or even your own mathematical proofs or experimental studies. This is occasionally something that happens; as an active researcher, it has definitely happened to me. But it’s clearly not the typical case of what changes people’s minds, and it’s quite likely that you have never experienced it yourself.

The far more common scenario—even for active researchers—is far more mundane: You changed your mind because someone convinced you. You encountered a persuasive argument, and it changed the way you think about things.

In fact, it probably wasn’t just one persuasive argument; it was probably many arguments, from multiple sources, over some span of time. It could be as little as minutes or hours; it could be as long as years.

Probably the first time someone tried to change your mind on that issue, they failed. The argument may even have degenerated into shouting and name-calling. You both went away thinking that the other side was composed of complete idiots or heartless monsters. And then, a little later, thinking back on the whole thing, you remembered one thing they said that was actually a pretty good point.

This happened again with someone else, and again with yet another person. And each time your mind changed just a little bit—you became less certain of some things, or incorporated some new information you didn’t know before. The towering edifice of your worldview would not be toppled by a single conversation—but a few bricks here and there did get taken out and replaced.

Or perhaps you weren’t even the target of the conversation; you simply overheard it. This seems especially common in the age of social media, where public and private spaces become blurred and two family members arguing about politics can blow up into a viral post that is viewed by millions. Perhaps you changed your mind not because of what was said to you, but because of what two other people said to one another; perhaps the one you thought was on your side just wasn’t making as many good arguments as the one on the other side.

Now, you may be thinking: Yes, people like me change our minds, because we are intelligent and reasonable. But those people, on the other side, aren’t like that. They are stubborn and foolish and dogmatic and stupid.

And you know what? You probably are an especially intelligent and reasonable person. If you’re reading this blog, there’s a good chance that you are at least above-average in your level of education, rationality, and open-mindedness.

But no matter what beliefs you hold, I guarantee you there is someone out there who shares many of them and is stubborn and foolish and dogmatic and stupid. And furthermore, there is probably someone out there who disagrees with many of your beliefs and is intelligent and open-minded and reasonable.

This is not to say that there’s no correlation between your level of reasonableness and what you actually believe. Obviously some beliefs are more rational than others, and rational people are more likely to hold those beliefs. (If this weren’t the case, we’d be doomed.) Other things equal, an atheist is more reasonable than a member of the Taliban; a social democrat is more reasonable than a neo-Nazi; a feminist is more reasonable than a misogynist; a member of the Human Rights Campaign is more reasonable than a member of the Westboro Baptist Church. But reasonable people can be wrong, and unreasonable people can be right.

You should be trying to seek out the most reasonable people who disagree with you. And you should be trying to present yourself as the most reasonable person who expresses your own beliefs.

This can be difficult—especially that first part, as the world (or at least the world spanned by Facebook and Twitter) seems to be filled with people who are astonishingly dogmatic and unreasonable. Often you won’t be able to find any reasonable disagreement. Often you will find yourself in threads full of rage, hatred and name-calling, and you will come away disheartened, frustrated, or even despairing for humanity. The whole process can feel utterly futile.

And yet, somehow, minds change.

Support for same-sex marriage in the US rose from 27% to 70% just since 1997.

Read that date again: 1997. Less than 25 years ago.

The proportion of new marriages which were interracial has risen from 3% in 1967 to 19% today. Given the racial demographics of the US, this is almost at the level of random assortment.

Ironically I think that the biggest reason people underestimate the effectiveness of rational argument is the availability heuristic: We can’t call to mind any cases where we changed someone’s mind completely. We’ve never observed a pi-radian turnaround in someone’s whole worldview, and thus, we conclude that nobody ever changes their mind about anything important.

But in fact most people change their minds slowly and gradually, and are embarrassed to admit they were wrong in public, so they change their minds in private. (One of the best single changes we could make toward improving human civilization would be to make it socially rewarded to publicly admit you were wrong. Even the scientific community doesn’t do this nearly as well as it should.) Often changing your mind doesn’t even really feel like changing your mind; you just experience a bit more doubt, learn a bit more, and repeat the process over and over again until, years later, you believe something different than you did before. You moved 0.1 or even 0.01 radians at a time, until at last you came all the way around.

It may be in fact that some people’s minds cannot be changed—either on particular issues, or even on any issue at all. But it is so very, very easy to jump to that conclusion after a few bad interactions, that I think we should intentionally overcompensate in the opposite direction: Only give up on someone after you have utterly overwhelming evidence that their mind cannot ever be changed in any way.

I can’t guarantee that this will work. Perhaps too many people are too far gone.

But I also don’t see any alternative. If the truth is to prevail, it will be by rational argument. This is the only method that systematically favors the truth. All other methods give equal or greater power to lies.

When to give up

Jun 6 JDN 2459372

Perseverance is widely regarded as a virtue, and for good reason. Often one of the most important deciding factors in success is the capacity to keep trying after repeated failure. I think this has been a major barrier for me personally; many things came easily to me when I was young, and I internalized the sense that if something doesn’t come easily, it must be beyond my reach.

Yet it’s also worth noting that this is not the only deciding factor—some things really are beyond our capabilities. Indeed, some things are outright impossible. And we often don’t know what is possible and what isn’t.

This raises the question: When should we persevere, and when should we give up?

There is actually reason to think that people often don’t give up when they should. Steven Levitt (of Freakonomics fame) recently published a study that asked people who were on the verge of a difficult decision to flip a coin, and then base their decision on the coin flip: Heads, make a change; tails, keep things as they are. Many didn’t actually follow the coin flip—but enough did that there was a statistical difference between those who saw heads and those who saw tails. The study found that the people who flipped heads and made a change were on average happier a couple of years later than the people who flipped tails and kept things as they were.

This question is particularly salient for me lately, because the academic job market has gone so poorly for me. I’ve spent most of my life believing that academia is where I belong; my intellect and my passion for teaching and research have convinced me and many others that this is the right path for me. But now that I have a taste of what it is actually like to apply for tenure-track jobs and submit papers to journals, I am utterly miserable. I hate every minute of it. I’ve spent the entire past year depressed and feeling like I have accomplished absolutely nothing.

In theory, once one actually gets tenure it’s supposed to get easier. But that could be a long way away—or it might never happen at all. As it is, there’s basically no chance I’ll get a tenure track position this year, and it’s unclear what my chances would be if I tried again next year.

If I could actually get a paper published, that would no doubt improve my odds of landing a better job next year. But I haven’t been able to do that, and each new rejection cuts so deep that I can barely stand to look at my papers anymore, much less actually continue submitting them. And apparently even tenured professors still get their papers rejected repeatedly, which means that this pain will never go away. I simply cannot imagine being happy if this is what I am expected to do for the rest of my life.

I found this list of criteria for when you should give up something—and most of them fit me. I’m not sure I know in my heart it can’t work out, but I increasingly suspect that. I’m not sure I want it anymore, now that I have a better idea of what it’s really like. Pursuing it is definitely making me utterly miserable. I wouldn’t say it’s the only reason, but I definitely do worry what other people will think if I quit; I feel like I’d be letting a lot of people down. I also wonder who I am without it, where I belong if not here. I don’t know what other paths are out there, but maybe there is something better. This constant stream of failure and rejection has definitely made me feel like I hate myself. And above all, when I imagine quitting, I absolutely feel an enormous sense of relief.

Publishing in journals seems to be the thing that successful academics care about most, and it means almost nothing to me anymore. I only want it because of all the pressure to have it, because of all the rewards that come from having it. It has become fully instrumental to me, with no intrinsic meaning or value. I have no particular desire to be lauded by the same system that lauded Fischer Black or Kenneth Rogoff—both of whose egregious and easily-avoidable mistakes are responsible for the suffering of millions of people around the world.

I want people to read my ideas. But people don’t actually read journals. They skim them. They read the abstracts. They look at the graphs and regression tables. (You have the meeting that should have been an email? I raise you the paper that should have been a regression table.) They see if there’s something in there that they should be citing for their own work, and if there is, maybe then they actually read the paper—but everyone is so hyper-specialized that only a handful of people will ever actually want to cite any given paper. The vast majority of research papers are incredibly tedious to read and very few people actually bother. As a method for disseminating ideas, this is perhaps slightly better than standing on a street corner and shouting into a megaphone.

I would much rather write books; people sometimes actually read books, especially when they are written for a wide audience and hence not forced into the straitjacket of standard ‘scientific writing’ that no human being actually gets any enjoyment out of writing or reading. I’ve seen a pretty clear improvement in writing quality of papers written by Nobel laureates—after they get their Nobels or similar accolades. Once they establish themselves, they are free to actually write in ways that are compelling and interesting, rather than having to present everything in the most dry, tedious way possible. If your paper reads like something that a normal person would actually find interesting or enjoyable to read, you will be—as I have been—immediately told that you must remove all such dangerous flavor until the result is as tasteless as possible.

No, the purpose of research journals is not to share ideas. Their function is not to share, but to evaluate. And it isn’t even really to evaluate research—it’s to evaluate researchers. It’s to outsource academic hiring decisions to an utterly unaccountable and arbitrary system run mostly by for-profit corporations. It may have some secondary effect of evaluating ideas for validity; at least the really awful ideas are usually excluded. But its primary function is to decide the academic pecking order.

I had thought that scientific peer review was supposed to select for truth. Perhaps sometimes it does. It seems to do so reasonably well in the natural sciences, at least. But in the social sciences? That’s far less clear. Peer-reviewed papers are much more likely to be accurate than any randomly-selected content; but there are still a disturbingly large number of peer-reviewed published papers that are utterly wrong, and some unknown but undoubtedly vast number of good papers that have never seen the light of day.

Then again, when I imagine giving up on an academic career, I don’t just feel relief—I also feel regret and loss. I feel like I’ve wasted years of my life putting together a dream that has now crumbled in my hands. I even feel some anger, some sense that I was betrayed by those who told me that this was about doing good research when it turns out it’s actually about being thick-skinned enough that you can take an endless assault of rejections. It feels like I’ve been running a marathon, and I just rounded a curve to discover that the last five miles must be ridden on horseback, when I don’t have a horse, I have no equestrian training, and in fact I’m allergic to horses.

I wish someone had told me it would be like this. Maybe they tried and I didn’t listen. They did say that papers would get rejected. They did say that the tenure track was high-pressure and publish-or-perish was a major source of anxiety. But they never said that it would tear at my soul like this. They never said that I would have to go through multiple rounds of agony, self-doubt, and despair in order to get even the slightest recognition for my years of work. They never said that the whole field would treat me like I’m worthless because I can’t satisfy the arbitrary demands of a handful of anonymous reviewers. They never said that I would begin to feel worthless after several rounds of this.

That’s really what I want to give up on. I want to give up on hitching my financial security, my career, my future, my self-worth to a system as capricious as peer review.

I don’t want to give up on research. I don’t want to give up on teaching. I still believe strongly in discovering new truths and sharing them with others. I’m just increasingly realizing that academia isn’t nearly as good at that as I thought it was.

It isn’t even that I think it’s impossible for me to succeed in academia. I think that if I continued trying to get a tenure-track job, I would land one eventually. Maybe next year. Or maybe I’d spend a few years at a postdoc first. And I’d probably manage to publish some paper in some reasonably respectable journal at some point in the future. But I don’t know how long it would take, or how good a journal it would be—and I’m already past the point where I really don’t care anymore, where I can’t afford to care, where if I really allowed myself to care it would only devastate me when I inevitably fail again. Now that I see what is really involved in the process, how arduous and arbitrary it is, publishing in a journal means almost nothing to me. I want to be validated; I want to be appreciated; I want to be recognized. But the system is set up to provide nothing but rejection, rejection, rejection. If even the best work won’t be recognized immediately and even the worst work can make it with enough tries, then the whole system begins to seem meaningless. It’s just rolls of the dice. And I didn’t sign up to be a gambler.

The job market will probably be better next year than it was this year. But how much better? Yes, there will be more openings, but there will also be more applicants: Everyone who would normally be on the market, plus everyone like me who didn’t make it this year, plus everyone who decided to hold back this year because they knew they wouldn’t make it (as I probably should have done). Yes, in a normal year, I could be fairly confident of getting some reasonably decent position—but this wasn’t a normal year, and next year won’t be one either, and the one after that might still not be. If I can’t get a paper published in a good journal between now and then—and I’m increasingly convinced that I can’t—then I really can’t expect my odds to be greatly improved from what they were this time around. And if I don’t know that this terrible gauntlet is going to lead to something good, I’d really much rather avoid it altogether. It was miserable enough when I went into it being (over)confident that it would work out all right.

Perhaps the most important question when deciding whether to give up is this: What will happen if you do? What alternatives do you have? If giving up means dying, then don’t give up. (“Learn to let go” is very bad advice to someone hanging from the edge of a cliff.) But while it may feel that way sometimes, rarely does giving up on a career or a relationship or a project yield such catastrophic results.

When people are on the fence about making a change and then do so, even based on the flip of a coin, it usually makes them better off. Note that this is different from saying you should make all your decisions randomly; if you are confident that you don’t want to make a change, don’t make a change. This advice is for people who feel like they want a change but are afraid to take the chance, people who find themselves ambivalent about what direction to go next—people like me.

I don’t know where I should go next. I don’t know where I belong. I know it isn’t Wall Street. I’m pretty sure it’s not consulting. Maybe it’s nonprofits. Maybe it’s government. Maybe it’s freelance writing. Maybe it’s starting my own business. I guess I’d still consider working in academia; if Purdue called me back to say they made a terrible mistake and they want me after all, I’d probably take the offer. But since such an outcome is now vanishingly unlikely, perhaps it’s time, after all, to give up.

Selectivity is a terrible measure of quality

May 23 JDN 2459358

How do we decide which universities and research journals are the best? There are a vast number of ways we could go about this—and there are in fact many different ranking systems out there, though only a handful are widely used. But one primary criterion which seems to be among the most frequently used is selectivity.

Selectivity is a very simple measure: What proportion of people who try to get in, actually get in? For universities this is admission rates for applicants; for journals it is acceptance rates for submitted papers.

The top-rated journals in economics have acceptance rates of 1-7%. The most prestigious universities have acceptance rates of 4-10%. So a reasonable ballpark is to assume a 95% chance of not getting accepted in either case. Of course, some applicants are more or less qualified, and some papers are more or less publishable; but my guess is that most applicants are qualified and most submitted papers are publishable. So these low acceptance rates mean refusing huge numbers of qualified people.


Selectivity is an objective, numeric score that can be easily generated and compared, and is relatively difficult to fake. This may account for its widespread appeal. And it surely has some correlation with genuine quality: Lots of people are likely to apply to a school because it is good, and lots of people are likely to submit to a journal because it is good.

But look a little bit closer, and it becomes clear that selectivity is really a terrible measure of quality.


One, it is extremely self-fulfilling. Once a school or a journal becomes prestigious, more people will try to get in there, and that will inflate its selectivity rating. Harvard is extremely selective because Harvard is famous and high-rated. Why is Harvard so high-rated? Well, in part because Harvard is extremely selective.

Two, it incentivizes restricting the number of applicants accepted.

Ivy League schools have vast endowments, and could easily afford to expand their capacity, thus employing more faculty and educating more students. But that would mean raising their acceptance rates and hence jeopardizing their precious selectivity ratings. If the goal is to give as many people as possible the highest quality education, then selectivity is a deeply perverse incentive: It specifically incentivizes not educating too many students.

Similarly, most journals include something in their rejection letters about “limited space”, which in the age of all-digital journals is utter nonsense. Journals could choose to publish ten, twenty, fifty times as many papers as they currently do—or half, or a tenth. They could publish everything that gets submitted, or only publish one paper a year. It’s an entirely arbitrary decision with no real constraints. They choose what proportion of papers to publish based primarily on three factors that have absolutely nothing to do with limited space: One, they want to publish enough papers to make it seem like they are putting out regular content; two, they want to make sure they publish anything that will turn out to be a major discovery (though they honestly seem systematically bad at predicting that); and three, they want to publish as few papers as possible within those constraints to maximize their selectivity.

To be clear, I’m not saying that journals should publish everything that gets submitted. Actually I think too many papers already get published—indeed, too many get written. The incentives in academia are to publish as many papers in top journals as possible, rather than to actually do the most rigorous and ground-breaking research. The best research often involves spending long periods of time making very little visible progress, and it does not lend itself to putting out regular publications to impress tenure committees and grant agencies.

The number of scientific papers published each year has grown at about 5% per year since 1900. The number of peer-reviewed journals has grown at an increasing rate, from about 3% per year for most of the 20th century to over 6% now. These are far in excess of population growth, technological advancement, or even GDP growth; growth this fast in the number of scientific papers is obviously unsustainable. There are now 300 times as many scientific papers published per year as there were in 1900—while the world population has only increased by about 5-fold during that time. Yes, the number of scientists has also increased—but not that fast. About 8 million people are scientists, publishing an average of 2 million articles per year—one per scientist every four years. But the number of scientist jobs grows at just over 1%—basically tracking population growth or the job market in general. If papers published continue to grow at 5% while the number of scientists increases at 1%, then in 100 years each scientist will have to publish 48 times as many papers as today, or about 1 every month.
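That factor of 48 is just compound growth: papers per scientist grow at the ratio of the two growth rates. A quick check of the arithmetic (my own back-of-the-envelope, in Python):

    # Compound-growth check for the figures above.
    papers_growth, scientists_growth, years = 1.05, 1.01, 100

    # Papers per scientist grow at the ratio of the two growth rates.
    print((papers_growth / scientists_growth) ** years)   # roughly 48.6

    # One paper every four years today, multiplied by ~48, is about 12 per year,
    # i.e. roughly one per month.
    print(48 / 4)   # 12.0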


So the problem with research journals isn’t so much that journals aren’t accepting enough papers, as that too many people are submitting papers. Of course the real problem is that universities have outsourced their hiring decisions to journal editors. Rather than actually evaluating whether someone is a good teacher or a good researcher (or accepting that they can’t and hiring randomly), universities have trusted in the arbitrary decisions of research journals to decide whom they should hire.

But selectivity as a measure of quality means that journals have no reason not to support this system; they get their prestige precisely from the fact that scientists are so pressured to publish papers. The more papers get submitted, the better the journals look for rejecting them.

Another way of looking at all this is to think about what the process of acceptance or rejection entails. It is inherently a process of asymmetric information.

If we had perfect information, what would the acceptance rate of any school or journal be? 100%, regardless of quality. Only the applicants who knew they would get accepted would apply. So the total number of admitted students and accepted papers would be exactly the same, but all the acceptance rates would rise to 100%.

Perhaps that’s not realistic; but what if the application criteria were stricter? For instance, instead of asking for your GPA and SAT score, Harvard’s form could simply say: “Anyone with a GPA less than 4.0 or an SAT score less than 1500 need not apply.” That’s practically true anyway. But Harvard doesn’t have an incentive to say it out loud, because then applicants who know they can’t meet that standard won’t bother applying, and Harvard’s precious selectivity number will go down. (These are far from sufficient, by the way; I was valedictorian and had a 1590 on my SAT and still didn’t get in.)

There are other criteria they’d probably be even less willing to emphasize, but are no less significant: “If your family income is $20,000 or less, there is a 95% chance we won’t accept you.” “Other things equal, your odds of getting in are much better if you’re Black than if you’re Asian.”

For journals it might be more difficult to express the criteria clearly, but they could certainly do more than they do. Journals could more strictly delineate what kind of papers they publish: This one only for pure theory, that one only for empirical data, this one only for experimental results. They could choose more specific content niches rather than literally dozens of journals all being ostensibly about “economics in general” (the American Economic Review, the Quarterly Journal of Economics, the Journal of Political Economy, the Review of Economic Studies, the European Economic Review, the International Economic Review, Economic Inquiry… these are just the most prestigious). No doubt there would still have to be some sort of submission process and some rejections—but if they really wanted to reduce the number of submissions they could easily do so. The fact is, they want to have a large number of submissions that they can reject.

What this means is that rather than being a measure of quality, selectivity is primarily a measure of opaque criteria. It’s possible to imagine a world where nearly every school and every journal accept less than 1% of applicants; this would occur if the criteria for acceptance were simply utterly unknown and everyone had to try hundreds of places before getting accepted.


Indeed, that’s not too dissimilar to how things currently work in the job market or the fiction publishing market. The average job opening receives a staggering 250 applications. In a given year, a typical literary agent receives 5000 submissions and accepts 10 clients—so about one in every 500.

For fiction writing I find this somewhat forgivable, if regrettable; the quality of a novel is a very difficult thing to assess, and to a large degree inherently subjective. I honestly have no idea what sort of submission guidelines one could put on an agency page to explain to authors what distinguishes a good novel from a bad one (or, not quite the same thing, a successful one from an unsuccessful one).

Indeed, it’s all the worse because a substantial proportion of authors don’t even follow the guidelines that they do include! The most common complaint I hear from agents and editors at writing conferences is authors not following their submission guidelines—such basic problems as submitting content from the wrong genre, not formatting it correctly, having really egregious grammatical errors. Quite frankly I wish they’d shut up about it, because I wanted to hear what would actually improve my chances of getting published, not listen to them rant about the thousands of people who can’t bother to follow directions. (And I’m pretty sure that those people aren’t likely to go to writing conferences and listen to agents give panel discussions.)

But for the job market? It’s really not that hard to tell who is qualified for most jobs. If it isn’t something highly specialized, most people could probably do it, perhaps with a bit of training. If it is something highly specialized, you can restrict your search to people who already have the relevant education or training. In any case, having experience in that industry is obviously a plus. Beyond that, it gets much harder to assess quality—but also much less necessary. Basically anyone with an advanced degree in the relevant subject or a few years of experience at that job will probably do fine, and you’re wasting effort by trying to narrow the field further. If it is very hard to tell which candidate is better, that usually means that the candidates really aren’t that different.

To my knowledge, not a lot of employers or fiction publishers pride themselves on their selectivity. Indeed, many fiction publishers have a policy of simply refusing unsolicited submissions, relying upon literary agents to pre-filter their submissions for them. (Indeed, even many agents refuse unsolicited submissions—which raises the question: What is a debut author supposed to do?) This is good, for if they did—if Penguin Random House (or whatever that ludicrous all-absorbing conglomerate is calling itself these days; ah, what was it like in that bygone era, when anti-trust enforcement was actually a thing?) decided to start priding itself on its selectivity of 0.05% or whatever—then the already massively congested fiction industry would probably grind to a complete halt.

This means that by ranking schools and journals based on their selectivity, we are partly incentivizing quality, but mostly incentivizing opacity. The primary incentive is for them to attract as many applicants as possible, even knowing full well that they will reject most of these applicants. They don’t want to be too clear about what they will accept or reject, because that might discourage unqualified applicants from trying and thus reduce their selectivity rate. In terms of overall welfare, every rejected application is wasted human effort—but in terms of the institution’s selectivity rating, it’s a point in their favor.