Mind reading is not optional

Nov 20 JDN 2459904

I have great respect for cognitive-behavioral therapy (CBT), and it has done a lot of good for me. (It is also astonishingly cost-effective; its QALY per dollar rate compares favorably to almost any other First World treatment, and loses only to treating high-impact Third World diseases like malaria and schistosomiasis.)

But there are certain aspects of it that have always been frustrating to me. Standard CBT techniques often present as ‘cognitive distortions’ what are in fact clearly necessary heuristics without which it would be impossible to function.

Perhaps the worst of these is so-called ‘mind reading’. The very phrasing of it makes it sound ridiculous: Are you suggesting that you have some kind of extrasensory perception? Are you claiming to be a telepath?

But in fact ‘mind reading’ is simply the use of internal cognitive models to forecast the thoughts, behaviors, and expectations of other human beings. And without it, it would be completely impossible to function in human society.

For instance, I have had therapists tell me that it is ‘mind reading’ for me to anticipate that people will have tacit expectations for my behavior that they will judge me for failing to meet, and I should simply wait for people to express their expectations rather than assuming them. I admit, life would be much easier if I could do that. But I know for a fact that I can’t. Indeed, I used to do that, as a child, and it got me in trouble all the time. People were continually upset at me for not doing things they had expected me to do but never bothered to actually mention. They thought these expectations were “obvious”; they were not, at least not to me.

It was often little things, and in hindsight some of these things seem silly: I didn’t know what a ‘made bed’ was supposed to look like, so I put it in a state that was functional for me, but that was not considered ‘making the bed’. (I have since learned that my way was actually better: It’s good to let sheets air out before re-using them.) I was asked to ‘clear the sink’, so I moved the dishes out of the sink and left them on the counter, not realizing that the implicit command was for me to wash those dishes, dry them, and put them away. I was asked to ‘bring the dinner plates to the table’, so I did that, and left them in a stack there, not realizing that I should be setting them out in front of each person’s chair and also bringing flatware. Of course I know better now. But how was I supposed to know then? It seems like I was expected to, though.

Most people just really don’t seem to realize how many subtle, tacit expectations are baked into every single task. I think neurodivergence is quite relevant here; I have a mild autism spectrum disorder, and so I think rather differently than most people. If you are neurotypical, then you probably can forecast other people’s expectations fairly well automatically, and so they may seem obvious to you. In fact, they may seem so obvious that you don’t even realize you’re doing it. Then when someone like me comes along and is consciously, actively trying to forecast other people’s expectations, and sometimes doing it poorly, you go and tell them to stop trying to forecast. But if they were to do that, they’d end up even worse off than they are. What you really need to be telling them is how to forecast better—but that would require insight into your own forecasting methods which you aren’t even consciously aware of.

Seriously, stop and think for a moment all of the things other people expect you to do every day that are rarely if ever explicitly stated. How you are supposed to dress, how you are supposed to speak, how close you are supposed to stand to other people, how long you are supposed to hold eye contact—all of these are standards you will be expected to meet, whether or not any of them have ever been explicitly explained to you. You may do this automatically; or you may learn to do it consciously after being criticized for failing to do it. But one way or another, you must forecast what other people will expect you to do.

To my knowledge, no one has ever explicitly told me not to wear a Starfleet uniform to work. I am not aware of any part of the university dress code that explicitly forbids such attire. But I’m fairly sure it would not be a good idea. To my knowledge, no one has ever explicitly told me not to burst out into song in the middle of a meeting. But I’m still pretty sure I shouldn’t do that. To my knowledge, no one has ever explicitly told me what the ‘right of way’ rules are for walking down a crowded sidewalk, who should be expected to move out of the way of whom. But people still get mad if you mess up and bump into them.

Even when norms are stated explicitly, it is often as a kind of last resort, and the mere fact that you needed to have a norm stated is often taken as a mark against your character. I have been explicitly told in various contexts not to talk to myself or engage in stimming leg movements; but the way I was told has generally suggested that I would have been judged better if I hadn’t had to be told, if I had simply known the way that other people seem to know. (Or is it that they never felt any particular desire to stim?)

In fact, I think a major part of developing social skills and becoming more functional, to the point where a lot of people actually now seem a bit surprised to learn I have an autism spectrum disorder, has been improving my ability to forecast other people’s expectations for my behavior. There are dozens if not hundreds of norms that people expect you to follow at any given moment; most people seem to intuit them so easily that they don’t even realize they are there. But they are there all the same, and this is painfully evident to those of us who aren’t always able to immediately intuit them all.

Now, the fact remains that my current mental models are surely imperfect. I am often wrong about what other people expect of me. I’m even prepared to believe that some of my anxiety comes from believing that people have expectations more demanding than what they actually have. But I can’t simply abandon the idea of forecasting other people’s expectations. Don’t tell me to stop doing it; tell me how to do it better.

Moreover, there is a clear asymmetry here: If you think people want more from you than they actually do, you’ll be anxious, but people will like you and be impressed by you. If you think people want less from you than they actually do, people will be upset at you and look down on you. So, in the presence of uncertainty, there’s a lot of pressure to assume that the expectations are high. It would be best to get it right, of course; but when you aren’t sure you can get it right, you’re often better off erring on the side of caution—which is to say, the side of anxiety.

In short, mind reading isn’t optional. If you think it is, that’s only because you do it automatically.

The United Kingdom in transition

Oct 30 JDN 2459883

When I first decided to move to Edinburgh, I certainly did not expect it to be such a historic time. The pandemic was already in full swing, but I thought that would be all. But this year I was living in the UK when its leadership changed in two historic ways:

First, there was the death of Queen Elizabeth II, and the coronation of King Charles III.

Second, there was the resignation of Boris Johnson, the appointment of Elizabeth Truss, and then, so rapidly I feel like I have whiplash, the resignation of Elizabeth Truss.

In other words, I have seen the end of the longest-reigning monarch and the rise and fall of the shortest-serving prime minister in the history of the United Kingdom. The three-hundred-year history of the United Kingdom.

The prior probability of such a 300-year-historic event happening during my own 3-year term in the UK is approximately 1%. Yet, here we are. A new king, one of a handful of genuine First World monarchs to be crowned in the 21st century. The others reign in the Netherlands, Belgium, Spain, Monaco, Andorra, and Luxembourg; none of these countries has even a third the population of the UK, and if we include every Commonwealth Realm (believe it or not, “realm” is in fact still the official term), Charles III is now king of a supranational union with a population of over 150 million people—nearly half that of the United States. (Yes, he’s your king too, Canada!) Note that Charles III is not king of the entire Commonwealth of Nations, which includes now-independent nations such as India, Pakistan, and South Africa; that successor to the British Empire contains 54 nations and has a population of over 2 billion.

I still can’t quite wrap my mind around this idea of having a king. It feels even more ancient and anachronistic than the 400-year-old university I work at. Of course I knew that we had a queen before, and that she was old and would presumably die at some point and probably be replaced; but that wasn’t really salient information to me until she actually did die and then there was a ten-mile-long queue to see her body and now next spring they will be swearing in this new guy as the monarch of the fourteen realms. It now feels like I’m living in one of those gritty satirical fractured fairy tales. Maybe it’s an urban fantasy setting; it feels a lot like Shrek, to be honest.

Yet other than feeling surreal, none of this has affected my life all that much. I haven’t even really felt the effects of inflation: Groceries and restaurant meals seem a bit more expensive than they were when we arrived, but it’s well within what our budget can absorb; we don’t have a car here, so we don’t care about petrol prices; and we haven’t even been paying more than usual in natural gas because of the subsidy programs. Actually it’s probably been good for our household finances that the pound is so weak and the dollar is so strong. I have been much more directly affected by the university union strikes: being temporary contract junior faculty (read: expendable), I am ineligible to strike and hence had to cross a picket line at one point.

Perhaps this is what history has always felt like for most people: The kings and queens come and go, but life doesn’t really change. But I honestly felt more directly affected by Trump living in the US than I did by Truss living in the UK.

This may be in part because Elizabeth Truss was a very unusual politician; she combined crazy far-right economic policy with generally fairly progressive liberal social policy. A right-wing libertarian, one might say. (As Krugman notes, such people are astonishingly rare in the electorate.) Her socially-liberal stance meant that she wasn’t trying to implement horrific hateful policies against racial minorities or LGBT people the way that Trump was, and for once her horrible economic policies were recognized immediately as such and quickly rescinded. Unlike Trump, Truss did not get the chance to appoint any supreme court justices who could go on to repeal abortion rights.

Then again, Truss couldn’t have appointed any judges if she’d wanted to. The UK Supreme Court is really complicated, and I honestly don’t understand how it works; but from what I do understand, the Prime Minister appoints the Lord Chancellor, the Lord Chancellor forms a commission to appoint the President of the Supreme Court, and the President of the Supreme Court forms a commission to appoint new Supreme Court judges. But I think the monarch is considered the ultimate authority and can veto any appointment along the way. (Or something. Sometimes I get the impression that no one truly understands the UK system, and they just sort of go with doing things as they’ve always been done.) This convoluted arrangement seems to grant the court considerably more political independence than its American counterpart; also, unlike the US Supreme Court, the UK Supreme Court is not allowed to explicitly overturn primary legislation. (Fun fact: The Lord Chancellor is also the Keeper of the Great Seal of the Realm, because Great Britain hasn’t quite figured out that the 13th century ended yet.)

It’s sad and ironic that it was precisely by not being bigoted and racist that Truss ensured she would not have sufficient public support for her absurd economic policies. There’s a large segment of the population of both the US and UK—aptly, if ill-advisedly, referred to by Clinton as “deplorables”—who will accept any terrible policy as long as it hurts the right people. But Truss failed to appeal to that crucial demographic, and so could find no one to support her. Hence, her approval rating fell to a dismal 10%, and she was outlasted by a head of lettuce.

At the time of writing, the new prime minister has not yet been announced, but the smart money is on Rishi Sunak. (I mean that quite literally; he’s leading in prediction markets.) He’s also socially liberal but fiscally conservative, but unlike Truss he seems to have at least some vague understanding of how economics works. Sunak is also popular in a way Truss never was (though that popularity has been declining recently). So I think we can expect to get new policies which are in the same general direction as what Truss wanted—lower taxes on the rich, more privatization, less spent on social services—but at least Sunak is likely to do so in a way that makes the math(s?) actually add up.

All of this is unfortunate, but largely par for the course for the last few decades. It compares quite favorably to the situation in the US, where somehow a large chunk of Americans either don’t believe that an insurrection attempt occurred, are fine with it, or blame the other side, and as the guardrails of democracy continue breaking, somehow gasoline prices appear to be one of the most important issues in the midterm election.

You know what? Living through history sucks. I don’t want to live in “interesting times” anymore.

On (gay) marriage

Oct 9 JDN 2459862

This post goes live on my first wedding anniversary. Thus, as you read this, I will have been married for one full year.

Honestly, being married hasn’t felt that different to me. This is likely because we’d been dating since 2012 and lived together for several years before actually getting married. It has made some official paperwork more convenient, and I’ve reached the point where I feel naked without my wedding band; but for the most part our lives have not really changed.

And perhaps this is as it should be. Perhaps the best way to really know that you should get married is to already feel as though you are married, and just finally get around to making it official. Perhaps people for whom getting married is a momentous change in their lives (as opposed to simply a formal announcement followed by a celebration) are people who really shouldn’t be getting married just yet.

A lot of things in my life—my health, my career—have not gone very well in this past year. But my marriage has been only a source of stability and happiness. I wouldn’t say we never have conflict, but quite honestly I was expecting a lot more challenges and conflicts from the way I’d heard other people talk about marriage in the past. All of my friends who have kids seem to be going through a lot of struggles as a result of that (which is one of several reasons we keep procrastinating on looking into adoption), but marriage itself does not appear to be any more difficult than friendship—in fact, maybe easier.

I have found myself oddly struck by how un-important it has been that my marriage is to a same-sex partner. I keep expecting people to care—to seem uncomfortable, to be resistant, or simply to be surprised—and it so rarely happens.

I think this is probably generational: We Millennials grew up at the precise point in history when the First World suddenly decided, all at once, that gay marriage was okay.

Seriously, look at this graph. I made it by combining this article, which uses data from the General Social Survey, with this article from Pew:

Until around 1990—when I was 2 years old—support for same-sex marriage was stable and extremely low: About 10% of Americans supported it (presumably most of them LGBT!), and over 70% opposed it. Then, quite suddenly, attitudes began changing, and by 2019, over 60% of Americans supported it and only 31% opposed it.

That is, within a generation, we went from a country where almost no one supported gay marriage to a country where same-sex marriage is so popular that any major candidate who opposed it would almost certainly lose a general election. (They might be able to survive a Republican primary, as Republican support for same-sex marriage is only about 44%—about where it was among Democrats in the early 2000s.)

This is a staggering rate of social change. If development economics is the study of what happened in South Korea from 1950-2000, I think political science should be the study of what happened to attitudes on same-sex marriage in the US from 1990-2020.

And of course it isn’t just the US. Similar patterns can be found across Western Europe, with astonishingly rapid shifts from near-universal opposition to near-universal support within a generation.

I don’t think I have been able to fully emotionally internalize this shift. I grew up in a world where homophobia was mainstream, where only the most radical left-wing candidates were serious about supporting equal rights and representation for LGBT people. And suddenly I find myself in a world where we are actually accepted and respected as equals, and I keep waiting for the other shoe to drop. Aren’t you the same people who told me as a teenager that I was a sexual deviant who deserved to burn in Hell? But now you’re attending my wedding? And offering me joint life insurance policies? My own extended family members treat me differently now than they did when I was a teenager, and I don’t quite know how to trust that the new way is the true way and not some kind of facade that could rapidly disappear.

I think this sort of generational trauma may never fully heal, in which case it will be the generation after us—the Zoomers, I believe we’re calling them now—who will actually live in this new world we created, while the rest of us forever struggle to accept that things are not as we remember them. Once bitten, we remain forever twice shy, lest attitudes regress as suddenly as they advanced.

Then again, it seems that Zoomers may be turning against the institution of marriage in general. As the meme says: “Boomers: No gay marriage. Millennials: Yes gay marriage. Gen Z: Yes gay, no marriage.” Maybe that’s for the best; maybe the future of humanity is for personal relationships to be considered no business of the government at all. But for now at least, equal marriage is clearly much better than unequal marriage, and the First World seems to have figured that out blazing fast.

And of course the rest of the world still hasn’t caught up. While trends are generally in a positive direction, there are large swaths of the world where even very basic rights for LGBT people are opposed by most of the population. As usual, #ScandinaviaIsBetter, with over 90% support for LGBT rights; and, as usual, Sub-Saharan Africa is awful, with support in Kenya, Uganda and Nigeria not even hitting 20%.

Mindful of mindfulness

Sep 25 JDN 2459848

I have always had trouble with mindfulness meditation.

On the one hand, I find it extremely difficult to do: if there is one thing my mind is good at, it’s wandering. (I think in addition to my autism spectrum disorder, I may also have a smidgen of ADHD. I meet some of the criteria at least.) And it feels a little too close to a lot of practices that are obviously mumbo-jumbo nonsense, like reiki, qigong, and reflexology.

On the other hand, mindfulness meditation has been empirically shown to have large beneficial effects in study after study after study. It helps with not only depression, but also chronic pain. It even seems to improve immune function. The empirical data is really quite clear at this point. The real question is how it does all this.

And I am, above all, an empiricist. I bow before the data. So, when my new therapist directed me to an app that’s supposed to train me to do mindfulness meditation, I resolved that I would in fact give it a try.

Honestly, as of writing this, I’ve been using it less than a week; it’s probably too soon to make a good evaluation. But I did have some prior experience with mindfulness, so this was more like getting back into it rather than starting from scratch. And, well, I think it might actually be working. I feel a bit better than I did when I started.

If it is working, it doesn’t seem to me that the mechanism is greater focus or mental control. I don’t think I’ve really had time to meaningfully improve those skills, and to be honest, I have a long way to go there. The pre-recorded voice samples keep telling me it’s okay if my mind wanders, but I doubt the app developers planned for how much my mind can wander. When they suggest I try to notice each wandering thought, I feel like saying, “Do you want the complete stack trace, or just the final output? Because if I wrote down each terminal branch alone, my list would say something like ‘fusion reactors, ice skating, Napoleon’.”

I think some of the benefit is simply parasympathetic activation, that is, being more relaxed. I am, and have always been, astonishingly bad at relaxing. It’s not that I lack positive emotions: I can enjoy, I can be excited. Nor am I incapable of low-arousal emotions: I can get bored, I can be lethargic. I can also experience emotions that are negative and high-arousal: I can be despondent or outraged. But I have great difficulty reaching emotional states which are simultaneously positive and low-arousal, i.e. states of calm and relaxation. (See here for more on the valence/arousal model of emotional states.) To some extent I think this is due to innate personality: I am high in both Conscientiousness and Neuroticism, which basically amounts to being “high-strung”. But mindfulness has taught me that it’s also trainable, to some extent; I can get better at relaxing, and I already have.

And even more than that, I think the most important effect has been reminding and encouraging me to practice self-compassion. I am an intensely compassionate person, toward other people; but toward myself, I am brutal, demanding, unforgiving, even cruel. My internal monologue says terrible things to me that I would never say to anyone else. (Or at least, not to anyone else who wasn’t a mass murderer or something. I wouldn’t feel particularly bad about saying “You are a failure, you are broken, you are worthless, you are unworthy of love” to, say, Josef Stalin. And yes, these are in fact things my internal monologue has said to me.) Whenever I am unable to master a task I consider important, my automatic reaction is to denigrate myself for failing; I think the greatest benefit I am getting from practicing meditation is being encouraged to fight that impulse. That is, the most important value added by the meditation app has not been in telling me how to focus on my own breathing, but in reminding me to forgive myself when I do it poorly.

If this is right (as I said, it’s probably too soon to say), then we may at last be able to explain why meditation is simultaneously so weird and tied to obvious mumbo-jumbo on the one hand, and also so effective on the other. The actual function of meditation is to be a difficult cognitive task which doesn’t require outside support.

And then the benefit actually comes from doing this task, getting slowly better at it—feeling that sense of progress—and also from learning to forgive yourself when you do it badly. The task probably could have been anything: Find paths through mazes. Fill out Sudoku grids. Solve integrals. But these things are hard to do without outside resources: It’s basically impossible to draw a maze without solving it in the process. Generating a Sudoku grid with a unique solution is at least as hard as solving one (which is NP-complete). By the time you know a given function even has an antiderivative in elementary functions, you’ve basically integrated it. But focusing on your breath? That you can do anywhere, anytime. And the difficulty of controlling all your wandering thoughts may be less a bug than a feature: It’s precisely because the task is so difficult that you will have reason to practice forgiving yourself for failure.

The arbitrariness of the task itself is how you can get a proliferation of different meditation techniques, and a wide variety of mythologies and superstitions surrounding them all, but still have them all be about equally effective in the end. Because it was never really about the task at all. It’s about getting better and failing gracefully.

It probably also helps that meditation is relaxing. Solving integrals might not actually work as well as focusing on your breath, even if you had a textbook handy full of integrals to solve. Breathing deeply is calming; integration by parts isn’t. But lots of things are calming, and some things may be calming to one person but not to another.

It is possible that there is yet some other benefit to be had directly via mindfulness itself. If there is, it will surely have more to do with anterior cingulate activation than realignment of qi. But such a particular benefit isn’t necessary to explain the effectiveness of meditation, and indeed would be hard-pressed to explain why so many different kinds of meditation all seem to work about as well.

Because it was never about what you’re doing—it was always about how.

The injustice of talent

Sep 4 JDN 2459827

Consider the following two principles of distributive justice.

A: People deserve to be rewarded in proportion to what they accomplish.

B: People deserve to be rewarded in proportion to the effort they put in.

Both principles sound pretty reasonable, don’t they? They both seem like sensible notions of fairness, and I think most people would broadly agree with both of them.

This is a problem, because they are mutually contradictory. We cannot possibly follow them both.

For, as much as our society would like to pretend otherwise—and I think this contradiction is precisely why our society would like to pretend otherwise—what you accomplish is not simply a function of the effort you put in.

Don’t get me wrong; it is partly a function of the effort you put in. Hard work does contribute to success. But it is neither sufficient, nor strictly necessary.

Rather, success is a function of three factors: Effort, Environment, and Talent.

Effort is the work you yourself put in, and basically everyone agrees you deserve to be rewarded for that.

Environment includes all the outside factors that affect you—including both natural and social environment. Inheritance, illness, and just plain luck are all in here, and there is general, if not universal, agreement that society should make at least some efforts to minimize inequality created by such causes.

And then, there is talent. Talent includes whatever capacities you innately have. It could be strictly genetic, or it could be acquired in childhood or even in the womb. But by the time you are an adult and responsible for your own life, these factors are largely fixed and immutable. This includes things like intelligence, disability, even height. The trillion-dollar question is: How much should we reward talent?

For talent clearly does matter. I will never swim like Michael Phelps, run like Usain Bolt, or shoot hoops like Steph Curry. It doesn’t matter how much effort I put in, how many hours I spend training—I will never reach their level of capability. Never. It’s impossible. I could certainly improve from my current condition; perhaps it would even be good for me to do so. But there are certain hard fundamental constraints imposed by biology that give them more potential in these skills than I will ever have.

Conversely, there are likely things I can do that they will never be able to do, though this is less obvious. Could Michael Phelps never be as good a programmer or as skilled a mathematician as I am? He certainly isn’t now. Maybe, with enough time, enough training, he could be; I honestly don’t know. But I can tell you this: I’m sure it would be harder for him than it was for me. He couldn’t breeze through college-level courses in differential equations and quantum mechanics the way I did. There is something I have that he doesn’t, and I’m pretty sure I was born with it. Call it spatial working memory, or mathematical intuition, or just plain IQ. Whatever it is, math comes easy to me in not so different a way from how swimming comes easy to Michael Phelps. I have talent for math; he has talent for swimming.

Moreover, these are not small differences. It’s not like we all come with basically the same capabilities with a little bit of variation that can be easily washed out by effort. We’d like to believe that—we have all sorts of cultural tropes that try to inculcate that belief in us—but it’s obviously not true. The vast majority of quantum physicists are people born with high IQ. The vast majority of pro athletes are people born with physical prowess. The vast majority of movie stars are people born with pretty faces. For many types of jobs, the determining factor seems to be talent.

This isn’t too surprising, actually—even if effort matters a lot, we would still expect talent to show up as the determining factor much of the time.

Let’s go back to that contest function model I used to analyze the job market awhile back (the one that suggests we spend way too much time and money in the hiring process). This time let’s focus on the perspective of the employees themselves.

Each employee has a level of talent, h. Employee X has talent h_x and exerts effort x, producing output of a quality that is the product of the two: h_x x. Similarly, employee Z has talent h_z and exerts effort z, producing output of quality h_z z.

Then, there’s a certain amount of luck that factors in. The most successful output isn’t necessarily the best; maybe what should have been the best output lost out because some random circumstance prevailed. But we’ll say that the probability an individual succeeds is proportional to the quality of their output.

So the probability that employee X succeeds is: h_x x / (h_x x + h_z z)

I’ll skip the algebra this time (if you’re interested you can look back at that previous post), but to make a long story short, in Nash equilibrium the two employees will exert exactly the same amount of effort.

Then, which one succeeds will be entirely determined by talent; because x = z, the probability that X succeeds is h_x / (h_x + h_z).

It’s not that effort doesn’t matter—it absolutely does matter, and in fact in this model, with zero effort you get zero output (which isn’t necessarily the case in real life). It’s that in equilibrium, everyone is exerting the same amount of effort; so what determines who wins is innate talent. And I gotta say, that sounds an awful lot like how professional sports works. It’s less clear whether it applies to quantum physicists.

But maybe we don’t really exert the same amount of effort! This is true. Indeed, it seems like actually effort is easier for people with higher talent—that the same hour spent running on a track is easier for Usain Bolt than for me, and the same hour studying calculus is easier for me than it would be for Usain Bolt. So in the end our equilibrium effort isn’t the same—but rather than compensating, this effect only serves to exaggerate the difference in innate talent between us.

It’s simple enough to generalize the model to allow for such a thing. For instance, I could say that the cost of producing a unit of effort is inversely proportional to your talent; then instead of h_x / (h_x + h_z), in equilibrium the probability of X succeeding would become h_x^2 / (h_x^2 + h_z^2). The equilibrium effort would also be different, with x > z if h_x > h_z.
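These equilibrium results can be checked numerically. Here is a minimal Python sketch (the function names, prize normalization, and talent values are my own illustrative choices, not anything from the original analysis) that finds the Nash equilibrium by iterating each player’s closed-form best response, for both the equal-cost case and the case where effort costs are inversely proportional to talent:

```python
import math

def best_response(h_own, h_opp, effort_opp, prize=1.0, cost=1.0):
    """Best response in the contest with success probability
    h_own*x / (h_own*x + h_opp*z) and linear effort cost `cost` per unit.
    From the first-order condition: prize*h_own*b / (h_own*x + b)^2 = cost,
    where b = h_opp * effort_opp."""
    b = h_opp * effort_opp
    u = math.sqrt(prize * b * h_own / cost) - b  # u = h_own * x
    return max(u, 0.0) / h_own

def nash_equilibrium(h_x, h_z, cost_x=1.0, cost_z=1.0, iters=200):
    """Approximate equilibrium efforts by iterating best responses."""
    x, z = 0.1, 0.1
    for _ in range(iters):
        x = best_response(h_x, h_z, z, cost=cost_x)
        z = best_response(h_z, h_x, x, cost=cost_z)
    return x, z

h_x, h_z = 2.0, 1.0  # X is twice as talented as Z

# Equal effort costs: efforts converge to the same value,
# so success probability reduces to h_x / (h_x + h_z) = 2/3.
x, z = nash_equilibrium(h_x, h_z)
p_x = h_x * x / (h_x * x + h_z * z)

# Costs inversely proportional to talent: the gap widens,
# and the probability becomes h_x^2 / (h_x^2 + h_z^2) = 4/5.
x2, z2 = nash_equilibrium(h_x, h_z, cost_x=1.0 / h_x, cost_z=1.0 / h_z)
p_x2 = h_x * x2 / (h_x * x2 + h_z * z2)
```

With talents 2 and 1, the first case gives X a 2/3 chance of success and the second gives 4/5, matching the closed-form expressions above: making effort cheaper for the talented amplifies rather than offsets the talent gap.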

Once we acknowledge that talent is genuinely important, we face an ethical problem. Do we want to reward people for their accomplishment (A), or for their effort (B)? There are good cases to be made for each.

Rewarding for accomplishment, which we might call meritocracy, will tend to, well, maximize accomplishment. We’ll get the best basketball players playing basketball, the best surgeons doing surgery. Moreover, accomplishment is often quite easy to measure, even when effort isn’t.

Rewarding for effort, which we might call egalitarianism, will give people the most control over their lives, and might well feel the most fair. Those who succeed will be precisely those who work hard, even if they do things they are objectively bad at. Even people who are born with very little talent will still be able to make a living by working hard. And it will ensure that people do work hard, which meritocracy can actually fail at: If you are extremely talented, you don’t really need to work hard because you just automatically succeed.

Capitalism, as an economic system, is very good at rewarding accomplishment. I think part of what makes socialism appealing to so many people is that it tries to reward effort instead. (Is it very good at that? Not so clear.)

The more extreme differences are actually in terms of disability. There’s a certain baseline level of activities that most people are capable of, which we think of as “normal”: most people can talk; most people can run, if not necessarily very fast; most people can throw a ball, if not pitch a proper curveball. But some people can’t throw. Some people can’t run. Some people can’t even talk. It’s not that they are bad at it; it’s that they are literally not capable of it. No amount of effort could have made Stephen Hawking into a baseball player—not even a bad one.

It’s these cases when I think egalitarianism becomes most appealing: It just seems deeply unfair that people with severe disabilities should have to suffer in poverty. Even if they really can’t do much productive work on their own, it just seems wrong not to help them, at least enough that they can get by. But capitalism by itself absolutely would not do that—if you aren’t making a profit for the company, they’re not going to keep you employed. So we need some kind of social safety net to help such people. And it turns out that such people are quite numerous, and our current system is really not adequate to help them.

But meritocracy has its pull as well. Especially when the job is really important—like surgery, not so much basketball—we really want the highest quality work. It’s not so important whether the neurosurgeon who removes your tumor worked really hard at it or found it a breeze; what we care about is getting that tumor out.

Where does this leave us?

I think we have no choice but to compromise, on both principles. We will reward both effort and accomplishment, to greater or lesser degree—perhaps varying based on circumstances. We will never be able to entirely reward accomplishment or entirely reward effort.

This is more or less what we already do in practice, so why worry about it? Well, because we don’t like to admit that it’s what we do in practice, and a lot of problems seem to stem from that.

We have people acting like billionaires are such brilliant, hard-working people just because they’re rich—because our society rewards effort, right? So they couldn’t be so successful if they didn’t work so hard, right? Right?

Conversely, we have people who denigrate the poor as lazy and stupid just because they are poor. Because it couldn’t possibly be that their circumstances were worse than yours? Or hey, even if they are genuinely less talented than you—do less talented people deserve to be homeless and starving?

We tell kids from a young age, “You can be whatever you want to be”, and “Work hard and you’ll succeed”; and these things simply aren’t true. There are limitations on what you can achieve through effort—limitations imposed by your environment, and limitations imposed by your innate talents.

I’m not saying we should crush children’s dreams; I’m saying we should help them to build more realistic dreams, dreams that can actually be achieved in the real world. And then, when they grow up, they either will actually succeed, or when they don’t, at least they won’t hate themselves for failing to live up to what you told them they’d be able to do.

If you were wondering why Millennials are so depressed, that’s clearly a big part of it: We were told we could be and do whatever we wanted if we worked hard enough, and then that didn’t happen; and we had so internalized what we were told that we thought it had to be our fault that we failed. We didn’t try hard enough. We weren’t good enough. I have spent years feeling this way—on some level I do still feel this way—and it was not because adults tried to crush my dreams when I was a child, but on the contrary because they didn’t do anything to temper them. They never told me that life is hard, and people fail, and that I would probably fail at my most ambitious goals—and it wouldn’t be my fault, and it would still turn out okay.

That’s really it, I think: They never told me that it’s okay not to be wildly successful. They never told me that I’d still be good enough even if I never had any great world-class accomplishments. Instead, they kept feeding me the lie that I would have great world-class accomplishments; and then, when I didn’t, I felt like a failure and I hated myself. I think my own experience may be particularly extreme in this regard, but I know a lot of other people in my generation who had similar experiences, especially those who were also considered “gifted” as children. And we are all now suffering from depression, anxiety, and Impostor Syndrome.

All because nobody wanted to admit that talent, effort, and success are not the same thing.

How to fix economics publishing

Aug 7 JDN 2459806

The current system of academic publishing in economics is absolutely horrible. It seems practically designed to undermine the mental health of junior faculty.

1. Tenure decisions, and even most hiring decisions, are almost entirely based upon publication in five (5) specific journals.

2. One of those “top five” journals is owned by Elsevier, a corrupt monopoly that has no basis for its legitimacy yet somehow controls nearly one-fifth of all scientific publishing.

3. Acceptance rates in all of these journals are between 5% and 10%—greatly decreased from what they were a generation or two ago. Given a typical career span, the senior faculty evaluating you on whether you were published in these journals were about three times as likely to get their own papers published there as you are.

4. Submissions are only single-blinded, so while you have no idea who is reading your papers, they know exactly who you are and can base their decision on whether you are well-known in the profession—or simply whether they like you.

5. Simultaneous submissions are forbidden, so when submitting to journals you must go one at a time, waiting to hear back from one before trying the next.

6. Peer reviewers are typically unpaid and generally uninterested, and so procrastinate as long as possible on doing their reviews.

7. As a result, review times for a paper are often measured in months, for every single cycle.

So, a highly successful paper goes like this: You submit it to a top journal, wait three months, it gets rejected. You submit it to another one, wait another four months, it gets rejected. You submit it to a third one, wait another two months, and you are told to revise and resubmit. You revise and resubmit, wait another three months, and then finally get accepted.

You have now spent an entire year getting one paper published. And this was a success.

Now consider a paper that doesn’t make it into a top journal. You submit, wait three months, rejected; you submit again, wait four months, rejected; you submit again, wait two months, rejected. You submit again, wait another five months, rejected; you submit to the fifth and final top-five, wait another four months, and get rejected again.

Now, after a year and a half, you can turn to other journals. You submit to a sixth journal, wait three months, rejected. You submit to a seventh journal, wait four months, get told to revise and resubmit. You revise and resubmit, wait another two months, and finally—finally, after two years—actually get accepted, but not to a top-five journal. So it may not even help you get tenure, unless maybe a lot of people cite it or something.

And what if you submit to a seventh, an eighth, a ninth journal, and still keep getting rejected? At what point do you simply give up on that paper and try to move on with your life?

That’s a trick question: Because what really happens, at least to me, is I can’t move on with my life. I get so disheartened from all the rejections of that paper that I can’t bear to look at it anymore, much less go through the work of submitting it to yet another journal that will no doubt reject it again. But worse than that, I become so depressed about my academic work in general that I become unable to move on to any other research either. And maybe it’s me, but it isn’t just me: 28% of academic faculty suffer from severe depression, and 38% from severe anxiety. And that’s across all faculty—if you look just at junior faculty it’s even worse: 43% of junior academic faculty suffer from severe depression. When a problem is that prevalent, at some point we have to look at the system that’s making us this way.

I can blame the challenges of moving across the Atlantic during a pandemic, and the fact that my chronic migraines have been the most frequent and severe they have been in years, but the fact remains: I have accomplished basically nothing towards the goal of producing publishable research in the past year. I have two years left at this job; if I started right now, I might be able to get something published before my contract is done. That is assuming the project went smoothly, that I could start submitting it as soon as it was done, and that it didn’t get rejected as many times as the last one.

I just can’t find the motivation to do it. When the pain is so immediate and so intense, and the rewards are so distant and so uncertain, I just can’t bring myself to do the work. I had hoped that talking about this with my colleagues would help me cope, but it hasn’t; in fact it only seems to make me feel worse, because so few of them seem to understand how I feel. Maybe I’m talking to the wrong people; maybe the ones who understand are themselves suffering too much to reach out to help me. I don’t know.

But it doesn’t have to be this way. Here are some simple changes that could make the entire process of academic publishing in economics go better:

1. Boycott Elsevier and all for-profit scientific journal publishers. Stop reading their journals. Stop submitting to their journals. Stop basing tenure decisions on their journals. Act as though they don’t exist, because they shouldn’t—and then hopefully soon they won’t.

2. Peer reviewers should be paid for their time, and in return required to respond promptly—no more than a few weeks. A lack of response should be considered a positive vote on that paper.

3. Allow simultaneous submissions; if multiple journals accept, let the author choose between them. This is already how it works in fiction publishing, which you’ll note has not collapsed.

4. Increase acceptance rates. You are not actually limited by paper constraints anymore; everything is digital now. Most of the work—even in the publishing process—already has to be done just to go through peer review, so you may as well publish it. Moreover, most papers that are submitted are actually worthy of publishing, and this whole process is really just an idiotic status hierarchy. If the prestige of your journal decreases because you accept more papers, we are measuring prestige wrong. Papers should be accepted something like 50% of the time, not 5-10%.

5. Double blind submissions, and insist on ethical standards that maintain that blinding. No reviewer should know whether they are reading the work of a grad student or a Nobel Laureate. Reputation should mean nothing; scientific rigor should mean everything.

And, most radical of all, what I really need in my life right now:

6. Faculty should not have to submit their own papers. Each university department should have administrative staff whose job it is to receive papers from their faculty, format them appropriately, and submit them to journals. They should deal with all rejections, and only report to the faculty member when they have received an acceptance or a request to revise and resubmit. Faculty should simply do the research, write the papers, and then fire and forget them. We have highly specialized skills, and our valuable time is being wasted on the clerical tasks of formatting and submitting papers, which many other people could do as well or better. Worse, we are uniquely vulnerable to the emotional impact of the rejection—seeing someone else’s paper rejected is an entirely different feeling from having your own rejected.

Do all that, and I think I could be happy to work in academia. As it is, I am seriously considering leaving and never coming back.

Krugman and rockets and feathers

Jul 17 JDN 2459797

Well, this feels like a milestone: Paul Krugman just wrote a column about a topic I’ve published research on. He didn’t actually cite our paper—in fact the literature review he links to is from 2014—but the topic is very much what we were studying: Asymmetric price transmission, ‘rockets and feathers’. He’s even talking about it from the perspective of industrial organization and market power, which is right in line with our results (and a bit different from the mainstream consensus among economic policy pundits).

The phenomenon is a well-documented one: When the price of an input (say, crude oil) rises, the price of outputs made from that input (say, gasoline) rise immediately, and basically one to one, sometimes even more than one to one. But when the price of an input falls, the price of outputs only falls slowly and gradually, taking a long time to converge to the same level as the input prices. Prices go up like a rocket, but down like a feather.

Many different explanations have been proposed to explain this phenomenon, and they aren’t all mutually exclusive. They include various aspects of market structure, substitution of inputs, and use of inventories to smooth the effects of prices.

One that I find particularly unpersuasive is the notion of menu costs: That it requires costly effort to actually change your prices, and this somehow results in the asymmetry. Most gas stations have digital price boards; it requires almost zero effort for them to change prices whenever they want. Moreover, there’s no clear reason this would result in asymmetry between raising and lowering prices. Some models extend the notion of “menu cost” to include expected customer responses, which is a much better explanation; but I think that’s far beyond the original meaning of the concept. If you fear to change your price because of how customers may respond, finding a cheaper way to print price labels won’t do a thing to change that.

But our paper—and Krugman’s article—is about one factor in particular: market power. We don’t see prices behave this way in highly competitive markets. We see it the most in oligopolies: Markets where there are only a small number of sellers, who thus have some control over how they set their prices.

Krugman explains it as follows:

When oil prices shoot up, owners of gas stations feel empowered not just to pass on the cost but also to raise their markups, because consumers can’t easily tell whether they’re being gouged when prices are going up everywhere. And gas stations may hang on to these extra markups for a while even when oil prices fall.

That’s actually a somewhat different mechanism from the one we found in our experiment, which is that asymmetric price transmission can be driven by tacit collusion. Explicit collusion is illegal: You can’t just call up the other gas stations and say, “Let’s all set the price at $5 per gallon.” But you can tacitly collude by responding to how they set their prices, and not trying to undercut them even when you could get a short-run benefit from doing so. It’s actually very similar to an Iterated Prisoner’s Dilemma: Cooperation is better for everyone, but worse for you as an individual; to get everyone to cooperate, it’s vital to severely punish those who don’t.
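To make the dilemma concrete, here is a toy payoff matrix. The numbers are illustrative inventions, not estimates from our paper; they just exhibit the structure described above, where undercutting is always better for you in a single round, yet mutual cooperation beats mutual defection.

```python
# The pricing game as a one-shot Prisoner's Dilemma (illustrative payoffs):
# "C" = cooperate (keep the collusive price), "D" = defect (undercut).
# Each cell is (my profit, rival's profit).
payoff = {
    ("C", "C"): (10, 10),  # both keep prices high
    ("C", "D"): (2, 12),   # I hold the line, rival undercuts me
    ("D", "C"): (12, 2),   # I undercut, rival holds the line
    ("D", "D"): (4, 4),    # price war
}

# Defecting is better for me no matter what the rival does...
assert payoff[("D", "C")][0] > payoff[("C", "C")][0]
assert payoff[("D", "D")][0] > payoff[("C", "D")][0]
# ...yet mutual cooperation beats mutual defection for everyone.
assert payoff[("C", "C")][0] > payoff[("D", "D")][0]
```

Repeating this game is what makes tacit collusion sustainable: the short-run gain from undercutting is outweighed by the long-run cost of triggering a price war.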

In our experiment, the participants were acting as businesses setting their prices. The customers were fully automated, so there was no opportunity to “fool” them in this way. We also excluded any kind of menu costs or product inventories. But we still saw prices go up like rockets and down like feathers. Moreover, prices were always substantially higher than costs, especially during that phase when they are falling down like feathers.

Our explanation goes something like this: Businesses are trying to use their market power to maintain higher prices and thereby make higher profits, but they have to worry about other businesses undercutting their prices and taking all the business. Moreover, they also have to worry about others thinking that they are trying to undercut prices—they want to be perceived as cooperating, not defecting, in order to preserve the collusion and avoid being punished.

Consider how this affects their decisions when input prices change. If the price of oil goes up, then there’s no reason not to raise the price of gasoline immediately, because that isn’t violating the collusion. If anything, it’s being nice to your fellow colluders; they want prices as high as possible. You’ll want to raise the prices as high and fast as you can get away with, and you know they’ll do the same. But if the price of oil goes down, now gas stations are faced with a dilemma: You could lower prices to get more customers and make more profits, but the other gas stations might consider that a violation of your tacit collusion and could punish you by cutting their prices even more. Your best option is to lower prices very slowly, so that you can take advantage of the change in the input market, but also maintain the collusion with other gas stations. By slowly cutting prices, you can ensure that you are doing it together, and not trying to undercut other businesses.
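That asymmetric adjustment rule can be sketched in a few lines. This is only a toy illustration of the mechanism just described, not our experimental design; the target markup and the adjustment step are made-up numbers.

```python
# Toy rockets-and-feathers pricing rule: a seller passes cost increases
# through immediately, but when costs fall it closes only a fraction of the
# gap each period, so as not to look like it is undercutting its rivals.
# TARGET_MARKUP and CUT_STEP are illustrative assumptions.

TARGET_MARKUP = 0.5  # collusive markup the sellers try to sustain
CUT_STEP = 0.2       # fraction of the remaining gap closed per period

def next_price(price, cost):
    target = cost + TARGET_MARKUP
    if target >= price:
        return target                                # rocket: jump up at once
    return price - CUT_STEP * (price - target)       # feather: drift down slowly

# Input cost jumps from 1.0 to 2.0, then falls back to 1.0.
costs = [1.0] * 5 + [2.0] * 5 + [1.0] * 20
prices, p = [], 1.5
for c in costs:
    p = next_price(p, c)
    prices.append(p)

print(prices[5])    # the period the cost rises: price jumps straight to 2.5
print(prices[10])   # the period after the cost falls: still well above 1.5
print(prices[-1])   # many periods later: price has finally drifted near 1.5
```

The adjustment takes one period upward but many periods downward, which is the asymmetry in the data.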

Krugman’s explanation and ours are not mutually exclusive; in fact I think both are probably happening. They have one important feature in common, which fits the empirical data: Markets with less competition show greater degrees of asymmetric price transmission. The more concentrated the oligopoly, the more we see rockets and feathers.

They also share an important policy implication: Market power can make inflation worse. Contrary to what a lot of economic policy pundits have been saying, it isn’t ridiculous to think that breaking up monopolies or putting pressure on oligopolies to lower their prices could help reduce inflation. It probably won’t be as reliably effective as the Fed’s buying and selling of bonds to adjust interest rates—but we’re also doing that, and the two are not mutually exclusive. Besides, breaking up monopolies is a generally good thing to do anyway.

It’s not that unusual that I find myself agreeing with Krugman. I think what makes this one feel weird is that I have more expertise on the subject than he does.

How to pack the court

Jul 10 JDN 2459790

By now you have no doubt heard the news that Roe v. Wade was overturned. The New York Times has an annotated version of the full opinion.

My own views on abortion are like those of about 2/3 of Americans: More nuanced than can be neatly expressed by ‘pro-choice’ or ‘pro-life’, much more comfortable with first-trimester abortion (which is what 90% of abortions are, by the way) than later, and opposed to overturning Roe v. Wade in its entirety. I also find great appeal in Clinton’s motto on the issue: “safe, legal, and rare”. Several years ago I moderated an online discussion group that reached what we called the Twelve Week Compromise: Abortion would be legal for any reason up to 12 weeks of pregnancy, after which it would only be legal for extenuating circumstances including rape, incest, fetal nonviability, and severe health risk to the mother. This would render the vast majority of abortions legal without simply saying that it should be permitted without question. Roe v. Wade was actually slightly more permissive than this, but it was itself a very sound compromise.

But even if you didn’t like Roe v. Wade, you should be outraged at the manner in which it was overturned. If the Supreme Court can simply change its mind on rights that have been established for nearly 50 years, then none of our rights are safe. And in chilling comments, Clarence Thomas has declared that this is his precise intention: “In future cases, we should reconsider all of this Court’s substantive due process precedents, including Griswold, Lawrence, and Obergefell.” That is to say, Thomas wants to remove our rights to use contraception and have same-sex relationships. (If Lawrence were overturned, sodomy could be criminalized in several states!)

The good news here is that even the other conservative justices seem much less inclined to overturn these other precedents. Kavanaugh’s concurring opinion explicitly states he has no intention of overturning “Griswold v. Connecticut, 381 U. S. 479 (1965); Eisenstadt v. Baird, 405 U. S. 438 (1972); Loving v. Virginia, 388 U. S. 1 (1967); and Obergefell v. Hodges, 576 U. S. 644 (2015)”. It seems quite notable that Thomas did not mention Loving v. Virginia, seeing as it was decided around the same time as Roe v. Wade, based on very similar principles—and it affects him personally. And even if these precedents are unlikely to be overturned immediately, this ruling shows that the security of all of our rights can depend on the particular inclinations of individual justices.

The Supreme Court is honestly a terrible institution. Courts should not be more powerful than legislatures, lifetime appointments reek of monarchism, and the claim of being ‘apolitical’ that was dubious from the start is now obviously ludicrous. But precisely because it is so powerful, reforming it will be extremely difficult.

The first step is to pack the court. The question is no longer whether we should pack the court, but how, and why we didn’t do it sooner.

What does it mean to pack the court? Increase the number of justices, appointing new ones who are better than the current ones. (Since almost any randomly-selected American would be better than Clarence Thomas, Samuel Alito, or Brett Kavanaugh, this wouldn’t be hard.) This is 100% Constitutional, as the Constitution does not in any way restrict the number of justices. It can simply be done by an act of Congress.

But of course we can’t stop there. President Biden could appoint four more justices, and then whoever comes after him could appoint another three, and before we know it the Supreme Court has twenty-seven justices and each new President is expected to add a few more.

No, we need to fix the number of justices so that it can’t be increased any further. Ideally this would be done by Constitutional Amendment, though the odds of getting such a thing passed seem rather slim. But there is in fact a sensible way to add new justices now and then justify not adding any more later, and that is to tie justices to federal circuits.

There are currently 13 US federal circuit courts. If we added 4 more Supreme Court justices, there would be 13 Supreme Court justices. Each could even be assigned to be the nominal head of that federal circuit, and responsible for being the first to read appeals coming from that circuit.

Which justice goes where? Well, what if we let the circuits themselves choose? The selection could be made by a popular vote among the people who live in each circuit; since the Federal Circuit is not tied to any one region, its justice could be chosen by a nationwide popular vote. The justice responsible for the Federal Circuit can also be the Chief Justice.

That would also require a Constitutional Amendment, but it would, at a stroke, fundamentally reform what the Supreme Court is and how its justices are chosen. For now, we could simply add four new justices, bringing the number to 13. Then they could decide amongst themselves who will get which circuit until we implement the full system to let circuits choose their justices.

I’m well aware that electing judges is problematic—but at this point I don’t think we have a choice. (I would also prefer to re-arrange the circuits: it’s weird that DC gets its own circuit instead of being part of circuit 4, and circuit 9 has way more people than circuit 1.) We can’t simply trust each new President to appoint a new justice whenever one happens to retire or die and then leave that justice in place for decades to come. Not in a world where someone like Donald Trump can be elected President.

A lot of centrist people are uncomfortable with such a move, seeing it as ‘playing dirty’. But it’s not. It’s playing hardball—taking seriously the threat that the current Republican Party poses to the future of American government and society, and taking substantive steps to fight that threat. (After its authoritarian shift that started in the mid 2000s but really took off under Trump, the Republican Party now has more in common with far-right extremist parties like Fidesz in Hungary than with mainstream center-right parties like the Tories.) But there is absolutely nothing un-Constitutional about this plan. It’s doing everything possible within the law.

We should have done this before they started overturning landmark precedents. But it’s not too late to do it before they overturn any more.

I finally have a published paper.

Jun 12 JDN 2459773

Here it is, my first peer-reviewed publication: “Imperfect Tacit Collusion and Asymmetric Price Transmission”, in the Journal of Economic Behavior and Organization.

Due to the convention in economics that authors are displayed alphabetically, I am listed third of four, and will be typically collapsed into “Bulutay et al.”. I don’t actually think it should be “Julius et al.”; I think Dave Hales did the most important work, and I wanted it to be “Hales et al.”; but anything non-alphabetical is unusual in economics, and it would have taken a strong justification to convince the others to go along with it. This is a very stupid norm (and I attribute approximately 20% of Daron Acemoglu’s superstar status to it), but like any norm, it is difficult to dislodge.

I thought I would feel different when this day finally came. I thought I would feel joy, or at least satisfaction. I had been hoping that satisfaction would finally spur me forward in resubmitting my single-author paper, “Experimental Public Goods Games with Progressive Taxation”, so I could finally get a publication that actually does have “Julius (2022)” (or, at this rate, 2023, 2024…?). But that motivating satisfaction never came.

I did feel some vague sense of relief: Thank goodness, this ordeal is finally over and I can move on. But that doesn’t have the same motivating force; it doesn’t make me want to go back to the other papers I can now hardly bear to look at.

This reaction (or lack thereof?) could be attributed to circumstances: I have been through a lot lately. I was already overwhelmed by finishing my dissertation and going on the job market, and then there was the pandemic, and I had to postpone my wedding, and then when I finally got a job we had to suddenly move abroad, and then it was awful finding a place to live, and then we actually got married (which was lovely, but still stressful), and it took months to get my medications sorted with the NHS, and then I had a sudden resurgence of migraines which kept me from doing most of my work for weeks, and then I actually caught COVID and had to deal with that for a few weeks too. So it really isn’t too surprising that I’d be exhausted and depressed after all that.

Then again, it could be something deeper. I didn’t feel this way about my wedding. That genuinely gave me the joy and satisfaction that I had been expecting; I think it really was the best day of my life so far. So it isn’t as if I’m incapable of these feelings under my current state.

Rather, I fear that I am becoming more permanently disillusioned with academia. Now that I see how the sausage is made, I am no longer so sure I want to be one of the people making it. Publishing that paper didn’t feel like I had accomplished something, or even made some significant contribution to human knowledge. In fact, the actual work of publication was mostly done by my co-authors, because I was too overwhelmed by the job market at the time. But what I did have to do—and what I’ve tried to do with my own paper—felt like a miserable, exhausting ordeal.

More and more, I’m becoming convinced that a single experiment tells us very little, and we are being asked to present each one as if it were a major achievement when it’s more like a single brick in a wall.

But whatever new knowledge our experiments may have gleaned, that part was done years ago. We could have simply posted the draft as a working paper on the web and moved on, and the world would know just as much and our lives would have been a lot easier.

Oh, but then it would not have the imprimatur of peer review! And for our careers, that means absolutely everything. (Literally, when they’re deciding tenure, nothing else seems to matter.) But for human knowledge, does it really mean much? The more referee reports I’ve read, the more arbitrary they feel to me. This isn’t an objective assessment of scientific merit; it’s the half-baked opinion of a single randomly chosen researcher who may know next to nothing about the topic—or worse, have a vested interest in defending a contrary paradigm.

Yes, of course, what gets through peer review is of considerably higher quality than any randomly-selected content on the Internet. (The latter can be horrifically bad.) But is this not also true of what gets submitted for peer review? In fact, aren’t many blogs written by esteemed economists (say, Krugman? Romer? Nate Silver?) of considerably higher quality as well, despite having virtually none of the gatekeepers? I think Krugman’s blog is nominally edited by the New York Times, and Silver has a whole staff at FiveThirtyEight (they’re hiring, in fact!), but I’m fairly certain Romer just posts whatever he wants like I do. Of course, they had to establish their reputations (Krugman and Romer each won a Nobel). But still, it seems like maybe peer-review isn’t doing the most important work here.

Even blogs by far less famous economists (e.g. Miles Kimball, Brad DeLong) are also very good, and probably contribute more to advancing the knowledge of the average person than any given peer-reviewed paper, simply because they are more readable and more widely read. What we call “research” means going from zero people knowing a thing to maybe a dozen people knowing it; “publishing” means going from a dozen to at most a thousand; to go from a thousand to a billion, we call that “education”.

They all matter, of course; but I think we tend to overvalue research relative to education. A world where a few people know something is really not much better than a world where nobody does, while a world where almost everyone knows something can be radically superior. And the more I see just how far behind the cutting edge of research most economists are—let alone most average people—the more apparent it becomes to me that we are investing far too much in expanding that cutting edge (and far, far too much in gatekeeping who gets to do that!) and not nearly enough in disseminating that knowledge to humanity.

I think maybe that’s why finally publishing a paper felt so anticlimactic for me. I know that hardly anyone will ever actually read the damn thing. Just getting to this point took far more effort than it should have; dozens if not hundreds of hours of work, months of stress and frustration, all to satisfy whatever arbitrary criteria the particular reviewers happened to use so that we could all clear this stupid hurdle and finally get that line on our CVs. (And we wonder why academics are so depressed?) Far from being inspired to do the whole process again, I feel as if I have finally emerged from the torture chamber and may at last get some chance for my wounds to heal.

Even publishing fiction was not this miserable. Don’t get me wrong; it was miserable, especially for me, as I hate and fear rejection to the very core of my being in a way most people do not seem to understand. But there at least the subjectivity and arbitrariness of the process is almost universally acknowledged. Agents and editors don’t speak of your work being “flawed” or “wrong”; they don’t even say it’s “unimportant” or “uninteresting”. They say it’s “not a good fit” or “not what we’re looking for right now”. (Journal editors sometimes make noises like that too, but there’s always a subtext of “If this were better science, we’d have taken it.”) Unlike peer reviewers, they don’t come back with suggestions for “improvements” that are often pointless or utterly infeasible.

And unlike peer reviewers, fiction publishers acknowledge their own subjectivity and that of the market they serve. Nobody really thinks that Fifty Shades of Grey was good in any deep sense; but it was popular and successful, and that’s all the publisher really cares about. As a result, failing to be the next Fifty Shades of Grey ends up stinging a lot less than failing to be the next article in American Economic Review. Indeed, I’ve never had any illusions that my work would be popular among mainstream economists. But I once labored under the belief that it would matter more that my work was true; I guess I now consider that an illusion too.

Moreover, fiction writers understand that rejection hurts; I’ve been shocked how few academics actually seem to. Nearly every writing conference I’ve ever been to has at least one seminar on dealing with rejection, often several; at academic conferences, I’ve literally never seen one. There seems to be a completely different mindset among academics—at least, the successful, tenured ones—about the process of peer review, what it means, even how it feels. When I try to talk with my mentors about the pain of getting rejected, they just… don’t get it. They offer me guidance on how to deal with anger at rejection, when that is not at all what I feel—what I feel is utter, hopeless, crushing despair.

There is a type of person who reacts to rejection with anger: Narcissists. (Look no further than the textbook example, Donald Trump.) I am coming to fear that I’m just not narcissistic enough to be a successful academic. I’m not even utterly lacking in narcissism: I am almost exactly average for a Millennial on the Narcissistic Personality Inventory. I score fairly high on Authority and Superiority (I consider myself a good leader and a highly competent individual) but very low on Exploitativeness and Self-Sufficiency (I don’t like hurting people and I know no man is an island). Then again, maybe I’m just narcissistic in the wrong way: I score quite low on “grandiose narcissism”, but relatively high on “vulnerable narcissism”. I hate to promote myself, but I find rejection devastating. This combination seems to be exactly what doesn’t work in academia. But it seems to be par for the course among writers and poets. Perhaps I have the mind of a scientist, but I have the soul of a poet. (Send me through the wormhole! Please? Please!?)

Will we ever have the space opera future?

May 22 JDN 2459722

Space opera has long been a staple of science fiction. Like many natural categories, it’s not that easy to define; it has something to do with interstellar travel, a variety of alien species, grand events, and a big, complicated world that stretches far beyond any particular story we might tell about it.

Star Trek is the paradigmatic example, and Star Wars also largely fits, but there are numerous other examples, including most of my favorite science fiction worlds: Dune, the Culture, Mass Effect, Revelation Space, the Liaden, Farscape, Babylon 5, the Zones of Thought.

I think space opera is really the sort of science fiction I most enjoy. Even when it is dark, there is still something aspirational about it. Even a corrupt feudal transplanetary empire or a terrible interstellar war still means a universe where people get to travel the stars.

How likely is it that we—and I mean ‘we’ in the broad sense, humanity and its descendants—will actually get the chance to live in such a universe?

First, let’s consider the most traditional kind of space opera, the Star Trek world, where faster-than-light (FTL) travel is commonplace and humans interact as equals with a wide variety of alien species that are different enough to be interesting, but similar enough to be relatable.

This, sad to say, is extremely unlikely. FTL is probably impossible, or if not literally impossible then utterly infeasible by any foreseeable technology. Yes, the Alcubierre drive works in theory… all you need is tons of something that has negative mass.

And while, by sheer probability, there almost have to be other sapient lifeforms somewhere out there in this vast universe, our failure to contact or even find clear evidence of any of them for such a long period suggests that they are either short-lived or few and far between. Moreover, any who do exist are likely to be radically different from us and difficult to interact with at all, much less relate to on a personal level. Maybe they don’t have eyes or ears; maybe they live only in liquid hydrogen or molten lead; maybe they communicate entirely by pheromones that are toxic to us.

Does this mean that the aspirations of space opera are ultimately illusory? Is it just a pure fantasy that will forever be beyond us? Not necessarily.

I can see two other ways to create a very space-opera-like world, one of which is definitely feasible, and the other is very likely to be. Let’s start with the one that’s definitely feasible—indeed so feasible we will very likely get to experience it in our own lifetimes.

That is to make it a simulation. An MMO video game, in a way, but something much grander than any MMO that has yet been made. Not just EVE and No Man’s Sky, not just World of Warcraft and Minecraft and Second Life, but also Facebook and Instagram and Zoom and so much more. Oz from Summer Wars; OASIS from Ready Player One. A complete, multifaceted virtual reality in which we can spend most if not all of our lives. One complete with not just sight and sound, but also touch, smell, even taste.

Since it’s a simulation, we can make our own rules. If we want FTL and teleportation, we can have them. (And I would like to note that in fact teleportation is available in EVE, No Man’s Sky, World of Warcraft, Minecraft, and even Second Life. It’s easy to implement in a simulation, and it really seems to be something people want to have.) If we want to meet—or even be—people from a hundred different sapient species, some more humanoid than others, we can. Each of us could rule entire planets, command entire starfleets.

And we could do this, if not today, then very, very soon—the VR hardware is finally maturing, and the software capability already exists if there is a development team with the will and the skills (and the budget) to do it. We almost certainly will do this—in fact, we’ll do it hundreds or thousands of different ways. You need not be content with any particular space opera world, when you can choose from a cornucopia of them; and fantasy worlds too, and plenty of other kinds of worlds besides.

Yet, I admit, there is something missing from that future. While such a virtual-reality simulation might reach the point where it would be fair to say it’s no longer simply a “video game”, it still won’t be real. We won’t actually be Vulcans or Delvians or Gek or Asari. We will merely pretend to be. When we take off the VR suit at the end of the day, we will still be humans, and still be stuck here on Earth. And even if most of the toil of maintaining this society and economy can be automated, there will still be some time we have to spend living ordinary lives in ordinary human bodies.

So, is there some chance that we might really live in a space-opera future? Where we will meet actual, flesh-and-blood people who have blue skin, antennae, or six limbs? Where we will actually, physically move between planets, feeling the different gravity beneath our feet and looking up at the alien sky?

Yes. There is a way this could happen. Not now, not for a while yet. We ourselves probably won’t live to see it. But if humanity manages to continue thriving for a few more centuries, and technology continues to improve at anything like its current pace, then that day may come.

We won’t have FTL, so we’ll be bounded by the speed of light. But the speed of light is still quite fast. It can get you to Mars in minutes, to Jupiter in under an hour, and even to Alpha Centauri in a voyage that wouldn’t shock Magellan or Zheng He. Leaving this arm of the Milky Way, let alone traveling to another galaxy, is out of the question (at least if you ever want to come back while anyone you know is still alive; as a one-way trip, it’s surprisingly feasible thanks to time dilation).
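(For the curious, here’s a quick back-of-the-envelope sketch of those light-travel times. The distances are rough representative figures I’m plugging in, not precise ephemerides; planetary distances vary a lot with orbital position.)

```python
# Rough light-travel times at c, using representative distances in AU.
# All figures are back-of-the-envelope, not ephemeris-accurate.
AU_KM = 149_597_871     # one astronomical unit, in km
C_KM_S = 299_792.458    # speed of light, in km/s

def light_time_minutes(distance_au: float) -> float:
    """Minutes for light to cross a distance given in AU."""
    return distance_au * AU_KM / C_KM_S / 60

# Earth-Mars: ~0.52 AU at closest approach, ~2.52 AU near conjunction
mars_near = light_time_minutes(0.52)   # roughly 4 minutes
mars_far = light_time_minutes(2.52)    # roughly 21 minutes

# Earth-Jupiter: ~4.2 to ~6.2 AU
jupiter_near = light_time_minutes(4.2)  # roughly 35 minutes
jupiter_far = light_time_minutes(6.2)   # roughly 52 minutes

# Alpha Centauri is ~4.37 light-years away, so at c the trip
# takes 4.37 years by definition--comparable to Magellan's voyage.
```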

This means that if we manage to invent a truly superior kind of spacecraft engine, one which combines the high thrust of a hydrolox rocket with the high specific impulse of an ion thruster—and that is physically possible, because it’s well within what nuclear rockets ought to be capable of—then we could travel between planets in our solar system, and maybe even to nearby solar systems, in reasonable amounts of time. The world of The Expanse could therefore be in reach (well, the early seasons anyway), where human colonies have settled on Mars and Ceres and Ganymede and formed their own new societies with their own unique cultures.
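To put numbers on that thrust-versus-efficiency tradeoff, here’s a sketch using the Tsiolkovsky rocket equation. The specific impulse figures are approximate textbook values I’m assuming for illustration, not exact engine specs.

```python
import math

# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf)
# Approximate specific impulses (assumed, illustrative):
#   hydrolox chemical rocket: ~450 s (high thrust)
#   nuclear thermal rocket:   ~900 s
#   ion thruster:            ~3000 s (tiny thrust today)
G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Ideal delta-v in m/s for a given specific impulse and m0/mf."""
    return isp_s * G0 * math.log(mass_ratio)

# Same mass ratio (m0/mf = 5), very different performance:
dv_hydrolox = delta_v(450, 5)   # about 7 km/s: barely enough for orbit
dv_nuclear = delta_v(900, 5)    # about 14 km/s: serious interplanetary travel
dv_ion = delta_v(3000, 5)       # about 47 km/s: great, if you can wait
```

This is why a nuclear engine that delivered ion-like specific impulse at chemical-like thrust would open up the solar system: doubling or tripling specific impulse multiplies the delta-v you get from the same propellant fraction.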

We may yet run into some kind of extraterrestrial life—bacteria probably, insects maybe, jellyfish if we’re lucky—but we probably won’t ever actually encounter any alien sapients. If there are any, they are probably too primitive to interact with us, or they died out millennia ago, or they’re simply too far away to reach.

But if we cannot find Vulcans and Delvians and Asari, then we can become them. We can modify ourselves with cybernetics, biotechnology, or even nanotechnology, until we remake ourselves into whatever sort of beings we want to be. We may never find a whole interplanetary empire ruled by a race of sapient felinoids, but if furry conventions are any indication, there are plenty of people who would make themselves into sapient felinoids if given the opportunity.

Such a universe would actually be more diverse than a typical space opera. There would be no “planets of hats”, no entire societies of people acting—or perhaps even looking—the same. The hybridization of different species is almost by definition impossible, but when the ‘species’ are cosmetic body mods, we can combine them however we like. A Klingon and a human could have a child—and for that matter the child could grow up and decide to be a Turian.

Honestly there are only two reasons I’m not certain we’ll go this route:

One, we’re still far too able and willing to kill each other, so who knows if we’ll even make it that long. There’s also still plenty of room for some sort of ecological catastrophe to wipe us out.

And two, most people are remarkably boring. We already live in a world where you could go to work every day wearing a cape, a fursuit, a pirate outfit, or a Starfleet uniform—and yet people don’t let you. There’s nothing infeasible about me delivering a lecture dressed as a Kzin Starfleet science officer, nor would it even particularly impair my ability to deliver the lecture well; and yet I’m quite certain it would be greatly frowned upon if I were to do so, and could even jeopardize my career (especially since I don’t have tenure).

Would it be distracting to the students if I were to do something like that? Probably, at least at first. But once they got used to it, it might actually make them feel at ease. If it were a social norm that lecturers—and students—can dress however they like (perhaps limited by local decency regulations, though those, too, often seem overly strict), students might show up to class in bunny pajamas or pirate outfits or full-body fursuits, but would that really be a bad thing? It could in fact be a good thing, if it helps them express their own identity and makes them more comfortable in their own skin.

But no, we live in a world where the mainstream view is that every man should wear exactly the same thing at every formal occasion. I felt awkward at the AEA conference because my shirt had color.

This means that there is really one major obstacle to building the space opera future: Social norms. If we don’t get to live in this world one day, it will be because the world is ruled by the sort of person who thinks that everyone should be the same.