When to give up

Jun 6 JDN 2459372

Perseverance is widely regarded as a virtue, and for good reason. Often one of the most important deciding factors in success is the capacity to keep trying after repeated failure. I think this has been a major barrier for me personally; many things came easily to me when I was young, and I internalized the sense that if something doesn’t come easily, it must be beyond my reach.

Yet it’s also worth noting that this is not the only deciding factor—some things really are beyond our capabilities. Indeed, some things are outright impossible. And we often don’t know what is possible and what isn’t.

This raises the question: When should we persevere, and when should we give up?

There is actually reason to think that people often don’t give up when they should. Steven Levitt (of Freakonomics fame) recently published a study that asked people who were on the verge of a difficult decision to flip a coin, and then base their decision on the coin flip: Heads, make a change; tails, keep things as they are. Many didn’t actually follow the coin flip—but enough did that there was a statistical difference between those who saw heads and those who saw tails. The study found that the people who flipped heads and made a change were on average happier a couple of years later than the people who flipped tails and kept things as they were.

This question is particularly salient for me lately, because the academic job market has gone so poorly for me. I’ve spent most of my life believing that academia is where I belong; my intellect and my passion for teaching and research have convinced me and many others that this is the right path for me. But now that I have a taste of what it is actually like to apply for tenure-track jobs and submit papers to journals, I am utterly miserable. I hate every minute of it. I’ve spent the entire past year depressed and feeling like I have accomplished absolutely nothing.

In theory, once one actually gets tenure it’s supposed to get easier. But that could be a long way away—or it might never happen at all. As it is, there’s basically no chance I’ll get a tenure-track position this year, and it’s unclear what my chances would be if I tried again next year.

If I could actually get a paper published, that would no doubt improve my odds of landing a better job next year. But I haven’t been able to do that, and each new rejection cuts so deep that I can barely stand to look at my papers anymore, much less actually continue submitting them. And apparently even tenured professors still get their papers rejected repeatedly, which means that this pain will never go away. I simply cannot imagine being happy if this is what I am expected to do for the rest of my life.

I found this list of criteria for when you should give up something—and most of them fit me. I’m not sure I know in my heart it can’t work out, but I increasingly suspect that. I’m not sure I want it anymore, now that I have a better idea of what it’s really like. Pursuing it is definitely making me utterly miserable. I wouldn’t say it’s the only reason, but I definitely do worry what other people will think if I quit; I feel like I’d be letting a lot of people down. I also wonder who I am without it, where I belong if not here. I don’t know what other paths are out there, but maybe there is something better. This constant stream of failure and rejection has definitely made me feel like I hate myself. And above all, when I imagine quitting, I absolutely feel an enormous sense of relief.

Publishing in journals seems to be the thing that successful academics care about most, and it means almost nothing to me anymore. I only want it because of all the pressure to have it, because of all the rewards that come from having it. It has become fully instrumental to me, with no intrinsic meaning or value. I have no particular desire to be lauded by the same system that lauded Fischer Black or Kenneth Rogoff—both of whose egregious and easily-avoidable mistakes are responsible for the suffering of millions of people around the world.

I want people to read my ideas. But people don’t actually read journals. They skim them. They read the abstracts. They look at the graphs and regression tables. (You know the meeting that should have been an email? I raise you the paper that should have been a regression table.) They see if there’s something in there that they should be citing for their own work, and if there is, maybe then they actually read the paper—but everyone is so hyper-specialized that only a handful of people will ever actually want to cite any given paper. The vast majority of research papers are incredibly tedious to read and very few people actually bother. As a method for disseminating ideas, this is perhaps slightly better than standing on a street corner and shouting into a megaphone.

I would much rather write books; people sometimes actually read books, especially when they are written for a wide audience and hence not forced into the straitjacket of standard ‘scientific writing’ that no human being actually gets any enjoyment out of writing or reading. I’ve seen a pretty clear improvement in the writing quality of papers written by Nobel laureates—after they get their Nobels or similar accolades. Once they establish themselves, they are free to actually write in ways that are compelling and interesting, rather than having to present everything in the most dry, tedious way possible. If your paper reads like something that a normal person would actually find interesting or enjoyable to read, you will be—as I have been—immediately told that you must remove all such dangerous flavor until the result is as tasteless as possible.

No, the purpose of research journals is not to share ideas. Its function is not to share, but to evaluate. And it isn’t even really to evaluate research—it’s to evaluate researchers. It’s to outsource the efforts of academic hiring to an utterly unaccountable and arbitrary system run mostly by for-profit corporations. It may have some secondary effect of evaluating ideas for validity; at least the really awful ideas are usually excluded. But its primary function is to decide the academic pecking order.

I had thought that scientific peer review was supposed to select for truth. Perhaps sometimes it does. It seems to do so reasonably well in the natural sciences, at least. But in the social sciences? That’s far less clear. Peer-reviewed papers are much more likely to be accurate than any randomly-selected content; but there are still a disturbingly large number of peer-reviewed published papers that are utterly wrong, and some unknown but undoubtedly vast number of good papers that have never seen the light of day.

Then again, when I imagine giving up on an academic career, I don’t just feel relief—I also feel regret and loss. I feel like I’ve wasted years of my life putting together a dream that has now crumbled in my hands. I even feel some anger, some sense that I was betrayed by those who told me that this was about doing good research when it turns out it’s actually about being thick-skinned enough that you can take an endless assault of rejections. It feels like I’ve been running a marathon, and I just rounded a curve to discover that the last five miles must be ridden on horseback, when I don’t have a horse, I have no equestrian training, and in fact I’m allergic to horses.

I wish someone had told me it would be like this. Maybe they tried and I didn’t listen. They did say that papers would get rejected. They did say that the tenure track was high-pressure and publish-or-perish was a major source of anxiety. But they never said that it would tear at my soul like this. They never said that I would have to go through multiple rounds of agony, self-doubt, and despair in order to get even the slightest recognition for my years of work. They never said that the whole field would treat me like I’m worthless because I can’t satisfy the arbitrary demands of a handful of anonymous reviewers. They never said that I would begin to feel worthless after several rounds of this.

That’s really what I want to give up on. I want to give up on hitching my financial security, my career, my future, my self-worth to a system as capricious as peer review.

I don’t want to give up on research. I don’t want to give up on teaching. I still believe strongly in discovering new truths and sharing them with others. I’m just increasingly realizing that academia isn’t nearly as good at that as I thought it was.

It isn’t even that I think it’s impossible for me to succeed in academia. I think that if I continued trying to get a tenure-track job, I would land one eventually. Maybe next year. Or maybe I’d spend a few years at a postdoc first. And I’d probably manage to publish some paper in some reasonably respectable journal at some point in the future. But I don’t know how long it would take, or how good a journal it would be—and I’m already past the point where I really don’t care anymore, where I can’t afford to care, where if I really allowed myself to care it would only devastate me when I inevitably fail again. Now that I see what is really involved in the process, how arduous and arbitrary it is, publishing in a journal means almost nothing to me. I want to be validated; I want to be appreciated; I want to be recognized. But the system is set up to provide nothing but rejection, rejection, rejection. If even the best work won’t be recognized immediately and even the worst work can make it with enough tries, then the whole system begins to seem meaningless. It’s just rolls of the dice. And I didn’t sign up to be a gambler.

The job market will probably be better next year than it was this year. But how much better? Yes, there will be more openings, but there will also be more applicants: Everyone who would normally be on the market, plus everyone like me who didn’t make it this year, plus everyone who decided to hold back this year because they knew they wouldn’t make it (as I probably should have done). Yes, in a normal year, I could be fairly confident of getting some reasonably decent position—but this wasn’t a normal year, and next year won’t be one either, and the one after that might still not be. If I can’t get a paper published in a good journal between now and then—and I’m increasingly convinced that I can’t—then I really can’t expect my odds to be greatly improved from what they were this time around. And if I don’t know that this terrible gauntlet is going to lead to something good, I’d really much rather avoid it altogether. It was miserable enough when I went into it being (over)confident that it would work out all right.

Perhaps the most important question when deciding whether to give up is this: What will happen if you do? What alternatives do you have? If giving up means dying, then don’t give up. (“Learn to let go” is very bad advice to someone hanging from the edge of a cliff.) But while it may feel that way sometimes, rarely does giving up on a career or a relationship or a project yield such catastrophic results.

When people are on the fence about making a change and then do so, even based on the flip of a coin, it usually makes them better off. Note that this is different from saying you should make all your decisions randomly; if you are confident that you don’t want to make a change, don’t make a change. This advice is for people who feel like they want a change but are afraid to take the chance, people who find themselves ambivalent about what direction to go next—people like me.

I don’t know where I should go next. I don’t know where I belong. I know it isn’t Wall Street. I’m pretty sure it’s not consulting. Maybe it’s nonprofits. Maybe it’s government. Maybe it’s freelance writing. Maybe it’s starting my own business. I guess I’d still consider working in academia; if Purdue called me back to say they made a terrible mistake and they want me after all, I’d probably take the offer. But since such an outcome is now vanishingly unlikely, perhaps it’s time, after all, to give up.

Drift-diffusion decision-making: The stock market in your brain

JDN 2456173 EDT 17:32.

Since I’ve been emphasizing the “economics” side of things a lot lately, I decided this week to focus more on the “cognitive” side. Today’s topic comes from cutting-edge research in cognitive science and neuroeconomics, so we still haven’t ironed out all the details.

The question we are trying to answer is an incredibly basic one: How do we make decisions? Given the vast space of possible behaviors human beings can engage in, how do we determine which ones we actually do?

There are actually two phases of decision-making.

The first phase is alternative generation, in which we come up with a set of choices. Some ideas occur to us, others do not; some are familiar and come to mind easily, others only appear after careful consideration. Techniques like brainstorming exist to help us with this task, but none of them are really very good; one of the most important bottlenecks in human cognition is the individual capacity to generate creative alternatives. The task is mind-bogglingly complex; the number of possible choices you could make at any given moment is already vast, and with each passing moment the number of possible behavioral sequences grows exponentially. Just think about all the possible sentences I could type right now, and then think about how incredibly narrow a space of possible behavioral options it is to assume that I’m typing sentences.

Most of the world’s innovation can ultimately be attributed to better alternative generation; particularly with regard to social systems, but in many cases even with regard to technologies, the capability existed for decades or even centuries but the idea simply never occurred to anyone. (You can see this by looking at the work of Heron of Alexandria and Leonardo da Vinci; the capacity to build these machines existed, and a handful of individuals were creative enough to actually try it, but it never occurred to anyone that there could be enormous, world-changing benefits to expanding these technologies for mass production.)

Unfortunately, we basically don’t understand alternative generation at all. It’s an almost complete gap in our understanding of human cognition. It actually has a lot to do with some of the central unsolved problems of cognitive science and artificial intelligence; if we could create a computer that is capable of creative thought, we would basically make human beings obsolete once and for all. (Oddly enough, physical labor is probably where human beings would still be necessary the longest; robots aren’t yet very good at climbing stairs or lifting irregularly-shaped objects, much less giving haircuts or painting on canvas.)

The second phase is what most “decision-making” research is actually about, and I’ll call it alternative selection. Once you have a list of two, three or four viable options—rarely more than this, as I’ll talk about more in a moment—how do you go about choosing the one you’ll actually do?

This is a topic that has undergone considerable research, and we’re beginning to make progress. The leading models right now are variants of drift-diffusion (hence the title of the post), and these models have the very appealing property that they are neurologically plausible, predictively accurate, and yet close to rationally optimal.

Drift-diffusion models basically are, as I said in the subtitle, a stock market in your brain. Picture the stereotype of the trading floor of the New York Stock Exchange, with hundreds of people bustling about, shouting “Buy!” “Sell!” “Buy!” with the price going up with every “Buy!” and down with every “Sell!”; in reality the NYSE isn’t much like that, and hasn’t been for decades, because everyone is staring at a screen and most of the trading is automated and occurs in microseconds. (It’s kind of like how if you draw a cartoon of a doctor, they will invariably be wearing a head mirror, but if you’ve actually been to a doctor lately, they don’t actually wear those anymore.)

Drift-diffusion, however, is like that. Let’s say we have a decision to make, “Yes” or “No”. Thousands of neurons devoted to that decision start firing, some saying “Yes”, exciting other “Yes” neurons and inhibiting “No” neurons, while others say “No”, exciting other “No” neurons and inhibiting “Yes” neurons. New information feeds in, pushing some neurons toward “Yes” and others toward “No”. The resulting process behaves like a random walk with a trend (a drift), where the strength of the trend is determined by whatever criteria you are feeding into the decision. The decision is made when a certain threshold is reached, say, 95% agreement among all neurons.

I wrote a little R program to demonstrate drift-diffusion models; the images I’ll be showing are R plots from that program. The graphs represent the aggregated “opinion” of all the deciding neurons; as you go from left to right, time passes, and the opinions “drift” toward one side or the other. For these graphs, the top of the graph represents the better choice.
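If you want to play with this yourself, something like the following minimal sketch (not my original program; the drift, noise, and threshold values are arbitrary choices purely for illustration) will generate plots of the same general shape as the ones below:

```r
# Minimal drift-diffusion sketch: many independent decision "paths",
# each a cumulative sum of noisy evidence with a constant drift.
set.seed(42)

simulate_ddm <- function(n_paths = 20, n_steps = 1000,
                         drift = 0.02, noise = 1) {
  # Each column is one path of aggregated "opinion" over time.
  sapply(1:n_paths, function(j) cumsum(rnorm(n_steps, mean = drift, sd = noise)))
}

paths <- simulate_ddm()
matplot(paths, type = "l", lty = 1, col = rgb(0, 0, 0, 0.3),
        xlab = "time", ylab = "aggregate opinion")
abline(h = c(3, -3), lty = 2)  # decision thresholds ("Yes" above, "No" below)
```

Raising the drift corresponds to stronger evidence, lowering it corresponds to weaker evidence, and moving the dashed lines corresponds to raising or lowering the decision threshold.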

It may actually be easiest to understand if you imagine that we are choosing a belief; new evidence accumulates that pushes us toward the correct answer (top) or the incorrect answer (bottom), because even a true belief will have some evidence that seems to be against it. You encounter this evidence more or less randomly (or do you?), and which belief you ultimately form will depend upon both how strong the evidence is and how thoughtful you are in forming your beliefs.

If the evidence is very strong (or in general, the two choices are very different), the trend will be very strong, and you’ll almost certainly come to a decision very quickly:

[Figure: strong_bias]

If the evidence is weaker (the two choices are very similar), the trend will be much weaker, and it will take much longer to make a decision:

[Figure: weak_bias]

One way to make a decision faster would be to have a weaker threshold, like 75% agreement instead of 95%; but this has the downside that it can result in making the wrong choice. Notice how some of the paths go down to the bottom, which in this case is the worse choice:

[Figure: low_threshold]

But if there is actually no difference between the two options, a low threshold is good, because you don’t spend time waffling over a pointless decision. (I know that I’ve had a problem with that in real life, spending too long making a decision that ultimately is of minor importance; my drift thresholds are too high!) With a low threshold, you get it over with:

[Figure: indifferent]

With a high threshold, you can go on for ages:

[Figure: ambivalent]

This is the difference between being indifferent about a decision and being ambivalent. If you are indifferent, you are dealing with two small amounts of utility and it doesn’t really matter which one you choose. If you are ambivalent, you are dealing with two large amounts of utility and it’s very important to get it right—but you aren’t sure which one to choose. If you are indifferent, you should use a low threshold and get it over with; but if you are ambivalent, it actually makes sense to keep your threshold high and spend a lot of time thinking about the problem in order to be sure you get it right.

It’s also possible to set a higher threshold for one option than the other; I think this is actually what we’re doing when we exhibit many cognitive biases like confirmation bias. If the decision you’re making is between keeping your current beliefs and changing them to something else, your diffusion space actually looks more like this:

[Figure: confirmation_bias]

You’ll only make the correct choice (top) if you set equal thresholds (meaning you reason fairly instead of exhibiting cognitive biases) and high thresholds (meaning you spend sufficient time thinking about the question). If I may change to a sports metaphor, people tend to move the goalposts—the team “change your mind” has to kick a lot further than the team “keep your current belief”.
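As a toy version of that asymmetry (again with made-up numbers, not a calibrated model of any real bias): suppose “keep your current belief” only needs to accumulate a little evidence while “change your mind” needs a lot. Even when the evidence genuinely favors changing, the nearer boundary wins most of the time:

```r
# Toy asymmetric-threshold model of confirmation bias (illustrative only).
# "Change your mind" must accumulate far more evidence than
# "keep your current belief" before a decision is reached.
set.seed(7)
decide <- function(upper = 5, lower = -1.5, drift = 0.02, n_steps = 5000) {
  evidence <- cumsum(rnorm(n_steps, mean = drift, sd = 1))  # evidence mildly favors "change"
  first_cross <- which(evidence >= upper | evidence <= lower)[1]
  if (is.na(first_cross)) return(NA)
  if (evidence[first_cross] >= upper) "changed mind" else "kept old belief"
}

table(replicate(1000, decide()))
```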

We can also extend drift-diffusion models to changing your mind (or experiencing regret such as “buyer’s remorse”) if we assume that the system doesn’t actually cut off once it reaches a threshold; the threshold makes us take the action, but then our neurons keep on arguing it out in the background. We may hover near the threshold or soar off into absolute certainty—but on the other hand we may waffle all the way back to the other decision:

[Figure: regret]

There are all sorts of generalizations and extensions of drift-diffusion models, but these basic ones should give you a sense of how useful they are. More importantly, they are accurate; drift-diffusion models produce very sharp mathematical predictions about human behavior, and in general these predictions are verified in experiments.

The main reason we started using drift-diffusion models is that they account very well for the fact that decisions become more accurate when we spend more time on them. The way they do that is quite elegant: Under harsher time pressure, we use lower thresholds, which speeds up the process but also introduces more errors. When we don’t have time pressure, we use high thresholds and take a long time, but almost always make the right decision.
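You can see the tradeoff directly in simulation (arbitrary numbers once more): a low threshold decides almost immediately but makes many more mistakes, while a high threshold takes much longer and is almost always right:

```r
# Speed-accuracy tradeoff: vary the threshold, track decision time and
# whether the correct (upper) boundary was the one reached.
set.seed(1)
run_once <- function(threshold, drift = 0.3) {
  x <- 0; t <- 0
  while (abs(x) < threshold) {
    x <- x + rnorm(1, mean = drift, sd = 1)  # one noisy sample of evidence
    t <- t + 1
  }
  c(time = t, correct = as.numeric(x > 0))
}

for (th in c(1, 3, 6)) {
  res <- replicate(500, run_once(th))
  cat(sprintf("threshold %.0f: mean time %5.1f steps, accuracy %.2f\n",
              th, mean(res["time", ]), mean(res["correct", ])))
}
```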

Under certain (rather narrow) circumstances, drift-diffusion models can actually be equivalent to the optimal Bayesian model. These models can also be extended for use in purchasing choices, and one day we will hopefully have a stock-market-in-the-brain model of actual stock market decisions!
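The usual way that equivalence gets stated (a standard result about sequential sampling, not something specific to these simulations) is that if the quantity drifting up and down is the log-likelihood ratio of the evidence under the two hypotheses, then drifting to fixed thresholds is exactly Wald’s sequential probability ratio test, which minimizes the expected number of samples for given error rates. As a sketch, with x_i the incoming samples of evidence and α and β the error rates you are willing to tolerate:

```latex
L_t = \sum_{i=1}^{t} \log \frac{P(x_i \mid \text{Yes})}{P(x_i \mid \text{No})},
\qquad
\text{decide Yes if } L_t \ge \log\tfrac{1-\beta}{\alpha},
\qquad
\text{decide No if } L_t \le \log\tfrac{\beta}{1-\alpha}.
```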

Drift-diffusion models are based on decisions between two alternatives with only one relevant attribute under consideration, but they are being expanded to decisions with multiple attributes and decisions with multiple alternatives; the fact that this is difficult is in my opinion not a bug but a feature—decisions with multiple alternatives and attributes are actually difficult for human beings to make. The fact that drift-diffusion models have difficulty with the very situations that human beings have difficulty with provides powerful evidence that drift-diffusion models are accurately representing the processes that go on inside a human brain. I’d be worried if it were too easy to extend the models to complex decisions—it would suggest that our model is describing a more flexible decision process than the one human beings actually use. Human decisions really do seem to be attempts to shoehorn two-choice single-attribute decision methods onto more complex problems, and a lot of mistakes we make are attributable to that.

In particular, the phenomena of analysis paralysis and the paradox of choice are easily explained this way. Why is it that when people are given more alternatives, they often spend far more time trying to decide and often end up less satisfied than they were before? This makes sense if, when faced with a large number of alternatives, we spend time trying to compare them pairwise on every attribute, and then get stuck with a whole bunch of incomparable pairwise comparisons that we then have to aggregate somehow. If we could simply assign a utility value to each attribute and sum them up, adding new alternatives should only increase the time required by a small amount and should never result in a reduction in final utility.
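To put rough numbers on that intuition (pure arithmetic, assuming for the sake of illustration four attributes per alternative): pairwise comparison grows with the square of the number of alternatives, while additive utility scores grow only linearly:

```r
# Rough count of the work required under the two strategies:
# all pairwise comparisons on every attribute vs. one additive utility
# score per alternative (k = 4 attributes, purely for illustration).
n <- c(2, 5, 10, 20)   # number of alternatives
k <- 4                 # attributes per alternative
data.frame(alternatives    = n,
           pairwise_checks = choose(n, 2) * k,
           additive_scores = n * k)
```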

When I have an important decision to make, I actually assemble a formal utility model, as I did recently when deciding on a new computer to buy (it should be in the mail any day now!). The hardest part, however, is assigning values to the coefficients in the model; just how much am I willing to spend for an extra gigabyte of RAM, anyway? How exactly do those CPU benchmarks translate into dollar value for me? I can clearly tell that this is not the native process of my mental architecture.
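For concreteness, the kind of model I mean is nothing fancier than this (the laptops, attributes, and coefficients below are invented for illustration; choosing the real coefficients is exactly the hard part):

```r
# Toy additive utility model for a purchase decision (made-up numbers).
laptops <- data.frame(
  name      = c("A", "B", "C"),
  price     = c(900, 1200, 1500),    # dollars
  ram_gb    = c(8, 16, 16),
  cpu_bench = c(4000, 6000, 7500)    # some benchmark score
)

value_per_gb    <- 50     # dollars of value per GB of RAM
value_per_point <- 0.15   # dollars of value per benchmark point

laptops$net_value <- value_per_gb * laptops$ram_gb +
  value_per_point * laptops$cpu_bench -
  laptops$price

laptops[order(-laptops$net_value), ]   # best net value first
```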

No, alas, we seem to be stuck with drift-diffusion, which is nearly optimal for choices with two alternatives on a single attribute, but actually pretty awful for multiple-alternative multiple-attribute decisions. But perhaps by better understanding our suboptimal processes, we can rearrange our environment to bring us closer to optimal conditions—or perhaps, one day, change the processes themselves!