Mental illness is different from physical illness.

Post 311 Oct 13 JDN 2458770

There’s something I have heard a lot of people say about mental illness that is obviously well-intentioned, but ultimately misguided: “Mental illness is just like physical illness.”

Sometimes they say it explicitly in those terms. Other times they make analogies, like “If you wouldn’t shame someone with diabetes for using insulin, why shame someone with depression for using SSRIs?”

Yet I don’t think this line of argument will ever meaningfully reduce the stigma surrounding mental illness, because, well, it’s obviously not true.

There are some characteristics of mental illness that are analogous to physical illness—but there are some that really are quite different. And these are not just superficial differences, the way that pancreatic disease is different from liver disease. No one would say that liver cancer is exactly the same as pancreatic cancer; but they’re both obviously of the same basic category. There are differences between physical and mental illness which are both obvious, and fundamental.

Here’s the biggest one: Talk therapy works on mental illness.

You can’t talk yourself out of diabetes. You can’t talk yourself out of a myocardial infarction. You can’t even talk yourself out of a migraine (though I’ll get back to that one in a little bit). But you can, in a very important sense, talk yourself out of depression.

In fact, talk therapy is one of the most effective treatments for most mental disorders. Cognitive behavioral therapy for depression is on its own as effective as most antidepressants (with far fewer harmful side effects), and the two combined are clearly more effective than either alone. Talk therapy is as effective as medication for bipolar disorder, and considerably better for social anxiety disorder.

To be clear: Talk therapy is not just people telling you to cheer up, or saying it’s “all in your head”, or suggesting that you get more exercise or eat some chocolate. Nor does it consist of you ruminating by yourself and trying to talk yourself out of your disorder. Cognitive behavioral therapy is a very complex, sophisticated series of techniques that require years of expert training to master. Yet, at its core, cognitive therapy really is just a very sophisticated form of talking.

The fact that mental disorders can be so strongly affected by talk therapy shows that there really is an important sense in which mental disorders are “all in your head”, and not just the trivial way that an axe wound or even a migraine is all in your head. It isn’t just the fact that it is physically located in your brain that makes a mental disorder different; it’s something deeper than that.

Here’s the best analogy I can come up with: Physical illness is hardware. Mental illness is software.

If a computer breaks after being dropped on the floor, that’s like an axe wound: An obvious, traumatic source of physical damage that is an unambiguous cause of the failure.

If a computer’s CPU starts overheating, that’s like a physical illness, like diabetes: There may be no particular traumatic cause, or even any clear cause at all, but there is obviously something physically wrong that needs physical intervention to correct.

But if a computer is suffering glitches and showing error messages when it tries to run particular programs, that is like mental illness: Something is wrong not on the low-level hardware, but on the high-level software.

These different types of problem require different types of solutions. If your CPU is overheating, you might want to see about replacing your cooling fan or your heat sink. But if your software is glitching while your CPU is otherwise running fine, there’s no point in replacing your fan or heat sink. You need to get a programmer in there to look at the code and find out where it’s going wrong. A talk therapist is like a programmer: The words they say to you are like code scripts they’re trying to get your processor to run correctly.

Of course, our understanding of computers is vastly better than our understanding of human brains, and as a result, programmers tend to get a lot better results than psychotherapists. (Interestingly they do actually get paid about the same, though! Programmers make about 10% more on average than psychotherapists, and both are solidly within the realm of average upper-middle-class service jobs.) But the basic process is the same: Using your expert knowledge of the system, find the right set of inputs that will fix the underlying code and solve the problem. At no point do you physically intervene on the system; you could do it remotely without ever touching it—and indeed, remote talk therapy is a thing.

What about other neurological illnesses, like migraine or fibromyalgia? Well, I think these are somewhere in between. They’re definitely more physical in some sense than a mental disorder like depression. There isn’t any cognitive content to a migraine the way there is to a depressive episode. When I feel depressed or anxious, I feel depressed or anxious about something. But there’s nothing a migraine is about. To use the technical term in cognitive science, neurological disorders lack the intentionality that mental disorders generally have. “What are you depressed about?” is a question you usually can answer. “What are you migrained about?” generally isn’t.

But like mental disorders, neurological disorders are directly linked to the functioning of the brain, and often seem to operate at a higher level of functional abstraction. The brain itself doesn’t have pain receptors the way most of the rest of your body does; getting a migraine behind your left eye doesn’t actually mean that the part of your brain behind that eye is what’s malfunctioning. It’s more like a general alert your brain is sending out that something is wrong, somewhere. And fibromyalgia often feels like it’s taking place in your entire body at once. Moreover, most neurological disorders are strongly correlated with mental disorders—indeed, the comorbidity of depression with migraine and fibromyalgia in particular is extremely high.

Which disorder causes the other? That’s a surprisingly difficult question. Intuitively we might expect the “more physical” disorder to be the primary cause, but that’s not always clear. Successful treatment for depression often improves symptoms of migraine and fibromyalgia as well (though the converse is also true). They seem to be mutually reinforcing one another, and it’s not at all clear which came first. I suppose if I had to venture a guess, I’d say the pain disorders probably have causal precedence over the mood disorders, but I don’t actually know that for a fact.

To stretch my analogy a little, it may be like a software problem that ends up causing a hardware problem, or a hardware problem that ends up causing a software problem. There actually have been a few examples of this, like games with graphics so demanding that they caused GPUs to overheat.

The human brain is a lot more complicated than a computer, and the distinction between software and hardware is fuzzier; we don’t actually have “code” that runs on a “processor”. We have synapses that continually fire on and off and rewire each other. The closest thing we have to code that gets processed in sequence would be our genome, and that is several orders of magnitude less complex than the structure of our brains. Aside from simply physically copying the entire brain down to every synapse, it’s not clear that you could ever “download” a mind, science fiction notwithstanding.

Indeed, anything that changes your mind necessarily also changes your brain; the effects of talking are generally subtler than the effects of a drug (and certainly subtler than the effects of an axe wound!), but they are nevertheless real, physical changes. (This is why it is so idiotic whenever the popular science press comes out with: “New study finds that X actually changes your brain!” where X might be anything from drinking coffee to reading romance novels. Of course it does! If it has an effect on your mind, it did so by having an effect on your brain. That’s the Basic Fact of Cognitive Science.) This is not so different from computers, however: Any change in software is also a physical change, in the form of some sequence of electrical charges that were moved from one place to another. Actual physical electrons are a few microns away from where they otherwise would have been because of what was typed into that code.

Of course I want to reduce the stigma surrounding mental illness. (For both selfish and altruistic reasons, really.) But blatantly false assertions don’t seem terribly productive toward that goal. Mental illness is different from physical illness; we can’t treat it the same.

Procrastination is an anxiety symptom

Aug 18 JDN 2458715

Why do we procrastinate? Some people are chronic procrastinators, while others only do it on occasion, but almost everyone procrastinates: We have something important to do, and we should be working on it, but we find ourselves doing anything else we can think of—cleaning is a popular choice—rather than actually getting to work. This continues until we get so close to the deadline that we have no choice but to rush through the work, lest it not get done at all. The result is more stress and lower-quality work. Why would we put ourselves through this?

There are a few different reasons why people may procrastinate. The one that most behavioral economists lean toward is hyperbolic discounting: Because we undervalue the future relative to the present, we set aside unpleasant tasks for later, when it seems they won’t be as bad.

This could be relevant in some cases, particularly for those who chronically procrastinate on a wide variety of tasks, but I find it increasingly unconvincing.

First of all, there’s the fact that many of the things we do while procrastinating are not particularly pleasant. Some people procrastinate by playing games, but even more procrastinate by cleaning house or reorganizing their desks. These aren’t enjoyable activities that you would want to do as soon as possible to maximize the joy.

Second, most people don’t procrastinate consistently on everything. We procrastinate on particular types of tasks—things we consider particularly important, as a matter of fact. I almost never procrastinate in general: I complete tasks early, I plan ahead, I am always (over)prepared. But lately I’ve been procrastinating on three tasks in particular: Revising my second-year paper to submit to journals, writing grant proposals, and finishing my third-year paper. These tasks are all academic, of course; they all involve a great deal of intellectual effort. But above all, they are high stakes. I didn’t procrastinate on homework for classes, but I’m procrastinating on finishing my dissertation.

Another common explanation for procrastination involves self-control: We can’t stop ourselves from doing whatever seems fun at the moment, when we should be getting down to work on what really matters.

This explanation is even worse: There is no apparent correlation between propensity to procrastinate and general impulsiveness—or, if anything, the correlation seems to be negative. The people I know who procrastinate the most consistently are the least impulsive; they tend to ponder and deliberate every decision, even small decisions for which the extra time spent clearly isn’t worth it.

The explanation I find much more convincing is that procrastination isn’t about self-control or time at all. It’s about anxiety. Procrastination is a form of avoidance: We don’t want to face the painful experience, so we stay away from it as long as we can.

This is certainly how procrastination feels for me: It’s not that I can’t stop myself from doing something fun, it’s that I can’t bring myself to face this particular task that is causing me overwhelming stress.

This also explains why it’s always something important that we procrastinate on: It’s precisely things with high stakes that are going to cause a lot of painful feelings. And anxiety itself is deeply linked to the fear of negative evaluation—which is exactly what you’re afraid of when submitting to a journal or applying for a grant. Usually it’s a bit more metaphorical than that, the “evaluation” of being judged by your peers; but here we are literally talking about a written evaluation from a reviewer.

This is why the most effective methods for reducing procrastination all involve reducing your anxiety surrounding the task. In fact, one of the most important is forgiving yourself for prior failings—including past procrastination. Students who were taught to forgive themselves for procrastinating were less likely to procrastinate in the future. If this were a matter of self-control, forgiving yourself would be counterproductive; but in fact it’s probably the most effective intervention.

Unsurprisingly, those with the highest stress levels had the highest rates of procrastination (causality could run both ways there); but this is much less true for those who are good at practicing self-compassion. The idea behind self-compassion is very simple: Treat yourself as kindly as you would treat someone you care about.

I am extraordinarily bad at self-compassion. It is probably my greatest weakness. If we were to measure self-compassion by the gap between how kind you are to yourself and how kind you are to others, I would probably have one of the largest gaps in the world. Compassion for others has been a driving force in my life for as long as I can remember, and I put my money where my mouth is, giving at least 8% of my gross income to top-rated international charities every year. But compassion for myself feels inauthentic, even alien; I brutally punish myself for every failure, every moment of weakness. If someone else treated me the way I treat myself, I’d consider them abusive. It’s something I’ve struggled with for many years.

Really, the wonder is that I don’t procrastinate more; I think it’s because I’m already doing most of the things that people will tell you to do to avoid procrastination, like scheduling specific tasks to specific times and prioritizing a small number of important tasks each day. I even keep track of how I actually use my time (I call it “descriptive scheduling”, as opposed to conventional “normative scheduling”), and use that information to make my future schedules more realistic—thus avoiding or at least mitigating the planning fallacy. But when it’s just too intimidating to even look at the paper I’m supposed to be revising, none of that works.
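
For what it’s worth, descriptive scheduling is easy to mechanize. Here is a minimal sketch in Python of one way to use those records; the task list and the numbers in it are made up purely for illustration:

# Descriptive scheduling: use the historical ratio of actual to planned time
# to inflate future estimates, mitigating the planning fallacy.
history = [            # (planned hours, actual hours), illustrative numbers
    (2.0, 3.5),
    (1.0, 1.5),
    (4.0, 7.0),
]

fudge_factor = sum(actual for _, actual in history) / sum(planned for planned, _ in history)
print(f"On average, tasks take {fudge_factor:.1f}x as long as planned.")

planned_tomorrow = 3.0  # hours I intend to budget for tomorrow's big task
print(f"A more realistic budget: about {planned_tomorrow * fudge_factor:.1f} hours.")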

If you too are struggling with procrastination (and the odds of that are quite high), I’m afraid that I don’t have any brilliant advice for you today. I can recommend those scheduling techniques, and they may help; but the ultimate cause of procrastination is not bad scheduling or planning but something much deeper: anxiety about the task itself and about being evaluated on it. Procrastination is not laziness or lack of self-control: It’s an anxiety symptom.

Why do we need “publish or perish”?

June 23 JDN 2458658

This question may seem a bit self-serving, coming from a grad student who is struggling to get his first paper published in a peer-reviewed journal. But given the deep structural flaws in the academic publishing system, I think it’s worth taking a step back to ask just what peer-reviewed journals are supposed to be accomplishing.

The argument is often made that research journals are a way of sharing knowledge. If this is their goal, they have utterly and totally failed. Most papers are read by only a handful of people. When scientists want to learn about the research their colleagues are doing, they don’t read papers; they go to conferences to listen to presentations and look at posters. The way papers are written, they are often all but incomprehensible to anyone outside a very narrow subfield. When published by proprietary journals, papers are often hidden behind paywalls and accessible only through universities. As a knowledge-sharing mechanism, the peer-reviewed journal is a complete failure.

But academic publishing serves another function, which in practice is its only real function: Peer-reviewed publications are a method of evaluation. They are a way of deciding which researchers are good enough to be hired, get tenure, and receive grants. Having peer-reviewed publications—particularly in “top journals”, however that is defined within a given field—is a key metric that universities and grant agencies use to decide which researchers are worth spending on. Indeed, in some cases it seems to be utterly decisive.

We should be honest about this: This is an absolutely necessary function. It is uncomfortable to think about the fact that we must exclude a large proportion of competent, qualified people from being hired or getting tenure in academia, but given the large number of candidates and the small amounts of funding available, this is inevitable. We can’t hire everyone who would probably be good enough. We can only hire a few, and it makes sense to want those few to be the best. (Also, don’t fret too much: Even if you don’t make it into academia, getting a PhD is still a profitable investment. Economists and natural scientists do the best, unsurprisingly; but even humanities PhDs are still generally worth it. Median annual earnings of $77,000 is nothing to sneeze at: US median household income is only about $60,000. Humanities graduates only seem poor in relation to STEM or professional graduates; they’re still rich compared to everyone else.)

But I think it’s worth asking whether the peer review system is actually selecting the best researchers, or even the best research. Note that these are not the same question: The best research done in graduate school might not necessarily reflect the best long-run career trajectory for a researcher. A lot of very important, very difficult questions in science are just not the sort of thing you can get a convincing answer to in a couple of years, and so someone who wants to work on the really big problems may actually have a harder time getting published in graduate school or as a junior faculty member, even though ultimately work on the big problems is what’s most important for society. But I’m sure there’s a positive correlation overall: The kind of person who is going to do better research later is probably, other things equal, going to do better research right now.

Yet even accepting the fact that all we have to go on in assessing what you’ll eventually do is what you have already done, it’s not clear that the process of publishing in a peer-reviewed journal is a particularly good method of assessing the quality of research. Some really terrible research has gotten published in journals—I’m gonna pick on Daryl Bem, because he’s the worst—and a lot of really good research never made it into journals and is languishing on old computer hard drives. (The term “file drawer problem” is about 40 years obsolete; though to be fair, it was in fact coined about 40 years ago.)

That by itself doesn’t actually prove that journals are a bad mechanism. Even a good mechanism, applied to a difficult problem, is going to make some errors. But there are a lot of features of academic publishing, at least as currently constituted, that obviously don’t belong in a good mechanism: for-profit publishers, unpaid reviewers, lack of double-blind review, and above all, the obsession with “statistical significance” that leads to p-hacking.

Each of these problems I’ve listed has a simple fix (though whether the powers that be actually are willing to implement it is a different question: Questions of policy are often much easier to solve than problems of politics). But maybe we should ask whether the system is even worth fixing, or if it should simply be replaced entirely.

While we’re at it, let’s talk about the academic tenure system, because the peer-review system is largely an evaluation mechanism for the academic tenure system. Publishing in top journals is what decides whether you get tenure. The problem with “Publish or perish” isn’t the “publish”; it’s the “perish”. Do we even need an academic tenure system?

The usual argument for academic tenure concerns academic freedom: Tenured professors have job security, so they can afford to say things that may be controversial or embarrassing to the university. But the way the tenure system works is that you only have this job security after going through a long and painful gauntlet of job insecurity. You have to spend several years prostrating yourself to the elders of your field before you can get inducted into their ranks and finally be secure.

Of course, job insecurity is the norm, particularly in the United States: Most employment in the US is “at-will”, meaning essentially that your employer can fire you for any reason at any time. There are specifically illegal reasons for firing (like gender, race, and religion); but it’s extremely hard to prove wrongful termination when all the employer needs to say is, “They didn’t do a good job” or “They weren’t a team player”. So I can understand how it must feel strange for a private-sector worker who could be fired at any time to see academics complain about the rigors of the tenure system.

But there are some important differences here: The academic job market is not nearly as competitive as the private sector job market. There simply aren’t that many prestigious universities, and within each university there are only a small number of positions to fill. As a result, universities have an enormous amount of power over their faculty, which is why they can get away with paying adjuncts salaries that amount to less than minimum wage. (People with graduate degrees! Making less than minimum wage!) At least in most private-sector labor markets in the US, the market is competitive enough that if you get fired, you can probably get hired again somewhere else. In academia that’s not so clear.

I think what bothers me the most about the tenure system is the hierarchical structure: There is a very sharp divide between those who have tenure, those who don’t have it but can get it (“tenure-track”), and those who can’t get it. The lines between professor, associate professor, assistant professor, lecturer, and adjunct are quite sharp. The higher up you are, the more job security you have, the more money you make, and generally the better your working conditions are overall. Much like what makes graduate school so stressful, there are a series of high-stakes checkpoints you need to get through in order to rise in the ranks. And several of those checkpoints are based largely, if not entirely, on publication in peer-reviewed journals.

In fact, we are probably stressing ourselves out more than we need to. I certainly did for my advancement to candidacy; I spent two weeks at such a high stress level I was getting migraines every single day (clearly on the wrong side of the Yerkes-Dodson curve), only to completely breeze through the exam.

I think I might need to put this up on a wall somewhere to remind myself:

Most grad students complete their degrees, and most assistant professors get tenure.

The real filters are admissions and hiring: Most applications to grad school are rejected (though probably most graduate students are ultimately accepted somewhere—I couldn’t find any good data on that in a quick search), and most PhD graduates do not get hired on the tenure track. But if you can make it through those two gauntlets, you can probably make it through the rest.

In our current system, publications are a way to filter people, because the number of people who want to become professors is much higher than the number of professor positions available. But as an economist, this raises a very big question: Why aren’t salaries falling?

You see, that’s how markets are supposed to work: When supply exceeds demand, the price is supposed to fall until the market clears. Lower salaries would both open up more slots at universities (you can hire more faculty with the same level of funding) and shift some candidates into other careers (if you can get paid a lot better elsewhere, academia may not seem so attractive). Eventually there should be a salary point at which demand equals supply. So why aren’t we reaching it?

Well, it comes back to that tenure system. We can’t lower the salaries of tenured faculty, not without a total upheaval of the current system. So instead what actually happens is that universities switch to using adjuncts, who have very low salaries indeed. If there were no tenure, would all faculty get paid like adjuncts? No, they wouldn’t, because universities would have all that money they’re currently paying to tenured faculty, and all the talent currently locked up in tenured positions would be on the market, driving up the prevailing salary. What would happen if we eliminated tenure is not that all salaries would fall to adjunct level; rather, salaries would all adjust to some intermediate level between what adjuncts currently make and what tenured professors currently make.

What would the new salary be, exactly? That would require a detailed model of the supply and demand elasticities, so I can’t tell you without starting a whole new research paper. But a back-of-the-envelope guess is something like the overall current median faculty salary, which is somewhere around $75,000. This is a lot less than some professors make, but it’s also a lot more than what adjuncts make, and it’s a pretty good living overall.
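
Just to illustrate what such a calculation would involve, here is a toy constant-elasticity version in Python. Every number in it (the reference salary, the normalized supply and demand, the elasticities) is a made-up placeholder rather than an estimate; the point is only to show how the clearing salary falls out of the elasticities:

# Toy constant-elasticity model of a post-tenure academic labor market.
# All numbers are illustrative placeholders, not estimates.
w0 = 100_000          # reference tenure-track salary
demand_at_w0 = 1.0    # positions universities would fund at w0 (normalized)
supply_at_w0 = 1.5    # qualified candidates willing to work at w0 (normalized)
e_demand = 0.5        # price elasticity of demand for faculty (absolute value)
e_supply = 1.0        # price elasticity of supply of candidates

# Clearing condition: demand_at_w0 * (w/w0)**(-e_demand) == supply_at_w0 * (w/w0)**e_supply
w_clear = w0 * (demand_at_w0 / supply_at_w0) ** (1 / (e_demand + e_supply))
print(f"Toy market-clearing salary: ${w_clear:,.0f}")   # about $76,000 with these placeholders

With different placeholder numbers you would of course get a different answer; that sensitivity is exactly why the honest version of this exercise would take a whole research paper.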

If the salary for professors fell, the pool of candidates would decrease, and we wouldn’t need such harsh filtering mechanisms. We might decide we don’t need a strict evaluation system at all, and since the knowledge-sharing function of journals is much better served by other means, we could probably get rid of them altogether.

Of course, who am I kidding? That’s not going to happen. The people who make these rules succeeded in the current system. They are the ones who stand to lose high salaries and job security under a reform policy. They like things just the way they are.

Green New Deal Part 3: Guaranteeing education and healthcare is easy—why aren’t we doing it?

Apr 21 JDN 2458595

Last week was one of the “hard parts” of the Green New Deal. Today it’s back to one of the “easy parts”: Guaranteed education and healthcare.

“Providing all people of the United States with – (i) high-quality health care; […]

“Providing resources, training, and high-quality education, including higher education, to all people of the United States.”

Many Americans seem to think that providing universal healthcare would be prohibitively expensive. In fact, it would have literally negative net cost.

The US currently has the most bloated, expensive, inefficient healthcare system in the entire world. We spend almost $10,000 per person per year on healthcare, and get outcomes no better than France or the UK, where they spend less than $5,000.

In fact, our public healthcare expenditures are currently higher than almost every other country’s. Our private expenditures are therefore pure waste; all they are doing is providing returns for the shareholders of corporations. If we were to simply copy the UK National Health Service and spend money in exactly the same way as they do, we would spend the same amount in public funds and almost nothing in private funds—and the UK has a higher mean lifespan than the US.

This is absolutely a no-brainer. Burn the whole system of private insurance down. Copy a healthcare system that actually works, like they use in every other First World country.

It wouldn’t even be that complicated to implement: We already have a single-payer healthcare system in the US; it’s called Medicare. Currently only old people get it; but old people use the most healthcare anyway. Hence, Medicare for All: Just lower the eligibility age for Medicare to 18 (if not zero). In the short run there would be additional costs for the transition, but in the long run we would save mind-boggling amounts of money, all while improving healthcare outcomes and extending our lifespans. Current estimates say that the net savings of Medicare for All would be about $5 trillion over the next 10 years. We can afford this. Indeed, the question is, as it was for infrastructure: How can we afford not to do this?

Isn’t this socialism? Yeah, I suppose it is. But healthcare is one of the few things that socialist countries consistently do extremely well. Cuba is a socialist country—a real socialist country, not a social democratic welfare state like Norway but a genuinely authoritarian centrally-planned economy. Cuba’s per-capita GDP PPP is a third of ours. Yet their life expectancy is actually higher than ours, because their healthcare system is just that good. Their per-capita healthcare spending is one-fourth of ours, and their health outcomes are better. So yeah, let’s be socialist in our healthcare. Socialists seem really good at healthcare.

And this makes sense, if you think about it. Doctors can do their jobs a lot better when they’re focused on just treating everyone who needs help, rather than arguing with insurance companies over what should and shouldn’t be covered. Preventative medicine is extremely cost-effective, yet it’s usually the first thing that people skimp on when trying to save money on health insurance. A variety of public health measures (such as vaccination and air quality regulation) are extremely cost-effective, but they are public goods that the private sector would not pay for by itself.

It’s not as if healthcare was ever really a competitive market anyway: When you get sick or injured, do you shop around for the best or cheapest hospital? How would you even go about that, when they don’t even post most of their prices, and what prices they do post are often wildly different than what you’ll actually pay?

The only serious argument I’ve heard against single-payer healthcare is a moral one: “Why should I have to pay for other people’s healthcare?” Well, I guess, because… you’re a human being? You should care about other human beings, and not want them to suffer and die from easily treatable diseases?

I don’t know how to explain to you that you should care about other people.

Single-payer healthcare is not only affordable: It would be cheaper and better than what we are currently doing. (In fact, almost anything would be cheaper and better than what we are currently doing—Obamacare was an improvement over the previous mess, but it’s still a mess.)

What about public education? Well, we already have that up to the high school level, and it works quite well.

Contrary to popular belief, the average public high school has better outcomes in terms of test scores and college placements than the average private high school. There are some elite private schools that do better, but they are extraordinarily expensive and they self-select only the best students. Public schools have to take all students, and they have a limited budget; but they have high quality standards and they require their teachers to be certified.

The flaws in our public school system largely come from it not being public enough, which is to say that schools are funded by their local property taxes instead of having their costs equally shared across whole states. This gives them the same basic problem as private schools: Rich kids get better schools.

If we removed that inequality, our educational outcomes would probably be among the best in the world—indeed, in our most well-funded school districts, they are. The state of Massachusetts, which actually funds its public schools equally and well, gets international test scores just as good as the supposedly “superior” educational systems of Asian countries. In fact, this is probably even unfair to Massachusetts, as we know that China specifically selects the regions that have the best students to be the ones to take these international tests. Massachusetts is the best the US has to offer, but Shanghai is also the best China has to offer, so it’s only fair we compare apples to apples.

Public education has benefits for our whole society. We want to have a population of citizens, workers, and consumers who are well-educated. There are enormous benefits of primary and secondary education in terms of reducing poverty, improving public health, and increasing economic growth.

So there’s my impassioned argument for why we should continue to support free, universal public education up to high school.

When it comes to college, I can’t be quite so enthusiastic. While there are societal benefits of college education, most of the benefits of college accrue to the individuals who go to college themselves.

The median weekly income of someone with a high school diploma is about $730; with a bachelor’s degree this rises to $1200; and with a doctoral or professional degree it gets over $1800. Higher education also greatly reduces your risk of being unemployed; while about 4% of the labor force is unemployed, only 1.5% of people with doctorates or professional degrees are. Add that up over all the weeks of your life, and it’s a lot of money.

The net present value of a college education has been estimated at approximately $1 million. This result is quite sensitive to the choice of discount rate; at a higher discount rate you can get the net present value as “low” as $250,000.

With this in mind, the fact that the median student loan debt for a college graduate is about $30,000 doesn’t sound so terrible, does it? You’re taking out a loan for $30,000 to get something that will earn you between $250,000 and $1 million over the course of your life.
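
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, using the median weekly incomes quoted above; the 40-year working life and the particular discount rates are my own illustrative assumptions, not careful estimates:

# NPV of the bachelor's-degree earnings premium, using the weekly figures above.
weekly_premium = 1200 - 730            # $470 per week
annual_premium = weekly_premium * 52   # about $24,440 per year
working_years = 40                     # illustrative assumption

def npv(annual, years, rate):
    # Present value of a constant annual premium at a given discount rate.
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

for rate in (0.00, 0.03, 0.07):
    print(f"discount rate {rate:.0%}: NPV about ${npv(annual_premium, working_years, rate):,.0f}")

Undiscounted, the premium comes out near the $1 million headline figure; at higher discount rates it shrinks toward the $250,000 end of the range, which is exactly the sensitivity described above.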

There is some evidence that having student loans delays homeownership; but this is a problem with our mortgage system, not our education system. It’s mainly the inability to finance a down payment that prevents people from buying homes. We should implement a system of block grants for first-time homebuyers that gives them a chunk of money to make a down payment, perhaps $50,000. This would cost about as much as the mortgage interest tax deduction, which mainly benefits the upper-middle class.

Higher education does have societal benefits as well. Perhaps the starkest I’ve noticed is how categorically higher education predicted people’s votes on Donald Trump: Counties with high rates of college education almost all voted for Clinton, and counties with low rates of college education almost all voted for Trump. This was true even controlling for income and a lot of other demographic factors. Only authoritarianism, sexism and racism were better predictors of voting for Trump—and those could very well be mediating variables, if education reduces such attitudes.

If indeed it’s true that higher education makes people less sexist, less racist, less authoritarian, and overall better citizens, then it would be worth every penny to provide universal free college.

But it’s worth noting that even countries like Germany and Sweden, which ostensibly do that, don’t really do that: College tuition is free for Swedish citizens, and Germany provides free college for students of any nationality; nevertheless, the proportion of people in Sweden and Germany with bachelor’s degrees is actually lower than in the United States. In Sweden the gap largely disappears if you restrict to younger cohorts—but in Germany it’s still there.

Indeed, from where I’m sitting, “universal free college” looks an awful lot like “the lower-middle class pays for the upper-middle class to go to college”. Social class is still a strong predictor of education level in Sweden. Among OECD countries, education seems to do the most to promote upward mobility in Australia, and average college tuition in Australia is actually higher than average college tuition in the US (yes, even adjusting for currency exchange: Australian dollars are worth only slightly less than US dollars).

What does Australia do? They have a really good student loan system. You have to reach an income of about $40,000 per year before you need to make payments at all, and the loans are subsidized to be interest-free. Once you do owe payments, the debt is repaid at a rate proportional to your income—so effectively it’s not a debt at all but an equity stake.
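
As a rough sketch of how income-contingent repayment works, here is a simplified version in Python; the flat 4% rate is a placeholder of my own, and the real Australian schedule applies progressively higher rates as income rises, but the basic shape is the same:

# Simplified income-contingent loan repayment, loosely modeled on the Australian approach.
def annual_repayment(income, threshold=40_000, rate=0.04):
    # Pay nothing below the income threshold; above it, pay a share of income.
    # (The actual schedule is progressive; the flat 4% here is illustrative.)
    return rate * income if income >= threshold else 0.0

for income in (30_000, 45_000, 80_000):
    print(f"income ${income:,}: repay ${annual_repayment(income):,.0f} this year")

Because the payment scales with income and stops entirely when income is low, the government’s claim behaves less like a fixed debt and more like an equity stake in the graduate’s future earnings, which is the point being made above.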

In the US, students have been taking the desperate (and very cyberpunk) route of selling literal equity stakes in their education to Wall Street banks; this is a terrible idea for a hundred reasons. But having the government hold something like an equity stake in students makes a lot of sense.

Because of the subsidies and generous repayment plans, the Australian government loses money on their student loan system, but so what? In order to implement universal free college, they would have to spend an awful lot more than they are losing now. This way, the losses fall specifically on students who got a lot of education but never managed to raise their income high enough—which means the government is actually incentivized to improve the quality of education or job-matching.

The cost of universal free college is considerable: That $1.3 trillion currently owed as student loans would be additional government debt or tax liability instead. Is this utterly unaffordable? No. But it’s not trivial either. We’re talking about roughly $60 billion per year in additional government spending, a bit less than what we currently spend on food stamps. An expenditure like that should have a large public benefit (as food stamps absolutely, definitely do!); I’m not convinced that free college would have such a benefit.

It would benefit me personally enormously: I currently owe over $100,000 in debt (about half from my undergrad and half from my first master’s). But I’m fairly privileged. Once I finally make it through this PhD, I can expect to make something like $100,000 per year until I retire. I’m not sure that benefiting people like me should be a major goal of public policy.

That said, I don’t think universal free college is a terrible policy. Done well, it could be a good thing. But it isn’t the no-brainer that single-payer healthcare is. We can still make sure that students are not overburdened by debt without making college tuition actually free.

How do you change a paradigm?

Mar 3 JDN 2458546

I recently attended the Institute for New Economic Thinking (INET) Young Scholars Initiative (YSI) North American Regional Convening (what a mouthful!). I didn’t present, so I couldn’t get funding for a hotel, so I commuted to LA each day. That was miserable; if I ever go again, it will be with funding.

The highlight of the conference was George Akerlof’s keynote, which I knew would be the case from the start. The swag bag labeled “Rebel Without a Paradigm” was also pretty great (though not as great as the “Totes Bi” totes at the Human Rights Campaign Time to THRIVE conference).

The rest of the conference was… a bit strange, to be honest. They had a lot of slightly cheesy interactive activities and exhibits; the conference was targeted at grad students, but some of these would have drawn groans from my more jaded undergrads (and “jaded grad student” is a redundancy). The poster session was pathetically small; I think there were literally only three posters. (Had I known in time for the deadline, I could surely have submitted a poster.)

The theme of the conference was challenging the neoclassical paradigm. This was really the only unifying principle. So we had quite an eclectic mix of presenters: There were a few behavioral economists (like Akerlof himself), and some econophysicists and complexity theorists, but mostly the conference was filled with a wide variety of heterodox theorists, ranging all the way from Austrian to Marxist. Also sprinkled in were a few outright cranks, whose ideas were just total nonsense; fortunately these were relatively rare.

And what really struck me about listening to the heterodox theorists was how mainstream it made me feel. I went to a session on development economics, expecting randomized controlled trials of basic income and maybe some political economy game theory, and instead saw several presentations of neo-Marxist postcolonial theory. At the AEA conference I felt like a radical firebrand; at the YSI conference I felt like a holdout of the ancien regime. Is this what it feels like to push the envelope without leaping outside it?

The whole atmosphere of the conference was one of “Why won’t they listen to us!?” and I couldn’t help but feel like I kind of knew why. All this heterodox theory isn’t testable. It isn’t useful. It doesn’t solve the problem. Even if you are entirely correct that Latin America is poor because of colonial and neocolonial exploitation by the West (and I’m fairly certain that you’re not; standard of living under the Mexica wasn’t so great you know), that doesn’t tell me how to feed starving children in Nicaragua.

Indeed, I think it’s notable that the one Nobel Laureate they could find to speak for us was a behavioral economist. Behavioral economics has actually managed to penetrate into the mainstream somewhat. Not enough, not nearly quickly enough, to be sure—but it’s happening. Why is it happening? Because behavioral economics is testable, it’s useful, and it solves problems.

Indeed, behavioral economics is more testable than most neoclassical economics: We run lab experiments while they’re adding yet another friction or shock to the never-ending DSGE quagmire.

And we’ve already managed to solve some real policy problems this way, like Alvin Roth’s kidney matching system and Richard Thaler’s “Save More Tomorrow” program.

The (limited) success of behavioral economics came not because we continued to batter at the gates of the old paradigm demanding to be let in, but because we tied ourselves to the methodology of hard science and gathered irrefutable empirical data. We didn’t get as far as we have by complaining that economics is too much like physics; we actually made it more like physics. Physicists do experiments. They make sharp, testable predictions. They refute their hypotheses. And now, so do we.

That said, Akerlof was right when he pointed out that the insistence upon empirical precision has limited the scope of questions we are able to ask, and kept us from addressing some of the really vital economic problems in the world. And neoclassical theory is too narrow; in particular, the ongoing insistence that behavior must be modeled as perfectly rational and completely selfish is infuriating. That model has clearly failed at this point, and it’s time for something new.

So I do think there is some space for heterodox theory in economics. But there actually seems to be no shortage of heterodox theory; it’s easy to come up with ideas that are different from the mainstream. What we actually need is more ways to constrain theory with empirical evidence. The goal must be to have theory that actually predicts and explains the world better than neoclassical theory does—and that’s a higher bar than you might imagine. Neoclassical theory isn’t an abject failure; in fact, if we’d just followed the standard Keynesian models in the Great Recession, we would have recovered much faster. Most of this neo-Marxist theory struck me as not even wrong: the ideas were flexible enough that almost any observed outcome could be fit into them.

Galileo and Einstein didn’t just come up with new ideas and complain that no one listened to them. They developed detailed, mathematically precise models that could be experimentally tested—and when they were tested, they worked better than the old theory. That is the way to change a paradigm: Replace it with one that you can prove is better.

Impostor Syndrome

Feb 24 JDN 2458539

You probably have experienced Impostor Syndrome, even if you didn’t know the word for it. (Studies estimate that over 70% of the general population, and virtually 100% of graduate students, have experienced it at least once.)

Impostor Syndrome feels like this:

All your life you’ve been building up accomplishments, and people kept praising you for them, but those things were easy, or you’ve just gotten lucky so far. Everyone seems to think you are highly competent, but you know better: Now that you are faced with something that’s actually hard, you can’t do it. You’re not sure you’ll ever be able to do it. You’re scared to try because you know you’ll fail. And now you fear that at any moment, your whole house of cards is going to come crashing down, and everyone will see what a fraud and a failure you truly are.

The magnitude of that feeling varies: For most people it can be a fleeting experience, quickly overcome. But for some it is chronic, overwhelming, and debilitating.

It may surprise you that I am in the latter category. A few years ago, I went to a seminar on Impostor Syndrome, and they played a “Bingo” game where you collect spaces by exhibiting symptoms: I won.

In a group of about two dozen students who were there specifically because they were worried about Impostor Syndrome, I exhibited the most symptoms. On the Clance Impostor Phenomenon Scale, I score 90%. Anything above 60% is considered diagnostic, though there is no DSM disorder specifically for Impostor Syndrome.

Another major cause of Impostor Syndrome is being an underrepresented minority. Women, people of color, and queer people are at particularly high risk. While men are less likely to experience Impostor Syndrome, we tend to experience it more intensely when we do.

Aside from being a graduate student, which is basically coextensive with Impostor Syndrome, being a writer seems to be one of the strongest predictors of Impostor Syndrome. Megan McArdle of The Atlantic theorizes that it’s because we were too good in English class, or, more precisely, that English class was much too easy for us. We came to associate our feelings of competence and accomplishment with tasks simply coming so easily we barely even had to try.

But I think there’s a bigger reason, which is that writers face rejection letters. So many rejection letters. 90% of novels are rejected at the query stage; then a further 80% are rejected at the manuscript review stage; this means that a given query letter has about a 2% chance of acceptance. So even if you are doing everything right and will eventually get published, you can on average expect 50 rejection letters. I collected a little over 20 and ran out of steam, my will and self-confidence utterly crushed. But statistically I should have continued for at least 30 more. In fact, it’s worse than that; you should always expect to continue 50 more, up until you finally get accepted—this is a memoryless distribution. And if always having to expect to wait for 50 more rejection letters sounds utterly soul-crushing, that’s because it is.
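
A minimal sketch of that arithmetic in Python; the 2% figure comes from the rejection rates above, and treating each query as an independent draw is my simplifying assumption:

# 10% of queries survive the query stage, and 20% of those survive
# manuscript review, so each query has about a 2% chance of acceptance.
p_accept = 0.02

# Geometric distribution: expected number of queries until acceptance.
print(1 / p_accept)          # 50.0

# Memorylessness: after any number of rejections, the expected number of
# further queries is still 1 / p_accept. Chance of 20 straight rejections:
print((1 - p_accept) ** 20)  # about 0.67

So after 20 rejections there is still roughly a two-in-three chance of having heard nothing but “no”, and the expected wait is still 50 more letters.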

And that’s something fiction writing has in common with academic research. Top journals in economics have acceptance rates between 3% and 8%. I’d say this means you need to submit between 13 and 34 times to get into a top journal, but that’s nonsense; there are only 5 top journals in economics. So it’s more accurate to say that with any given paper, no matter how many times you submit, you only have about a 30% chance of getting into a top journal. After that, your submissions will necessarily not be to top journals. There are enough good second-tier journals that you can probably get into one eventually—after submitting about a dozen times. And maybe a hiring or tenure committee will care about a second-tier publication. It might count for something. But it’s those top 5 journals that really matter. If for every paper you have in JEBO or JPubE, another candidate has a paper in AER or JPE, they’re going to hire the other candidate. Your paper could use better methodology on a more important question, and be better written—but if for whatever reason AER didn’t like it, that’s what will decide the direction of your career.
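
The top-5 calculation works the same way. Here is a sketch assuming each journal decides independently, using the 3% to 8% range quoted above as bounds (the true per-journal rates fall somewhere in between):

# Chance that at least one of the five top journals accepts a given paper,
# assuming independent decisions and acceptance rates between 3% and 8%.
def p_at_least_one(rate, n_journals=5):
    return 1 - (1 - rate) ** n_journals

print(p_at_least_one(0.03))   # about 0.14
print(p_at_least_one(0.08))   # about 0.34

Depending on where in that range the journals actually sit, the chance of ever landing a top-5 publication with a given paper is somewhere between roughly 15% and 35%, which is the neighborhood of the 30% figure above, and dispiritingly low either way.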

If I were trying to design a system that would inflict maximal Impostor Syndrome, I’m not sure I could do much better than this. I guess I’d probably have just one top journal instead of five, and I’d make the acceptance rate 1% instead of 3%. But this whole process of high-stakes checkpoints and low chances of getting on a tenure track that will by no means guarantee actually getting tenure? That’s already quite well-optimized. It’s really a brilliant design, if that’s the objective. You select a bunch of people who have experienced nothing but high achievement their whole lives. If they ever did have low achievement, for whatever reason (could be no fault of their own, you don’t care), you’d exclude them from the start. You give them a series of intensely difficult tasks—tasks literally no one else has ever done that may not even be possible—with minimal support and utterly irrelevant and useless “training”, and evaluate them constantly at extremely high stakes. And then at the end you give them an almost negligible chance of success, and force even those who do eventually succeed to go through multiple steps of failure and rejection beforehand. You really maximize the contrast between how long a streak of uninterrupted successes they must have had in order to be selected in the first place, and how many rejections they have to go through in order to make it to the next level.

(By the way, it’s not that there isn’t enough teaching and research for all these PhD graduates; that’s what universities want you to think. It’s that universities are refusing to open up tenure-track positions and instead relying upon adjuncts and lecturers. And the obvious reason for that is to save money.)

The real question is why we let them put us through this. I’m wondering that more and more every day.

I believe in science. I believe I could make a real contribution to human knowledge—at least, I think I still believe that. But I don’t know how much longer I can stand this gauntlet of constant evaluation and rejection.

I am going through a particularly severe episode of Impostor Syndrome at the moment. I am at an impasse in my third-year research paper, which is supposed to be done by the end of the summer. My dissertation committee wants me to revise my second-year paper to submit to journals, and I just… can’t do it. I have asked for help from multiple sources, and received conflicting opinions. At this point I can’t even bring myself to work on it.

I’ve been aiming for a career as an academic research scientist for as long as I can remember, and everyone tells me that this is what I should do and where I belong—but I don’t really feel like I belong anymore. I don’t know if I have a thick enough skin to get through all these layers of evaluation and rejection. Everyone tells me I’m good at this, but I don’t feel like I am. It doesn’t come easily the way I had come to expect things to come easily. And after I’ve done the research, written the paper—the stuff that I was told was the real work—there are all these extra steps that are actually so much harder, so much more painful—submitting to journals and being rejected over, and over, and over again, practically watching the graph of my career prospects plummet before my eyes.

I think that what really triggered my Impostor Syndrome was finally encountering things I’m not actually good at. It sounds arrogant when I say it, but the truth is, I had never had anything in my entire academic experience that felt genuinely difficult. There were things that were tedious, or time-consuming; there were other barriers I had to deal with, like migraines, depression, and the influenza pandemic. But there was never any actual educational content I had difficulty absorbing and understanding. Maybe if I had, I would be more prepared for this. But of course, if that were the case, they’d never let me into grad school at all. Just to be here, I had to have an uninterrupted streak of easy success after easy success—so now that it’s finally hard, I feel completely blindsided. I’m finally genuinely challenged by something academic, and I can’t handle it. There’s math I don’t know how to do; I’ve never felt this way before.

I know that part of the problem is internal: This is my own mental illness talking. But that isn’t much comfort. Knowing that the problem is me doesn’t exactly reduce the feeling of being a fraud and a failure. And even a problem that is 100% inside my own brain isn’t necessarily a problem I can fix. (I’ve had migraines in my brain for the last 18 years; I still haven’t fixed them.)

There is so much that the academic community could do so easily to make this problem better. Stop using the top 5 journals as a metric, and just look at overall publication rates. Referee publications double-blind, so that grad students know their papers will actually be read and taken seriously, rather than thrown out as soon as the referee sees they don’t already have tenure. Or stop obsessing over publications altogether, and look at the detailed content of people’s work instead of maximizing the incentive to keep putting out papers that nobody will ever actually read. Open up more tenure-track faculty positions, and stop hiring lecturers and adjuncts. If you have to save money, do it by cutting salaries for administrators and athletic coaches. And stop evaluating constantly. Get rid of qualifying exams. Get rid of advancement exams. Start from the very beginning of grad school by assigning a mentor to each student and getting directly into working on a dissertation. Don’t make the applied econometrics researchers take exams in macro theory. Don’t make the empirical macroeconomists study game theory. Focus and customize coursework specifically on what grad students will actually need for the research they want to do, and don’t use grades at all. Remove the evaluative element completely. We should feel as though we are allowed to not know things. We should feel as though we are allowed to get things wrong. You are supposed to be teaching us, and you don’t seem to know how to do that; you just evaluate us constantly and expect us to learn on our own.

But none of those changes are going to happen. Certainly not in time for me, and probably not ever, because people like me who want the system to change are precisely the people the current system seems designed to weed out. It’s the ones who make it through the gauntlet, and convince themselves that it was their own brilliance and hard work that carried them through (not luck, not being a White straight upper-middle-class cis male, not even perseverance and resilience in the face of rejection), who end up making the policies for the next generation.

Because those who should be fixing the problem refuse to do so, that leaves the rest of us. What can we do to relieve Impostor Syndrome in ourselves or those around us?

You’d be right to take any advice I give now with a grain of salt; it’s obviously not working that well on me. But maybe it can help someone else. (And again I realize that “Don’t listen to me, I have no idea what I’m talking about” is exactly what someone with Impostor Syndrome would say.)

One of the standard techniques for dealing with Impostor Syndrome is called self-compassion. The idea is to be as forgiving to yourself as you would be to someone you love. I’ve never been good at this. I always hold myself to a much higher standard than I would hold anyone else—higher even than I would allow anyone to impose on someone else. After being told my whole life how brilliant and special I am, I internalized it in perhaps the most toxic way possible: I set my bar higher. Things that other people would count as great success I count as catastrophic failure. “Good enough” is never good enough.

Another good suggestion is to change your comparison set: Don’t compare yourself just to faculty or other grad students, compare yourself to the population as a whole. Others will tell you to stop comparing altogether, but I don’t know if that’s even possible in a capitalist labor market.

I’ve also had people encourage me to focus on my core motivations, to remind myself what really matters and why I want to be a scientist in the first place. But it can be hard to keep my eye on that prize. Sometimes I wonder if I’ll ever be able to do the things I originally set out to do, or if it’s just going to be trying to fit other people’s molds and being rejected over and over again for the rest of my life.

I think the best advice I’ve ever received on dealing with Impostor Syndrome was actually this: “Realize that nobody knows what they’re doing.” The people who are the very best at things… really aren’t all that good at them. If you look around carefully, the evidence of incompetence is everywhere. Look at all the books that get published that weren’t worth writing, all the songs that get recorded that weren’t worth singing. Think about the easily-broken electronic gadgets, the glitchy operating systems, the zero-day exploits, the data breaches, the traffic lights that are timed so badly they make the traffic jams worse. Remember that the leading cause of airplane crashes is pilot error, that medical mistakes are the third-leading cause of death in the United States. Think about every vending machine that ate your dollar, every time your cable went out in a storm. All those people around you who look like they are competent and successful? They aren’t. They are just as confused and ignorant and clumsy as you are. Most of them also feel like frauds, at least some of the time.

My first AEA conference

Jan 13 JDN 2458497

The last couple of weeks have been a bit of a whirlwind for me. I submitted one grant proposal and have another, much more complicated one due next week; I submitted a paper to a journal; and somewhere in there I went to the AEA conference for the first time.

Going to the conference made it quite clear that the race and gender disparities in economics are real: The vast majority of attendees were middle-aged White men, nearly all wearing one of two outfits: sport coat and khakis, or suit and tie. (And almost all of the suits were grey or black, and almost all of the shirts white or pastel. Had you photographed the scene in greyscale, you’d only have noticed because the hotel carpets looked wrong.) In an upcoming post I’ll go into more detail about this problem, what seems to be causing it, and what might be done to fix it.

But for now I just want to talk about the conference itself, and moreover, the idea of having conferences—is this really the best way to organize ourselves as a profession?

One thing I really do like about the AEA conference is actually something that separates it from other professions: The job market for economics PhDs is a very formalized matching system designed to be efficient and minimize opportunities for bias. It should be a model for other job markets. All the interviews are conducted in rapid succession, at the conference itself, so that candidates can interview for positions all over the country or even abroad.
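To make the idea of a “formalized matching system” concrete, here is a minimal sketch (in Python) of candidate-proposing deferred acceptance, the Gale-Shapley mechanism that fully centralized markets like the medical residency match actually use. The economics job market is less centralized than this, with interviews and flyouts still in the loop, so treat it purely as an illustration of the general concept rather than a description of the AEA’s actual procedure; all the names and preference lists below are invented.

# Illustrative sketch only: candidate-proposing deferred acceptance (Gale-Shapley).
# Not the AEA's actual procedure; names and preferences are invented.

def deferred_acceptance(candidate_prefs, employer_prefs):
    """Match each candidate to at most one employer (one slot per employer).

    candidate_prefs: dict of candidate -> list of employers, most preferred first.
    employer_prefs:  dict of employer -> list of candidates, most preferred first.
    Returns a dict of employer -> tentatively matched candidate.
    """
    # Precompute each employer's ranking so comparisons are O(1).
    rank = {e: {c: i for i, c in enumerate(prefs)}
            for e, prefs in employer_prefs.items()}

    unmatched = list(candidate_prefs)               # candidates still proposing
    next_choice = {c: 0 for c in candidate_prefs}   # next employer each will try
    match = {}                                      # employer -> candidate

    while unmatched:
        c = unmatched.pop()
        prefs = candidate_prefs[c]
        if next_choice[c] >= len(prefs):
            continue                                # c has exhausted their list
        e = prefs[next_choice[c]]
        next_choice[c] += 1

        if c not in rank[e]:
            unmatched.append(c)                     # employer finds c unacceptable
        elif e not in match:
            match[e] = c                            # open slot: tentatively hire
        elif rank[e][c] < rank[e][match[e]]:
            unmatched.append(match[e])              # displace the weaker match
            match[e] = c
        else:
            unmatched.append(c)                     # rejected; try next employer

    return match

if __name__ == "__main__":
    candidates = {
        "Ana":   ["UCI", "MIT", "LSE"],
        "Bo":    ["MIT", "UCI"],
        "Chidi": ["LSE", "MIT", "UCI"],
    }
    employers = {
        "UCI": ["Bo", "Ana", "Chidi"],
        "MIT": ["Ana", "Chidi", "Bo"],
        "LSE": ["Chidi", "Ana"],
    }
    print(deferred_acceptance(candidates, employers))
    # {'LSE': 'Chidi', 'MIT': 'Bo', 'UCI': 'Ana'} -- each gets their first choice here

The appeal of a mechanism like this is that the outcome is stable and driven entirely by stated preferences: no handshake, outfit, or accent enters the algorithm at any step.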

I wasn’t on the job market yet, but I will be in a few years. I wanted to see what it’s like before I have to run that gauntlet myself.

But then again, why do we need face-to-face interviews at all? What do they actually tell us?

It honestly seems like a face-to-face interview is optimized to maximize opportunities for discrimination. Do you know them personally? Nepotism opportunity. Are they male or female? Sexism opportunity. Are they in good health? Ableism opportunity. Do they seem gay, or mention a same-sex partner? Homophobia opportunity. Is their gender expression normative? Transphobia opportunity. How old are they? Ageism opportunity. Are they White? Racism opportunity. Do they have an accent? Nationalism opportunity. Do they wear fancy clothes? Classism opportunity. There are other forms of bias we don’t even have simple names for: Do they look pregnant? Do they wear a wedding band? Are they physically attractive? Are they tall?

You can construct your resume review system to exclude all of this information, by stripping out names, pictures, and personal details. But you literally can’t exclude it from a face-to-face interview, and the interview is the one hiring mechanism that suffers from this fundamental flaw inescapably.
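As a rough illustration of how little it takes to blind a resume screen, here is a toy sketch in Python. The field names are entirely hypothetical; a real applicant-tracking system would have its own schema, and genuinely de-identifying free text (publications, recommendation letters) is much harder than dropping a few fields.

# Hypothetical sketch of blind resume screening: drop identifying fields
# before reviewers ever see the file. Field names are invented for illustration.

IDENTIFYING_FIELDS = {
    "name", "photo", "date_of_birth", "gender", "nationality",
    "marital_status", "home_address",
}

def blind_application(application):
    """Return a copy of the application with identifying fields removed."""
    return {field: value for field, value in application.items()
            if field not in IDENTIFYING_FIELDS}

if __name__ == "__main__":
    raw = {
        "name": "Jane Doe",
        "photo": "jdoe.jpg",
        "gender": "F",
        "job_market_paper": "Essays on Behavioral Game Theory",
        "fields_of_interest": ["experimental", "behavioral"],
        "teaching_evaluations": 4.6,
    }
    print(blind_application(raw))
    # Only the paper title, fields of interest, and teaching evaluations remain.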

If it were really about proving your ability to do the job, they could send you a take-home exam (a lot of tech companies actually do this): Here’s a small sample project similar to what we want you to do, and a reasonable deadline in which to do it. Do it, and we’ll see if it’s good enough.

If they want to offer an opportunity for you to ask or answer specific questions, that could be done via text chat—which could be on the one hand end-to-end encrypted against eavesdropping and on the other hand leave a clear paper trail in case they try to ask you anything they shouldn’t. If they start asking about your sexual interests in the digital interview, you don’t just feel awkward and wonder if you should take the job: You have something to show in court.

Even if they’re interested in things like your social skills and presentation style, those aren’t measured well by interviews anyway. And they probably shouldn’t even be as relevant to hiring as they are.

With that in mind, maybe bringing all the PhD graduates in economics in the entire United States into one hotel for three days isn’t actually necessary. Maybe all these face-to-face interviews aren’t actually all that great, because their small potential benefits are outweighed by their enormous potential biases.

The rest of the conference is more like other academic conferences, which seems even less useful.

The conference format seems like a strange sort of formality, a ritual we go through. It’s clearly not the optimal way to present ongoing research—though perhaps it’s better than publishing papers in journals, which is our current gold standard. A whole bunch of different people give you brief, superficial presentations of their research, which may be only tangentially related to anything you’re interested in, and you barely even have time to think about it before they go on to the next one. Also, seven of these sessions are going on simultaneously, so unless you have a Time Turner, you have to choose which one to attend. And sessions are often changed at the last minute, so you may not even end up at the one you thought you were going to.

I was really struck by how little experimental work was presented. I was under the impression that experimental economics was catching on, but despite specifically trying to go to experiment-related sessions (excluding the 8:00 AM session for migraine reasons), I only counted a handful of experiments, most of them in the field rather than the lab. There was a huge amount of theory and applied econometrics. I guess this isn’t too surprising, as those are the two main kinds of research that only cost a researcher’s time. I guess in some sense this is good news for me: It means I don’t have as much competition as I thought.

Instead of gathering papers into sessions where five different people present vaguely-related papers in far too little time, we could use working papers, or better yet a more sophisticated online forum where research could be discussed in real-time before it even gets written into a paper. We could post results as soon as we get them, and instead of conducting one high-stakes anonymous peer review at the time of publication, conduct dozens of little low-stakes peer reviews as the research is ongoing. Discussants could be turned into collaborators.

The most valuable parts of conferences always seem to be the parts that aren’t official sessions: Luncheons, receptions, mixers. There you get to meet other people in the field. And this can be valuable, to be sure. But I fear that the individual gain is far larger than the social gain: Most of the real benefits of networking get dissipated by the competition to be better-connected than the other candidates. The kind of working relationships that seem to be genuinely valuable are the kind formed by working at the same school for several years, not the kind that can be forged by meeting once at a conference reception.

I guess every relationship has to start somewhere, and perhaps more collaborations have started that way than I realize. But it’s also worth asking: Should we really be putting so much weight on relationships? Is that the best way to organize an academic discipline?

“It’s not what you know, it’s who you know” is an accurate adage in many professions, but it seems like research should be where we would want it least to apply. This is supposed to be about advancing human knowledge, not making friends—and certainly not maintaining the old boys’ club.