Solving the student debt problem

Aug 24 JDN 2460912

A lot of people speak about student debt as a “crisis”, which makes it sound like the problem is urgent and will have severe consequences if we don’t soon intervene. I don’t think that’s right. While it’s miserable to be unable to pay your student loans, student loans don’t seem to be driving people to bankruptcy or homelessness the way that medical bills do.

Instead I think what we have here is a long-term problem, something that’s been building for a long time and will slowly but surely continue getting worse if we don’t change course. (I guess you can still call it a “crisis” if you want; climate change is also like this, and arguably a crisis.)

But there is a problem here: Student loan balances are rising much faster than other kinds of debt, and the burden falls hardest on Black women and on students who attended for-profit schools. A big part of the problem seems to be predatory schools that charge high prices and make big promises but deliver poor results.

Making all this worse is the fact that some of the most important income-based repayment plans were overturned by a federal court, forcing everyone who was on them into forbearance. Income-based repayment was a big reason why student loans actually weren’t as bad a burden as their high loan balances might suggest; unlike a personal loan or a mortgage, if you didn’t have enough income to repay your student loans at the full amount, you could get on a plan that would let you make smaller payments, and if you paid on that plan for long enough—even if it didn’t add up to the full balance—your loans would be forgiven.

Now the forbearance is ending for a lot of borrowers, and so they are going into default; and most of that loan forgiveness has been ruled illegal. (Supposedly this is because Congress didn’t approve it. I’ll believe that was the reason when the courts overrule Trump’s tariffs, which clearly have just as thin a legal justification and will cause far more harm to us and the rest of the world.)

In theory, student loans don’t really seem like a bad idea.

College is expensive, because it requires highly-trained professors, who demand high salaries. (The tuition money also goes other places, of course….)

College is valuable, because it provides you with knowledge and skills that can improve your life and also increase your long-term earnings. It’s a big difference: Median salary for someone with a college degree is about $60k, while median salary for someone with only a high school diploma is about $34k.

Most people don’t have enough liquidity to pay for college.

So, we provide loans, so that people can pay for college, and then when they make more money after graduating, they can pay the loans back.

That’s the theory, anyway.

The problem is that average or even median salaries obscure a lot of variation. Some college graduates become doctors, lawyers, or stockbrokers and make huge salaries. Others can’t find jobs at all. In the absence of income-based repayment plans, all students have to pay back their loans in full, regardless of their actual income after graduation.

There is inherent risk in trying to build a career. Our loan system—especially with the recent changes—puts most of this risk on the student. We treat it as their fault they can’t get a good job, and then punish them with loans they can’t afford to repay.

In fact, right now the job market is pretty bad for recent graduates—while usually unemployment for recent college grads is lower than that of the general population, since about 2018 it has actually been higher. (It’s no longer sky-high like it was during COVID; 4.8% is not bad in the scheme of things.)

Actually, the job market may be even worse than it looks, because the hiring rate is the lowest it has been since 2020. Our relatively low unemployment currently seems to reflect a lack of layoffs, not a healthy churn of people entering and leaving jobs. People seem to be locked into their jobs, and if they do leave them, finding another is quite difficult.

What I think we need is a system that makes the government take on more of the risk, instead of the students.

There are lots of ways to do this. Actually, the income-based repayment systems we used to have weren’t too bad.

But there is actually a way to do it without student loans at all. College could be free, paid for by taxes.


Now, I know what you’re thinking: Isn’t this unfair to people who didn’t go to college? Why should they have to pay?

Who said they were paying?

There could simply be a portion of the income tax that you only pay if you have a bachelor’s degree. Then you would only pay this tax if you both graduated from college and make a lot of money.
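
To make the mechanism concrete, here is a toy sketch in Python. The 20% base rate, 2% surtax, and $50,000 threshold are numbers I made up purely for illustration; the point is only that the extra tax applies to you only if you hold a degree and earn above some cutoff.

```python
# Toy illustration of a degree-conditional surtax. All rates and the threshold
# below are invented for illustration; this is not a concrete policy proposal.

def income_tax(income, has_bachelors, base_rate=0.20,
               surtax_rate=0.02, surtax_threshold=50_000):
    """Ordinary income tax, plus a small surtax paid only by degree-holders
    on income above the threshold."""
    tax = base_rate * income
    if has_bachelors and income > surtax_threshold:
        tax += surtax_rate * (income - surtax_threshold)
    return tax

print(income_tax(60_000, has_bachelors=True))    # graduate above the cutoff pays the surtax
print(income_tax(34_000, has_bachelors=False))   # non-graduate never pays it
print(income_tax(34_000, has_bachelors=True))    # a low-earning graduate doesn't either
```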

I don’t think this would create a strong incentive not to get a bachelor’s degree; the benefits of doing so remain quite large, even if your taxes were a bit higher as a result.

It might create incentives to major in subjects that aren’t as closely linked to higher earnings—liberal arts instead of engineering, medicine, law, or business. But this I see as fundamentally a public good: The world needs people with liberal arts education. If the market fails to provide for them, the government should step in.

This plan is not as progressive as Elizabeth Warren’s proposal to use wealth taxes to fund free college; but it might be more politically feasible. The argument that people who didn’t go to college shouldn’t have to pay for people who did actually seems reasonable to me; but this system would ensure that in fact they don’t.

The transfer of wealth here would be from people who went to college and make a lot of money to people who went to college and don’t make a lot of money. It would be the government bearing some of the financial risk of taking on a career in an uncertain world.

How to teach people about vaccines

May 25 JDN 2460821

Vaccines are one of the greatest accomplishments in human history. They have saved hundreds of millions of lives with minimal cost and almost no downside at all. (For everyone who suffers a side effect from a vaccine, I guarantee you: Someone else would have had it much worse from the disease if they hadn’t been vaccinated.)

It’s honestly really astonishing just how much good vaccines have done for humanity.

Thus, it’s a bit of a mystery how there are so many people who oppose vaccines.

But this mystery becomes a little less baffling in light of behavioral economics. People assess the probability of an event mainly based on the availability heuristic: How many examples can they think of when it happened?

Precisely because vaccines have been so effective at preventing disease, we have now reached a point where diseases that were once commonplace are now virtually eradicated. Thus, parents considering whether to vaccinate their children think about whether they know anyone who has gotten sick from that disease, and they can’t think of anyone, so they assume that it’s not a real danger. Then, someone comes along and convinces them (based on utter lies that have been thoroughly debunked) that vaccines cause autism, and they get scared about autism, because they can think of someone they know who has autism.

But of course, the reason that they can’t think of anyone who died from measles or pertussis is because of the vaccines. So I think we need an educational campaign that makes these rates more vivid for people, which plays into the availability heuristic instead of against it.

Here’s my proposal for a little educational game that might help:

It functions quite similarly to a classic tabletop RPG like Dungeons & Dragons, only here the target numbers are based on real figures.


Gather a group of at least 100 people. (With too few people, the expected number of cases gets small enough that you may see no examples of some diseases.)

Each person needs 3 10-sided dice. Preferably they would be different colors or somehow labeled, because we want one to represent the 100s digit, one the 10s digit, and one the 1s digit. (The numbers you can roll thus range uniformly from 0 to 999.) In TTRPG parlance, this is called a d1000.

Give each person a worksheet that looks like this:

Disease | Before vaccine: Caught? | Before vaccine: Died? | After vaccine: Caught? | After vaccine: Died?
Diphtheria | | | |
Measles | | | |
Mumps | | | |
Pertussis | | | |
Polio | | | |
Rubella | | | |
Smallpox | | | |
Tetanus | | | |
Hep A | | | |
Hep B | | | |
Pneumococcal | | | |
Varicella | | | |

In the first round, use the figures for before the vaccine. In the second round, use the figures for after the vaccine.

For each disease in each round, there will be a certain roll that people need to get in order to not contract the disease: Roll that number or higher, and you are okay; roll below it, and you catch the disease.


Likewise, there will be a certain roll they need to get to survive if they contract it: Roll that number or higher, and you get sick but survive; roll below it, and you die.

Each time, name a disease, and then tell people what they need to roll to not catch it.

Have them all roll, and if they catch it, check off that box.

Then, for everyone who catches it, have them roll again to see if they survive it. If they die, check that box.
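
If you’d like to dry-run the game on a computer before running it with a real group, here’s a minimal sketch in Python. The d1000 is built from three d10s exactly as described above; the target numbers used here are my reading of a few rows from the table below, and you can plug in whichever diseases and rounds you want to demonstrate.

```python
import random

# Minimal simulation of the dice game described above.
# TARGETS maps each disease/round to (roll to not catch, roll to survive).
TARGETS = {
    "Measles (before vaccine)":   (244, 1),
    "Measles (after vaccine)":    (0, 0),
    "Pertussis (before vaccine)": (123, 20),
    "Pertussis (after vaccine)":  (4, 2),
    "Varicella (before vaccine)": (950, 1),
    "Varicella (after vaccine)":  (164, 0),
}

def d1000():
    """Three d10s read as the 100s, 10s, and 1s digits: uniform on 0-999."""
    return 100 * random.randint(0, 9) + 10 * random.randint(0, 9) + random.randint(0, 9)

def play(group_size=100):
    for disease, (catch_target, survive_target) in TARGETS.items():
        caught = sum(1 for _ in range(group_size) if d1000() < catch_target)
        died = sum(1 for _ in range(caught) if d1000() < survive_target)
        print(f"{disease}: {caught} caught, {died} died")

play()
```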

Based on the historical incidences which I have converted into lifetime prevalences, the target numbers are as follows:

Disease | Before vaccine: Roll to not catch | Before vaccine: Roll to survive | After vaccine: Roll to not catch | After vaccine: Roll to survive
Diphtheria | 13 | 87 | 0 | 0
Measles | 244 | 1 | 0 | 0
Mumps | 66 | 0 | 2 | 0
Pertussis | 123 | 20 | 4 | 2
Polio | 20 | 89 | 0 | 0
Rubella | 19 | 11 | 9 | 0
Smallpox | 20 | 12 | 0 | 0
Tetanus | 1 | 800 | 1 | 71
Hep A | 37 | 1 | 4 | 1
Hep B | 22 | 4 | 4 | 4
Pneumococcal | 19 | 103 | 11 | 119
Varicella | 950 | 1 | 164 | 0

What you should expect to see for a group of 100 is something like this (of course the results are random, so it won’t be this exactly):

Disease | Before vaccine: Number caught | Before vaccine: Number died | After vaccine: Number caught | After vaccine: Number died
Diphtheria | 1 | 0 | 0 | 0
Measles | 24 | 0 | 0 | 0
Mumps | 7 | 0 | 0 | 0
Pertussis | 12 | 1 | 0 | 0
Polio | 2 | 0 | 0 | 0
Rubella | 2 | 0 | 0 | 0
Smallpox | 2 | 0 | 0 | 0
Tetanus | 0 | 0 | 0 | 0
Hep A | 4 | 0 | 0 | 0
Hep B | 2 | 0 | 0 | 0
Pneumococcal | 2 | 1 | 1 | 1
Varicella | 95 | 0 | 16 | 0

You’ll find that not a lot of people have checked those “dead” boxes either before or after the vaccine. So if you just look at death rates, the difference may not seem that stark.

(Of course, over a world as big as ours, it adds up: The difference between the 0.25% death rate of pertussis before the vaccine and 0% today is 20 million people—roughly the number of people who live in the New York City metro area.)

But I think people will notice that a lot more people got sick in the “before-vaccine” world than the “after-vaccine” world. Moreover, those that did get sick will find themselves rolling the dice on dying; they’ll probably be fine, but you never know for sure.

Make sure people also notice that (except for pneumococcal disease), if you do get sick, the roll you need to survive is a lot higher without the vaccine. (If anyone does get unlucky enough to get tetanus in the first round, they’re probably gonna die!)

If anyone brings up autism, you can add an extra round where you roll for that too.

The supposedly “epidemic” prevalence of autism today is… 3.2%.

(Honestly I expected higher than that, but then, I hang around with a lot of queer and neurodivergent people. (So the availability heuristic got me too!))

Thus, what’s the roll to not get autism? 32.

Even with the expansive diagnostic criteria that include a lot of borderline cases like yours truly, you still only need to roll 32 on this d1000 to not get autism.

This means that only about 3 people in your group of 100 should end up getting autism, most likely fewer than the number who were saved from getting measles, mumps, and rubella by the vaccine, comparable to the number saved from getting most of the other diseases—and almost certainly fewer than the number saved from getting varicella.

So even if someone remains absolutely convinced that vaccines cause autism, you can now point out that vaccines also clearly save billions of people from getting sick and millions from dying.

Also, there are different kinds of autism. Some forms might not even be considered a disability if society were more accommodating; others are severely debilitating.

Recently clinicians have started to categorize “profound autism”, the kind that is severely debilitating. This constitutes about 25% of children with autism—but it’s a falling percentage over time, because broader diagnostic criteria are including more people as autistic, but not changing the number who are severely debilitated. (It is controversial exactly what should constitute “profound autism”, but I do think the construct is useful; there’s a big difference between someone like me who can basically function normally with some simple accommodations, and someone who never even learns to talk.)

So you can have the group do another roll, specifically for profound autism; that target number is now only 8.

There’s also one more demonstration you can do.

Aggregating over all these diseases, we can find the overall chance of dying from any of these diseases before and after the vaccine.

Have everyone roll for that, too:

Before the vaccines, the target number is 8. Afterward, it is 1.

If autism was brought up, make that comparison explicit.

Even if 100% of autism cases were caused by vaccines (which, I really must say, is ridiculous, as there’s no credible evidence that vaccines cause autism at all), that would still mean the following:

You are trading off a 32 in 1000 chance of your child being autistic and an 8 in 1000 chance of your child being profoundly autistic, against a 7 in 1000 chance of your child dying.

If someone is still skeptical of vaccines at this point, you should ask them point-blank:

Do you really think that being autistic is one-fifth as bad as dying?

Do you really think that being profoundly autistic is as bad as dying?

What does “can” mean, anyway?

Apr 7 JDN 2460409

I don’t remember where, but I believe I once heard a “philosopher” defined as someone who asks the sort of question everyone knows the answer to, and doesn’t know the answer.

By that definition, I’m feeling very much a philosopher today.

“can” is one of the most common words in the English language; the Oxford English Corpus lists it as the 53rd most common word. Similar words are found in essentially every language, and nearly always rank among their most common.

Yet when I try to precisely define what we mean by this word, it’s surprisingly hard.

Why, you might even say I can’t.

The very concept of “capability” is surprisingly slippery—just what is someone capable of?

My goal in this post is basically to make you as confused about the concept as I am.

I think that experiencing disabilities that include executive dysfunction has made me especially aware of just how complicated the concept of ability really is. This also relates back to my previous post questioning the idea of “doing your best”.

Here are some things that “can” might mean, or even sometimes seems to mean:

1. The laws of physics do not explicitly prevent it.

This seems far too broad. By this definition, you “can” do almost anything—as long as you don’t make free energy, reduce entropy, or exceed the speed of light.

2. The task is something that other human beings have performed in the past.

This is surely a lot better; it doesn’t say that I “can” fly to Mars or turn into a tree. But by this definition, I “can” sprint as fast as Usain Bolt and swim as long as Michael Phelps—which certainly doesn’t seem right. Indeed, not only would I say I can’t do that; I’d say I couldn’t do that, no matter how hard I tried.

3. The task is something that human beings in similar physical condition to my own have performed in the past.

Okay, we’re getting warmer. But just what do we mean, “similar condition”? No one else in the world is in exactly the same condition I am.

And even if those other people are in the same physical condition, their mental condition could be radically different. Maybe they’re smarter than I am, or more creative—or maybe they just speak Swahili. It doesn’t seem right to say that I can speak Swahili. Maybe I could speak Swahili, if I spent a lot of time and effort learning it. But at present, I can’t.

4. The task is something that human beings in similar physical and mental condition to my own have performed in the past.

Better still. This seems to solve the most obvious problems. It says that I can write blog posts (check), and I can’t speak Swahili (also check).

But it’s still not specific enough. For, even if we can clearly define what constitutes “people like me” (can we?), there are many different circumstances that people like me have been in, and what they did has varied quite a bit, depending on those circumstances.

People in extreme emergencies have performed astonishing feats of strength, such as lifting cars. Maybe I could do something like that, should the circumstance arise? But it certainly doesn’t seem right to say that I can lift cars.

5. The task is something that human beings in similar physical and mental condition to my own have performed in the past, in circumstances similar to my own.

That solves the above problems (provided we can sufficiently define “similar” for both people and circumstances). But it actually raises a different problem: If the circumstances were so similar, shouldn’t their behavior and mine be the same?

By that metric, it seems like the only way to know if I can do something is to actually do it. If I haven’t actually done it—in that mental state, in those circumstances—then I can’t really say I could have done it. At that point, “can” becomes a really funny way of saying “do”.

So it seems we may have narrowed down a little too much here.

And what about the idea that I could speak Swahili, if I studied hard? That seems to be something broader; maybe it’s this:

6. The task is something that human beings who are in physical or mental condition that is attainable from my own condition have performed in the past.

But now we have to ask, what do we mean by “attainable”? We come right back to asking about capability again: What kind of effort can I make in order to learn Swahili, train as a pilot, or learn to SCUBA dive?

Maybe I could lift a car, if I had to do it to save my life or the life of a loved one. But without the adrenaline rush of such an emergency, I might be completely unable to do it, and even with that adrenaline rush, I’m sure the task would injure me severely. Thus, I don’t think it’s fair to say I can lift cars.

So how much can I lift? I have found that I can, as part of a normal workout, bench-press about 80 pounds. But I don’t think that is the limit of what I can lift; it’s more like what I can lift safely and comfortably for multiple sets of multiple reps without causing myself undue pain. For a single rep, I could probably do considerably more—though how much more is quite hard to say. 100 pounds? 120? (There are online calculators that supposedly will convert your multi-rep weight to a single-rep max, but for some reason, they don’t seem to be able to account for multiple sets. If I do 4 sets of 10 reps, is that 10 reps, or 40 reps? This is the difference between my one-rep max being 106 and it being 186. The former seems closer to the truth, but is probably still too low.)
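
For what it’s worth, those 106 and 186 figures line up with the Epley formula, which many one-rep-max calculators seem to use; here’s a quick sketch under that assumption (I don’t actually know which formula the calculators I tried were using).

```python
# One-rep-max estimate via the Epley formula: 1RM = w * (1 + reps/30).
# Assuming this is what the online calculators use, since it reproduces
# the 106 and 186 figures mentioned above.

def epley_one_rep_max(weight, reps):
    return weight * (1 + reps / 30)

print(epley_one_rep_max(80, 10))  # ~106.7: treating 4 sets of 10 as "10 reps"
print(epley_one_rep_max(80, 40))  # ~186.7: treating 4 sets of 10 as "40 reps"
```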

If I absolutely had to—say, something that heavy has fallen on me and lifting it is the only way to escape—could I bench-press my own weight of about 215 pounds? I think so. But I’m sure it would hurt like hell, and I’d probably be sore for days afterward.

Now, consider tasks that require figuring something out, something I don’t currently know but could conceivably learn or figure out. It doesn’t seem right to say that I can solve the P/NP problem or the Riemann Hypothesis. But it does seem right to say that I can at least work on those problems—I know enough about them that I can at least get started, if perhaps not make much real progress. Whereas most people, while they could theoretically read enough books about mathematics to one day know enough that they could do this, are not currently in a state where they could even begin to do that.

Here’s another question for you to ponder:

Can I write a bestselling novel?

Maybe that’s no fair. Making it a bestseller depends on all sorts of features of the market that aren’t entirely under my control. So let’s make it easier:

Can I write a novel?

I have written novels. So at first glance it seems obvious that I can write a novel.

But there are many days, especially lately, on which I procrastinate my writing and struggle to get any writing done. On such a day, can I write a novel? If someone held a gun to my head and demanded that I write the novel, could I get it done?

I honestly don’t know.

Maybe there’s some amount of pressure that would in fact compel me, even on the days of my very worst depression, to write the novel. Or maybe if you put that gun to my head, I’d just die. I don’t know.

But I do know one thing for sure: It would hurt.

Writing a novel on my worst days would require enormous effort and psychological pain—and honestly, I think it wouldn’t feel all that different from trying to lift 200 pounds.

Now we are coming to the real heart of the matter:

How much cost am I expected to pay, for it to still count as within my ability?

There are many things that I can do easily, that don’t really require much effort. But this varies too.

On most days, brushing my teeth is something I just can do—I remember to do it, I choose to do it, it happens; I don’t feel like I have exerted a great deal of effort or paid any substantial cost.

But there are days when even brushing my teeth is hard. Generally I do make it happen, so evidently I can do it—but it is no longer free and effortless the way it usually is.

There are other things which require effort, but are generally feasible, such as working out. Working out isn’t easy (essentially by design), but if I put in the effort, I can make it happen.

But again, some days are much harder than others.

Then there are things which require so much effort they feel impossible, even if they theoretically aren’t.

Right now, that’s where I’m at with trying to submit my work to journals or publishers. Each individual action is certainly something I should be physically able to take. I know the process of what to do—I’m not trying to solve the Riemann Hypothesis here. I have even done it before.

But right now, today, I don’t feel like I can do it. There may be some sense in which I “can”, but it doesn’t feel relevant.

And I felt the same way yesterday, and the day before, and pretty much every day for at least the past year.

I’m not even sure if there is an amount of pressure that could compel me to do it—e.g. if I had a gun to my head. Maybe there is. But I honestly don’t know for sure—and if it did work, once again, it would definitely hurt.

Others in the disability community have a way of describing this experience, which probably sounds strange if you haven’t heard it before:

“Do you have enough spoons?”

(For D&D fans, I’ve also heard others substitute “spell slots”.)

The idea is this: Suppose you are endowed with a certain number of spoons, which you can consume as a resource in order to achieve various tasks. The only way to replenish your spoons is rest.

Some tasks are cheap, requiring only 1 or 2 spoons. Others may be very costly, requiring 10, or 20, or perhaps even 50 or 100 spoons.

But the number of spoons you start with each morning may not always be the same. If you start with 200, then a task that requires 2 will seem trivial. But if you only started with 5, even those 2 will feel like a lot.

As you deplete your available spoons, you will find you need to ration which tasks you are able to complete; thus, on days when you wake up with fewer spoons, things that you would ordinarily do may end up not getting done.

I think submitting to a research journal is a 100-spoon task, and I simply haven’t woken up with more than 50 spoons in any given day within the last six months.

I don’t usually hear it formulated this way, but for me, I think the cost varies too.

I think that on a good day, brushing my teeth is a 0-spoon task (a “cantrip”, if you will); I could do it as many times as necessary without expending any detectable effort. But on a very bad day, it will cost me a couple of spoons just to do that. I’ll still get it done, but I’ll feel drained by it. I couldn’t keep doing it indefinitely. It will prevent me from being able to do something else, later in the day.

Writing is something that seems to vary a great deal in its spoon cost. On a really good day when I’m feeling especially inspired, I might get 5000 words written and feel like I’ve only spent 20 spoons; while on a really bad day, that same 20 spoons won’t even get me a single paragraph.
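
If it helps to see the bookkeeping spelled out, here’s a toy sketch of the spoon model. All of the task costs and daily budgets are invented; the point is just that the same task list fits easily within one budget and crowds itself out in another.

```python
# Toy sketch of the "spoons" accounting described above. Task costs and daily
# budgets are invented for illustration only.

def plan_day(budget, tasks):
    """Spend a day's spoon budget on tasks in priority order; skip what doesn't fit."""
    done, remaining = [], budget
    for name, cost in tasks:  # tasks listed highest-priority first
        if cost <= remaining:
            done.append(name)
            remaining -= cost
    return done, remaining

tasks = [("brush teeth", 2), ("work out", 10), ("write", 20), ("submit to journal", 100)]

print(plan_day(budget=200, tasks=tasks))  # a good day: everything gets done
print(plan_day(budget=5, tasks=tasks))    # a bad day: only the cheapest task happens
```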

It may occur to you to ask:

What is the actual resource being depleted here?

Just what are the spoons, anyway?

That, I really can’t say.

I don’t think it’s as simple as brain glucose, though there were a few studies that seemed to support such a view. If it were, drinking something sugary ought to fix it, and generally that doesn’t work (and if you do that too often, it’s bad for your health). Even weirder is that, for some people, just tasting sugar seems to help with self-control. My own guess is that if your particular problem is hypoglycemia, drinking sugar works, and otherwise, not so much.

There could be literally some sort of neurotransmitter reserves that get depleted, or receptors that get overloaded; but I suspect it’s not even that simple either. These are the models we use because they’re the best we have—but the brain is in reality far more complicated than any of our models.

I’ve heard people say “I ran out of serotonin today”, but I’m fairly sure they didn’t actually get their cerebrospinal fluid tested first. (And since most of your serotonin is actually in your gut, if they really ran out they should be having severe gastrointestinal symptoms.) (I had my cerebrospinal fluid tested once; most agonizing pain of my life. To say that I don’t recommend the experience is such an understatement, it’s rather like saying Hell sounds like a bad vacation spot. Indeed, if I believed in Hell, I would have to imagine it feels like getting a spinal tap every day for eternity.)

So for now, the best I can say is, I really don’t know what spoons are. And I still don’t entirely know what “can” means. But at least maybe now you’re as confused as I am.

Reflections at the crossroads

Jan 21 JDN 2460332

When this post goes live, I will have just passed my 36th birthday. (That means I’ve lived for about 1.1 billion seconds, so in order to be as rich as Elon Musk, I’d need to have made, on average, since birth, $200 per second—$720,000 per hour.)

I certainly feel a lot better turning 36 than I did 35. I don’t have any particular additional accomplishments to point to, but my life has already changed quite a bit, in just that one year: Most importantly, I quit my job at the University of Edinburgh, and I am currently in the process of moving out of the UK and back home to Michigan. (We moved the cat over Christmas, and the movers have already come and taken most of our things away; it’s really just us and our luggage now.)

But I still don’t know how to field the question that people have been asking me since I announced my decision to do this months ago:

“What’s next?”

I’m at a crossroads now, trying to determine which path to take. Actually maybe it’s more like a roundabout; it has a whole bunch of different paths, surely not just two or three. The road straight ahead is labeled “stay in academia”; the others at the roundabout are things like “freelance writing”, “software programming”, “consulting”, and “tabletop game publishing”. There’s one well-paved and superficially enticing road that I’m fairly sure I don’t want to take, labeled “corporate finance”.

Right now, I’m just kind of driving around in circles.

Most people don’t seem to quit their jobs without a clear plan for where they will go next. Often they wait until they have another offer in hand that they intend to take. But when I realized just how miserable that job was making me, I made the—perhaps bold, perhaps courageous, perhaps foolish—decision to get out as soon as I possibly could.

It’s still hard for me to fully understand why working at Edinburgh made me so miserable. Many features of an academic career are very appealing to me. I love teaching, I like doing research; I like the relatively flexible hours (and kinda need them, because of my migraines).

I often construct formal decision models to help me make big choices—generally it’s a linear model, where I simply rate each option by its relative quality in a particular dimension, then try different weightings of all the different dimensions. I’ve used this successfully to pick out cars, laptops, even universities. I’m not entrusting my decisions to an algorithm; I often find myself tweaking the parameters to try to get a particular result—but that in itself tells me what I really want, deep down. (Don’t do that in research—people do, and it’s bad—but if the goal is to make yourself happy, your gut feelings are important too.)
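
For the curious, here’s roughly what such a model looks like in code. The options, dimensions, ratings, and weights below are all invented for illustration; the real exercise is in tweaking the weights and watching how the ranking moves.

```python
# Sketch of a linear (weighted-sum) decision model. Every rating is on a 0-10
# scale where higher is always better (hence "low stress" rather than "stress").
# All numbers here are made up for illustration.

options = {
    "university teaching": {"income": 6, "low stress": 3, "flexibility": 7, "meaning": 9},
    "freelance writing":   {"income": 3, "low stress": 6, "flexibility": 9, "meaning": 9},
    "software developer":  {"income": 8, "low stress": 5, "flexibility": 6, "meaning": 5},
}

def score(ratings, weights):
    return sum(weights[dim] * value for dim, value in ratings.items())

weights = {"income": 1.0, "low stress": 1.5, "flexibility": 1.0, "meaning": 2.0}

for name, ratings in sorted(options.items(), key=lambda kv: -score(kv[1], weights)):
    print(f"{name}: {score(ratings, weights):.1f}")
```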

My decision models consistently rank university teaching quite high. It generally only gets beaten by freelance writing—which means that maybe I should give freelance writing another try after all.

And yet, my actual experience at Edinburgh was miserable.

What went wrong?

Well, first of all, I should acknowledge that when I separate out the job “university professor” into teaching and research as separate jobs in my decision model, and include all that goes into both jobs—not just the actual teaching, but the grading and administrative tasks; not just doing the research, but also trying to fund and publish it—they both drop lower on the list, and research drops down a lot.

Also, I would rate them both even lower now, having more direct experience of just how awful the exam-grading, grant-writing and journal-submitting can be.

Designing and then grading an exam was tremendously stressful: I knew that many of my students’ futures rested on how they did on exams like this (especially in the UK system, where exams are absurdly overweighted! In most of my classes, the final exam was at least 60% of the grade!). I struggled mightily to make the exam as fair as I could, all the while knowing that it would never really feel fair and I didn’t even have the time to make it the best it could be. You really can’t assess how well someone understands an entire subject in a multiple-choice exam designed to take 90 minutes. It’s impossible.

The worst part of research for me was the rejection.

I mentioned in a previous post how I am hypersensitive to rejection; applying for grants and submitting to journals produced the worst feelings of rejection I’ve felt in any job. It felt like they were evaluating not only the value of my work, but my worth as a scientist. Failure felt like being told that my entire career was a waste of time.

It was even worse than the feeling of rejection in freelance writing (which is one of the few things that my model tells me is bad about freelancing as a career for me, along with relatively low and uncertain income). I think the difference is that a book publisher is saying “We don’t think we can sell it.”—’we’ and ‘sell’ being vital. They aren’t saying “this is a bad book; it shouldn’t exist; writing it was a waste of time.”; they’re just saying “It’s not a subgenre we generally work with.” or “We don’t think it’s what the market wants right now.” or even “I personally don’t care for it.”. They acknowledge their own subjective perspective and the fact that it’s ultimately dependent on forecasting the whims of an extremely fickle marketplace. They aren’t really judging my book, and they certainly aren’t judging me.

But in research publishing, it was different. Yes, it’s all in very polite language, thoroughly spiced with sophisticated jargon (though some reviewers are more tactful than others). But when your grant application gets rejected by a funding agency or your paper gets rejected by a journal, the sense really basically is “This project is not worth doing.”; “This isn’t good science.”; “It was/would be a waste of time and money.”; “This (theory or experiment you’ve spent years working on) isn’t interesting or important.” Nobody ever came out and said those things, nor did they come out and say “You’re a bad economist and you should feel bad.”; but honestly a couple of the reviews did kinda read to me like they wanted to say that. They thought that the whole idea that human beings care about each other is fundamentally stupid and naive and not worth talking about, much less running experiments on.

It isn’t so much that I believed them that my work was bad science. I did make some mistakes along the way (but nothing vital; I’ve seen far worse errors by Nobel Laureates). I didn’t have very large samples (because every person I add to the experiment is money I have to pay, and therefore funding I have to come up with). But overall I do believe that my work is sufficiently rigorous to be worth publishing in scientific journals.

It’s more that I came to feel that my work is considered bad, that the kind of work I wanted to do would forever be an uphill battle against an implacable enemy. I already feel exhausted by that battle, and it had only barely begun. I had thought that behavioral economics was a more successful paradigm by now, that it had largely displaced the neoclassical assumptions that came before it; but I was wrong. Except specifically in journals dedicated to experimental and behavioral economics (of which prestigious journals are few—I quickly exhausted them), it really felt like a lot of the feedback I was getting amounted to, “I refuse to believe your paradigm.”.

Part of the problem, also, was that there simply aren’t that many prestigious journals, and they don’t take that many papers. The top 5 journals—which, for whatever reason, command far more respect than any other journals among economists—each accept only about 5-10% of their submissions. Surely more than that are worth publishing; and, to be fair, much of what they reject probably gets published later somewhere else. But it makes a shockingly large difference in your career how many “top 5s” you have; other publications almost don’t matter at all. So once you don’t get into any of those (which of course I didn’t), should you even bother trying to publish somewhere else?

And what else almost doesn’t matter? Your teaching. As long as you show up to class and grade your exams on time (and don’t, like, break the law or something), research universities basically don’t seem to care how good a teacher you are. That was certainly my experience at Edinburgh. (Honestly even their responses to professors sexually abusing their students are pretty unimpressive.)

Some of the other faculty cared, I could tell; there were even some attempts to build a community of colleagues to support each other in improving teaching. But the administration seemed almost actively opposed to it; they didn’t offer any funding to support the program—they wouldn’t even buy us pizza at the meetings, the sort of thing I had as an undergrad for my activist groups—and they wanted to take the time we spent in such pedagogy meetings out of our grading time (probably because if they didn’t, they’d either have to give us less grading, or some of us would be over our allotted hours and they’d owe us compensation).

And honestly, it is teaching that I consider the higher calling.

The difference between 0 people knowing something and 1 knowing it is called research; the difference between 1 person knowing it and 8 billion knowing it is called education.

Yes, of course, research is important. But if all the research suddenly stopped, our civilization would stagnate at its current level of technology, but otherwise continue unimpaired. (Frankly it might spare us the cyberpunk dystopia/AI apocalypse we seem to be hurtling rapidly toward.) Whereas if all education suddenly stopped, our civilization would slowly decline until it ultimately collapsed into the Stone Age. (Actually it might even be worse than that; even Stone Age cultures pass on knowledge to their children, just not through formal teaching. If you include all the ways parents teach their children, it may be literally true that humans cannot survive without education.)

Yet research universities seem to get all of their prestige from their research, not their teaching, and prestige is the thing they absolutely value above all else, so they devote the vast majority of their energy toward valuing and supporting research rather than teaching. In many ways, the administrators seem to see teaching as an obligation, as something they have to do in order to make money that they can spend on what they really care about, which is research.

As such, they are always making classes bigger and bigger, trying to squeeze out more tuition dollars (well, in this case, pounds) from the same number of faculty contact hours. It becomes impossible to get to know all of your students, much less give them all sufficient individual attention. At Edinburgh they even had the gall to refer to their seminars as “tutorials” when they typically had 20+ students. (That is not tutoring!) And then of course there were the lectures, which often had over 200 students.

I suppose it could be worse: It could be athletics they spend all their money on, like most Big Ten universities. (The University of Michigan actually seems to strike a pretty good balance: they are certainly not hurting for athletic funding, but they also devote sizeable chunks of their budget to research, medicine, and yes, even teaching. And unlike virtually all other varsity athletic programs, University of Michigan athletics turns a profit!)

If all the varsity athletics in the world suddenly disappeared… I’m not convinced we’d be any worse off, actually. We’d lose a source of entertainment, but it could probably be easily replaced by, say, Netflix. And universities could re-focus their efforts on academics, instead of acting like a free training and selection system for the pro leagues. The University of California, Irvine certainly seemed no worse off for its lack of varsity football. (Though I admit it felt a bit strange, even to a consummate nerd like me, to have a varsity League of Legends team.)

They keep making the experience of teaching worse and worse, even as they cut faculty salaries and make our jobs more and more precarious.

That might be what really made me most miserable, knowing how expendable I was to the university. If I hadn’t quit when I did, I would have been out after another semester anyway, and going through this same process a bit later. It wasn’t even that I was denied tenure; it was never on the table in the first place. And perhaps because they knew I wouldn’t stay anyway, they didn’t invest anything in mentoring or supporting me. Ostensibly I was supposed to be assigned a faculty mentor immediately; I know the first semester was crazy because of COVID, but after two and a half years I still didn’t have one. (I had a small research budget, which they reduced in the second year; that was about all the support I got. I used it—once.)

So if I do continue on that “academia” road, I’m going to need to do a lot of things differently. I’m not going to put up with a lot of things that I did. I’ll demand a long-term position—if not tenure-track, at least renewable indefinitely, like a lecturer position (as it is in the US, where the tenure-track position is called “assistant professor” and “lecturer” is permanent but not tenured; in the UK, “lecturers” are tenure-track—except at Oxford, and as of 2021, Cambridge—just to confuse you). Above all, I’ll only be applying to schools that actually have some track record for valuing teaching and supporting their faculty.

And if I can’t find any such positions? Then I just won’t apply at all. I’m not going in with the “I’ll take what I can get” mentality I had last time. Our household finances are stable enough that I can afford to wait awhile.

But maybe I won’t even do that. Maybe I’ll take a different path entirely.

For now, I just don’t know.

A new direction

Dec 31 JDN 2460311

CW: Spiders [it’ll make sense in context]

My time at the University of Edinburgh is officially over. For me it was a surprisingly gradual transition: Because of the holiday break, I had already turned in my laptop and ID badge over a week ago, and because of my medical leave, I hadn’t really done much actual work for quite some time. But this is still a momentous final deadline; it’s really, truly, finally over.

I now know with some certainty that leaving Edinburgh early was the right choice, and if anything I should have left sooner or never taken the job in the first place. (It seems I am like Randall Munroe after all.) But what I don’t know is where to go next.

We won’t be starving or homeless. My husband still has his freelance work, and my mother has graciously offered to let us stay in her spare room for a while. We have some savings to draw upon. Our income will be low enough that payments on my student loans will be frozen. We’ll be able to get by, even if I can’t find work for a while. But I certainly don’t want to live like that forever.

I’ve been trying to come up with ideas for new career paths, including ones I would never have considered before. Right now I am considering: Back into academia (but much choosier about what sort of school and position), into government or an international aid agency, re-training to work in software development, doing my own freelance writing (then I must decide: fiction or nonfiction? Commercial publishing, or self-published?), publishing our own tabletop games (we have one almost ready for crowdfunding, and another that I could probably finish relatively quickly), opening a game shop or escape room, or even just being a stay-at-home parent (surely the hardest to achieve financially; and while on the one hand it seems like an awful waste of a PhD, on the other hand it would really prove once and for all that I do understand the sunk cost fallacy, and therefore be a sign of my ultimate devotion to behavioral economics). The one mainstream option for an econ PhD that I’m not seriously considering is the private sector: If academia was this soul-sucking, I’m not sure I could survive corporate America.

Maybe none of these are yet the right answer. Or maybe some combination is.

What I’m really feeling right now is a deep uncertainty.

Also, fear. Fear of the unknown. Fear of failure. Fear of rejection. Almost any path I could take involves rejection—though of different kinds, and surely some more than others.

I’ve always been deeply and intensely affected by rejection. Some of it comes from formative experiences I had as a child and a teenager; some of it may simply be innate, the rejection-sensitive dysphoria that often comes with ADHD (which I now believe I have, perhaps mildly). (Come to think of it, even those formative experiences may have hit so hard because of my innate predisposition.)

But wherever it comes from, my intense fear of rejection is probably my greatest career obstacle. In today’s economy, just applying for a job—any job—requires bearing dozens of rejections. Openings get hundreds of applicants, so even being fully qualified is no guarantee of anything.

This makes it far more debilitating than most other kinds of irrational fear. I am also hematophobic, but that doesn’t really get in my way all that much; in the normal course of life, one generally tries to avoid bleeding anyway. (Now that MSM can donate blood, it does prevent me from doing that; and I do feel a little bad about that, since there have been blood shortages recently.)

But rejection phobia basically feels like this:

Imagine you are severely arachnophobic, just absolutely terrified of spiders. You are afraid to touch them, afraid to look at them, afraid to be near them, afraid to even think about them too much. (Given how common it is, you may not even have to imagine.)

Now, imagine (perhaps not too vividly, if you are genuinely arachnophobic!) that every job, every job, in every industry, regardless of what skills are required or what the work entails, requires you to first walk through a long hallway which is covered from floor to ceiling in live spiders. This is simply a condition of employment in our society: Everyone must be able to walk through the hallway full of spiders. Some jobs have longer hallways than others, some have more or less aggressive spiders, and almost none of the spiders are genuinely dangerous; but every job, everywhere, requires passing through a hallway of spiders.

That’s basically how I feel right now.

Freelance writing is the most obvious example—we could say this is an especially long hallway with especially large and aggressive spiders. To succeed as a freelance writer requires continually submitting work you have put your heart and soul into, and receiving in response curtly-worded form rejection letters over and over and over, every single time. And even once your work is successful, there will always be critics to deal with.

Yet even a more conventional job, say in academia or government, requires submitting dozens of applications and getting rejected dozens of times. Sometimes it’s also a curt form letter; other times, you make it all the way through multiple rounds of in-depth interviews and still get turned down. The latter honestly stings a lot more than the former, even though it’s in some sense a sign of your competence: they wouldn’t have taken you that far if you were unqualified; they just think they found someone better. (Did they actually? Who knows?) But investing all that effort for zero reward feels devastating.

The other extreme might be becoming a stay-at-home parent. There aren’t as many spiders in this hallway. While biological children aren’t really an option for us, foster agencies really can’t afford to be choosy. Since we don’t have any obvious major red flags, we will probably be able to adopt if we choose to—there will be bureaucratic red tape, no doubt, but not repeated rejections. But there is one very big rejection—one single, genuinely dangerous spider that lurks in a dark corner of the hallway: What if I am rejected by the child? What if they don’t want me as their parent?

Another alternative is starting a business—such as selling our own games, or opening an escape room. Even self-publishing has more of this character than traditional freelance writing. The only direct, explicit sort of rejection we’d have to worry about there is small business loans; and actually with my PhD and our good credit, we could reasonably expect to get accepted sooner or later. But there is a subtler kind of rejection: What if the market doesn’t want us? What if the sort of games or books (or escape experiences, or whatever) we have to offer just aren’t what the world seems to want? Most startup businesses fail quickly; why should ours be any different? (I wonder if I’d be able to get a small business loan on the grounds that I forecasted only a 50% chance of failing in the first year, instead of the baseline 80%. Somehow, I suspect not.)

I keep searching for a career option with no threat of rejection, and it just… doesn’t seem to exist. The best I can come up with is going off the grid and living as hermits in the woods somewhere. (This sounds pretty miserable for totally different reasons—as well as being an awful, frankly unconscionable waste of my talents.) As long as I continue to live within human society and try to contribute to the world, rejection will rear its ugly head.

Ultimately, I think my only real option is to find a way to cope with rejection—or certain forms of rejection. The hallways full of spiders aren’t going away. I have to find a way to walk through them.

Homeschooling and too much freedom

Nov 19 JDN 2460268

Allowing families to homeschool their children increases freedom, quite directly and obviously. This is a large part of the political argument in favor of homeschooling, and likely a large part of why homeschooling is so popular within the United States in particular.

In the US, about 3% of people are homeschooled. This seems like a small proportion, but it’s enough to have some cultural and political impact, and it’s considerably larger than the proportion who are homeschooled in most other countries.

Moreover, homeschooling rates greatly increased as a result of COVID, and it’s anyone’s guess when, or even whether, they will go back down. I certainly hope they do; here’s why.

A lot of criticism about homeschooling involves academic outcomes: Are the students learning enough English and math? This is largely unfounded; statistically, academic outcomes of homeschooled students don’t seem to be any worse than those of public school students; by some measures, they are actually better. Nor is there clear evidence that homeschooled kids are any less developed socially; most of them get that social development through other networks, such as churches and sports teams.

No, my concern is not that they won’t learn enough English and math. It’s that they won’t learn enough history and science. Specifically, the parts of history and science that contradict the religious beliefs of the parents who are homeschooling them.

One way to study this would be to compare test scores by homeschooled kids on, say, algebra and chemistry (which do not directly threaten Christian evangelical beliefs) to those on, say, biology and neuroscience (which absolutely, fundamentally do). Lying somewhere in between are physics (F=ma is no threat to Christianity, but the Big Bang is) and history (Christian nationalists happily teach that Thomas Jefferson wrote the Declaration of Independence, but often omit that he owned slaves). If homeschooled kids are indeed indoctrinated, we should see particular lacunas in their knowledge where the facts contradict their ideology. In any case, I wasn’t able to find any such studies.

But even if their academic outcomes are worse in certain domains, so what? What about the freedom of parents to educate their children how they choose? What about the freedom of children to not be subjected to the pain of public school?

It will come as no surprise to most of you that I did well in school. In almost everything, really: math, science, philosophy, English, and Latin were my best subjects, and I earned basically flawless grades in them. But I also did very well in creative writing, history, art, and theater, and fairly well in music. My only poor performance was in gym class (as I’ve written about before).

It may come as some surprise when I tell you that I did not particularly enjoy school. In elementary school I had few friends—and one of my closest ended up being abusive to me. Middle school I mostly enjoyed—despite the onset of my migraines. High school started out utterly miserable, though it got a little better—a little—once I transferred to Community High School. Throughout high school, I was lonely, stressed, anxious, and depressed most of the time, and had migraine headaches of one intensity or another nearly every single day. (Sadly, most of that is true now as well; but I at least had a period of college and grad school where it wasn’t, and hopefully I will again once this job is behind me.)

I was good at school. I enjoyed much of the content of school. But I did not particularly enjoy school.

Thus, I can quite well understand why it is tempting to say that kids should be allowed to be schooled at home, if that is what they and their parents want. (Of course, a problem already arises there: What if child and parent disagree? Whose choice actually matters? In practice, it’s usually the parent’s.)

On the whole, public school is a fairly toxic social environment: Cliquish, hyper-competitive, stressful, often full of conflict between genders, races, classes, sexual orientations, and of course the school-specific one, nerds versus jocks (I’d give you two guesses which team I was on, but you’re only gonna need one). Public school sucks.

Then again, many of these problems and conflicts persist into adult life—so perhaps it’s better preparation than we care to admit. Maybe it’s better to be exposed to bias and conflict so that you can learn to cope with them, rather than sheltered from them.

But there is a more important reason why we may need public school, why it may even be worth coercing parents and children into that system against their will.

Public school forces you to interact with people different from you.

At a public school, you cannot avoid being thrown in the same classroom with students of other races, classes, and religions. This is of course more true if your school system is diverse rather than segregated—and all the more reason that the persistent segregation of many of our schools is horrific—but it’s still somewhat true even in a relatively homogeneous school. I was fortunate enough to go to a public school in Ann Arbor, where there was really quite substantial diversity. But even where there is less diversity, there is still usually some diversity—if not race, then class, or religion.

Certainly any public school has more diversity than homeschooling, where parents have the power to specifically choose precisely which other families their children will interact with, and will almost always choose those of the same race, class, and—above all—religious denomination as themselves.

The result is that homeschooled children often grow up indoctrinated into a dogmatic, narrow-minded worldview, convinced that the particular beliefs they were raised in are the objectively, absolutely correct ones and all others are at best mistaken and at worst outright evil. They are trained to reject conflict and dissent, to not even expose themselves to other people’s ideas, because those are seen as dangerous—corrupting.

Moreover, for most homeschooling parents—not all, but most—this is clearly the express intent. They want to raise their children in a particular set of beliefs. They want to inoculate them against the corrupting influences of other ideas. They are not afraid of their kids being bullied in school; they are afraid of them reading books that contradict the Bible.

This article has the headline “Homeschooled children do not grow up to be more religious”, yet its core finding is exactly the opposite of that:

The Cardus Survey found that homeschooled young adults were not noticeably different in their religious lives from their peers who had attended private religious schools, though they were more religious than peers who had attended public or Catholic schools.

No more religious than private religious schools!? That’s still very religious. No, the fair comparison is to public schools, which clearly show lower rates of religiosity among the same demographics. (The interesting case is Catholic schools; they, it turns out, also churn out atheists with remarkable efficiency; I credit the Jesuit norm of top-quality liberal education.) This is clear evidence that religious homeschooling does make children more religious, and so does most private religious education.

Another finding in that same article sounds good, but is misleading:

Indiana University professor Robert Kunzman, in his careful study of six homeschooling families, found that, at least for his sample, homeschooled children tended to become more tolerant and less dogmatic than their parents as they grew up.


This is probably just regression to the mean. The parents who give their kids religious homeschooling are largely the most dogmatic and intolerant, so we would expect by sheer chance that their kids were less dogmatic and intolerant—but probably still pretty dogmatic and intolerant. (Also, do I have to point out that n=6 barely even constitutes a study!?) This is like the fact that the sons of NBA players are usually shorter than their fathers—but still quite tall.

Homeschooling is directly linked to a lot of terrible things: Young-Earth Creationism, Christian nationalism, homophobia, and shockingly widespread child abuse.

While most right-wing families don’t homeschool, most homeschooling families are right-wing: Between 60% and 70% of homeschooling families vote Republican in most elections. More left-wing voters are homeschooling now with the recent COVID-driven surge in homeschooling, but the right-wing still retains a strong majority for now.

Of course, there are a growing number of left-wing and non-religious families who use homeschooling. Does this mean that the threat of indoctrination is gone? I don’t think so. I once knew someone who was homeschooled by a left-wing non-religious family and still ended up adopting an extremely narrow-minded extremist worldview—simply a left-wing non-religious one. In some sense a left-wing non-religious narrow-minded extremism is better than a right-wing religious narrow-minded extremism, but it’s still narrow-minded extremism. Whatever such a worldview gets right is mainly by the Stopped Clock Principle. It still misses many important nuances, and is still closed to new ideas and new evidence.

Of course this is not a necessary feature of homeschooling. One absolutely could homeschool children into a worldview that is open-minded and tolerant. Indeed, I’m sure some parents do. But statistics suggest that most do not, and this makes sense: When parents want to indoctrinate their children into narrow-minded worldviews, homeschooling allows them to do that far more effectively than if they had sent their children to public school. Whereas if you want to teach your kids open-mindedness and tolerance, exposing them to a diverse environment makes that easier, not harder.

In other words, the problem is that homeschooling gives parents too much control; in a very real sense, this is too much freedom.

When can freedom be too much? It seems absurd at first. But there are at least two cases where it makes sense to say that someone has too much freedom.

The first is paternalism: Sometimes people really don’t know what’s best for them, and giving them more freedom will just allow them to hurt themselves. This notion is easily abused—it has been abused many times, for example against disabled people and colonized populations. For that reason, we are right to be very skeptical of it when applied to adults of sound mind. But what about children? That’s who we are talking about after all. Surely it’s not absurd to suggest that children don’t always know what’s best for them.

The second is the paradox of tolerance: The freedom to take away other people’s freedom is not a freedom we can afford to protect. And homeschooling that indoctrinates children into narrow-minded worldviews is a threat to other people’s freedom—not only those who will be oppressed by a new generation of extremists, but also the children themselves who are never granted the chance to find their own way.

Both reasons apply in this case: paternalism for the children, the paradox of tolerance for the parents. We have a civic responsibility to ensure that children grow up in a rich and diverse environment, so that they learn open-mindedness and tolerance. This is important enough that we should be willing to impose constraints on freedom in order to achieve it. Democracy cannot survive a citizenry who are molded from birth into narrow-minded extremists. There are parents who want to mold their children that way—and we cannot afford to let them.

From where I’m sitting, that means we need to ban homeschooling, or at least very strictly regulate it.

Knowing When to Quit

Sep 10 JDN 2460198

At the time of writing this post, I have officially submitted my letter of resignation at the University of Edinburgh. I’m giving them an entire semester of notice, so I won’t actually be leaving until December. But I have committed to my decision now, and that feels momentous.

Since my position here was temporary to begin with, I’m actually only leaving a semester early. Part of me wanted to try to stick it out, continue for that one last semester and leave on better terms. Until I sent that letter, I had that option. Now I don’t, and I feel a strange mix of emotions: Relief that I have finally made the decision, regret that it came to this, doubt about what comes next, and—above all—profound ambivalence.

Maybe it’s the very act of quitting—giving up, being a quitter—that feels bad. Even knowing that I need to get out of here, it hurts to have to be the one to quit.

Our society prizes grit and perseverance. Since I was a child I have been taught that these are virtues. And to some extent, they are; there certainly is such a thing as giving up too quickly.

But there is also such a thing as not knowing when to quit. Sometimes things really aren’t going according to plan, and you need to quit before you waste even more time and effort. And I think I am like Randall Munroe in this regard; I am more inclined to stay when I shouldn’t than to quit when I shouldn’t.

Sometimes quitting isn’t even as permanent as it is made out to be. In many cases, you can go back later and try again when you are better prepared.

In my case, I am unlikely to ever work at the University of Edinburgh again, but I haven’t yet given up on ever having a career in academia. Then again, I am by no means as certain as I once was that academia is the right path for me. I will definitely be searching for other options.

There is a reason we are so enthusiastically sold on the virtue of perseverance. Part of how our society sells the false narrative of meritocracy is by claiming that people who succeed did so because they tried harder or kept on trying.

This is not entirely false; all other things equal, you are more likely to succeed if you keep on trying. But in some ways that just makes it more seductive and insidious.

For the real reason most people hit home runs in life is that they were born on third base. The vast majority of success in life is determined by circumstances entirely outside individual control.


Even having the resources to keep trying is not guaranteed for everyone. I remember a great post on social media pointing out that entrepreneurship is like one of those carnival games:

Entrepreneurship is like one of those carnival games where you throw darts or something.

Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about ‘meritocracy’ and the salutary effects of hard work.

Poor kids aren’t visiting the carnival. They’re the ones working it.

The odds of succeeding on any given attempt are slim—but you can always pay for more tries. A middle-class person can afford to try once; mostly those attempts will fail, but a few will succeed and then go on to talk about how their brilliant talent and hard work made the difference. A rich person can try as many times as they like, and when they finally succeed, they can credit their success to perseverance and a willingness to take risks. But the truth is, they didn’t have any exceptional reserves of grit or courage; they just had exceptional reserves of money.

In my case, I was not depleting money (if anything, I’m probably losing out financially by leaving early, though that very much depends on how the job market goes for me): It was something far more valuable. I was whittling away at my own mental health, depleting my energy, draining my motivation. The resource I was exhausting was my very soul.

I still have trouble articulating why it has been so painful for me to work here. It’s so hard to point to anything in particular.

The most obvious downsides were things I knew at the start: The position is temporary, the pay is mediocre, and I had to move across the Atlantic and live thousands of miles from home. And I had already heard plenty about the publish-or-perish system of research publication.

Other things seem like minor annoyances: They never did give me a good office (I have to share it with too many people, and there isn’t enough space, so in fact I rarely use it at all). They were supposed to assign me a faculty mentor and never did. They kept rearranging my class schedule and not telling me things until immediately beforehand.

I think what it really comes down to is I didn’t realize how much it would hurt. I knew that I was moving across the Atlantic—but I didn’t know how isolated and misunderstood I would feel when I did. I knew that publish-or-perish was a problem—but I didn’t know how agonizing it would be for me in particular. I knew I probably wouldn’t get very good mentorship from the other faculty—but I didn’t realize just how bad it would be, or how desperately I would need that support I didn’t get.

I either underestimated the severity of these problems, or overestimated my own resilience. I thought I knew what I was going into, and I thought I could take it. But I was wrong. I couldn’t take it. It was tearing me apart. My only answer was to leave.

So, leave I shall. I have now committed to doing so.

I don’t know what comes next. I don’t even know if I’ve made the right choice. Perhaps I’ll never truly know. But I made the choice, and now I have to live with it.

Why we need critical thinking

Jul 9 JDN 2460135

I can’t find it at the moment, but a while ago I read a surprisingly compelling post on social media (I think it was Facebook, but it could also have been Reddit) questioning the common notion that we should be teaching more critical thinking in school.

I strongly believe that we should in fact be teaching more critical thinking in school—actually I think we should replace large chunks of the current math curriculum with a combination of statistics, economics and critical thinking—but it made me realize that we haven’t done enough to defend why that is something worth doing. It’s just become a sort of automatic talking point, like, “obviously you would want more critical thinking, why are you even asking?”

So here’s a brief attempt to explain why critical thinking is something that every citizen ought to be good at, and hence why it’s worthwhile to teach it in primary and secondary school.

Critical thinking, above all, allows you to detect lies. It teaches you to look past the surface of what other people are saying and determine whether what they are saying is actually true.

And our world is absolutely full of lies.

We are constantly lied to by advertising. We are constantly lied to by spam emails and scam calls. Day in and day out, people with big smiles promise us the world, if only we will send them five easy payments of $19.99.

We are constantly lied to by politicians. We are constantly lied to by religious leaders (it’s pretty much their whole job actually).

We are often lied to by newspapers—sometimes directly and explicitly, as in fake news, but more often in subtler ways. Most news articles in the mainstream press are true in the explicit facts they state, but are missing important context; and nearly all of them focus on the wrong things—exciting, sensational, rare events rather than what’s actually important and likely to affect your life. If newspapers were an accurate reflection of genuine risk, they’d have more articles on suicide than homicide, and something like one million articles on climate change for every one on some freak accident (like that submarine full of billionaires).

We are even lied to by press releases on science, which likewise focus on new, exciting, sensational findings rather than supported, established, documented knowledge. And don’t tell me everyone already knows it; just stating basic facts about almost any scientific field will shock and impress most of the audience, because they clearly didn’t learn this stuff in school (or, what amounts to the same thing, don’t remember it). This isn’t just true of quantum physics; it’s even true of economics—which directly affects people’s lives.

Critical thinking is how you can tell when a politician has distorted the views of his opponent and you need to spend more time listening to that opponent speak. Critical thinking could probably have saved us from electing Donald Trump President.

Critical thinking is how you tell that a supplement which “has not been evaluated by the FDA” (which is to say, nearly all of them) probably contains something mostly harmless that maybe would benefit you if you were deficient in it, but for most people really won’t matter—and definitely isn’t something you can substitute for medical treatment.

Critical thinking is how you recognize that much of the history you were taught as a child was a sanitized, simplified, nationalist version of what actually happened. But it’s also how you recognize that simply inverting it all and becoming the sort of anti-nationalist who hates your own country is at least as ridiculous. Thomas Jefferson was both a pioneer of democracy and a slaveholder. He was both a hero and a villain. The world is complicated and messy—and nothing will let you see that faster than critical thinking.


Critical thinking tells you that whenever a new “financial innovation” appears—like mortgage-backed securities or cryptocurrency—it will probably make obscene amounts of money for a handful of insiders, but will otherwise be worthless if not disastrous to everyone else. (And maybe if enough people had good critical thinking skills, we could stop the next “innovation” from getting so far!)

More widespread critical thinking could even improve our job market, as interviewers would no longer be taken in by the candidates who are best at overselling themselves, and would instead pay more attention to the more-qualified candidates who are quiet and honest.

In short, critical thinking constitutes a large portion of what is ordinarily called common sense or wisdom; some of that simply comes from life experience, but a great deal of it is actually a learnable skill set.

Of course, even if it can be learned, that still raises the question of how it can be taught. I don’t think we have a sound curriculum for teaching critical thinking, and in my more cynical moments I wonder if many of the powers that be like it that way. Knowing that many—not all, but many—politicians make their careers primarily from deceiving the public, it’s not so hard to see why those same politicians wouldn’t want to support teaching critical thinking in public schools. And it’s almost funny to me watching evangelical Christians try to justify why critical thinking is dangerous—they come so close to admitting that their entire worldview is totally unfounded in logic or evidence.

But at least I hope I’ve convinced you that it is something worthwhile to know, and that the world would be better off if we could teach it to more people.

Statisticacy

Jun 11 JDN 2460107

I wasn’t able to find a dictionary that includes the word “statisticacy”, but it doesn’t trigger my spell-check, and it does seem to have the same form as “numeracy”: numeric, numerical, numeracy, numerate; statistic, statistical, statisticacy, statisticate. It definitely still sounds very odd to my ears. Perhaps repetition will eventually make it familiar.

For the concept is clearly a very important one. Literacy and numeracy are no longer a serious problem in the First World; basically every adult at this point knows how to read and do addition. Even worldwide, 90% of men and 83% of women can read, at least at a basic level—which is an astonishing feat of our civilization by the way, well worthy of celebration.

But I have noticed a disturbing lack of, well, statisticacy. Even intelligent, educated people seem… pretty bad at understanding statistics.

I’m not talking about sophisticated econometrics here; of course most people don’t know that, and don’t need to. (Most economists don’t know that!) I mean quite basic statistical knowledge.

A few years ago I wrote a post called “Statistics you should have been taught in high school, but probably weren’t”; that’s the kind of stuff I’m talking about.

As part of being a good citizen in a modern society, every adult should understand the following:

1. The difference between a mean and a median, and why average income (mean) can increase even though most people are no richer (median).

2. The difference between increasing by X% and increasing by X percentage points: If inflation goes from 4% to 5%, that is an increase of 25% ((5/4-1)*100%), but only 1 percentage point (5%-4%). (There is a short worked sketch of this, along with items 4 and 5, right after this list.)

3. The meaning of standard error, and how to interpret error bars on a graph—and why it’s a huge red flag if there aren’t any error bars on a graph.

4. Basic probabilistic reasoning: Given some scratch paper, a pen, and a calculator, everyone should be able to work out the odds of drawing a given blackjack hand, or rolling a particular number on a pair of dice. (If that’s too easy, make it a poker hand and four dice. But mostly that’s just more calculation effort, not fundamentally different.)

5. The meaning of exponential growth rates, and how they apply to economic growth and compound interest. (The difference between 3% interest and 6% interest over 30 years is more than double the total amount paid.)
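
Here is a rough Python sketch of items 2, 4, and 5, as promised. The specific numbers (the inflation figures, the fresh 52-card deck, the 30-year horizon) are just illustrations, not data about anything in particular.

# Item 2: percent change vs. percentage points.
old_rate, new_rate = 0.04, 0.05
percent_change = (new_rate / old_rate - 1) * 100   # about 25 (a 25% increase)
point_change = (new_rate - old_rate) * 100         # about 1 (one percentage point)

# Item 4: chance of a natural blackjack (an ace plus a ten-value card) off the top
# of a fresh 52-card deck, and chance of rolling a 7 with two dice.
p_blackjack = 2 * (4 / 52) * (16 / 51)             # about 0.048, roughly 1 in 21
p_seven = 6 / 36                                   # exactly 1 in 6

# Item 5: a balance compounding at 6% vs. 3% for 30 years.
ratio = 1.06 ** 30 / 1.03 ** 30                    # about 2.37: more than double

print(round(percent_change), round(point_change, 1))
print(round(p_blackjack, 3), round(p_seven, 3))
print(round(ratio, 2))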

I see people making errors about this sort of thing all the time.

Economic news that celebrates rising GDP but wonders why people aren’t happier (when real median income has been falling since 2019 and is only 7% higher than it was in 1999, an annual growth rate of only about 0.3%).

Reports on inflation, interest rates, or poll numbers that don’t clearly specify whether they are dealing with percentages or percentage points. (XKCD made fun of this.)

Speaking of poll numbers, any reporting on poll changes that aren’t at least twice the margin of error of the polls in question. (There’s also a comic for this; this time it’s PhD Comics.)

People misunderstanding interest rates and gravely underestimating how much they’ll pay for their debt (then again, this is probably the result of strategic choices on the part of banks—so maybe the real failure is regulatory).

And, perhaps worst of all, the plague of science news articles about “New study says X”. Things causing and/or curing cancer, things correlated with personality types, tiny psychological nudges that supposedly have profound effects on behavior.

Some of these things will even turn out to be true; actually I think this one on fibromyalgia, this one on smoking, and this one on body image are probably accurate. But even if it’s a properly randomized experiment—and especially if it’s just a regression analysis—a single study ultimately tells us very little, and it’s irresponsible to report on individual studies instead of telling people about the extensive body of established scientific knowledge that most people still aren’t aware of.

Basically any time an article is published saying “New study says X”, a statisticate person should ignore it and treat it as random noise. This is especially true if the finding seems weird or shocking; such findings are far more likely to be random flukes than genuine discoveries. Yes, they could be true, but one study just doesn’t move the needle that much.

I don’t remember where it came from, but there is a saying about this: “What is in the textbooks is 90% true. What is in the published literature is 50% true. What is in the press releases is 90% false.” These figures are approximately correct.

If their goal is to advance public knowledge of science, science journalists would accomplish a lot more if they just opened to a random page in a mainstream science textbook and started reading it on air. Admittedly, I can see how that would be less interesting to watch; but then, their job should be to find a way to make it interesting, not to take individual studies out of context and hype them up far beyond what they deserve. (Bill Nye did this much better than most science journalists.)

I’m not sure how much to blame people for lacking this knowledge. On the one hand, they could easily look it up on Wikipedia, and apparently choose not to. On the other hand, they probably don’t even realize how important it is, and were never properly taught it in school even though they should have been. Many of these things may even be unknown unknowns; people simply don’t realize how poorly they understand. Maybe the most useful thing we could do right now is simply point out to people that these things are important, and if they don’t understand them, they should get on that Wikipedia binge as soon as possible.

And one last thing: Maybe this is asking too much, but I think that a truly statisticate person should be able to solve the Monty Hall Problem and not be confused by the result. (Hint: It’s very important that Monty Hall knows which door the car is behind, and would never open that one. If he’s guessing at random and simply happens to pick a goat, the correct answer is 1/2, not 2/3. Then again, it’s never a bad choice to switch.)
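
If you don’t believe that, a quick simulation makes it easy to check. Here is a minimal Python sketch (the function name and trial count are mine); it estimates the probability of winning by switching, both when Monty knows where the car is and when he is guessing and simply happens to reveal a goat.

import random

def win_rate_if_switching(monty_knows, trials=200_000):
    wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if monty_knows:
            # Monty deliberately opens a door that hides a goat and isn't your pick.
            opened = next(d for d in range(3) if d != pick and d != car)
        else:
            # Monty opens one of the other doors at random; if he reveals the car,
            # the game doesn't count (we condition on his having shown a goat).
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue
        valid += 1
        switched_to = next(d for d in range(3) if d != pick and d != opened)
        wins += (switched_to == car)
    return wins / valid

print(win_rate_if_switching(True))   # about 0.667: switching wins 2/3 of the time
print(win_rate_if_switching(False))  # about 0.5: now it's a coin flip either way

The only design choice that matters here is the conditioning: in the “Monty guesses” version, games where he accidentally reveals the car are discarded, which is exactly what “simply happens to pick a goat” means.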

The mental health crisis in academia

Apr 30 JDN 2460065

Why are so many academics anxious and depressed?

Depression and anxiety are much more prevalent among both students and faculty than they are in the general population. Unsurprisingly, women seem to have it a bit worse than men, and trans people have it worst of all.

Is this the result of systemic failings of the academic system? Before deciding that, one thing we should consider is that very smart people do seem to have a higher risk of depression.

There is a complex relationship between genes linked to depression and genes linked to intelligence, and some evidence that people of especially high IQ are more prone to depression; nearly 27% of Mensa members report mood disorders, compared to 10% of the general population.

(Incidentally, the stereotype of the weird, sickly nerd has a kernel of truth: the correlations between intelligence and autism, ADHD, allergies, and autoimmune disorders are absolutely real—and not at all well understood. It may be a general pattern of neural hyper-activation, not unlike what I posit in my stochastic overload model. The stereotypical nerd wears glasses, and, yes, indeed, myopia is also correlated with intelligence—and this seems to be mostly driven by genetics.)

Most of these figures are at least a few years old. If anything, things are only worse now, as COVID triggered a surge in depression for just about everyone, academics included. It remains to be seen how much of this large increase will abate as things gradually return to normal, and how much will continue to have long-term effects—this may depend in part on how well we manage to genuinely restore a normal way of life and how well we can deal with long COVID.

If we assume that academics are a similar population to Mensa members (admittedly a strong assumption), then this could potentially explain why 26% of academic faculty are depressed—but not why nearly 40% of junior faculty are. At the very least, we junior faculty are about 50% more likely to be depressed than would be explained by our intelligence alone. And grad students have it even worse: Nearly 40% of graduate students report anxiety or depression, and nearly 50% of PhD students meet the criteria for depression. At the very least this sounds like a dual effect of being both high in intelligence and low in status—it’s those of us who have very little power or job security in academia who are the most depressed.

This suggests that, yes, there really is something wrong with academia. It may not be entirely the fault of the system—perhaps even a well-designed academic system would result in more depression than the general population because we are genetically predisposed. But it really does seem like there is a substantial environmental contribution that academic institutions bear some responsibility for.

I think the most obvious explanation is constant evaluation: From the time we are students at least up until we (maybe, hopefully, someday) get tenure, academics are constantly being evaluated on our performance. We know that this sort of evaluation contributes to anxiety and depression.

Don’t other jobs evaluate performance? Sure. But not constantly the way that academia does. This is especially obvious as a student, where everything you do is graded; but it largely continues once you are faculty as well.

For most jobs, you are concerned about doing well enough to keep your job or maybe get a raise. But academia has this continuous forward pressure: if you are a grad student or junior faculty, you can’t possibly keep your job; you must either move upward to the next stage or drop out. And academia has become so hyper-competitive that if you want to continue moving upward—and someday getting that tenure—you must publish in top-ranked journals, which have utterly opaque criteria and ever-declining acceptance rates. And since there are so few jobs available compared to the number of applicants, good enough is never good enough; you must be exceptional, or you will fail. Two thirds of PhD graduates seek a career in academia—but only 30% are actually in one three years later. (And honestly, three years is pretty short; there are plenty of cracks left to fall through between that and a genuinely stable tenured faculty position.)

Moreover, our skills are so hyper-specialized that it’s very hard to imagine finding work anywhere else. This grants academic institutions tremendous monopsony power over us, letting them get away with lower pay and worse working conditions. Even with an economics PhD—relatively transferable, all things considered—I find myself wondering who would actually want to hire me outside this ivory tower, and my feeble attempts at actually seeking out such employment have thus far met with no success.

I also find academia painfully isolating. I’m not an especially extraverted person; I tend to score somewhere near the middle range of extraversion (sometimes called an “ambivert”). But I still find myself craving more meaningful contact with my colleagues. We all seem to work in complete isolation from one another, even when sharing the same office (which is awkward for other reasons). There are very few consistent gatherings or good common spaces. And whenever faculty do try to arrange some sort of purely social event, it always seems to involve drinking at a pub and nobody is interested in providing any serious emotional or professional support.

Some of this may be particular to this university, or to the UK; or perhaps it has more to do with being at a certain stage of my career. In any case I didn’t feel nearly so isolated in graduate school; I had other students in my cohort and adjacent cohorts who were going through the same things. But I’ve been here two years now and so far have been unable to establish any similarly supportive relationships with colleagues.

There may be some opportunities I’m not taking advantage of: I’ve skipped a lot of research seminars, and I stopped going to those pub gatherings. But it wasn’t that I didn’t try them at all; it was that I tried them a few times and quickly found that they were not filling that need. At seminars, people only talked about the particular research project being presented. At the pub, people talked about almost nothing of serious significance—and certainly nothing requiring emotional vulnerability. The closest I think I got to this kind of support from colleagues was a series of lunch meetings designed to improve instruction in “tutorials” (the UK term for what Americans would call discussion sections); there, at least, we could commiserate about feeling overworked and dealing with administrative bureaucracy.

There seem to be deep, structural problems with how academia is run. This whole process of universities outsourcing their hiring decisions to the capricious whims of high-ranked journals basically decides the entire course of our careers. And once you get to the point I have, now so disheartened with the process of publishing research that I can’t even engage with it, it’s not at all clear how it’s even possible to recover. I see no way forward, no one to turn to. No one seems to care how well I teach, if I’m not publishing research.

And I’m clearly not the only one who feels this way.