The role of innate activation in stochastic overload

Mar 26 JDN 2460030

Two posts ago I introduced my stochastic overload model, which offers an explanation for the Yerkes-Dodson effect by positing that additional stress increases sympathetic activation, which is useful up until the point where it starts risking an overload that forces systems to shut down and rest.

The central equation of the model is actually quite simple, expressed either as an expectation or as an integral:

Y = E[x + s | x + s < 1] P[x + s < 1]

Y = \int_{0}^{1-s} (x+s) dF(x)

The amount of output produced is the expected value of innate activation plus stress activation, times the probability that there is no overload. Increased stress raises this expectation value (the incentive effect), but also increases the probability of overload (the overload effect).

The model relies upon assuming that the brain starts with some innate level of activation that is partially random. Exactly what sort of Yerkes-Dodson curve you get from this model depends very much on what distribution this innate activation takes.

I’ve so far solved it for three types of distribution.

The simplest is a uniform distribution, where within a certain range, any level of activation is equally probable. The probability density function looks like this:

Assume the distribution has support between a and b, where a < b.

When b+s < 1, then overload is impossible, and only the incentive effect occurs; productivity increases linearly with stress.

The expected output is simply the expected value of a uniform distribution from a+s to b+s, which is:

E[x + s] = (a+b)/2+s

Then, once b+s > 1, overload risk begins to increase.

In this range, the probability of avoiding overload is:

P[x + s < 1] = F(1-s) = (1-s-a)/(b-a)

(Note that at b+s=1, this is exactly 1.)

The expected value of x+s in this range follows from noting that, conditional on avoiding overload, x is uniform on [a, 1-s]:

E[x + s | x + s < 1] = (a + 1 - s)/2 + s = (1 + a + s)/2

Multiplying these two together:

Y = [(1 + a + s)(1 - a - s)]/[2(b-a)]

Here is what that looks like for a=0, b=1/2 (that is, Y = 1/4 + s up to s = 1/2, then Y = 1 - s^2 beyond it):

It does have the right qualitative features: increasing, then decreasing. But it sure looks weird, doesn’t it? It has this strange kinked shape.
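If you want to check this yourself, here’s a minimal Python sketch of the uniform case (the code and variable names are my own illustration; a=0, b=1/2 are the values used above):

```python
import numpy as np
import matplotlib.pyplot as plt

a, b = 0.0, 0.5  # support of the uniform innate-activation distribution

def expected_output(s):
    """Expected output Y(s) for innate activation x ~ Uniform[a, b]."""
    if b + s <= 1:   # overload impossible: pure incentive effect
        return (a + b) / 2 + s
    if a + s >= 1:   # overload certain: nothing gets produced
        return 0.0
    # overload possible: Y = (1 + a + s)(1 - a - s) / (2(b - a))
    return (1 + a + s) * (1 - a - s) / (2 * (b - a))

s_grid = np.linspace(0, 1, 400)
plt.plot(s_grid, [expected_output(s) for s in s_grid])
plt.xlabel("stress s")
plt.ylabel("expected output Y")
plt.show()
```

The kink shows up right at s = 1 - b, the point where the overload effect switches on.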

So let’s consider some other distributions.

The next one I was able to solve it for is an exponential distribution, where the most probable activation is zero, and then higher activation always has lower probability than lower activation in an exponential decay:

For this it was actually easiest to do the integral directly (I did it by integrating by parts, but I’m sure you don’t care about all the mathematical steps):

Y = \int_{0}^{1-s} (x+s) dF(x)

Y = (1/λ + s) – (1/λ + 1)e^(-λ(1-s))

The parameter λ determines how steeply your activation probability decays. Someone with low λ is relatively highly activated all the time, while someone with high λ is usually not highly activated; this seems like it might be related to the personality trait neuroticism.

Here are graphs of what the resulting Yerkes-Dodson curve looks like for several different values of λ:

λ = 0.5:

λ = 1:

λ = 2:

λ = 4:

λ = 8:

The λ = 0.5 person has high activation a lot of the time. They are actually fairly productive even without stress, but stress quickly overwhelms them. The λ = 8 person has low activation most of the time. They are not very productive without stress, but can also bear relatively high amounts of stress without overloading.

(The low-λ people also have overall lower peak productivity in this model, but that might not be true in reality, if λ is inversely correlated with some other attributes that are related to productivity.)
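Here is the same kind of sketch for the exponential case, plotting the closed form above for those five values of λ (again, just my own illustrative code):

```python
import numpy as np
import matplotlib.pyplot as plt

def expected_output(s, lam):
    """Y(s) = (1/lam + s) - (1/lam + 1) exp(-lam (1 - s)), for 0 <= s <= 1."""
    return (1 / lam + s) - (1 / lam + 1) * np.exp(-lam * (1 - s))

s_grid = np.linspace(0, 1, 400)
for lam in [0.5, 1, 2, 4, 8]:
    plt.plot(s_grid, expected_output(s_grid, lam), label=f"λ = {lam}")
plt.xlabel("stress s")
plt.ylabel("expected output Y")
plt.legend()
plt.show()
```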

Neither uniform nor exponential has the nice bell-curve shape for innate activation we might have hoped for. There is another class of distributions, beta distributions, which do have this shape, and they are sort of tractable—you need something called an incomplete beta function, which isn’t an elementary function but it’s useful enough that most statistical packages include it.

Beta distributions have two parameters, α and β. They look like this:

Beta distributions are quite useful in Bayesian statistics; if you’re trying to estimate the probability of a random event that either succeeds or fails with a fixed probability (a Bernoulli process), and so far you have observed a successes and b failures, your best guess of its probability at each trial is a beta distribution with α = a+1 and β = b+1.

For beta distributions with parameters α and β, the result comes out to (here B(z; a, b) is the incomplete beta function I mentioned earlier, and B(α, β) is the complete beta function):

Y = [B(1-s; α+1, β) + s B(1-s; α, β)] / B(α, β)

For whole-number values of α and β, the incomplete beta function can be computed by hand (though it is more work the larger they are); here’s an example with α = β = 2.

The innate activation probability looks like this:

And the result comes out like this:

Y = 2(1-s)^3 – 3/2(1-s)^4 + 3s(1-s)^2 – 2s(1-s)^3

This person has pretty high innate activation most of the time, so stress very quickly overwhelms them. If I had chosen a much higher β, I could change that, making them less likely to be innately so activated.
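If you’d rather let a computer handle it, here’s a sketch using SciPy’s regularized incomplete beta function, scipy.special.betainc (it returns B(z; a, b)/B(a, b), so the formula above can be rewritten as Y = (α/(α+β)) I_{1-s}(α+1, β) + s I_{1-s}(α, β)); the code is my own illustration:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

def expected_output(s, alpha, beta):
    """Y(s) for innate activation x ~ Beta(alpha, beta)."""
    c = 1 - s
    return (alpha / (alpha + beta)) * betainc(alpha + 1, beta, c) + s * betainc(alpha, beta, c)

s_grid = np.linspace(0, 1, 400)
plt.plot(s_grid, expected_output(s_grid, 2, 2), label="α = β = 2")
plt.xlabel("stress s")
plt.ylabel("expected output Y")
plt.legend()
plt.show()

# Sanity check against the hand-computed polynomial for α = β = 2:
s, c = 0.3, 0.7
poly = 2*c**3 - 1.5*c**4 + 3*s*c**2 - 2*s*c**3
assert abs(expected_output(s, 2, 2) - poly) < 1e-12
```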

These are the cases I’ve found to be relatively tractable so far. They all have the right qualitative pattern: Increasing stress increases productivity for a while, then begins decreasing it once overload risk becomes too high. They also show a general pattern where people who are innately highly activated (neurotic?) are much more likely to overload and thus much more sensitive to stress.

What happens when a bank fails

Mar 19 JDN 2460023

As of March 9, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.

This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:

1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.

2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its assets were largely protected by the FDIC, and thus its bankruptcy didn’t cause contagion that spread to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, so a whopping 89% of its deposits were not protected by FDIC insurance.

You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.

The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately could potentially drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.

In fact, while this one is exceptionally large, bank failures are not really all that uncommon. There have been nearly 100 failures of banks with assets over $1 billion in the US alone just since the 1970s. The FDIC exists to handle bank failures, and generally does the job well.

Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.

The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.

Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.

As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.

In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.

The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.


Indeed, SVB is the exception that proves the rule, as they failed largely because their deposits were mainly not FDIC-insured.

Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.

In most economies this commercial bank money is a far larger quantity than the central bank money actually printed by the central bank—often nearly 10 to 1. This ratio is called the money multiplier.

Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.
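If you want to see where that inverse relationship comes from, here’s a toy sketch (my own illustration, with an assumed initial deposit of $1,000): each bank keeps the required reserve and loans out the rest, the loan gets redeposited at the next bank, and the resulting geometric series converges to 1/r times the original deposit.

```python
def total_money(initial_deposit, reserve_ratio, rounds=1000):
    """Sum of deposits created as each loan is redeposited down the chain."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the part loaned out and redeposited
    return total

print(total_money(1000, 0.10))  # ~10,000: a multiplier of 10
print(total_money(1000, 0.20))  # ~5,000: a multiplier of 5
```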

Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.

Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.

But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?

David Friedman of the Cato Institute had some especially harsh words on this, but honestly I find them hard to disagree with:

Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.

Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, creating a new money supply that was larger may have been a reasonable solution. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.

The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.

It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)

There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
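Here’s a toy version of that transition path (illustrative numbers only, and assuming the multiplier sits at its theoretical maximum of 1/r): as the required ratio rises, the central bank prints base money to keep the total fixed, and commercial bank money shrinks toward zero.

```python
M_TOTAL = 1000  # total money supply to preserve (arbitrary units)

for r in [0.10, 0.25, 0.50, 0.75, 1.00]:
    base = M_TOTAL * r  # base money needed, since M_total = base / r
    print(f"reserve ratio {r:4.0%}: base money {base:6.0f}, "
          f"commercial bank money {M_TOTAL - base:6.0f}")
```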

Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).

We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.

The stochastic overload model

Mar 12 JDN 2460016

The next few posts are going to be a bit different, a bit more advanced and technical than usual. This is because, for the first time in several months at least, I am actually working on what could be reasonably considered something like theoretical research.

I am writing it up in the form of blog posts, because actually writing a paper is still too stressful for me right now. This also forces me to articulate my ideas in a clearer and more readable way, rather than dive directly into a morass of equations. It also means that even if I never actually get around to finishing a paper, the idea is out there, and maybe someone else could make use of it (and hopefully give me some of the credit).

I’ve written previously about the Yerkes-Dodson effect: On cognitively-demanding tasks, increased stress increases performance, but only to a point, after which it begins decreasing it again. The effect is well-documented, but the mechanism is poorly understood.

I am currently on the wrong side of the Yerkes-Dodson curve, which is why I’m too stressed to write this as a formal paper right now. But that also gave me some ideas about how it may work.

I have come up with a simple but powerful mathematical model that may provide a mechanism for the Yerkes-Dodson effect.

This model is clearly well within the realm of a behavioral economic model, but it is also closely tied to neuroscience and cognitive science.

I call it the stochastic overload model.

First, a metaphor: Consider an engine, which can run faster or slower. If you increase its RPMs, it will output more power, and provide more torque—but only up to a certain point. Eventually it hits a threshold where it will break down, or even break apart. In real engines, we often include safety systems that force the engine to shut down as it approaches such a threshold.

I believe that human brains function on a similar principle. Stress increases arousal, which activates a variety of processes via the sympathetic nervous system. This activation improves performance on both physical and cognitive tasks. But it has a downside: especially on cognitively demanding tasks that require sustained effort, I hypothesize that too much sympathetic activation can result in a kind of system overload, where your brain can no longer handle the stress and processes are forced to shut down.

This shutdown could be brief—a few seconds, or even a fraction of a second—or it could be prolonged—hours or days. That might depend on just how severe the stress is, or how much of your brain it requires, or how prolonged it is. For purposes of the model, this isn’t vital. It’s probably easiest to imagine it being a relatively brief, localized shutdown of a particular neural pathway. Then, your performance in a task is summed up over many such pathways over a longer period of time, and by the law of large numbers your overall performance is essentially the average performance of all your brain systems.

That’s the “overload” part of the model. Now for the “stochastic” part.

Let’s say that, in the absence of stress, your brain has a certain innate level of sympathetic activation, which varies over time in an essentially chaotic, unpredictable—stochastic—sort of way. It is never really completely deactivated, and may even have some chance of randomly overloading itself even without outside input. (Actually, a potential role in the model for the personality trait neuroticism is an innate tendency toward higher levels of sympathetic activation in the absence of outside stress.)

Let’s say that this innate activation is x, which follows some kind of known random distribution F(x).

For simplicity, let’s also say that added stress s adds linearly to your level of sympathetic activation, so your overall level of activation is x + s.

For simplicity, let’s say that activation ranges between 0 and 1, where 0 is no activation at all and 1 is the maximum possible activation and triggers overload.

I’m assuming that if a pathway shuts down from overload, it doesn’t contribute at all to performance on the task. (You can assume it’s only reduced performance, but this adds complexity without any qualitative change.)

Since sympathetic activation improves performance, but can result in overload, your overall expected performance in a given task can be computed as the product of two terms:

[expected value of x + s, provided overload does not occur] * [probability overload does not occur]

E[x + s | x + s < 1] P[x + s < 1]

The first term can be thought of as the incentive effect: Higher stress promotes more activation and thus better performance.

The second term can be thought of as the overload effect: Higher stress also increases the risk that activation will exceed the threshold and force shutdown.

This equation actually turns out to have a remarkably elegant form as an integral (and here’s where I get especially technical and mathematical):

\int_{0}^{1-s} (x+s) dF(x)

The integral subsumes both the incentive effect and the overload effect into one term; you can also think of the +s in the integrand as the incentive effect and the 1-s in the limit of integration as the overload effect.

For the uninitiated, this is probably just Greek. So let me show you some pictures to help with your intuition. These are all freehand sketches, so let me apologize in advance for my limited drawing skills. Think of this as like Arthur Laffer’s famous cocktail napkin.

Suppose that, in the absence of outside stress, your innate activation follows a distribution like this (this could be a normal or logit PDF; as I’ll talk about next week, logit is far more tractable):

As I start adding stress, this shifts the distribution upward, toward increased activation:

Initially, this will improve average performance.

But at some point, increased stress actually becomes harmful, as it increases the probability of overload.

And eventually, the probability of overload becomes so high that performance becomes worse than it was with no stress at all:

The result is that overall performance, as a function of stress, looks like an inverted U-shaped curve—the Yerkes-Dodson curve:

The precise shape of this curve depends on the distribution that we use for the innate activation, which I will save for next week’s post.
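If you’d like to play with the model before then, here’s a minimal Monte Carlo sketch (the Beta(2, 4) distribution is just an assumption on my part, chosen to give a bell-ish shape on [0, 1]):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

def performance(s, draws=100_000):
    """Each pathway produces x + s, unless x + s >= 1, in which case it
    overloads and produces nothing; average over many pathways."""
    x = rng.beta(2, 4, size=draws)  # assumed innate-activation distribution
    activation = x + s
    return np.where(activation < 1, activation, 0.0).mean()

s_grid = np.linspace(0, 1, 51)
plt.plot(s_grid, [performance(s) for s in s_grid])
plt.xlabel("stress s")
plt.ylabel("expected performance")
plt.show()
```

The result is the inverted U, just as in the sketches above.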

Mental accounting and “free shipping”

Mar 5 JDN 2460009

Suppose you are considering buying a small item, such as a hardcover book or a piece of cookware. If you buy it from one seller, the price is $50, but shipping costs $20; if you buy it from another, it costs $70 but you’ll get free shipping. Which one do you buy from?

If you are being rational, you won’t care in the slightest. But most people don’t seem to behave that way. The idea of paying $20 to ship a $50 item just feels wrong somehow, and so most people will tend to prefer the seller with free shipping—even though the total amount they spend is the same.

Sellers know this, and take advantage of it. Indeed, it is the only plausible reason they would ever offer free shipping in the first place.

Free shipping, after all, is not actually free. Someone still gets paid to perform that delivery. And while the seller is the one making the payment, they will no doubt raise the price they charge you as a customer in order to make up the difference—it would be very foolish of them not to. So ultimately, everything turns out the same as if you had paid for shipping.

But it still feels different, doesn’t it? This is because of a series of heuristics most people use for their financial decisions known as mental accounting.

There are a lot of different heuristics that go into mental accounting, but the one that is most relevant here is mental budgeting: We divide our spending into different budgetary categories, and try not to go over budget in any particular category.

While the item you’re buying may in fact be worth more than $70 to you, you probably didn’t mentally budget $20 for shipping. So even if the total impact on your finances is the same, you register the higher shipping price as “over budget” in one of your mental categories, and it feels like you are spending more than if you had simply paid $70 for the item and gotten free shipping, even though you are actually paying exactly the same amount.

Another reason this works so well may be that people don’t really have a clear idea what the price of items is at different sellers. So you see “$70, free shipping” and you assume that it previously had a price of $70 and they are generously offering you shipping for free.

But if you ever find yourself assuming that a corporation is being generous—you are making a cognitive error. Corporations are, by design, as selfish as possible. They are never generous. There is always something in it for them.

In the best-case scenario, what serves the company will also serve other people, as when they donate to good causes for tax deductions and better PR (or when they simply provide good products at low prices). But no corporation is going to intentionally sacrifice its own interests to benefit anyone else. They exist to maximize profits for their shareholders. That is what they do. That is what they always do. Keep that in mind, and you won’t be disappointed by them.

They might offer you a lower price, or other perks, in order to keep you as a customer; but they will do so very carefully, only enough to keep you from shopping elsewhere. And if they are able to come down on the price while still making a profit, that really just goes to show they had too much market power to begin with.

Free shipping, at least, is relatively harmless. It’s slightly manipulative, but a higher price plus free shipping really does ultimately amount to the same thing as a lower price plus paid shipping. The worst I can say about it is that it may cause people to buy things they otherwise wouldn’t have; but they must have still felt that the sticker price was worth it, so it can’t really be so bad.

Another, more sinister way that corporations use mental accounting to manipulate customers is through the use of credit cards.

It’s well-documented that people are willing to spend more on credit cards than they would in cash. In most cases, this does not appear to be the result of people actually being constrained by their liquidity—even when people have the cash, they are more willing to buy the same item with a credit card.

This effect is called pain of paying. It hurts more, psychologically, to hand over a series of dollar bills than it does to swipe (or lately, just tap) a credit card. It’s not just about convenience; by making it less painful to pay, companies can pressure us to spend more.

And since credit cards add to an existing balance, there is what’s called transaction decoupling: The money we spent on any particular item gets mentally separated from the actual transaction in which we bought that item. We may not even remember how much we paid. We just see a credit card balance go up; and it may end up being quite a large balance, but any particular transaction usually won’t have raised it very much.

Human beings tend to perceive stimuli proportionally: We don’t really feel the effect of $5 per se, we feel the effect of a 20% increase. So that $5 feels like a lot more when it’s coming out of a wallet that held $20 than it does when it’s adding to a $200 credit card balance.

This is also why I say expensive cheap things, cheap expensive things; you should care more about the same proportional difference when it’s on a higher base price.

Optimization is unstable. Maybe that’s why we satisfice.

Feb 26 JDN 2460002

Imagine you have become stranded on a deserted island. You need to find shelter, food, and water, and then perhaps you can start working on a way to get help or escape the island.

Suppose you are programmed to be an optimizer, seeking the absolute best solution to any problem. At first this may seem to be a boon: You’ll build the best shelter, find the best food, get the best water, find the best way off the island.

But you’ll also expend an enormous amount of effort trying to make it the best. You could spend hours just trying to decide what the best possible shelter would be. You could pass up dozens of viable food sources because you aren’t sure that any of them are the best. And you’ll never get any rest because you’re constantly trying to improve everything.

In principle your optimization could include that: The cost of thinking too hard or searching too long could be one of the things you are optimizing over. But in practice, this sort of bounded optimization is often remarkably intractable.

And what if you forgot about something? You were so busy optimizing your shelter you forgot to treat your wounds. You were so busy seeking out the perfect food source that you didn’t realize you’d been bitten by a venomous snake.

This is not the way to survive. You don’t want to be an optimizer.

No, the person who survives is a satisficer: they make sure that what they have is good enough and then they move on to the next thing. Their shelter is lopsided and ugly. Their food is tasteless and bland. Their water is hard. But they have them.

Once they have shelter and food and water, they will have time and energy to do other things. They will notice the snakebite. They will treat the wound. Once all their needs are met, they will get enough rest.

Empirically, humans are satisficers. We seem to be happier because of it—in fact, the people who are the happiest satisfice the most. And really this shouldn’t be so surprising, because our ancestral environment wasn’t so different from being stranded on a desert island.

Good enough is perfect. Perfect is bad.
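To make the contrast concrete, here’s a toy simulation (all the numbers—search cost, threshold, number of options—are made up for illustration): the optimizer inspects every option and pays for every inspection; the satisficer stops at the first option that’s good enough.

```python
import random

random.seed(1)
options = [random.uniform(0, 1) for _ in range(1000)]  # quality of each option
SEARCH_COST = 0.001  # assumed cost of evaluating one option
THRESHOLD = 0.8      # assumed "good enough" quality

# Optimizer: evaluate everything, take the best, pay for every inspection.
optimizer_payoff = max(options) - SEARCH_COST * len(options)

# Satisficer: take the first option that clears the threshold.
# (With 1,000 uniform draws, one essentially always will.)
for i, quality in enumerate(options, start=1):
    if quality >= THRESHOLD:
        satisficer_payoff = quality - SEARCH_COST * i
        break

print(f"optimizer net payoff:  {optimizer_payoff:.3f}")
print(f"satisficer net payoff: {satisficer_payoff:.3f}")
```

The optimizer finds a marginally better shelter and burns the whole day doing it; the satisficer settles quickly and comes out ahead.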

Let’s consider another example. Suppose that you have created a powerful artificial intelligence, an AGI with the capacity to surpass human reasoning. (It hasn’t happened yet—but it probably will someday, and maybe sooner than most people think.)

What do you want that AI’s goals to be?

Okay, ideally maybe they would be something like “Maximize goodness”, where we actually somehow include all the panoply of different factors that go into goodness, like beneficence, harm, fairness, justice, kindness, honesty, and autonomy. Do you have any idea how to do that? Do you even know what your own full moral framework looks like at that level of detail?

Far more likely, the goals you program into the AGI will be much simpler than that. You’ll have something you want it to accomplish, and you’ll tell it to do that well.

Let’s make this concrete and say that you own a paperclip company. You want to make more profits by selling paperclips.

First of all, let me note that this is not an unreasonable thing for you to want. It is not an inherently evil goal for one to have. The world needs paperclips, and it’s perfectly reasonable for you to want to make a profit selling them.

But it’s also not a true ultimate goal: There are a lot of other things that matter in life besides profits and paperclips. Anyone who isn’t a complete psychopath will realize that.

But the AI won’t. Not unless you tell it to. And so if we tell it to optimize, we would need to actually include in its optimization all of the things we genuinely care about—not missing a single one—or else whatever choices it makes are probably not going to be the ones we want. Oops, we forgot to say we need clean air, and now we’re all suffocating. Oops, we forgot to say that puppies don’t like to be melted down into plastic.

The simplest cases to consider are obviously horrific: Tell it to maximize the number of paperclips produced, and it starts tearing the world apart to convert everything to paperclips. (This is the original “paperclipper” concept from Less Wrong.) Tell it to maximize the amount of money you make, and it seizes control of all the world’s central banks and starts printing $9 quintillion for itself. (Why that amount? I’m assuming it uses 64-bit signed integers, and 2^63 is over 9 quintillion. If it uses long ints, we’re even more doomed.) No, inflation-adjusting won’t fix that; even hyperinflation typically still results in more real seigniorage for the central banks doing the printing (which is, you know, why they do it). The AI won’t ever be able to own more than all the world’s real GDP—but it will be able to own that if it prints enough and we can’t stop it.

But even if we try to come up with some more sophisticated optimization for it to perform (what I’m really talking about here is specifying its utility function), it becomes vital for us to include everything we genuinely care about: Anything we forget to include will be treated as a resource to be consumed in the service of maximizing everything else.

Consider instead what would happen if we programmed the AI to satisfice. The goal would be something like, “Produce at least 400,000 paperclips at a price of at most $0.002 per paperclip.”

Given such an instruction, in all likelihood, it would in fact produce exactly 400,000 paperclips at a price of exactly $0.002 per paperclip. And maybe that’s not strictly the best outcome for your company. But if it’s better than what you were previously doing, it will still increase your profits.

Moreover, such an instruction is far less likely to result in the end of the world.

If the AI has a particular target to meet for its production quota and price limit, the first thing it would probably try is to use your existing machinery. If that’s not good enough, it might start trying to modify the machinery, or acquire new machines, or develop its own techniques for making paperclips. But there are quite strict limits on how creative it is likely to be—because there are quite strict limits on how creative it needs to be. If you were previously producing 200,000 paperclips at $0.004 per paperclip, all it needs to do is double production and halve the cost. That’s a very standard sort of industrial innovation—in computing hardware (admittedly an extreme case), we do this sort of thing every couple of years.

It certainly won’t tear the world apart making paperclips—at most it’ll tear apart enough of the world to make 400,000 paperclips, which is a pretty small chunk of the world, because paperclips aren’t that big. A paperclip weighs about a gram, so you’ve only destroyed about 400 kilos of stuff. (You might even survive the lawsuits!)

Are you leaving money on the table relative to the optimization scenario? Eh, maybe. One, it’s a small price to pay for not ending the world. But two, if 400,000 at $0.002 was too easy, next time try 600,000 at $0.001. Over time, you can gently increase its quotas and tighten its price requirements until your company becomes more and more successful—all without risking the AI going completely rogue and doing something insane and destructive.

Of course this is no guarantee of safety—and I absolutely want us to use every safeguard we possibly can when it comes to advanced AGI. But the simple change from optimizing to satisficing seems to solve the most severe problems immediately and reliably, at very little cost.

Good enough is perfect; perfect is bad.

I see broader implications here for behavioral economics. When all of our models are based on optimization, but human beings overwhelmingly seem to satisfice, maybe it’s time to stop assuming that the models are right and the humans are wrong.

Optimization is perfect if it works—and awful if it doesn’t. Satisficing is always pretty good. Optimization is unstable, while satisficing is robust.

In the real world, that probably means that satisficing is better.

Good enough is perfect; perfect is bad.

Where is the money going in academia?

Feb 19 JDN 2459995

A quandary for you:

My salary is £41,000.

Annual tuition for a full-time full-fee student in my department is £23,000.

I teach roughly the equivalent of one full-time course (about 1/2 of one and 1/4 of two others; this is typically counted as “teaching 3 courses”, but if I used that figure, it would underestimate the number of faculty needed).

Each student takes about 5 or 6 courses at a time.

Why do I have 200 students?

If you multiply this out, the 200 students I teach, divided by the 6 instructors they have at one time, times the £23,000 they are paying… I should be bringing in over £760,000 for the university. Why am I paid only 5% of that?

Granted, there are other costs a university must bear aside from paying instructors. There are facilities, and administration, and services. And most of my students are not full-fee paying; that £23,000 figure really only applies to international students.

Students from Scotland pay only £1,820, but there aren’t very many of them, and public funding is supposed to make up that difference. Even students from the rest of the UK pay £9,250. And surely the average tuition paid has got to be close to that? Yet if we multiply that out, £9,000 times 200 divided by 6, we’re still looking at £300,000. So I’m still getting only 14%.
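For those who want to check the arithmetic, here’s the whole back-of-the-envelope calculation in a few lines of Python (the function is mine; the figures are the ones quoted above):

```python
def salary_share(salary, tuition, students=200, courses_per_student=6):
    """Tuition revenue attributable to one instructor's teaching,
    and the fraction of it paid out as that instructor's salary."""
    revenue = tuition * students / courses_per_student
    return revenue, salary / revenue

for tuition in (23_000, 9_000):  # international fee; rounded home fee
    revenue, share = salary_share(41_000, tuition)
    print(f"tuition £{tuition:,}: revenue £{revenue:,.0f}, salary share {share:.0%}")
```

That prints roughly £767,000 and 5% for the international fee, £300,000 and 14% for the home fee.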

Where is the rest going?

This isn’t specific to my university by any means. It seems to be a global phenomenon. The best data on this seems to be from the US.

According to salary.com, the median salary for an adjunct professor in the US is about $63,000. This actually sounds high, given what I’ve heard from other entry-level faculty. But okay, let’s take that as our figure. (My pay is below this average, though how much depends upon the strength of the pound against the dollar. Currently the pound is weak, so quite a bit.)

Yet average tuition for out-of-state students at public college is $23,000 per year.

This means that an adjunct professor in the US with 200 students takes in $760,000 but receives $63,000. Where does that other $700,000 go?

If you think that it’s just a matter of paying for buildings, service staff, and other costs of running a university, consider this: It wasn’t always this way.

Since 1970, inflation-adjusted salaries for US academic faculty at public universities have risen a paltry 3.1%. In other words, basically not at all.

This is considerably slower than the growth of real median household income, which has risen almost 40% in that same time.

Over the same interval, nominal tuition has risen by over 2000%; adjusted for inflation, this is a still-staggering increase of 250%.

In other words, over the last 50 years, college has gotten three times as expensive, but faculty are still paid basically the same. Where is all this extra money going?

Part of the explanation is that public funding for colleges has fallen over time, and higher tuition partly makes up the difference. But private school tuition has risen just as fast, and their faculty salaries haven’t kept up either.

In their annual budget report, the University of Edinburgh proudly declares that their income increased by 9% last year. Let me assure you, my salary did not. (In fact, inflation-adjusted, my salary went down.) And their EBITDA—earnings before interest, taxes, depreciation, and amortization—was £168 million. Of that, £92 million was lost to interest and depreciation, but they don’t pay taxes at all, so their real net income was about £76 million. In the report, they include price changes of their endowment and pension funds to try to make this number look smaller, ending up with only £37 million, but that’s basically fiction; these are just stock market price drops, and they will bounce back.

Using similar financial alchemy, they’ve been trying to cut our pensions lately, because they say they “are too expensive” (because the stock market went down—never mind that it’ll bounce back in a year or two). Fortunately, the unions are fighting this pretty hard. I wish they’d also fight harder to make them put people like me on the tenure track.

Had that £76 million been distributed evenly among all 5,000 of us faculty, we’d each get an extra £15,200.

Well, then, that solves part of the mystery in perhaps the most obvious, corrupt way possible: They’re literally just hoarding it.

And Edinburgh is far from the worst offender here. No, that would be Harvard, who are sitting on over $50 billion in assets. Since they have 21,000 students, that is over $2 million per student. With even a moderate return on its endowment, Harvard wouldn’t need to charge tuition at all.

But even then, raising my salary to £56,000 wouldn’t explain why I need to teach 200 students. Even that is still only 19% of the £300,000 those students are bringing in. But hey, then at least the primary service those students are here for might actually account for one-fifth of what they’re paying!

Now let’s consider administrators. Median salary for a university administrator in the US is about $138,000—twice what adjunct professors make.


Since 1970, that same time interval when faculty salaries were rising a pitiful 3% and tuition was rising a staggering 250%, how much did chancellors’ salaries increase? Over 60%.

Of course, the number of administrators is not fixed. You might imagine that with technology allowing us to automate a lot of administrative tasks, the number of administrators could be reduced over time. If that’s what you thought happened, you would be very, very wrong. The number of university administrators in the US has more than doubled since the 1980s. This is far faster growth than the number of students—and quite frankly, why should the number of administrators even grow with the number of students? There is a clear economy of scale here, yet it doesn’t seem to matter.

Combine those two facts: 60% higher pay times twice as many administrators means that universities now spend at least 3 times as much on administration as they did 50 years ago. (Why, that’s just about the proportional increase in tuition! Coincidence? I think not.)

Edinburgh isn’t even so bad in this regard. They have 6,000 administrative staff versus 5,000 faculty. If that already sounds crazy—more admins than instructors?—consider that the University of Michigan has 7,000 faculty but 19,000 administrators.

Michigan is hardly exceptional in this regard: Illinois UC has 2,500 faculty but nearly 8,000 administrators, while Ohio State has 7,300 faculty and 27,000 administrators. UCLA is even worse, with only 4,000 faculty but 26,000 administrators—a ratio of 6 to 1. It’s not the UC system in general, though: My (other?) alma mater of UC Irvine somehow supports 5,600 faculty with only 6,400 administrators. Yes, that’s right; compared to UCLA, UCI has 40% more faculty but 76% fewer administrators. (As far as students? UCLA has 47,000 while UCI has 36,000.)

At last, I think we’ve solved the mystery! Where is all the money in academia going? Administrators.

They keep hiring more and more of them, and paying them higher and higher salaries. Meanwhile, they stop hiring tenure-track faculty and replace them with adjuncts that they can get away with paying less. And then, whatever they manage to save that way, they just squirrel away into the endowment.

A common right-wing talking point is that more institutions should be “run like a business”. Well, universities seem to have taken that to heart. Overpay your managers, underpay your actual workers, and pocket the savings.

The role of police in society

Feb 12 JDN 2459988

What do the police do? Not in theory, in practice. Not what are they supposed to do—what do they actually do?

Ask someone right-wing and they’ll say something like “uphold the law”. Ask someone left-wing and they’ll say something like “protect the interests of the rich”. Both of these are clearly inaccurate. They don’t fit the pattern of how the police actually behave.

What is that pattern? Well, let’s consider some examples.

If you rob a bank, the police will definitely arrest you. That would be consistent with either upholding the law or protecting the interests of the rich, so it’s not a very useful example.

If you run a business with unsafe, illegal working conditions, and someone tells the police about it, the police will basically ignore it and do nothing. At best they might forward it to some regulatory agency who might at some point get around to issuing a fine.

If you strike against your unsafe working conditions and someone calls the police to break up your picket line, they’ll immediately come in force and break up your picket line.

So that definitively refutes the “uphold the law” theory; by ignoring OSHA violations and breaking up legal strikes, the police are actively making it harder to enforce the law. It seems to fit the “protect the interests of the rich” theory. Let’s try some other examples.

If you run a fraudulent business that cons people out of millions of dollars, the police might arrest you, eventually, if they ever actually bother to get around to investigating the fraud. That certainly doesn’t look like upholding the law—but you can get very rich and they’ll still arrest you, as Bernie Madoff discovered. So being rich doesn’t grant absolute immunity from the police.

If your negligence in managing the safety systems of your factory or oil rig kills a dozen people, the police will do absolutely nothing. Some regulatory agency may eventually get around to issuing you a fine. That also looks like protecting the interests of the rich. So far the left-wing theory is holding up.

If you are homeless and camping out on city property, the police will often come to remove you. Sometimes there’s a law against such camping, but there isn’t always; and even when there is, the level of force used often seems wildly disproportionate to the infraction. This also seems to support the left-wing account.

But now suppose you go out and murder several homeless people. That is, if anything, advancing the interests of the rich; it’s certainly not harming them. Yet the police would in fact investigate. It might be low on their priorities, especially if they have a lot of other homicides; but they would, in fact, investigate it and ultimately arrest you. That doesn’t look like advancing the interests of the rich. It looks a lot more like upholding the law, in fact.

Or suppose you are the CEO of a fraudulent company that is about to be revealed and thus collapse, and instead of accepting the outcome or absconding to the Caribbean (as any sane rich psychopath would), you decide to take some SEC officials hostage and demand that they certify your business as legitimate. Are the police going to take that lying down? No. They’re going to consider you a terrorist, and go in guns blazing. So they don’t just protect the interests of the rich after all; that also looks a lot like they’re upholding the law.

I didn’t even express this as the left-wing view earlier, because I’m trying to steelman the argument; but there are also those on the left who would say that the primary function of the police is to uphold White supremacy. I’d be a fool to deny that there are a lot of White supremacist cops; but notice that in the above scenarios I didn’t even specify the race of the people involved, and didn’t have to. The cops are no more likely to arrest a fraudulent banker because he’s Black, and no more likely to let a hostage-taker go free because he’s White. (They might be less likely to shoot the White hostage-taker—maybe, the data on that actually isn’t as clear-cut as people think—but they’d definitely still arrest him.) While racism is a widespread problem in the police, it doesn’t dictate their behavior all the time—and it certainly isn’t their core function.

What does categorically explain how the police react in all these scenarios?

The police uphold order.

Not law. Order. They don’t actually much seem to care whether what you’re doing is illegal or harmful or even deadly. They care whether it violates civil order.

This is how we can explain the fact that police would investigate murders, but ignore oil rig disasters—even if the latter causes more deaths. The former is a violation of civil order, the latter is not.

It also explains why they would be so willing to tear apart homeless camps and break up protests and strikes. Those are actually often legal, or at worst involve minor infractions; but they’re also disruptive and disorderly.

The police seem to see their core mission as keeping the peace. It could be an unequal, unjust peace full of illegal policies that cause grievous harm and death—but what matters to them is that it’s peace. They will stomp out any violence they see with even greater violence of their own. They have a monopoly on the use of force, and they intend to defend it.

I think that realizing this can help us take a nuanced view of the police. They aren’t monsters or tools of oppression. But they also aren’t brave heroes who uphold the law and keep us safe. They are instruments of civil order.

We do need civil order; there are a lot of very important things in society that simply can’t function if civil order collapses. In places where civil order does fall apart, life becomes entirely about survival; the security that civil order provides is necessary not only for economic activity, but also for much of what gives our lives value.

But nor is civil order all that matters. And sometimes injustice truly does become so grave that it’s worth sacrificing some order in order to redress it. Strikes and protests genuinely are disruptive; society couldn’t function if they were happening everywhere all the time. But sometimes we need to disrupt the way things are going in order to get people to clearly see the injustice around them and do something about it.

I hope that this more realistic, nuanced assessment of the role police play in society may help to pull people away from both harmful political extremes. We can’t simply abolish the police; we need some system for maintaining civil order, and whatever system we have is probably going to end up looking a lot like police. (#ScandinaviaIsBetter, truly, but there are still cops in Norway.) But we also can’t afford to lionize the police or ignore their failures and excesses. When they fight to maintain civil order at the expense of social justice, they become part of the problem.

The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. In their place, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

Home price targeting

Jan 29 JDN 2459973

One of the largest divides in opinion between economists and the general population concerns the question of rent control. While the general public mostly supports rent control (and often votes for it in referenda), economists almost universally oppose it. It’s hard to get a consensus among economists on almost anything, and yet here we have one; but people don’t seem to care.

Why? I think it’s because high rents are a genuine and serious problem, which economists have invested remarkably little effort in trying to solve. Housing prices are one of the chief drivers of long-term inflation, and with most people spending over a third of their income on housing, even relatively small increases in housing prices can cause a lot of suffering.

One thing we do know is that rent control does not work as a long-term solution. Maybe in response to some short-term shock it would make sense. Maybe you do it for a while as you wait for better long-term solutions to take effect. But simply putting an arbitrary cap on prices will create shortages in the long run—and it is not a coincidence that cities with strict rent control have the worst housing shortages and the greatest rates of homelessness. Rent control doesn’t even do a good job of helping the people who need it most.

Price ceilings in general are just… not a good idea. If people are selling something at a price that you think is too high and you just insist that they aren’t allowed to, they don’t generally sell at a lower price—they just don’t sell at all. There are a few exceptions; in a very monopolistic market, a well-targeted price ceiling might actually work. And short-run housing supply is inelastic enough that rent control isn’t the worst kind of price ceiling. But as a general strategy, price ceilings just aren’t an effective way of making things cheaper.

This is why we so rarely use them as a policy intervention. When the Federal Reserve wants to achieve a certain interest rate on bonds, do they simply demand that people buy the bonds at that price? No. They adjust the supply of bonds in the market until the market price goes to what they want it to be.

Prices aren’t set in a vacuum by the fiat of evil corporations. They are an equilibrium outcome of a market system. There are things you can do to intervene and shift that equilibrium, but if you just outlaw certain prices, it will result in a new equilibrium—it won’t simply be the same amount sold at the new price you wanted.

Maybe some graphs would help explain this. In each graph, the red line is the demand curve and the blue line is the supply curve.

Here is what the market looks like before intervention: The price is $6. We’ll say that’s too high; people can’t afford it.

[no_intervention.png]

Now suppose we impose a price ceiling at $4 (the green line). You aren’t allowed to charge more than $4. What will happen? Companies will charge $4. But they will also produce and sell a smaller quantity than before.

Far better would be to increase the supply of the good, shifting to a new supply curve (the purple line). Then you would reduce the price and increase the amount of the good available.

[supply_intervention.png]
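If you’d rather see this in numbers than in pictures, here’s a minimal Python sketch of the same comparison. The linear supply and demand curves (and the dollar figures) are made up to roughly match the graphs; nothing here is calibrated to a real market.

def demand(p):
    # Quantity demanded falls as the price rises.
    return 10 - p

def supply(p, shift=0.0):
    # Quantity supplied rises with the price; 'shift' models added capacity.
    return p - 2 + shift

def equilibrium(shift=0.0):
    # Solve demand(p) = supply(p, shift):
    # 10 - p = p - 2 + shift  =>  p = (12 - shift) / 2
    p = (12 - shift) / 2
    return p, demand(p)

print(equilibrium())               # (6.0, 4.0): the $6 market from the first graph

# Price ceiling at $4: trades only happen up to the shorter side of the
# market, which at $4 is supply.
print(min(demand(4), supply(4)))   # 2.0 -- cheaper, but less gets sold

# Supply intervention: shift the supply curve outward instead.
print(equilibrium(shift=4))        # (4.0, 6.0) -- cheaper AND more gets sold

Both interventions hit the $4 price; the difference is that the ceiling gets there by shrinking the quantity sold, while the supply shift gets there by growing it.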

This is precisely what we do with government bonds when we want to raise interest rates. (A greater supply of bonds makes their prices lower, which makes their yields higher.) And when we want to lower interest rates, we do the opposite.
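To make that price-yield relation concrete, take a hypothetical one-year bond that pays $100 at maturity (the numbers here are mine, purely for illustration). Bought at $95, it yields 100/95 – 1 ≈ 5.3%; if extra supply pushes its price down to $90, it yields 100/90 – 1 ≈ 11.1%. Same bond, lower price, higher return.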

Of course, with bonds, it’s easy to control the supply; it’s all just numbers in a network. Increasing the supply of housing is a much greater undertaking; you actually need to build new housing. But ultimately, the only way to ensure that housing is available and affordable for everyone is in fact to build more housing.

There are various ways we might accomplish that; one of the simplest would be to simply relax zoning restrictions that make it difficult to build high-density housing in cities. Those are bad laws anyway; they only benefit a small number of people a little bit while harming a large number of people a lot. (The problem is that the people they benefit are the local homeowners who show up to city council meetings.)

But we could do much more. I propose that we take interest-rate targeting as our model and introduce home price targeting. I want the federal government to exercise eminent domain and order the construction of new high-density housing in any city that has rents above a certain threshold—if you like, the same threshold you were thinking of setting the rent control at.

Is this an extreme solution? Perhaps. But housing affordability is an extreme problem. And I keep hearing from the left wing that economists aren’t willing to consider “radical enough” solutions to housing (by which they always seem to mean the tried-and-failed strategy of rent control). So here’s a radical solution for you. If cities refuse to build enough housing for their people, make them do it. Buy up and bulldoze their “lovely” “historic” suburban neighborhoods that are ludicrous wastes of land (and also environmentally damaging), and replace them with high-rise apartments. (Get rid of the golf courses while you’re at it.)

This would be expensive, of course; we have to pay to build all those new apartments. But hardly so expensive as living in a society where people can’t afford to live where they want.

In fact, estimates suggest that we are losing over one trillion dollars per year in unrealized productivity because people can’t afford to live in the highest-rent cities. Average income per worker in the US has been reduced by nearly $7000 per year because of high housing prices. So that’s the budget you should be comparing against. Keeping things as they are is like taxing our whole population about 9%. (And it’s probably regressive, so more than that for poor people.)
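As a back-of-the-envelope check that those figures hang together (assuming a US workforce of roughly 150 million and average income per worker of roughly $75,000; those round numbers are mine, not part of the estimates):

$7,000 × 150 million ≈ $1.05 trillion per year

$7,000 / $75,000 ≈ 9%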

Would this destroy the “charm” of the city? I dunno, maybe a little. But if the only thing your city had going for it was some old houses that are clearly not an efficient use of space, that’s pretty sad. And it is quite possible to build a city at high density and have it still be beautiful and a major draw for tourists; Paris is a lot denser than far-less-picturesque Houston. (Though I’ll admit, Houston is far more affordable than Paris. It’s not just about density.) And is the “charm” of your city really worth making it so unaffordable that people can’t move there without risking becoming homeless?

There are a lot of details to be worked out: How serious must things get before the federal government steps in? (Wherever we draw the line, San Francisco is surely well past it.) It takes a long time to build houses and let prices adjust, so how do we account for that time-lag? Where does the money come from, actually? Debt? Taxes? But these could all be resolved.

Of course, it’s a pipe dream; we’re never going to implement this policy, because homeowners dread the idea of their home values going down (even though it would actually make their property taxes cheaper!). I’d even be willing to consider some kind of program that would let people refinance underwater mortgages to write off the lost equity, if that’s what it takes to actually build enough housing.

Because there is really only one thing that’s ever going to solve the (global!) housing crisis:

Build more homes.

I’m old enough to be President now

Jan 22 JDN 2459967

When this post goes live, I will have passed my 35th birthday. This is old enough to be President of the United States, at least by law. (In practice, no POTUS has taken office younger than 42.)

Not that I will ever be President. I have neither the wealth nor the charisma to run any kind of national political campaign. I might be able to get elected to some kind of local office at some point, like a school board or a city water authority. But I’ve been eligible to run for such offices for quite a while now, and haven’t done so; nor do I feel particularly inclined at the moment.

No, the reason this birthday feels so significant is the milestone it represents. By this age, most people have spouses, children, careers. I have a spouse. I don’t have kids. I sort of have a career.

I have a job, certainly. I work for relatively decent pay. Not excellent, not what I was hoping for with a PhD in economics, but enough to live on (anywhere but an overpriced coastal metropolis). But I can’t really call that job a career, because I find large portions of it unbearable and I have absolutely no job security. In fact, I have the exact opposite: My job came with an explicit termination date from the start. (Do the people who come up with these short-term postdoc positions understand how that feels? It doesn’t seem like they do.)

I missed the window to apply for academic jobs that start next year. If I were happy here, this would be fine; I still have another year left on my contract. But I’m not happy here, and that is a grievous understatement. Working here is clearly the most important situational factor contributing to my ongoing depression. So I really ought to be applying to every alternative opportunity I can find—but I can’t find the will to try, or the self-confidence to believe that my attempts would succeed if I did.

Then again, I’m not sure I should be applying to academic positions at all. If I did apply to academic positions, they’d probably be teaching-focused ones, since that’s the one part of my job I’m actually any good at. I’ve more or less written off applying to major research institutions; I don’t think I would get hired anyway, and even if I did, the pressure to publish is so unbearable that I think I’d be just as miserable there as I am here.

On the other hand, I can’t be sure that I would be so miserable even at another research institution; maybe with better mentoring and better administration I could be happy and successful in academic research after all.

The truth is, I really don’t know how much of my misery is due to academia in general, versus the British academic system, versus Edinburgh as an institution, versus starting work during the pandemic, versus the experience of being untenured faculty, versus simply my own particular situation. I don’t know if working at another school would be dramatically better, a little better, or just the same. (If it were somehow worse—which frankly seems hard to arrange—I would literally just quit immediately.)

I guess if the University of Michigan offered me an assistant professor job right now, I would take it. But I’m confident enough that they wouldn’t offer it to me that I can’t see the point in applying. (Besides, I missed the application windows this year.) And I’m not even sure that I would be happy there, despite the fact that just a few years ago I would have called it a dream job.

That’s really what I feel most acutely about turning 35: The shattering of dreams.

I thought I had some idea of how my life would go. I thought I knew what I wanted. I thought I knew what would make me happy.

The weirdest part is that it isn’t even that different from how I’d imagined it. If you’d asked me 10 or even 20 years ago what my career would be like at 35, I probably would have correctly predicted that I would have a PhD and be working at a major research university. 10 years ago I would have correctly expected it to be a PhD in economics; 20, I probably would have guessed physics. In both cases I probably would have thought I’d be tenured by now, or at least on the tenure track. But a postdoc or adjunct position (this is sort of both?) wouldn’t have been utterly shocking, just vaguely disappointing.

The biggest error by my past self was thinking that I’d be happy and successful in this career, instead of barely, desperately hanging on. I thought I’d have published multiple successful papers by now, and be excited to work on a new one. I imagined I’d also have published a book or two. (The fact that I self-published a nonfiction book at 16 but haven’t published any nonfiction ever since would be particularly baffling to my 15-year-old self, and is particularly depressing to me now.) I imagined myself becoming gradually recognized as an authority in my field, not languishing in obscurity; I imagined myself feeling successful and satisfied, not hopeless and depressed.

It’s like the dark Mirror Universe version of my dream job. It’s so close to what I thought I wanted, but it’s also all wrong. I finally get to touch my dreams, and they shatter in my hands.

When you are young, birthdays are a sincere cause for celebration; you look forward to the new opportunities the future will bring you. I seem to be now at the age where it no longer feels that way.