The Efficient Roulette Hypothesis

Nov 27 JDN 2459911

The efficient market hypothesis is often stated in several different ways, and these are often treated as equivalent. There are at least three very different definitions of it that people seem to use interchangeably:

  1. Market prices are optimal and efficient.
  2. Market prices aggregate and reflect all publicly-available relevant information.
  3. Market prices are difficult or impossible to predict.

The first reading, I will call the efficiency hypothesis, because, well, it is what we would expect a phrase like “efficient market hypothesis” to mean. The ordinary meaning of those words would imply that we are asserting that market prices are in some way optimal or near-optimal, that markets get prices “right” in some sense at least the vast majority of the time.

The second reading I’ll call the information hypothesis; it implies that market prices are an information aggregation mechanism which automatically incorporates all publicly-available information. This already seems quite different from efficiency, but it seems at least tangentially related, since information aggregation could be one useful function that markets serve.

The third reading I will call the unpredictability hypothesis; it says simply that market prices are very difficult to predict, and so you can’t reasonably expect to make money by anticipating market price changes far in advance of everyone else. But as I’ll get to in more detail shortly, that doesn’t have the slightest thing to do with efficiency.

The empirical data in favor of the unpredictability hypothesis is quite overwhelming. It’s exceedingly hard to beat the market, and for most people, most of the time, the smartest way to invest is just to buy a diversified portfolio and let it sit.

The empirical data in favor of the information hypothesis is mixed, but it’s at least plausible; most prices do seem to respond to public announcements of information in ways we would expect, and prediction markets can be surprisingly accurate at forecasting the future.

The empirical data in favor of the efficiency hypothesis, on the other hand, is basically nonexistent. On the one hand this is a difficult hypothesis to test directly, since it isn’t clear what sort of benchmark we should be comparing against—so it risks being not even wrong. But if you consider basically any plausible standard one could try to set for how an efficient market would run, our actual financial markets in no way resemble it. They are erratic, jumping up and down for stupid reasons or no reason at all. They are prone to bubbles, wildly overvaluing worthless assets. They have collapsed governments and ruined millions of lives without cause. They have resulted in the highest-paying people in the world doing jobs that accomplish basically nothing of genuine value. They are, in short, a paradigmatic example of what inefficiency looks like.

Yet, we still have economists who insist that “the efficient market hypothesis” is a proven fact, because the unpredictability hypothesis is clearly correct.

I do not think this is an accident. It’s not a mistake, or an awkwardly-chosen technical term that people are misinterpreting.

This is a motte and bailey doctrine.

Motte-and-bailey was a strategy in medieval warfare. Defending an entire region is very difficult, so instead what was often done was constructing a small, highly defensible fortification—the motte—while accepting that the land surrounding it—the bailey—would not be well-defended. Most of the time, the people stayed in the bailey, where the land was fertile and it was relatively pleasant to live. But should they be attacked, they could retreat to the motte and defend themselves until the attackers were repelled.

A motte-and-bailey doctrine is an analogous strategy used in argumentation. You use the same words for two different versions of an idea: The motte is a narrow, defensible core of your idea that you can provide strong evidence for, but it isn’t very strong and may not even be interesting or controversial. The bailey is a broad, expansive version of your idea that is interesting and controversial and leads to lots of significant conclusions, but can’t be well-supported by evidence.

The bailey is the efficiency hypothesis: That market prices are optimal and we are fools to try to intervene or even regulate them because the almighty Invisible Hand is superior to us.

The motte is the unpredictability hypothesis: Market prices are very hard to predict, and most people who try to make money by beating the market fail.

By referring to both of these very different ideas as “the efficient market hypothesis”, economists can act as if they are defending the bailey, and prescribe policies that deregulate financial markets on the grounds that they are so optimal and efficient; but then when pressed for evidence to support their beliefs, they can pivot to the motte, and merely show that markets are unpredictable. As long as people don’t catch on and recognize that these are two very different meanings of “the efficient market hypothesis”, then they can use the evidence for unpredictability to support their goal of deregulation.

Yet when you look closely at this argument, it collapses. Unpredictability is not evidence of efficiency; if anything, it’s the opposite. Since the world doesn’t really change on a minute-by-minute basis, an efficient system should actually be relatively predictable in the short term. If prices reflected the real value of companies, they would change only very gradually, as the fortunes of the company change as a result of real-world events. An earthquake or a discovery of a new mine would change stock prices in relevant industries; but most of the time, they’d be basically flat. The occurrence of minute-by-minute or even second-by-second changes in prices basically proves that we are not tracking any genuine changes in value.

Roulette wheels are extremely unpredictable by design—by law, even—and yet no one would accuse them of being an efficient way of allocating resources. If you bet on roulette wheels and try to beat the house, you will almost surely fail, just as you would if you try to beat the stock market—and dare I say, for much the same reasons?
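The roulette comparison is easy to make concrete. Here is a toy simulation (my illustration, not from the original text; all figures are the standard European-roulette parameters): each spin is nearly a coin flip, so outcomes are unpredictable, yet the expected value per bet is reliably negative.

```python
import random

random.seed(1)

# European roulette: 37 pockets, 18 of them red.
# Each spin is close to a coin flip (p = 18/37, about 0.486),
# so individual outcomes are unpredictable by design...
p_red = 18 / 37

wins = 0
bankroll = 0
n_spins = 100_000
for _ in range(n_spins):
    if random.random() < p_red:
        wins += 1
        bankroll += 1   # even-money payout on red
    else:
        bankroll -= 1

# ...but the expected value per unit bet is reliably negative:
ev_per_bet = p_red * 1 + (1 - p_red) * (-1)   # -1/37, the house edge
print(f"win rate: {wins / n_spins:.3f}  (unpredictable, near 1/2)")
print(f"net result: {bankroll:+d} units  (EV per bet: {ev_per_bet:+.3f})")
```

The point of the sketch is exactly the distinction drawn above: spin-by-spin unpredictability coexists with a systematically bad allocation of money.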

So if we’re going to insist that “efficiency” just means unpredictability, rather than actual, you know, efficiency, then we should all speak of the Efficient Roulette Hypothesis. Anything we can’t predict is now automatically “efficient” and should therefore be left unregulated.

Mind reading is not optional

Nov 20 JDN 2459904

I have great respect for cognitive-behavioral therapy (CBT), and it has done a lot of good for me. (It is also astonishingly cost-effective; its QALY per dollar rate compares favorably to almost any other First World treatment, and loses only to treating high-impact Third World diseases like malaria and schistosomiasis.)

But there are certain aspects of it that have always been frustrating to me. Standard CBT techniques often present as ‘cognitive distortions’ what are in fact clearly necessary heuristics without which it would be impossible to function.

Perhaps the worst of these is so-called ‘mind reading’. The very phrasing of it makes it sound ridiculous: Are you suggesting that you have some kind of extrasensory perception? Are you claiming to be a telepath?

But in fact ‘mind reading’ is simply the use of internal cognitive models to forecast the thoughts, behaviors, and expectations of other human beings. And without it, it would be completely impossible to function in human society.

For instance, I have had therapists tell me that it is ‘mind reading’ for me to anticipate that people will have tacit expectations for my behavior that they will judge me for failing to meet, and I should simply wait for people to express their expectations rather than assuming them. I admit, life would be much easier if I could do that. But I know for a fact that I can’t. Indeed, I used to do that, as a child, and it got me in trouble all the time. People were continually upset at me for not doing things they had expected me to do but never bothered to actually mention. They thought these expectations were “obvious”; they were not, at least not to me.

It was often little things, and in hindsight some of these things seem silly: I didn’t know what a ‘made bed’ was supposed to look like, so I put it in a state that was functional for me, but that was not considered ‘making the bed’. (I have since learned that my way was actually better: It’s good to let sheets air out before re-using them.) I was asked to ‘clear the sink’, so I moved the dishes out of the sink and left them on the counter, not realizing that the implicit command was for me to wash those dishes, dry them, and put them away. I was asked to ‘bring the dinner plates to the table’, so I did that, and left them in a stack there, not realizing that I should be setting them out in front of each person’s chair and also bringing flatware. Of course I know better now. But how was I supposed to know then? It seems like I was expected to, though.

Most people just really don’t seem to realize how many subtle, tacit expectations are baked into every single task. I think neurodivergence is quite relevant here; I have a mild autism spectrum disorder, and so I think rather differently than most people. If you are neurotypical, then you probably can forecast other people’s expectations fairly well automatically, and so they may seem obvious to you. In fact, they may seem so obvious that you don’t even realize you’re doing it. Then when someone like me comes along and is consciously, actively trying to forecast other people’s expectations, and sometimes doing it poorly, you go and tell them to stop trying to forecast. But if they were to do that, they’d end up even worse off than they are. What you really need to be telling them is how to forecast better—but that would require insight into your own forecasting methods which you aren’t even consciously aware of.

Seriously, stop and think for a moment all of the things other people expect you to do every day that are rarely if ever explicitly stated. How you are supposed to dress, how you are supposed to speak, how close you are supposed to stand to other people, how long you are supposed to hold eye contact—all of these are standards you will be expected to meet, whether or not any of them have ever been explicitly explained to you. You may do this automatically; or you may learn to do it consciously after being criticized for failing to do it. But one way or another, you must forecast what other people will expect you to do.

To my knowledge, no one has ever explicitly told me not to wear a Starfleet uniform to work. I am not aware of any part of the university dress code that explicitly forbids such attire. But I’m fairly sure it would not be a good idea. To my knowledge, no one has ever explicitly told me not to burst out into song in the middle of a meeting. But I’m still pretty sure I shouldn’t do that. To my knowledge, no one has ever explicitly told me what the ‘right of way’ rules are for walking down a crowded sidewalk, who should be expected to move out of the way of whom. But people still get mad if you mess up and bump into them.

Even when norms are stated explicitly, it is often as a kind of last resort, and the mere fact that you needed to have a norm stated is often taken as a mark against your character. I have been explicitly told in various contexts not to talk to myself or engage in stimming leg movements; but the way I was told has generally suggested that I would have been judged better if I hadn’t had to be told, if I had simply known the way that other people seem to know. (Or is it that they never felt any particular desire to stim?)

In fact, I think a major part of developing social skills and becoming more functional, to the point where a lot of people actually now seem a bit surprised to learn I have an autism spectrum disorder, has been improving my ability to forecast other people’s expectations for my behavior. There are dozens if not hundreds of norms that people expect you to follow at any given moment; most people seem to intuit them so easily that they don’t even realize they are there. But they are there all the same, and this is painfully evident to those of us who aren’t always able to immediately intuit them all.

Now, the fact remains that my current mental models are surely imperfect. I am often wrong about what other people expect of me. I’m even prepared to believe that some of my anxiety comes from believing that people have expectations more demanding than what they actually have. But I can’t simply abandon the idea of forecasting other people’s expectations. Don’t tell me to stop doing it; tell me how to do it better.

Moreover, there is a clear asymmetry here: If you think people want more from you than they actually do, you’ll be anxious, but people will like you and be impressed by you. If you think people want less from you than they actually do, people will be upset at you and look down on you. So, in the presence of uncertainty, there’s a lot of pressure to assume that the expectations are high. It would be best to get it right, of course; but when you aren’t sure you can get it right, you’re often better off erring on the side of caution—which is to say, the side of anxiety.
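That asymmetry can be made precise with a toy decision model (entirely my illustration; the costs and units are arbitrary): if undershooting others' expectations costs more per unit than overshooting them, then under uncertainty the loss-minimizing guess is biased above the true expectation.

```python
import random

random.seed(0)

# Hypothetical setup: others' true expectation of you is 5.0 (arbitrary units),
# but your estimate of it is noisy. Undershooting costs 3x per unit
# (people are upset at you); overshooting costs 1x per unit (anxiety).
TRUE_EXPECTATION = 5.0
COST_UNDER = 3.0
COST_OVER = 1.0

noise = [random.gauss(0, 1) for _ in range(10_000)]

def expected_loss(guess):
    total = 0.0
    for e in noise:
        err = guess - (TRUE_EXPECTATION + e)  # > 0: overshot, < 0: undershot
        total += COST_OVER * err if err > 0 else COST_UNDER * (-err)
    return total / len(noise)

guesses = [4.0 + 0.1 * i for i in range(21)]  # grid from 4.0 to 6.0
best_guess = min(guesses, key=expected_loss)
print(f"loss-minimizing guess: {best_guess:.1f} (true expectation: {TRUE_EXPECTATION})")
```

With these costs the optimum sits at roughly the 75th percentile of your uncertainty, well above the mean: uncertainty plus asymmetric costs rationally produces exactly the "assume they expect a lot" bias described above.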

In short, mind reading isn’t optional. If you think it is, that’s only because you do it automatically.

Is the cure for inflation worse than the disease?

Nov 13 JDN 2459897

A lot of people seem really upset about inflation. I’ve previously discussed why this is a bit weird; inflation really just isn’t that bad. In fact, I am increasingly concerned that the usual methods for fixing inflation are considerably worse than inflation itself.

To be clear, I’m not talking about hyperinflation—if you are getting triple-digit inflation or more, you are clearly printing too much money and you need to stop. And there are places in the world where this happens.

But what about just regular, ordinary inflation, even when it’s fairly high? Prices rising at 8% or 9% or even 11% per year? What catastrophe befalls our society when this happens?

Okay, sure, if we could snap our fingers and make prices all stable without cost, that would be worth doing. But we can’t. All of our mechanisms for reducing inflation come with costs—and often very high costs.

The chief mechanism by which inflation is currently controlled is open-market operations by central banks such as the Federal Reserve, the Bank of England, and the European Central Bank. These central banks try to reduce inflation by selling bonds, which lowers the price of bonds and reduces capital available to banks, and thereby increases interest rates. This also effectively removes money from the economy, as banks are using that money to buy bonds instead of lending it out. (It is chiefly in this odd indirect sense that the central bank manages the “money supply”.)
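The inverse relationship between bond prices and interest rates is just present-value arithmetic. Here is a sketch (my illustration; the face value, coupon rate, and maturity are made-up figures): when the central bank sells bonds and their market price falls, the yield, which is the effective interest rate, rises.

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of a fixed-coupon bond (annual coupons, hypothetical figures)."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# The same bond is worth less at a higher yield: price down = rates up.
for y in (0.02, 0.04, 0.06):
    print(f"yield {y:.0%}: price = {bond_price(1000, 0.03, y, 10):.2f}")
```

A useful sanity check on the formula: when the yield equals the coupon rate, the bond prices exactly at its face value ("at par").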

But how does this actually reduce inflation? It’s remarkably indirect. The higher interest rates discourage people from buying houses and companies from hiring workers, which reduces economic growth—or even triggers a recession—and that, in turn, is supposed to bring down prices. There’s actually a lot we still don’t know about how this works or how long it should be expected to take. What we do know is that the pain hits quickly and the benefits arrive only months or even years later.

As Krugman has rightfully pointed out, the worst pain of the stagflation era was not the double-digit inflation of the 1970s; it was the recessions of the early 1980s that Paul Volcker’s tight monetary policy triggered in response to that inflation. The inflation wasn’t exactly a good thing; but for most people, the cure was much worse than the disease.

Most laypeople seem to think that prices somehow go up without wages going up, but that simply isn’t how it works. Prices and wages rise at close to the same rate in most countries most of the time. In fact, inflation is often driven chiefly by rising wages rather than the other way around. There are often lags between when the inflation hits and when people see their wages rise; but these lags can actually be in either direction—inflation first or wages first—and for moderate amounts of inflation they are clearly less harmful than the high rates of unemployment that we would get if we fought inflation more aggressively with monetary policy.

Economists are also notoriously vague about exactly how they expect the central bank to reduce inflation. They use complex jargon or broad euphemisms. But when they do actually come out and say they want to reduce wages, it tends to outrage people. Well, that’s one of three main ways that interest rates actually reduce inflation: They reduce wages, they cause unemployment, or they stop people from buying houses. That’s pretty much all that central banks can do.

There may be other ways to reduce inflation, like windfall profits taxes, antitrust action, or even price controls. The first two are basically no-brainers; we should always be taxing windfall profits (if they really are due to a windfall outside a corporation’s control, taxing them creates no incentive distortion), and we should absolutely be increasing antitrust action (why did we reduce it in the first place?). Price controls are riskier—they really do create shortages—but then again, is that really worse than lower wages or unemployment? Because the usual strategy involves lower wages and unemployment.

It’s a little ironic: The people who are usually all about laissez-faire are the ones who panic about inflation and want the government to take drastic action; meanwhile, I’m usually in favor of government intervention, but when it comes to moderate inflation, I think maybe we should just let it be.

Now is the time for CTCR

Nov 6 JDN 2459890

We live in a terrifying time. As Ukraine gains ground in its war with Russia, thanks in part to the deployment of high-tech weapons from NATO, Vladimir Putin has begun to make thinly-veiled threats of deploying his nuclear arsenal in response. No one can be sure how serious he is about this. Most analysts believe that he was referring to the possible use of small-scale tactical nuclear weapons, not a full-scale apocalyptic assault. Many think he’s just bluffing and wouldn’t resort to any nukes at all. Putin has bluffed in the past, and could be doing so again. Honestly, “this is not a bluff” is exactly the sort of thing you say when you’re bluffing—people who aren’t bluffing have better ways of showing it. (It’s like whenever Trump would say “Trust me”, and you’d know immediately that this was an especially good time not to. Of course, any time is a good time not to trust Trump.)

(By the way, financial news is a really weird thing: I actually found this article discussing how a nuclear strike would be disastrous for the economy. Dude, if there’s a nuclear strike, we’ve got much bigger things to worry about than the economy. It reminds me of this XKCD.)

But if Russia did launch nuclear weapons, and NATO responded with its own, it could trigger a nuclear war that would kill millions in a matter of hours. So we need to be prepared, and think very carefully about the best way to respond.

The current debate seems to be over whether to use economic sanctions, conventional military retaliation, or our own nuclear weapons. Well, we already have economic sanctions, and they aren’t making Russia back down. (Though they probably are hurting its war effort, so I’m all for keeping them in place.) And if we were to use our own nuclear weapons, that would only further undermine the global taboo against nuclear weapons and could quite possibly trigger that catastrophic nuclear war. Right now, NATO seems to be going for a bluff of our own: We’ll threaten an overwhelming nuclear response, but then we obviously won’t actually carry it out because that would be murder-suicide on a global scale.

That leaves conventional military retaliation. What sort of retaliation? Several years ago I came up with a very specific method of conventional retaliation I call credible targeted conventional response (CTCR, which you can pronounce “cut-core”). I believe that now would be an excellent time to carry it out.

The basic principle of CTCR is really quite simple: Don’t try to threaten entire nations. A nation is an abstract entity. Threaten people. Decisions are made by people. The response to Vladimir Putin launching nuclear weapons shouldn’t be to kill millions of innocent people in Russia that probably mean even less to Putin than they do to us. It should be to kill Vladimir Putin.

How exactly to carry this out is a matter for military strategists to decide. There are a variety of weapons at our disposal, ranging from the prosaic (covert agents) to the exotic (precision strikes from high-altitude stealth drones). Indeed, I think we should leave it purposefully vague, so that Putin can’t try to defend himself against some particular mode of attack. The whole gamut of conventional military responses should be considered on the table, from a single missile strike to a full-scale invasion.

But the basic goal is quite simple: Launching a nuclear weapon is one of the worst possible war crimes, and it must be met with an absolute commitment to bring the perpetrator to justice. We should be willing to accept some collateral damage, even a lot of collateral damage; carpet-bombing a city shouldn’t be considered out of the question. (If that sounds extreme, consider that we’ve done it before for much weaker reasons.) The only thing that we should absolutely refuse to do is deploy nuclear weapons ourselves.

The great advantage of this strategy—even aside from being obviously more humane than nuclear retaliation—is that it is more credible. It sounds more like something we’d actually be willing to do. And in fact we likely could even get help from insiders in Russia, because there are surely many people in the Russian government who aren’t so loyal to Putin that they’d want him to get away with mass murder. It might not just be an assassination; it might end up turning into a coup. (Also something we’ve done for far weaker reasons.)


This is how we preserve the taboo on nuclear weapons: We refuse to use them, but otherwise stop at nothing to kill anyone who does use them.

I therefore call upon the world to make this threat:

Launch a nuclear weapon, Vladimir Putin, and we will kill you. Not your armies, not your generals—you. It could be a Tomahawk missile at the Kremlin. It could be a car bomb in your limousine, or a Stinger missile at Aircraft One. It could be a sniper at one of your speeches. Or perhaps we’ll poison your drink with polonium, like you do to your enemies. You won’t know when or where. You will live the rest of your short and miserable life in terror. There will be nowhere for you to hide. We will stop at nothing. We will deploy every available resource around the world, and it will be our top priority. And you will die.

That’s how you threaten a psychopath. And it’s what we must do in order to keep the world safe from nuclear war.

The United Kingdom in transition

Oct 30 JDN 2459883

When I first decided to move to Edinburgh, I certainly did not expect it to be such a historic time. The pandemic was already in full swing, but I thought that would be all. But this year I was living in the UK when its leadership changed in two historic ways:

First, there was the death of Queen Elizabeth II, and the coronation of King Charles III.

Second, there was the resignation of Boris Johnson, the appointment of Elizabeth Truss, and then, so rapidly I feel like I have whiplash, the resignation of Elizabeth Truss.

In other words, I have seen the end of the longest-reigning monarch and the rise and fall of the shortest-reigning prime minister in the history of the United Kingdom. The three-hundred-year history of the United Kingdom.

The prior probability of such a 300-year-historic event happening during my own 3-year term in the UK is approximately 1%. Yet, here we are. A new king, one of a handful of genuine First World monarchs to be crowned in the 21st century. The others reign in the Netherlands, Belgium, Spain, Monaco, Andorra, and Luxembourg; none of these countries has even a third the population of the UK, and if we include every Commonwealth Realm (believe it or not, “realm” is in fact still the official term), Charles III is now king of a supranational union with a population of over 150 million people—half the size of the United States. (Yes, he’s your king too, Canada!) Note that Charles III is not king of the entire Commonwealth of Nations, which includes now-independent nations such as India, Pakistan, and South Africa; that successor to the British Empire contains 54 nations and has a population of over 2 billion.
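The 1% figure is just the fraction of a 300-year history covered by a 3-year stay, under the rough assumption that such an event is equally likely to fall anywhere in that history:

```python
# Back-of-envelope: the chance that a particular 3-year window
# overlaps a once-in-300-years event, assuming the event is equally
# likely to occur at any point in the history.
term_years = 3
history_years = 300
prior = term_years / history_years
print(f"prior probability: {prior:.0%}")  # prints "prior probability: 1%"
```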

I still can’t quite wrap my mind around this idea of having a king. It feels even more ancient and anachronistic than the 400-year-old university I work at. Of course I knew that we had a queen before, and that she was old and would presumably die at some point and probably be replaced; but that wasn’t really salient information to me until she actually did die and then there was a ten-mile-long queue to see her body and now next spring they will be swearing in this new guy as the monarch of the fourteen realms. It now feels like I’m living in one of those gritty satirical fractured fairy tales. Maybe it’s an urban fantasy setting; it feels a lot like Shrek, to be honest.

Yet other than feeling surreal, none of this has affected my life all that much. I haven’t even really felt the effects of inflation: Groceries and restaurant meals seem a bit more expensive than they were when we arrived, but it’s well within what our budget can absorb; we don’t have a car here, so we don’t care about petrol prices; and we haven’t even been paying more than usual in natural gas because of the subsidy programs. Actually it’s probably been good for our household finances that the pound is so weak and the dollar is so strong. I have been much more directly affected by the university union strikes: being temporary contract junior faculty (read: expendable), I am ineligible to strike and hence had to cross a picket line at one point.

Perhaps this is what history has always felt like for most people: The kings and queens come and go, but life doesn’t really change. But I honestly felt more directly affected by Trump living in the US than I did by Truss living in the UK.

This may be in part because Elizabeth Truss was a very unusual politician; she combined crazy far-right economic policy with generally fairly progressive liberal social policy. A right-wing libertarian, one might say. (As Krugman notes, such people are astonishingly rare in the electorate.) Her socially-liberal stance meant that she wasn’t trying to implement horrific hateful policies against racial minorities or LGBT people the way that Trump was, and for once her horrible economic policies were recognized immediately as such and quickly rescinded. Unlike Trump, Truss did not get the chance to appoint any supreme court justices who could go on to repeal abortion rights.

Then again, Truss couldn’t have appointed any judges if she’d wanted to. The UK Supreme Court is really complicated, and I honestly don’t understand how it works; but from what I do understand, the Prime Minister appoints the Lord Chancellor, the Lord Chancellor forms a commission to appoint the President of the Supreme Court, and the President of the Supreme Court forms a commission to appoint new Supreme Court judges. But I think the monarch is considered the ultimate authority and can veto any appointment along the way. (Or something. Sometimes I get the impression that no one truly understands the UK system, and they just sort of go with doing things as they’ve always been done.) This convoluted arrangement seems to grant the court considerably more political independence than its American counterpart; also, unlike the US Supreme Court, the UK Supreme Court is not allowed to explicitly overturn primary legislation. (Fun fact: The Lord Chancellor is also the Keeper of the Great Seal of the Realm, because Great Britain hasn’t quite figured out that the 13th century ended yet.)

It’s sad and ironic that it was precisely by not being bigoted and racist that Truss ensured she would not have sufficient public support for her absurd economic policies. There’s a large segment of the population of both the US and UK—aptly, if ill-advisedly, referred to by Clinton as “deplorables”—who will accept any terrible policy as long as it hurts the right people. But Truss failed to appeal to that crucial demographic, and so could find no one to support her. Hence, her approval rating fell to a dismal 10%, and she was outlasted by a head of lettuce.

At the time of writing, the new prime minister has not yet been announced, but the smart money is on Rishi Sunak. (I mean that quite literally; he’s leading in prediction markets.) He’s also socially liberal but fiscally conservative, but unlike Truss he seems to have at least some vague understanding of how economics works. Sunak is also popular in a way Truss never was (though that popularity has been declining recently). So I think we can expect to get new policies which are in the same general direction as what Truss wanted—lower taxes on the rich, more privatization, less spent on social services—but at least Sunak is likely to do so in a way that makes the math(s?) actually add up.

All of this is unfortunate, but largely par for the course for the last few decades. It compares quite favorably to the situation in the US, where somehow a large chunk of Americans either don’t believe that an insurrection attempt occurred, are fine with it, or blame the other side, and as the guardrails of democracy continue breaking, somehow gasoline prices appear to be one of the most important issues in the midterm election.

You know what? Living through history sucks. I don’t want to live in “interesting times” anymore.

Updating your moral software

Oct 23 JDN 2459876

I’ve noticed an odd tendency among politically active people, particularly social media slacktivists (a term I do not use pejoratively: slacktivism is highly cost-effective). They adopt new ideas very rapidly, trying to stay on the cutting edge of moral and political discourse—and then they denigrate and disparage anyone who fails to do the same as an irredeemable monster.

This can take many forms, such as “if you don’t buy into my specific take on Critical Race Theory, you are a racist”, “if you have any uncertainty about the widespread use of puberty blockers you are a transphobic bigot”, “if you give any credence to the medical consensus on risks of obesity you are fatphobic”, “if you think disabilities should be cured you’re an ableist”, and “if you don’t support legalizing abortion in all circumstances you are a misogynist”.

My intention here is not to evaluate any particular moral belief, though I’ll say the following: I am skeptical of Critical Race Theory, especially the 1619 Project, which seems to me to include substantial distortions of history. I am cautiously supportive of puberty blockers, because the medical data on their risks are ambiguous—while the sociological data on how much happier trans kids are when accepted are totally unambiguous. I am well aware of the medical data saying that the risks of obesity are overblown (but also not negligible, particularly for those who are very obese). Speaking as someone with a disability that causes me frequent, agonizing pain, yes, I want disabilities to be cured, thank you very much; accommodations are nice in the meantime, but the best long-term solution is to not need accommodations. (I’ll admit to some grey areas regarding certain neurodivergences such as autism and ADHD, and I would never want to force cures on people who don’t want them; but paralysis, deafness, blindness, diabetes, depression, and migraine are all absolutely worth finding cures for—the QALY at stake here are massive—and it’s silly to say otherwise.) I think abortion should generally be legal and readily available in the first trimester (which is when most abortions happen anyway), but much more strictly regulated thereafter—but denying it to children and rape victims is a human rights violation.

What I really want to talk about today is not the details of the moral belief, but the attitude toward those who don’t share it. There are genuine racists, transphobes, fatphobes, ableists, and misogynists in the world. There are also structural institutions that can lead to discrimination despite most of the people involved having no particular intention to discriminate. It’s worthwhile to talk about these things, and to try to find ways to fix them. But does calling anyone who disagrees with you a monster accomplish that goal?

This seems particularly bad precisely when your own beliefs are so cutting-edge. If you have a really basic, well-established sort of progressive belief like “hiring based on race should be illegal”, “women should be allowed to work outside the home” or “sodomy should be legal”, then people who disagree with you pretty much are bigots. But when you’re talking about new, controversial ideas, there is bound to be some lag; people who adopted the last generation’s—or even the last year’s—progressive beliefs may not yet be ready to accept the new beliefs, and that doesn’t make them bigots.

Consider this: Were you born believing in your current moral and political beliefs?

I contend that you were not. You may have been born intelligent, open-minded, and empathetic. You may have been born into a progressive, politically-savvy family. But the fact remains that any particular belief you hold about race, or gender, or ethics was something you had to learn. And if you learned it, that means that at some point you didn’t already know it. How would you have felt back then, if, instead of calmly explaining it to you, people called you names for not believing in it?

Now, perhaps it is true that as soon as you heard your current ideas, you immediately adopted them. But that may not be the case—it may have taken you some time to learn or change your mind—and even if it was, it’s still not fair to denigrate anyone who takes a bit longer to come around. There are many reasons why someone might not be willing to change their beliefs immediately, and most of them are not indicative of bigotry or deep moral failings.

It may be helpful to think about this in terms of updating your moral software. You were born with a very minimal moral operating system (emotions such as love and guilt, the capacity for empathy), and over time you have gradually installed more and more sophisticated software on top of that OS. If someone literally wasn’t born with the right OS—we call these people psychopaths—then, yes, you have every right to hate, fear, and denigrate them. But most of the people we’re talking about do have that underlying operating system, they just haven’t updated all their software to the same version as yours. It’s both unfair and counterproductive to treat them as irredeemably defective simply because they haven’t updated to the newest version yet. They have the hardware, they have the operating system; maybe their download is just a little slower than yours.

In fact, if you are very fast to adopt new, trendy moral beliefs, you may in fact be adopting them too quickly—they haven’t been properly vetted by human experience just yet. You can think of this as like a beta version: The newest update has some great new features, but it’s also buggy and unstable. It may need to be fixed before it is really ready for widespread release. If that’s the case, then people aren’t even wrong not to adopt them yet! It isn’t necessarily bad that you have adopted the new beliefs; we need beta testers. But you should be aware of your status as a beta tester and be prepared both to revise your own beliefs if needed, and also to cut other people slack if they disagree with you.

I understand that it can be immensely frustrating to be thoroughly convinced that something is true and important and yet see so many people disagreeing with it. (I am an atheist activist after all, so I absolutely know what that feels like.) I understand that it can be immensely painful to watch innocent people suffer because they have to live in a world where other people have harmful beliefs. But you aren’t changing anyone’s mind or saving anyone from harm by calling people names. Patience, tact, and persuasion will win the long game, and the long game is really all we have.

And if it makes you feel any better, the long game may not be as long as it seems. The arc of history may have tighter curvature than we imagine. We certainly managed a complete flip of the First World consensus on gay marriage in just a single generation. We may be able to achieve similarly fast social changes in other areas too. But we haven’t accomplished the progress we have so far by being uncharitable or aggressive toward those who disagree.

I am emphatically not saying you should stop arguing for your beliefs. We need you to argue for your beliefs. We need you to argue forcefully and passionately. But when doing so, try not to attack the people who don’t yet agree with you—for they are precisely the people we need to listen to you.

The era of the eurodollar is upon us

Oct 16 JDN 2459869

I happen to be one of those weirdos who liked the game Cyberpunk 2077. It was hardly flawless, and had many unforced errors (like letting you choose your gender, but not making voice type independent of pronouns? That has to be, like, three lines of code to make your game significantly more inclusive). But overall I thought it did a good job of representing a compelling cyberpunk world that is dystopian but not totally hopeless, and had rich, compelling characters, along with reasonably good gameplay. The high level of character customization sets a new standard (aforementioned errors notwithstanding), and I for one appreciate how they pushed the envelope for sexuality in a AAA game.

It’s still not explicit—though I’m sure there are mods for that—but at least you can in fact get naked, and people talk about sex in a realistic way. It’s still weird to me that showing a bare breast or a penis is seen as ‘adult’ in the same way as showing someone’s head blown off (Remind me: Which of the three will nearly everyone have seen from the time they were a baby? Which will at least 50% of children see from birth, guaranteed, and virtually 100% of adults sooner or later? Which can you see on Venus de Milo and David?), but it’s at least some progress in our society toward a healthier relationship with sex.

A few things about the game’s world still struck me as odd, though. Chiefly it has to be the weird alternate history where apparently we have experimental AI and mind-uploading in the 2020s, but… those things are still experimental in the 2070s? So our technological progress was through the roof for the early 2000s, and then just completely plateaued? They should have had Johnny Silverhand’s story take place in something like 2050, not 2023. (You could leave essentially everything else unchanged! V could still have grown up hearing tales of Silverhand’s legendary exploits, because 2050 was 27 years ago in 2077; canonically, V is 28 years old when the game begins. Honestly it makes more sense in other ways: Rogue looks like she’s in her 60s, not her 80s.)

Another weird thing is the currency they use: They call it the “eurodollar”, and the symbol is, as you might expect, €$. When the game first came out, that seemed especially ridiculous, since euros were clearly worth more than dollars and basically always had been.

Well, they aren’t anymore. In fact, euros and dollars are now trading almost exactly at parity, and have been for weeks. CD Projekt Red was right: In the 2020s, the era of the eurodollar is upon us after all.

Of course, we’re unlikely to actually merge the two currencies any time soon. (Can you imagine how Republicans would react if such a thing were proposed?) But the weird thing is that we could! It is almost as if the two currencies are interchangeable—for the first time in history.

It isn’t so much that the euro is weak; it’s that the dollar is strong. When I first moved to the UK, the pound was trading at about $1.40. It is now trading at $1.10! If it continues dropping as it has, it could even reach parity as well! We might have, for the first time in history, the dollar, the pound, and the euro functioning as one currency. Get the Canadian dollar too (currently much too weak), and we’ll have the Atlantic Union dollar I use in some of my science fiction (I imagine the AU as an expansion of NATO into an economic union that gradually becomes its own government).

Then again, the pound is especially weak right now because it plunged after the new prime minister announced an utterly idiotic economic plan. (Conservatives refusing to do basic math and promising that tax cuts would fix everything? Why, it felt like being home again! In all the worst ways.)

This is largely a bad thing. A strong dollar means that the US trade deficit will increase, and also that other countries will have trouble buying our exports. Conversely, with their stronger dollars, Americans will buy more imports from other countries. The combination of these two effects will make inflation worse in other countries (though it could reduce it in the US).

It’s not so bad for me personally, as my husband’s income is largely in dollars while our expenses are in pounds. (My income is in pounds and thus unaffected.) So a strong dollar and a weak pound means our real household income is about £4,000 higher than it would otherwise have been—which is not a small difference!
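The mechanism is simple enough to sketch in a few lines. The snippet below is purely illustrative: the $20,000 dollar-denominated income is a hypothetical figure I’ve chosen for the example (not our actual income), but it shows how a swing in the exchange rate from $1.40 to $1.10 per pound produces a change on the order of £4,000.

```python
# Illustrative sketch: how the pound value of a dollar-denominated income
# changes when the pound falls against the dollar.
# The $20,000 income figure is hypothetical, chosen only for illustration.

def pounds_from_dollars(dollar_income: float, usd_per_gbp: float) -> float:
    """Convert a dollar-denominated income into pounds at a given $/£ rate."""
    return dollar_income / usd_per_gbp

income_usd = 20_000                                 # hypothetical dollar income
at_old_rate = pounds_from_dollars(income_usd, 1.40)  # pound at $1.40: ~£14,286
at_new_rate = pounds_from_dollars(income_usd, 1.10)  # pound at $1.10: ~£18,182

print(round(at_new_rate - at_old_rate))  # ~£3,896: on the order of £4,000
```

The same arithmetic, run in reverse, is why the strong dollar squeezes anyone earning pounds but spending dollars.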

In general, the level of currency exchange rates isn’t very important. It’s changes in exchange rates that matter. The changes in relative prices will shift around a lot of economic activity, causing friction both in the US and in its (many) trading partners. Eventually all those changes should result in the exchange rates converging to a new, stable equilibrium; but that can take a long time, and exchange rates can fluctuate remarkably fast. In the meantime, such large shifts in exchange rates are going to cause even more chaos in a world already shaken by the COVID pandemic and the war in Ukraine.

On (gay) marriage

Oct 9 JDN 2459862

This post goes live on my first wedding anniversary. Thus, as you read this, I will have been married for one full year.

Honestly, being married hasn’t felt that different to me. This is likely because we’d been dating since 2012 and lived together for several years before actually getting married. It has made some official paperwork more convenient, and I’ve reached the point where I feel naked without my wedding band; but for the most part our lives have not really changed.

And perhaps this is as it should be. Perhaps the best way to really know that you should get married is to already feel as though you are married, and just finally get around to making it official. Perhaps people for whom getting married is a momentous change in their lives (as opposed to simply a formal announcement followed by a celebration) are people who really shouldn’t be getting married just yet.

A lot of things in my life—my health, my career—have not gone very well in this past year. But my marriage has been only a source of stability and happiness. I wouldn’t say we never have conflict, but quite honestly I was expecting a lot more challenges and conflicts from the way I’d heard other people talk about marriage in the past. All of my friends who have kids seem to be going through a lot of struggles as a result of that (which is one of several reasons we keep procrastinating on looking into adoption), but marriage itself does not appear to be any more difficult than friendship—in fact, maybe easier.

I have found myself oddly struck by how un-important it has been that my marriage is to a same-sex partner. I keep expecting people to care—to seem uncomfortable, to be resistant, or simply to be surprised—and it so rarely happens.

I think this is probably generational: We Millennials grew up at the precise point in history when the First World suddenly decided, all at once, that gay marriage was okay.

Seriously, look at this graph. I’ve made it by combining this article, using data from the General Social Survey, with this article from Pew:

Until around 1990—when I was 2 years old—support for same-sex marriage was stable and extremely low: About 10% of Americans supported it (presumably most of them LGBT!), and over 70% opposed it. Then, quite suddenly, attitudes began changing, and by 2019, over 60% of Americans supported it and only 31% opposed it.

That is, within a generation, we went from a country where almost no one supported gay marriage to a country where same-sex marriage is so popular that any major candidate who opposed it would almost certainly lose a general election. (They might be able to survive a Republican primary, as Republican support for same-sex marriage is only about 44%—about where it was among Democrats in the early 2000s.)

This is a staggering rate of social change. If development economics is the study of what happened in South Korea from 1950-2000, I think political science should be the study of what happened to attitudes on same-sex marriage in the US from 1990-2020.

And of course it isn’t just the US. Similar patterns can be found across Western Europe, with astonishingly rapid shifts from near-universal opposition to near-universal support within a generation.

I don’t think I have been able to fully emotionally internalize this shift. I grew up in a world where homophobia was mainstream, where only the most radical left-wing candidates were serious about supporting equal rights and representation for LGBT people. And suddenly I find myself in a world where we are actually accepted and respected as equals, and I keep waiting for the other shoe to drop. Aren’t you the same people who told me as a teenager that I was a sexual deviant who deserved to burn in Hell? But now you’re attending my wedding? And offering me joint life insurance policies? My own extended family members treat me differently now than they did when I was a teenager, and I don’t quite know how to trust that the new way is the true way and not some kind of facade that could rapidly disappear.

I think this sort of generational trauma may never fully heal, in which case it will be the generation after us—the Zoomers, I believe we’re calling them now—who will actually live in this new world we created, while the rest of us forever struggle to accept that things are not as we remember them. Once bitten, we remain forever twice shy, lest attitudes regress as suddenly as they advanced.

Then again, it seems that Zoomers may be turning against the institution of marriage in general. As the meme says: “Boomers: No gay marriage. Millennials: Yes gay marriage. Gen Z: Yes gay, no marriage.” Maybe that’s for the best; maybe the future of humanity is for personal relationships to be considered no business of the government at all. But for now at least, equal marriage is clearly much better than unequal marriage, and the First World seems to have figured that out blazing fast.

And of course the rest of the world still hasn’t caught up. While trends are generally in a positive direction, there are large swaths of the world where even very basic rights for LGBT people are opposed by most of the population. As usual, #ScandinaviaIsBetter, with over 90% support for LGBT rights; and, as usual, Sub-Saharan Africa is awful, with support in Kenya, Uganda and Nigeria not even hitting 20%.

Housing prices are out of control

Oct 2 JDN 2459855

This is a topic I could have covered at any point for quite a while now, and will surely address again in the future; it’s a slow-burn crisis that has affected most of the world for a generation.

In most of the world’s cities, housing prices are now the highest they have ever been, even adjusted for inflation. The pandemic made this worse, but it was already bad.

This is of course very important, because housing is the largest single expenditure for most families.

Changes in housing prices are directly felt in people’s lifestyles, especially when they are renting. Homeownership rates vary a lot between countries, so the impact of this is quite different in different places.

There’s also an important redistributive effect: When housing prices go up, people who own homes get richer, while people who rent homes get poorer. Since people who own homes tend to be richer to begin with (and landlords are typically richest of all), rising housing prices directly increase wealth inequality.

The median price of a house in the US, even adjusted for inflation, is nearly twice what it was in 1993.

This wasn’t a slow and steady climb; housing prices moved with inflation for most of the 1980s and 1990s, and then surged upward just before the 2008 crash. Then they plummeted for a few years, before reversing course and surging even higher than they were at their 2007 peak:

https://fred.stlouisfed.org/series/CSUSHPINSA

[housing_prices_US_2.png]

https://fred.stlouisfed.org/series/USSTHPI

This is not a uniquely American problem. The UK shows almost the same pattern:

https://fred.stlouisfed.org/series/HPIUKA

But it’s also not the same pattern everywhere. In China, housing prices have been rising steadily, and didn’t crash in 2008:

https://fred.stlouisfed.org/series/QCNN628BIS

In France, housing prices have been relatively stable, and are no higher now than they were in the 1990s:

https://fred.stlouisfed.org/series/CP0410FRM086NEST

Meanwhile, in Japan, housing prices surged in the 1970s, 1980s, and 1990s, ending up four times what they had been in the 1960s; then they suddenly leveled off and haven’t changed since:

https://fred.stlouisfed.org/series/JPNCPIHOUMINMEI

It’s also worse in some cities than others. In San Francisco, housing now costs three times what it did in the 1990s, even adjusting for inflation:

https://fred.stlouisfed.org/series/SFXRSA

Meanwhile, in Detroit, housing is only about 25% more expensive now than it was in the 1990s:

https://fred.stlouisfed.org/series/ATNHPIUS19804Q

This variation tells me that policy matters. This isn’t some inevitable result of population growth or technological change. Those could still be important factors, but they can’t explain the strong variation between countries or even between cities within the same country. (Yes, San Francisco has seen more population growth than Detroit—but not that much more.)

Part of the problem, I think, is that most policymakers don’t actually want housing to be more affordable. They might say they do, they might occasionally feel some sympathy for people who get evicted or live on the streets; but in general, they want housing prices to be higher, because that gives them more property tax revenue. The wealthy benefit from rising housing prices, while the poor are harmed. Since the interests of the wealthy are wildly overrepresented in policy, policy is made to increase housing prices, not decrease them. This is likely especially true in housing, because even the upper-middle class mostly benefits from rising housing prices. It’s only the poor and lower-middle class who are typically harmed.

This is why I don’t really want to get into suggesting policies that could fix this. We know what would fix this: Build more housing. Lots of it. Everywhere. Increase supply, and the price will go down. And we should keep doing it until housing is not just back where it was, but cheaper—much cheaper. Buying a house shouldn’t be a luxury afforded only to the upper-middle class; it should be something everyone does several times in their life and doesn’t have to worry too much about. Buying a house should be like buying a car; not cheap, exactly, but you don’t have to be rich to do it. Because everyone needs housing. So everyone should have housing.

But that isn’t going to happen, because the people who make the decisions about this don’t want it to happen.

So the real question becomes: What do we do about that?

Mindful of mindfulness

Sep 25 JDN 2459848

I have always had trouble with mindfulness meditation.

On the one hand, I find it extremely difficult to do: if there is one thing my mind is good at, it’s wandering. (I think in addition to my autism spectrum disorder, I may also have a smidgen of ADHD. I meet some of the criteria at least.) And it feels a little too close to a lot of practices that are obviously mumbo-jumbo nonsense, like reiki, qigong, and reflexology.

On the other hand, mindfulness meditation has been empirically shown to have large beneficial effects in study after study after study. It helps with not only depression, but also chronic pain. It even seems to improve immune function. The empirical data is really quite clear at this point. The real question is how it does all this.

And I am, above all, an empiricist. I bow before the data. So, when my new therapist directed me to an app that’s supposed to train me to do mindfulness meditation, I resolved that I would in fact give it a try.

Honestly, as of writing this, I’ve been using it less than a week; it’s probably too soon to make a good evaluation. But I did have some prior experience with mindfulness, so this was more like getting back into it rather than starting from scratch. And, well, I think it might actually be working. I feel a bit better than I did when I started.

If it is working, it doesn’t seem to me that the mechanism is greater focus or mental control. I don’t think I’ve really had time to meaningfully improve those skills, and to be honest, I have a long way to go there. The pre-recorded voice samples keep telling me it’s okay if my mind wanders, but I doubt the app developers planned for how much my mind can wander. When they suggest I try to notice each wandering thought, I feel like saying, “Do you want the complete stack trace, or just the final output? Because if I wrote down each terminal branch alone, my list would say something like ‘fusion reactors, ice skating, Napoleon’.”

I think some of the benefit is simply parasympathetic activation, that is, being more relaxed. I am, and have always been, astonishingly bad at relaxing. It’s not that I lack positive emotions: I can enjoy, I can be excited. Nor am I incapable of low-arousal emotions: I can get bored, I can be lethargic. I can also experience emotions that are negative and high-arousal: I can be despondent or outraged. But I have great difficulty reaching emotional states which are simultaneously positive and low-arousal, i.e. states of calm and relaxation. (See here for more on the valence/arousal model of emotional states.) To some extent I think this is due to innate personality: I am high in both Conscientiousness and Neuroticism, which basically amounts to being “high-strung”. But mindfulness has taught me that it’s also trainable, to some extent; I can get better at relaxing, and I already have.

And even more than that, I think the most important effect has been reminding and encouraging me to practice self-compassion. I am an intensely compassionate person, toward other people; but toward myself, I am brutal, demanding, unforgiving, even cruel. My internal monologue says terrible things to me that I would never say to anyone else. (Or at least, not to anyone else who wasn’t a mass murderer or something. I wouldn’t feel particularly bad about saying “You are a failure, you are broken, you are worthless, you are unworthy of love” to, say, Josef Stalin. And yes, these are in fact things my internal monologue has said to me.) Whenever I am unable to master a task I consider important, my automatic reaction is to denigrate myself for failing; I think the greatest benefit I am getting from practicing meditation is being encouraged to fight that impulse. That is, the most important value added by the meditation app has not been in telling me how to focus on my own breathing, but in reminding me to forgive myself when I do it poorly.

If this is right (as I said, it’s probably too soon to say), then we may at last be able to explain why meditation is simultaneously so weird and tied to obvious mumbo-jumbo on the one hand, and also so effective on the other. The actual function of meditation is to be a difficult cognitive task which doesn’t require outside support.

And then the benefit actually comes from doing this task, getting slowly better at it—feeling that sense of progress—and also from learning to forgive yourself when you do it badly. The task probably could have been anything: Find paths through mazes. Fill out Sudoku grids. Solve integrals. But these things are hard to do without outside resources: It’s basically impossible to draw a maze without solving it in the process. Generating a Sudoku grid with a unique solution is at least as hard as solving one (and solving a general Sudoku is NP-complete). By the time you know a given function is even integrable in terms of elementary functions, you’ve basically integrated it. But focusing on your breath? That you can do anywhere, anytime. And the difficulty of controlling all your wandering thoughts may be less a bug than a feature: It’s precisely because the task is so difficult that you will have reason to practice forgiving yourself for failure.
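The Sudoku point can be made concrete: to certify that a puzzle has a *unique* solution, you have to exhaustively solve it (stopping once you find a second solution), so generating good puzzles embeds solving as a subroutine. Here is a minimal, unoptimized sketch; the grid-construction formula in the demo is just a standard toy way to produce one valid completed grid, not part of any real puzzle generator.

```python
# Sketch: counting Sudoku solutions by backtracking. Certifying uniqueness
# (count == 1) requires, in effect, solving the puzzle exhaustively -- which
# is why generating a uniquely solvable grid is at least as hard as solving one.
# Grid format: 9x9 list of lists of ints, with 0 marking an empty cell.

def count_solutions(grid, limit=2):
    """Count solutions of a Sudoku grid, stopping early once `limit` is reached."""
    def valid(r, c, v):
        if any(grid[r][j] == v for j in range(9)):   # value already in row?
            return False
        if any(grid[i][c] == v for i in range(9)):   # value already in column?
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)          # top-left of the 3x3 box
        return all(grid[br + i][bc + j] != v
                   for i in range(3) for j in range(3))

    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:                      # first empty cell
                total = 0
                for v in range(1, 10):
                    if valid(r, c, v):
                        grid[r][c] = v               # tentatively place v
                        total += count_solutions(grid, limit - total)
                        grid[r][c] = 0               # undo and try the next value
                        if total >= limit:           # early exit: >1 means not unique
                            return total
                return total
    return 1  # no empty cells left: the filled grid itself is one solution

if __name__ == "__main__":
    # A toy valid completed grid, built from a standard shifting pattern.
    puzzle = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
    puzzle[0][0] = 0  # knock out one cell: still uniquely solvable
    print("unique" if count_solutions(puzzle) == 1 else "not unique")
```

Note that the solver does all the work of the uniqueness check; a generator would have to call something like this after every cell it removes.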

The arbitrariness of the task itself is how you can get a proliferation of different meditation techniques, and a wide variety of mythologies and superstitions surrounding them all, but still have them all be about equally effective in the end. Because it was never really about the task at all. It’s about getting better and failing gracefully.

It probably also helps that meditation is relaxing. Solving integrals might not actually work as well as focusing on your breath, even if you had a textbook handy full of integrals to solve. Breathing deeply is calming; integration by parts isn’t. But lots of things are calming, and some things may be calming to one person but not to another.

It is possible that there is yet some other benefit to be had directly via mindfulness itself. If there is, it will surely have more to do with anterior cingulate activation than realignment of qi. But such a particular benefit isn’t necessary to explain the effectiveness of meditation, and indeed would be hard-pressed to explain why so many different kinds of meditation all seem to work about as well.

Because it was never about what you’re doing—it was always about how.