In search of reasonable conservatism

Feb 21 JDN 2459267

This is a very tumultuous time for American politics. Donald Trump was impeached not once but twice—giving him the dubious distinction of having been impeached as many times as the previous 44 US Presidents combined. He was not convicted either time, not because the evidence for his crimes was lacking—it was in fact utterly overwhelming—but because of obvious partisan bias: Republican Senators didn’t want to vote against a Republican President. In the second trial, all 50 Democratic Senators, but only 7 of the 50 Republican Senators, voted to convict Trump; 67 votes were required to convict.

Some degree of partisan bias is to be expected. Indeed, the votes looked an awful lot like Bill Clinton’s impeachment trial, in which all Democrats and only a handful of Republicans voted to acquit. But Bill Clinton’s trial was nowhere near as open-and-shut as Donald Trump’s. He was being tried for perjury and obstruction of justice, over lies he told about acts that were unethical, but not illegal or un-Constitutional. I’m a little disappointed that no Democrats voted to convict him, but I think acquittal was probably the right verdict. There’s something very odd about being tried for perjury because you lied about something that wasn’t even a crime. Ironically, had it been illegal, he could have invoked the Fifth Amendment instead of lying and they wouldn’t have been able to touch him. So the perjury charge could only stick because the act he lied about wasn’t illegal. But that isn’t what perjury is supposed to be about: It’s supposed to be used for things like false accusations and planted evidence. Refusing to admit that you had an affair that’s honestly no one’s business but your family’s really shouldn’t be a crime, regardless of your station.

So let us not imagine an equivalency here: Bill Clinton was being tried for crimes that were only crimes because he lied about something that wasn’t a crime. Donald Trump was being tried for manipulating other countries to interfere in our elections, obstructing investigations by Congress, and above all attempting to incite a coup. Partisan bias was evident in all three trials, but only Trump’s trials were about sedition against the United States.

That is to say, I expect to see partisan bias; it would be unrealistic not to. But I expect that bias to be limited. I expect there to be lines beyond which partisans will refuse to go. The Republican Party in the United States today has shown us that they have no such lines. (Or if there are, they are drawn far too high. What would he have to do, bomb an American city? He incited an invasion of the Capitol Building, for goodness’ sake! And that was after so terribly mishandling a pandemic that he caused roughly 200,000 excess American deaths!)

Temperamentally, I like to compromise. I want as many people to be happy as possible, even if that means not always getting exactly what I would personally prefer. I wanted to believe that there were reasonable conservatives in our government, professional statespersons with principles who simply had honest disagreements about various matters of policy. I can now confirm that there are at most 7 such persons in the US Senate, and at most 10 such persons in the US House of Representatives. So of the 261 Republicans in Congress, no more than 17 are actually reasonable statespersons who do not let partisan bias override their most basic principles of justice and democracy.

And even these 17 are by no means certain: There were good strategic reasons to vote against Trump, even if the actual justice meant nothing to you. Trump’s net disapproval rating was nearly the highest of any US President ever. Carter and Bush I had periods where they fared worse, but overall fared better. Johnson, Ford, Reagan, Obama, Clinton, Bush II, and even Nixon were consistently more approved than Trump. Kennedy and Eisenhower completely blew him out of the water—at their worst, Kennedy and Eisenhower were nearly 30 percentage points above Trump at his best. With Trump this unpopular, cutting ties with him would make sense for the same reason rats desert a sinking ship. And yet somehow partisan loyalty won out for 94% of Republicans in Congress.

Politics is the mind-killer, and I fear that this sort of extreme depravity on the part of Republicans in Congress will make it all too easy to dismiss conservatism as a philosophy in general. I actually worry about that; not all conservative ideas are wrong! Low corporate taxes actually make a lot of sense. Minimum wage isn’t that harmful, but it’s also not that beneficial. Climate change is a very serious threat, but it’s simply not realistic to jump directly to fully renewable energy—we need something for the transition, probably nuclear energy. Capitalism is overall the best economic system, and isn’t particularly bad for the environment. Industrial capitalism has brought us a golden age. Rent control is a really bad idea. Fighting racism is important, but there are ways in which woke culture has clearly gone too far. Indeed, perhaps the worst thing about woke culture is the way it denies past successes for civil rights and numbs us with hopelessness.

Above all, groupthink is incredibly dangerous. Once we become convinced that any deviation from the views of the group constitutes immorality or even treason, we become incapable of accepting new information and improving our own beliefs. We may start with ideas that are basically true and good, but we are not omniscient, and even the best ideas can be improved upon. Also, the world changes, and ideas that were good a generation ago may no longer be applicable to the current circumstances. The only way—the only way—to solve that problem is to always remain open to new ideas and new evidence.

Therefore my lament is not just for conservatives, who now find themselves represented by craven ideologues; it is also for liberals, who no longer have an opposition party worth listening to. Indeed, it’s a little hard to feel bad for the conservatives, because they voted for these maniacs. Maybe they didn’t know what they were getting? But they’ve had chances to remove most of them, and didn’t do so. At best I’d say I pity them for being so deluded by propaganda that they can’t see the harm their votes have done.

But I’m actually quite worried that the ideologues on the left will now feel vindicated; their caricatured view of Republicans as moustache-twirling cartoon villains turned out to be remarkably accurate, at least for Trump himself. Indeed, it was hard not to think of the ridiculous “destroying the environment for its own sake” of Captain Planet villains when Trump insisted on subsidizing coal power—which by the way didn’t even work.

The key, I think, is to recognize that reasonable conservatives do exist—there just aren’t very many of them in Congress right now. A significant number of Americans want low taxes, deregulation, and free markets but are horrified by Trump and what the Republican Party has become—indeed, at least a few write for the National Review.

The mere fact that an idea comes from Republicans is not a sufficient reason to dismiss that idea. Indeed, I’m going to say something even stronger: The mere fact that an idea comes from a racist or a bigot is not a sufficient reason to dismiss that idea. If the idea itself is racist or bigoted, yes, that’s a reason to think it is wrong. But even bad people sometimes have good ideas.

The reasonable conservatives seem to be in hiding at the moment; I’ve searched for them, and had difficulty finding more than a handful. Yet we must not give up the search. Politics should not appear one-sided.

What happened with GameStop?

Feb 7 JDN 2459253

No doubt by now you’ve heard about the recent bubble in GameStop stock that triggered several trading stops, nearly destroyed a hedge fund, and launched a thousand memes. What really strikes me about this whole thing is how ordinary it is: This is basically the sort of thing that happens in our financial markets all the time. So why are so many people suddenly paying so much attention to it?

There are a few important ways this is unusual: Most importantly, the bubble was triggered by a large number of middle-class people investing small amounts, rather than by a handful of billionaires or hedge funds. It’s also more explicitly collusive than usual, with public statements in writing about what stocks are being manipulated rather than hushed whispers between executives at golf courses. Partly as a consequence of these, the response from the government and the financial industry has been quite different as well, trying to halt trading and block transactions in a way that they would never do if the crisis had been caused by large financial institutions.

If you’re interested in the technical details of what happened, what a short squeeze is and how it can make a hedge fund lose enormous amounts of money unexpectedly, I recommend this summary by KQED. But the gist of it is simple enough: Melvin Capital placed huge bets that GameStop stock would fall in price, and a coalition of middle-class traders coordinated on Reddit to screw them over by buying a bunch of GameStop stock and driving up the price. It worked: Melvin Capital lost something on the order of $3-5 billion in just a few days.

The particular kind of bet they placed is called a short, and it’s a completely routine practice on Wall Street, even though I have never quite understood why it should be allowed at all.

The essence of a short is quite simple: When you short, you are selling something you don’t own. You “borrow” it (it isn’t really even borrowing), and then sell it to someone else, promising to buy it back and return it to where you borrowed it from at some point in the future. This amounts to a bet that the price will decline, so that the price at which you buy it is lower than the price at which you sold it.
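To make the mechanics concrete, here is a minimal sketch in Python with invented prices (the function and the numbers are purely illustrative, not any actual trade):

```python
def short_profit(sell_price, buyback_price, shares=100):
    """Profit from a short: sell borrowed shares now, buy them back later.

    Positive if the price fell; negative if it rose. Ignores borrowing
    fees and margin interest for simplicity.
    """
    return (sell_price - buyback_price) * shares

# The bet pays off if the price falls:
print(short_profit(sell_price=20.00, buyback_price=5.00))    # +1500.0

# But losses are unbounded if the price rises instead, as in a squeeze:
print(short_profit(sell_price=20.00, buyback_price=300.00))  # -28000.0
```

The asymmetry in the second call is the whole danger of a short squeeze: a stock can only fall to zero, but there is no limit to how high it can rise.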

Doesn’t that seem like an odd thing to be allowed to do? Normally you can’t sell something you have merely borrowed. I can’t borrow a car and then sell it; car title in fact exists precisely to prevent this from happening. If I were to borrow your coat and then sell it to a thrift store, I’d have committed larceny. It’s really quite immaterial whether I plan to buy it back afterward; in general we do not allow people to sell things that they do not own.

Now perhaps the problem is that when I borrow your coat or your car, you expect me to return that precise object—not a similar coat or a car of equivalent Blue Book value, but your coat or your car. When I borrow a share of GameStop stock, no one really cares whether it is that specific share which I return—indeed, it would be almost impossible to even know whether it was. So in that way it’s a bit like borrowing money: If I borrow $20 from you, you don’t expect me to pay back that precise $20 bill. Indeed you’d be shocked if I did, since presumably I borrowed it in order to spend it or invest it, so how would I ever get it back?

But you also don’t sell money, generally speaking. Yes, there are currency exchanges and money-market accounts; but these are rather exceptional cases. In general, money is not bought and sold the way coats or cars are.

What about consumable commodities? You probably don’t care too much about any particular banana, sandwich, or gallon of gasoline. Perhaps in some circumstances we might “loan” someone a gallon of gasoline, intending them to repay us at some later time with a different gallon of gasoline. But far more likely, I think, would be simply giving a friend a gallon of gasoline and then not expecting any particular repayment except perhaps a vague offer of providing a similar favor in the future. I have in fact heard someone say the sentence “Can I borrow your sandwich?”, but it felt very odd when I heard it. (Indeed, I responded something like, “No, you can keep it.”)

And in order to actually be shorting gasoline (which is a thing that you, too, can do, perhaps even right now, if you have a margin account on a commodities exchange), it isn’t enough to borrow a gallon with the expectation of repaying a different gallon; you must also sell that gallon you borrowed. And now it seems very odd indeed to say to a friend, “Hey, can I borrow a gallon of gasoline so that I can sell it to someone for a profit?”

The usual arguments for why shorting should be allowed are much like the arguments for exotic financial instruments in general: “Increase liquidity”, “promote efficient markets”. These arguments are so general and so ubiquitous that they essentially amount to the strongest form of laissez-faire: Whatever Wall Street bankers feel like doing is fine and good and part of what makes American capitalism great.

In fact, I was never quite clear why margin accounts are something we decided to allow; margin trading is inherently high-leverage and thus inherently high-risk. Borrowing money in order to arbitrage financial assets doesn’t just seem like a very risky thing to do; it has been implicated, one way or another, in virtually every financial crisis that has ever occurred. It would be an exaggeration to say that leveraged arbitrage is the one single cause of financial crises, but it would be a shockingly small exaggeration. I think it absolutely is fair to say that if leveraged arbitrage did not exist, financial crises would be far fewer and farther between.

Indeed, I am increasingly dubious of the whole idea of allowing arbitrage in general. Some amount of arbitrage may be unavoidable; there may always be people who see that prices are different for the same item in two different markets, and then exploit that difference before anyone can stop them. But this is a bit like saying that theft is probably inevitable: Yes, every human society that has had a system of property ownership (which is most of them—even communal hunter-gatherers have rules about personal property) has had some amount of theft. That doesn’t mean there is nothing we can do to reduce theft, or that we should simply allow theft wherever it occurs.

The moral argument against arbitrage is straightforward enough: You’re not doing anything. No good is produced; no service is provided. You are making money without actually contributing any real value to anyone. You just make money by having money. This is what people in the Middle Ages found suspicious about lending money at interest; but lending money actually is doing something—sometimes people need more money than they have, and lending it to them is providing a useful service for which you deserve some compensation.

A common argument economists make is that arbitrage will make prices more “efficient”, but when you ask them what they mean by “efficient”, the answer they give is that it removes arbitrage opportunities! So the good thing about arbitrage is that it stops you from doing more arbitrage?

And what if it doesn’t stop you? Many of the ways to exploit price gaps (particularly the simplest ones like “where it’s cheap, buy it; where it’s expensive, sell it”) will automatically close those gaps, but it’s not at all clear to me that all the ways to exploit price gaps will necessarily do so. And even if it’s a small minority of market manipulation strategies that exploit gaps without closing them, those are precisely the strategies that will be most profitable in the long run, because they don’t undermine their own success. Then, left to their own devices, markets will evolve to use such strategies more and more, because those are the strategies that work.
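To see the benign case concretely, here is a toy simulation (in Python, with invented prices and an assumed fixed price impact per trade) of that simplest strategy; the worry above is precisely about strategies that don’t behave like this one:

```python
# Toy model: two markets quote different prices for the same asset.
# Each arbitrage trade buys in the cheap market (pushing its price up)
# and sells in the expensive market (pushing its price down).
price_a, price_b = 95.0, 105.0   # invented starting quotes
impact = 0.5                      # assumed price impact per trade

trades = 0
while price_b - price_a > impact:
    price_a += impact   # buying pressure in the cheap market
    price_b -= impact   # selling pressure in the expensive market
    trades += 1

print(f"Gap closed to {price_b - price_a:.2f} after {trades} trades")
# Gap closed to 0.00 after 10 trades
```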

That is, in order for arbitrage to be beneficial, it must always be beneficial; there must be no way to exploit price gaps without inevitably closing those price gaps. If that is not the case, then evolutionary pressure will push more and more of the financial system toward using methods of arbitrage that don’t close gaps—or even exacerbate them. And indeed, when you look at how ludicrously volatile and crisis-prone our financial system has become, it sure looks an awful lot like an evolutionary equilibrium where harmful arbitrage strategies have evolved to dominate.

A world where arbitrage actually led to efficient pricing would be a world where the S&P 500 rises a steady 0.02% per day, each and every day. Maybe you’d see a big move when there was actually a major event, like the start of a war or the invention of a vaccine for a pandemic. You’d probably see a jump up or down of a percentage point or two with each quarterly Fed announcement. But daily moves of even five or six percentage points would be a very rare occurrence—because the real expected long-run aggregate value of the 500 largest publicly-traded corporations in America is what the S&P 500 is supposed to represent, and that is not a number that should change very much very often. The fact that I couldn’t really tell you what that number is without multi-trillion-dollar error bars is so much the worse for anyone who thinks that financial markets can somehow get it exactly right every minute of every day.
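Compounded over a trading year, that 0.02% per day comes to about 5% annually; here is a quick back-of-the-envelope check (assuming roughly 252 trading days per year):

```python
daily = 0.0002   # 0.02% per day
days = 252       # assumed trading days per year

# Compound the daily return over a full trading year.
print(f"{(1 + daily) ** days - 1:.2%}")   # about 5.17% per year
```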

Moreover, it’s not hard to imagine how we might close price gaps without simply allowing people to exploit them. There could be a bunch of economists at the Federal Reserve whose job it is to locate markets where there are arbitrage opportunities, and then a bundle of government funds that they can allocate to buying and selling assets in order to close those price gaps. Any profits made are received by the treasury; any losses taken are borne by the treasury. The economists would get paid a comfortable salary, and perhaps get bonuses based on doing a good job in closing large or important price gaps; but there is no need to give them even a substantial fraction of the proceeds, much less all of it. This is already how our money supply is managed, and it works quite well, indeed obviously much better than an alternative with “skin in the game”: Can you imagine the dystopian nightmare we’d live in if the Chair of the Federal Reserve actually received even a 1% share of the US money supply? (Actually I think that’s basically what happened in Zimbabwe: The people who decided how much money to print got to keep a chunk of the money that was printed.)

I don’t actually think this GameStop bubble is all that important in itself. A decade from now, it may be no more memorable than Left Shark or the Macarena. But what is really striking about it is how little it differs from business-as-usual on Wall Street. The fact that a few million Redditors can gather together to buy a stock “for the lulz” or to “stick it to the Man” and thereby bring hedge funds to their knees is not such a big deal in itself, but it is symptomatic of much deeper structural flaws in our financial system.

I dislike overstatement

Jan 10 JDN 2459225

I was originally planning on titling this post “I hate overstatement”, but I thought that might be itself an overstatement; then I considered leaning into the irony with something like “Overstatement is the worst thing ever”. But no, I think my point best comes across if I exemplify it, rather than present it ironically.

It’s a familiar formula: “[Widespread belief] is wrong! [Extreme alternative view] is true! [Obvious exception]. [Further qualifications]. [Revised, nuanced view that is only slightly different from the widespread belief].”

Here are some examples of the formula (these are not direct quotes but paraphrases of their general views). Note that these are all people I basically agree with, and yet I still find their overstatement annoying:

Bernie Sanders: “Capitalism is wrong! Socialism is better! Well, not authoritarian socialism like the Soviet Union. And some industries clearly function better when privatized. Scandinavian social democracy seems to be the best system.”

Richard Dawkins: “Religion is a delusion! Only atheists are rational! Well, some atheists are also pretty irrational. And most religious people are rational about most things most of the time, and don’t let their religious beliefs interfere too greatly with their overall behavior. Really, what I mean to say is that God doesn’t exist and organized religion is often harmful.”

Black Lives Matter: “Abolish the police! All cops are bastards! Well, we obviously still need some kind of law enforcement system for dealing with major crimes; we can’t just let serial killers go free. In fact, while there are deep-seated flaws in police culture, we could solve a lot of the most serious problems with a few simple reforms like changing the rules of engagement.”

Sam Harris is particularly fond of this formula, so here is a direct quote that follows the pattern precisely:

“The link between belief and behavior raises the stakes considerably. Some propositions are so dangerous that it may even be ethical to kill people for believing them. This may seem an extraordinary claim, but it merely enunciates an ordinary fact about the world in which we live. Certain beliefs place their adherents beyond the reach of every peaceful means of persuasion, while inspiring them to commit acts of extraordinary violence against others. There is, in fact, no talking to some people. If they cannot be captured, and they often cannot, otherwise tolerant people may be justified in killing them in self-defense. This is what the United States attempted in Afghanistan, and it is what we and other Western powers are bound to attempt, at an even greater cost to ourselves and to innocents abroad, elsewhere in the Muslim world. We will continue to spill blood in what is, at bottom, a war of ideas.”

Somehow in a single paragraph he started with the assertion “It is permissible to punish thoughtcrime with death” and managed to qualify it down to “The Afghanistan War was largely justified”. This is literally the difference between a proposition fundamentally antithetical to everything America stands for, and an utterly uncontroversial statement most Americans agree with. Harris often complains that people misrepresent his views, and to some extent this is true, but honestly I think he does this on purpose because he knows that controversy sells. There’s taking things out of context—and then there’s intentionally writing in a style that will maximize opportunities to take you out of context.

I think the idea behind overstating your case is that you can then “compromise” toward your actual view, and thereby seem more reasonable.

If there is some variable X that we want to know the true value of, and I currently believe that it is some value x1 while you believe that it is some larger value x2, and I ask you what you think, you may not want to tell me x2. Instead you might want to report some number even larger than x2, chosen so that my partial adjustment lands me exactly on the belief you want me to hold.

For instance, suppose I think the probability of your view being right is p and the probability of my view being right is 1-p. But you think that the probability of your view being right is q > p and the probability of my view being right is 1-q < 1-p.

I tell you that my view is x1. Then I ask you what your view is. What answer should you give?

Well, you can expect that I’ll revise my belief to a new value px + (1-p)x1, where x is whatever answer you give me. The belief you want me to hold is qx2 + (1-q)x1. So your optimal choice is as follows:

qx2 + (1-q)x1 = px + (1-p)x1

x = x1 + (q/p)(x2 - x1)

Since q > p, we have q/p > 1, so the x you report to me will be larger than your true value x2. You will overstate your case to try to get me to adjust my beliefs more. (Interestingly, if you were less confident in your own belief, you’d report a smaller exaggeration; but that seems like a rare case.)
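Here is a quick numerical check of that formula (a sketch; the particular values of x1, x2, p, and q are arbitrary):

```python
def optimal_report(x1, x2, p, q):
    """The report x that makes the listener's update p*x + (1-p)*x1
    land exactly on the speaker's target belief q*x2 + (1-q)*x1."""
    return x1 + (q / p) * (x2 - x1)

x1, x2 = 10.0, 20.0   # listener's view; speaker's true view
p, q = 0.5, 0.8       # listener's vs. speaker's credence in the speaker

x = optimal_report(x1, x2, p, q)
print(x)                      # 26.0 -- an overstatement of the true x2 = 20
print(p * x + (1 - p) * x1)   # 18.0, exactly the target q*x2 + (1-q)*x1
```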

In a simple negotiation over dividing some resource (e.g. over a raise or a price), this is quite reasonable. When you’re a buyer and I’m a seller, our intentions are obvious enough: I want to sell high and you want to buy low. Indeed, the Nash Equilibrium of this game seems to be that we both make extreme offers then compromise on a reasonable offer, all the while knowing that this is exactly what we’re doing.

But when it comes to beliefs about the world, things aren’t quite so simple.

In particular, we have reasons for our beliefs. (Or at least, we’re supposed to!) And evidence isn’t linear. Even when propositions can be placed on a one-dimensional continuum in this way (and quite frankly we shoehorn far too many complex issues onto a simple “left/right” continuum!), evidence that X = x isn’t partial evidence that X = 2x. A strong argument that the speed of light is 3×10^8 m/s isn’t a weak argument that the speed of light is 3×10^9 m/s. A compelling reason to think that taxes should be over 30% isn’t even a slight reason to think that taxes should be over 90%.

To return to my specific examples: Seeing that Norway is a very prosperous country doesn’t give us reasons to like the Soviet Union. Recognizing that religion is empirically false doesn’t justify calling all religious people delusional. Reforming the police is obviously necessary, and diverting funds to other social services is surely a worthwhile goal; but law enforcement is necessary and cannot simply be abolished. And defending against the real threat of Islamist terrorism in no way requires us to institute the death penalty for thoughtcrime.

I don’t know how most people respond to overstatement. Maybe it really does cause them to over-adjust their beliefs. Hyperbole is a very common rhetorical tactic, and for all I know perhaps it is effective on many people.

But personally, here is my reaction: At the very start, you stated something implausible. That has reduced your overall credibility.

If I continue reading and you then deal with various exceptions and qualifications, resulting in a more reasonable view, I do give you some credit for that; but now I am faced with a dilemma: Either (1) you were misrepresenting your view initially, or (2) you are engaging in a motte-and-bailey doctrine, trying to get me to believe the strong statement while you can only defend the weak statement. Either way I feel like you are being dishonest and manipulative. I trust you less. I am less interested in hearing whatever else you have to say. I am in fact less likely to adopt your nuanced view than I would have been if you’d simply presented it in the first place.

And that’s assuming I have the opportunity to hear your full nuanced version. If all I hear is the sound-bite overstatement, I will come away with an inaccurate assessment of your beliefs. I will have been presented with an implausible claim and evidence that doesn’t support that claim. I will reject your view out of hand, without ever actually knowing what your view truly was.

Furthermore, I know that many others who are listening are not as thoughtful as I am about seeking out detailed context, so even if I know the nuanced version I know—and I think you know—that some people are going to only hear the extreme version.

Maybe what it really comes down to is a moral question: Is this a good-faith discussion where we are trying to reach the truth together? Or is this a psychological manipulation to try to get me to believe what you believe? Am I a fellow rational agent seeking knowledge with you? Or am I a behavior machine that you want to control by pushing the right buttons?

I won’t say that overstatement is always wrong—because that would be an overstatement. But please, make an effort to avoid it whenever you can.

Signaling and the Curse of Knowledge

Jan 3 JDN 2459218

I received several books for Christmas this year, and the one I was most excited to read first was The Sense of Style by Steven Pinker. Pinker is exactly the right person to write such a book: He is both a brilliant linguist and cognitive scientist and also an eloquent and highly successful writer. There are two other books on writing that I rate at the same tier: On Writing by Stephen King, and The Art of Fiction by John Gardner. Don’t bother with style manuals from people who only write style manuals; if you want to learn how to write, learn from people who are actually successful at writing.

Indeed, I knew I’d love The Sense of Style as soon as I read its preface, containing some truly hilarious takedowns of Strunk & White. And honestly Strunk & White are among the best standard style manuals; they at least actually manage to offer some useful advice while also being stuffy, pedantic, and often outright inaccurate. Most style manuals only do the second part.

One of Pinker’s central focuses in The Sense of Style is on The Curse of Knowledge, an all-too-common bias in which knowing things makes us unable to appreciate the fact that other people don’t already know it. I think I succumbed to this failing most greatly in my first book, Special Relativity from the Ground Up, in which my concept of “the ground” was above most people’s ceilings. I was trying to write for high school physics students, and I think the book ended up mostly being read by college physics professors.

The problem is surely a real one: After years of gaining expertise in a subject, we are all liable to forget the difficulty of reaching our current summit and automatically deploy concepts and jargon that only a small group of experts actually understand. But I think Pinker underestimates the difficulty of escaping this problem, because it’s not just a cognitive bias that we all suffer from time to time. It’s also something that our society strongly incentivizes.

Pinker points out that a small but nontrivial proportion of published academic papers are genuinely well written, using this to argue that obscurantist jargon-laden writing isn’t necessary for publication; but he didn’t seem to even consider the fact that nearly all of those well-written papers were published by authors who already had tenure or even distinction in the field. I challenge you to find a single paper written by a lowly grad student that could actually get published without being full of needlessly technical terminology and awkward passive constructions: “A murine model was utilized for the experiment, in an acoustically sealed environment” rather than “I tested using mice and rats in a quiet room”. This is not because grad students are more thoroughly entrenched in the jargon than tenured professors (quite the contrary), nor because grad students are worse writers in general (that one could really go either way), but because grad students have more to prove. We need to signal our membership in the tribe, whereas once you’ve got tenure—or especially once you’ve got an endowed chair or something—you have already proven yourself.

Pinker seems to briefly touch on this insight (p. 69) without fully appreciating its significance: “Even when we have an inkling that we are speaking in a specialized lingo, we may be reluctant to slip back into plain speech. It could betray to our peers the awful truth that we are still greenhorns, tenderfoots, newbies. And if our readers do know the lingo, we might be insulting their intelligence while spelling it out. We would rather run the risk of confusing them while at least appearing to be sophisticated than take a chance at belaboring the obvious while striking them as naive or condescending.”

What we are dealing with here is a signaling problem. The fact that one can write better once one is well-established is the phenomenon of countersignaling, where one who has already established their status stops investing in signaling.

Here’s a simple model for you. Suppose each person has a level of knowledge x, which they are trying to demonstrate. They know their own level of knowledge, but nobody else does.

Suppose that when we observe someone’s knowledge, we get two pieces of information: We have an imperfect observation of their true knowledge which is x+e, the real value of x plus some amount of error e. Nobody knows exactly what the error is. To keep the model as simple as possible I’ll assume that e is drawn from a uniform distribution between -1 and 1.

Finally, assume that we are trying to select people above a certain threshold: Perhaps we are publishing in a journal, or hiring candidates for a job. Let’s call that threshold z. If x < z-1, then since e can never be larger than 1, we will immediately observe that they are below the threshold and reject them. If x > z+1, then since e can never be smaller than -1, we will immediately observe that they are above the threshold and accept them.

But when z-1 < x < z+1, we may think they are above the threshold when they actually are not (if e is positive), or think they are not above the threshold when they actually are (if e is negative).

So then let’s say that they can invest in signaling by putting in some amount of visible work y (like citing obscure papers or using complex jargon). This additional work may be costly and provide no real value in itself, but it can still be useful so long as one simple condition is met: It’s easier to do if your true knowledge x is high.

In fact, for this very simple model, let’s say that you are strictly limited by the constraint that y <= x. You can’t show off what you don’t know.

If your true value x > z, then you should choose y = x. Then, upon observing your signal, we know immediately that you must be above the threshold.

But if your true value x < z, then you should choose y = 0, because there’s no point in signaling that you were almost at the threshold. You’ll still get rejected.

Yet remember before that only those with z-1 < x < z+1 actually need to bother signaling at all. Those with x > z+1 can actually countersignal, by also choosing y = 0. Since you already have tenure, nobody doubts that you belong in the club.

This means we’ll end up with three groups: Those with x < z, who don’t signal and don’t get accepted; those with z < x < z+1, who signal and get accepted; and those with x > z+1, who don’t signal but get accepted. Then life will be hardest for those who are just above the threshold, who have to spend enormous effort signaling in order to get accepted—and that sure does sound like grad school.
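The whole model fits in a few lines of code; here is a minimal sketch (in Python, with an arbitrary threshold and a small random population) that shows the three groups:

```python
import random

def signaling_effort(x, z):
    """Optimal visible effort y (with y <= x) in the toy model.

    Below the threshold z, signaling is pointless; more than 1 above it,
    the noisy observation x + e (with |e| <= 1) already proves x > z,
    so the well-established can countersignal with y = 0.
    """
    if x < z:
        return 0.0   # can't clear the bar, so don't bother
    if x > z + 1:
        return 0.0   # countersignal: status is already unambiguous
    return x         # just above the bar: maximal effort to prove x > z

z = 5.0  # arbitrary acceptance threshold
random.seed(0)
for x in sorted(random.uniform(z - 2, z + 2) for _ in range(8)):
    print(f"knowledge {x:.2f} -> signaling effort {signaling_effort(x, z):.2f}")
```

All of the signaling effort ends up concentrated in the band z < x < z+1, just above the threshold.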

You can make the model more sophisticated if you like: Perhaps the error isn’t uniformly distributed, but follows some other distribution with wider support (like a normal distribution, or a logistic distribution); perhaps the signaling isn’t perfect, but itself has some error; and so on. With such additions, you can get a result where the least-qualified still signal a little bit so they get some chance, and the most-qualified still signal a little bit to avoid a small risk of being rejected. But it’s a fairly general phenomenon that those closest to the threshold will be the ones who have to spend the most effort in signaling.

This reveals a disturbing overlap between the Curse of Knowledge and Impostor Syndrome: We write in impenetrable obfuscationist jargon because we are trying to conceal our own insecurity about our knowledge and our status in the profession. We’d rather you not know what we’re talking about than have you realize that we don’t know what we’re talking about.

For the truth is, we don’t know what we’re talking about. And neither do you, and neither does anyone else. This is the agonizing truth of research that nearly everyone doing research knows, but one must be either very brave, very foolish, or very well-established to admit out loud: It is in the nature of doing research on the frontier of human knowledge that there is always far more that we don’t understand about our subject than that we do understand.

I would like to be more open about that. I would like to write papers saying things like “I have no idea why it turned out this way; it doesn’t make sense to me; I can’t explain it.” But to say that the profession disincentivizes speaking this way would be a grave understatement. It’s more accurate to say that the profession punishes speaking this way to the full extent of its power. You’re supposed to have a theory, and it’s supposed to work. If it doesn’t actually work, well, maybe you can massage the numbers until it seems to, or maybe you can retroactively change the theory into something that does work. Or maybe you can just not publish that paper and write a different one.

Here is what a graph of one million published z-scores in academic journals looks like: a bell curve, except that almost all the values between -2 and 2 are mysteriously missing.

If we were actually publishing all the good science that gets done, it would in fact be a very nice bell curve. All those missing values are papers that never got published, or results that were excluded from papers, or statistical analyses that were massaged, in order to get a p-value less than the magical threshold for publication of 0.05. (For the statistically uninitiated, a z-score less than -2 or greater than +2 generally corresponds to a p-value less than 0.05, so these are effectively the same constraint.)
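That correspondence between z-scores and p-values is easy to verify directly (a quick check using scipy’s standard normal CDF, for the usual two-tailed test):

```python
from scipy.stats import norm

# Two-tailed p-value for a given z-score under the standard normal.
for z in (1.96, 2.00, 2.50):
    p = 2 * (1 - norm.cdf(z))
    print(f"z = {z:.2f} -> p = {p:.4f}")
# z = 1.96 -> p = 0.0500
# z = 2.00 -> p = 0.0455
# z = 2.50 -> p = 0.0124
```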

I have literally never read a single paper published in an academic journal in the last 50 years that said in plain language, “I have no idea what’s going on here.” And yet I have read many papers—probably most of them, in fact—where that would have been an appropriate thing to say. It’s actually quite a rare paper, at least in the social sciences, that actually has a theory good enough to really precisely fit the data and not require any special pleading or retroactive changes. (Often the bar for a theory’s success is lowered to “the effect is usually in the right direction”.) Typically results from behavioral experiments are bizarre and baffling, because people are a little screwy. It’s just that nobody is willing to stake their career on being that honest about the depth of our ignorance.

This is a deep shame, for the greatest advances in human knowledge have almost always come from people recognizing the depth of their ignorance. Paradigms never shift until people recognize that the one they are using is defective.

This is why it’s so hard to beat the Curse of Knowledge: You need to signal that you know what you’re talking about, and the truth is you probably don’t, because nobody does. So you need to sound like you know what you’re talking about in order to get people to listen to you. You may be doing nothing more than making educated guesses based on extremely limited data, but that’s actually the best anyone can do; those other people saying they have it all figured out are either doing the same thing, or they’re doing something even less reliable than that. So you’d better sound like you have it all figured out, and that’s a lot more convincing when you “utilize a murine model” than when you “use rats and mice”.

Perhaps we can at least push a little bit toward plainer language. It helps to be addressing a broader audience: It is both blessing and curse that whatever I put on this blog is what you will read, without any gatekeepers in my path. I can use plainer language here if I so choose, because no one can stop me. But of course there’s a signaling risk here as well: The Internet is a public place, and potential employers can read this as well, and perhaps decide they don’t like me speaking so plainly about the deep flaws in the academic system. Maybe I’d be better off keeping my mouth shut, at least for a while. I’ve never been very good at keeping my mouth shut.

Once we get established in the system, perhaps we can switch to countersignaling, though even this doesn’t always happen. I think there are two reasons this can fail: First, you can almost always try to climb higher. Once you have tenure, aim for an endowed chair. Once you have that, try to win a Nobel. Second, once you’ve spent years of your life learning to write in a particular stilted, obscurantist, jargon-ridden way, it can be very difficult to change that habit. People have been rewarding you all your life for writing in ways that make your work unreadable; why would you want to take the risk of suddenly making it readable?

I don’t have a simple solution to this problem, because it is so deeply embedded. It’s not something that one person or even a small number of people can really fix. Ultimately we will need to, as a society, start actually rewarding people for speaking plainly about what they don’t know. Admitting that you have no clue will need to be seen as a sign of wisdom and honesty rather than a sign of foolishness and ignorance. And perhaps even that won’t be enough: Because the fact will still remain that knowing what you know that other people don’t know is a very difficult thing to do.

Adversity is not a gift

Nov 29 JDN 2459183

For the last several weeks I’ve been participating in a program called “positive intelligence” (which they abbreviate “PQ” even though that doesn’t make sense); it’s basically a self-help program that is designed to improve mood and increase productivity. I am generally skeptical of such things, and I could tell from the start that it was being massively oversold, but I had the opportunity to participate for free, and I looked into the techniques involved and most of them seem to be borrowed from cognitive-behavioral therapy and mindfulness meditation.

Overall, I would say that the program has had small but genuine benefits for me. I think the most helpful part was actually getting the chance to participate in group sessions (via Zoom, of course) with others also going through the program. That kind of mutual social support can make a big difference. The group I joined was composed entirely of fellow economists (some other grad students, some faculty), so we had a lot of shared experiences.

Some of the techniques feel very foolish, and others just don’t seem to work for me; but I did find at least some of the meditation techniques (which they annoyingly insist on calling by the silly name “PQ reps”) have helped me relax.

But there’s one part of the PQ program in particular that I just can’t buy into, and this is the idea that adversity is a gift and an opportunity.

They call it the “Sage perspective”: You observe the world without judging what is good or bad, and any time you think something is bad, you find a way to transform it into a gift and an opportunity. The claim is that everything—or nearly everything—that happens to you can make you better off. There’s a lot of overlap here with the attitude “Everything happens for a reason”.

I don’t doubt that sincerely believing this would make you happier. Nevertheless, it is obviously false.

If indeed adversity were a gift, we would seek it out. If getting fired or going bankrupt or getting sick were a gift and an opportunity, we’d work to make these things happen.

Yes, it’s true that sometimes an event which seems bad at the time can turn out to have good consequences in the long run. This is simply because we are unable to foresee all future ramifications. Sometimes things turn out differently than you think they will. But most of the time, when something seems bad, it is actually bad.

There might be some small amount of discomfort or risk that would be preferable to a life of complete safety and complacency; but we are perfectly capable of seeking out whatever discomfort or risk we choose. Most of us live with far more discomfort and risk than we would prefer, and simply have no choice in the matter.

If adversity were a gift, people would thank you for giving it to them. “Thanks for dumping me!” “Thanks for firing me!” “Thanks for punching me!” These aren’t the sort of thing we hear very often (at least not sincerely).

I think this is fairly obvious, honestly, so I won’t belabor it any further. But it raises a question: Is there a way to salvage the mental health benefits of this attitude while abandoning its obvious falsehood?

“Everything happens for a reason” doesn’t work; we live in a universe of deep randomness, ruled by the blind idiot gods of natural law.

“Every cloud has a silver lining” is better; but clearly not every bad thing has an upside, or if it does the upside can be so small as to be utterly negligible. (What was the upside of Rwandan genocide?) Restricted to ordinary events like getting fired this one works pretty well; but it obviously fails for the most extreme traumas, and doesn’t seem particularly helpful for the death of a loved one either.

“What doesn’t kill me makes me stronger” is better still, but clearly not true in every case; some bad events that don’t actually kill us can traumatize us and make the rest of our lives harder. Perhaps “What doesn’t permanently damage me makes me stronger”?

I think the version of this attitude that I have found closest to the truth is “Everything is raw material”. Sometimes bad things just happen: Bad luck, or bad actions, can harm just about anyone at just about any time. But it is within our power to decide how we will respond to what happens to us, and wallowing in despair is almost never the best response.

Thus, while it is foolish to see adversity as a gift, it is not so foolish to see it as an opportunity. Don’t try to pretend that bad things aren’t bad. There’s no sense in denying that we would prefer some outcomes over others, and we feel hurt or disappointed when things don’t turn out how we wanted. Yet even what is bad can still contain within it chances to learn or make things better.

How men would benefit from a less sexist world

Nov 22 JDN 2459176

November 19 is International Men’s Day, so this week seemed an appropriate time for this post.

It’s obvious that a less sexist world would benefit women. But there are many ways in which it would benefit men as well.

First, there is the overwhelming pressure of conforming to norms of masculinity. I don’t think most women realize just how oppressive this is, how nearly every moment of our lives we are struggling to conform to a particular narrow vision of what it is to be a man, from which even small deviations can be severely punished. A less sexist world would mean a world where these pressures are greatly reduced.

Second, there is the fact that men are subjected to far more violence than women. Men are three times as likely to be murdered as women. This violence has many causes—indeed, the fact that men are much more likely to be both victims and perpetrators of violence nearly everywhere in the world suggests genetic causes—but a less sexist world could be a world with less violence in general, and men would benefit most from that.

Third, a less sexist world is a world where men and women feel more equal and more comfortable with one another, a world in which relationships between men and women can be deeper and more authentic. Another part of the male experience that most women don’t seem to understand is how incredibly painful it is to be treated as “Schrodinger’s Rapist”, where you are considered a potential predator by default and have to constantly signal that you are not threatening. To be clear, the problem isn’t that women are trying to protect themselves from harm; it’s that their risk of being harmed is high enough that they have to do this. I’m not saying women should stop trying to play it safe around men; I’m saying that we should be trying to find ways to greatly reduce the risk of harm that they face—and that doing so would benefit both women, who would be safer, and men, who wouldn’t have to be treated as potential predators at all times.

Feminists have actually done a lot of things that directly benefit men, including removing numerous laws that discriminate against men.

Are there some men who stand to be harmed by a less sexist society? Sure. Rapists clearly stand to be harmed. Extremely misogynist men will be pressured to change, which could be harmful to them. And, to be clear, it won’t all be benefits even for the rest of us. We will have to learn new things, change how we behave, challenge some of our most deep-seated norms and attitudes. But overall, I think that most men are already better off because of feminism, and would continue to be even better off still if the world became more feminist.

Why does this matter? Wouldn’t the benefits to women be a sufficient reason to make a less sexist world, even if it did end up harming most men?

Well, yes and no: It actually depends on how much it would harm men. If those harms were actually large enough, they would present a compelling reason not to make a more feminist world. That is clearly not the case, and this should be obvious to just about anyone; but it’s not a logical impossibility. Indeed, even knowing that the harms are not enough to justify abandoning the entire project, they could still be large enough to justify slowing it down or seeking other approaches to solving the problems feminism was intended to solve.

But yes, clearly feminism would be worth doing even if it had no net benefit to men. Yet, the fact that it does have a net benefit to most men is useful information.

First, it tells us that the world is nonzero-sum, that we can make some people better off without making others equally worse off. This is a deep and important insight that I think far too few people have really internalized.

Second, it provides numerous strategic benefits for recruiting men to the cause. Consider the following two potential sales pitches for feminism:

“You benefit from this system, but women are harmed by it. You should help us change it, even though that would harm you! If you don’t, you’re a bad person!”

“Women are harmed most by this system, but you are harmed by it too. You can help us change it, and we’ll make almost everyone better off, including you!”

Which of those two sales pitches seems more likely to convince someone who is on the fence?

Consider in particular men who aren’t particularly well-off themselves. If you are an unemployed, poor Black man, you probably find that the phrase “male privilege” rings a little hollow. Yes, perhaps you would be even worse off if you were a woman, but you’re not doing great right now, and you probably aren’t thrilled with the idea of risking being made even worse off, even by changes that you would otherwise agree are beneficial to society as a whole.

Similar reasoning applies to other “privileged” groups: Poor White men dying from treatable diseases because they can’t afford healthcare probably aren’t terribly moved by the phrase “White privilege”. Emphasizing the ways that your social movement will harm people seems like a really awful way of recruiting support, doesn’t it?

Yes, sometimes things that are overall good will harm some people, and we have to accept that. But the world is not always this way, and in fact some of the greatest progress in human civilization has been of the sort that benefits nearly everyone. Indeed, perhaps we should focus our efforts on the things that will benefit the most people, and then maybe come back later for things that benefit some at the expense of others?

What’s wrong with “should”?

Nov 8 JDN 2459162

I have been a patient in cognitive behavioral therapy (CBT) for many years now. The central premise that thoughts can influence emotions is well-founded, and the results of CBT are empirically well supported.

One of the central concepts in CBT is cognitive distortions: There are certain systematic patterns in how we tend to think, which often result in beliefs and emotions that are disproportionate to reality.

Most of the cognitive distortions CBT deals with make sense to me—and I am well aware that my mind applies them frequently: All-or-nothing, jumping to conclusions, overgeneralization, magnification and minimization, mental filtering, discounting the positive, personalization, emotional reasoning, and labeling are all clearly distorted modes of thinking that nevertheless are extremely common.

But there’s one “distortion” on CBT lists that always bothers me: “should statements”.

Listen to this definition of what is allegedly a cognitive distortion:

Another particularly damaging distortion is the tendency to make “should” statements. Should statements are statements that you make to yourself about what you “should” do, what you “ought” to do, or what you “must” do. They can also be applied to others, imposing a set of expectations that will likely not be met.

When we hang on too tightly to our “should” statements about ourselves, the result is often guilt that we cannot live up to them. When we cling to our “should” statements about others, we are generally disappointed by their failure to meet our expectations, leading to anger and resentment.

So any time we use “should”, “ought”, or “must”, we are guilty of distorted thinking? In other words, all of ethics is a cognitive distortion? The entire concept of obligation is a symptom of a mental disorder?

Different sources on CBT will define “should statements” differently, and sometimes they offer a more nuanced definition that doesn’t have such extreme implications:

Individuals thinking in ‘shoulds’, ‘oughts’, or ‘musts’ have an ironclad view of how they and others ‘should’ and ‘ought’ to be. These rigid views or rules can generate feelings of anger, frustration, resentment, disappointment and guilt if not followed.

Example: You don’t like playing tennis but take lessons as you feel you ‘should’, and that you ‘shouldn’t’ make so many mistakes on the court, and that your coach ‘ought to’ be stricter on you. You also feel that you ‘must’ please him by trying harder.

This is particularly problematic, I think, because of the All-or-Nothing distortion which does genuinely seem to be common among people with depression: Unless you are very clear from the start about where to draw the line, our minds will leap to saying that all statements involving the word “should” are wrong.

I think what therapists are trying to capture with this concept is something like having unrealistic expectations, or focusing too much on what could or should have happened instead of dealing with the actual situation you are in. But many seem to be unable to articulate that clearly, and instead end up asserting that the entire concept of moral obligation is a cognitive distortion.

There may be a deeper error here as well: The way we study mental illness doesn’t involve enough comparison with the control group. Psychologists are accustomed to asking the question, “How do people with depression think?”; but they are not accustomed to asking the question, “How do people with depression think compared to people who don’t have it?” If you want to establish that A causes B, it’s not enough to show that those with B have A; you must also show that A is less common among those who don’t have B.

This is an extreme example for illustration, but suppose someone became convinced that depression is caused by having a liver. They studied a bunch of people with depression, and found that they all had livers; hypothesis confirmed! Clearly, we need to remove the livers, and that will cure the depression.

The best example I can find of a study that actually asked that question compared nursing students and found that cognitive distortions explain about 20% of the variance in depression. This is a significant amount—but still leaves a lot unexplained. And most of the research on depression doesn’t even seem to think to compare against people without depression.

My impression is that some cognitive distortions are genuinely more common among people with depression—but not all of them. There is an ongoing controversy over what’s called the depressive realism effect, which is the finding that in at least some circumstances the beliefs of people with mild depression seem to be more accurate than the beliefs of people with no depression at all. The result is controversial both because it seems to threaten the paradigm that depression is caused by distortions, and because it seems to be very dependent on context; sometimes depression makes people more accurate in their beliefs, other times it makes them less accurate.

Overall, I am inclined to think that most people have a variety of cognitive distortions, but we only tend to notice those distortions when they begin causing distress—such as when they are involved in depression. Human thinking in general seems to be a muddled mess of heuristics, and the wonder is that we function as well as we do.

Does this mean that we should stop trying to remove cognitive distortions? Not at all. Distorted thinking can be harmful even if it doesn’t cause you distress: The obvious example is a fanatical religious or political belief that leads you to harm others. And indeed, recognizing and challenging cognitive distortions is a highly effective treatment for depression.

Actually, I created a simple cognitive distortion worksheet based on the TEAM-CBT approach developed by David Burns that has helped me a great deal in a remarkably short time. You can download the worksheet yourself and try it out. Start with a blank page and write down as many negative thoughts as you can, then pick 3-5 that seem particularly extreme or unlikely. Make a copy of the cognitive distortion worksheet for each of those thoughts and follow it step by step. In particular, don’t skip the step “This thought shows the following good things about me and my core values:”; it often feels the strangest, but it’s a critical part of what makes the TEAM-CBT approach better than conventional CBT.

So yes, we should try to challenge our cognitive distortions. But the mere fact that a thought is distressing doesn’t imply that it is wrong, and giving up on the entire concept of “should” and “ought” is throwing out a lot of babies with that bathwater.

We should be careful about labeling thoughts as cognitive distortions simply because depressed people have them—and “should statements” are a clear example where many psychologists have overreached in what they characterize as a distortion.

Trump will soon be gone. But this isn’t over.

Nov 8 JDN 2459162

After a frustratingly long wait for several states to finish counting their mail-in ballots (particularly Pennsylvania, Nevada, and Arizona), Biden has officially won the Presidential election. While it was far too close in a few key states, this is largely an artifact of the Electoral College: Biden’s actual popular vote advantage was over 4 million votes. We now have our first Vice President who is a woman of color. I think it’s quite reasonable for us all to share a long sigh of relief at this result.

We have won this battle. But the war is far from over.

First, there is the fact that we are still in a historic pandemic and economic recession. I have no doubt that Biden’s policy response will be better than Trump’s; but he hasn’t taken office yet, and much of the damage has already been done. Things are not going to get much better for quite a while yet.

Second, while Biden is a pretty good candidate, he does have major flaws.

Above all, Biden is still far too hawkish on immigration and foreign policy. He won’t chant “build the wall!”, but he’s unlikely to tear down all of our border fences or abolish ICE. He won’t rattle the saber with Iran or bomb civilians indiscriminately, but he’s unlikely to end the program of assassination drone strikes. Trump has severely, perhaps irrevocably, damaged the Pax Americana with his ludicrous trade wars, alienation of our allies, and fawning over our enemies; but whether or not Biden can restore America’s diplomatic credibility, I have no doubt that he’ll continue to uphold—and deploy—America’s military hegemony. Indeed, the failure of the former could only exacerbate the latter.

Biden’s domestic policy is considerably better, but even there he doesn’t go far enough. His healthcare plan is a substantial step forward, improving upon the progress already made by Obamacare; but it’s still not the single-payer healthcare system we really need. He has some good policy ideas for directly combating discrimination, but isn’t really addressing the deep structural sources of systemic racism. His anti-poverty programs would be a step in the right direction, but are clearly insufficient.

Third, Democrats did not make significant gains in Congress, and while they kept the majority in the House, they are unlikely to gain control of the Senate. Because the Senate is so powerful and Mitch McConnell is so craven, this could be disastrous for Biden’s ability to govern.

But there is an even more serious problem we must face as a country: Trump got 70 million votes. Even after all he did—his endless lies, his utter incompetence, his obvious corruption—and all that happened—the mishandled pandemic, the exacerbated recession—there were still 70 million people willing to vote for Trump. I said it from the beginning: I have never feared Trump nearly so much as I fear an America that could elect him.

Yes, of course he would have had a far worse shot if our voting system were better: Several viable parties, range voting, and no Electoral College would have all made things go very differently than they did in 2016. But the fact remains that tens of millions of Americans were willing to vote for this man not once, but twice.

What can explain the support of so many people for such an obviously terrible leader?

First, there is misinformation: Our mass media is biased and can give a very distorted view of the world. Someone whose view of world events was shaped entirely by right-wing media like Fox News (let alone OAN) might not realize how terrible Trump is, or might be convinced that Biden is somehow even worse. Yet today, in the 21st century, our access to information is virtually unlimited. Anyone who really wanted to know what Trump is like would be able to find out—so whatever ignorance or misinformation Trump voters had, they bear the greatest responsibility for it.

Then, there is discontent: Growth in total economic output has greatly outpaced growth in real standard of living for most Americans. While real per-capita GDP rose from $26,000 in 1974 to $56,000 today (a factor of 2.15, or 1.7% per year), real median personal income only rose from $25,000 to $36,000 (a factor of 1.44, or 0.8% per year). This reflects the fact that more and more of our country’s wealth is being concentrated in the hands of the rich. Combined with dramatically increased costs of education and healthcare, this means that most American families really don’t feel like their standard of living has meaningfully improved in a generation or more.
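
The annualized rates above are easy to check. Here is a minimal sketch, using the dollar figures as quoted and treating “today” as 2020, i.e. 46 years after 1974:

```python
# Compound annual growth rate implied by a start value and an end value.
def annualized(start, end, years):
    return (end / start) ** (1 / years) - 1

years = 2020 - 1974  # 46 years
print(f"per-capita GDP: {56_000 / 26_000:.2f}x, "
      f"{annualized(26_000, 56_000, years):.1%} per year")  # 2.15x, ~1.7%
print(f"median income:  {36_000 / 25_000:.2f}x, "
      f"{annualized(25_000, 36_000, years):.1%} per year")  # 1.44x, ~0.8%
```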

Yet if people are discontent with how our economy is run… why would they vote for Donald Trump, who epitomizes everything that is wrong with that system? The Democrats have not done enough to fight rising inequality and spiraling healthcare costs, but they have at least done something—raising taxes here, expanding Medicaid there. This is not enough, since it involves only tweaking the system at the edges rather than solving the deeper structural problems—but it has at least some benefit. The Republicans at their best have done nothing, and at their worst actively done everything in their power to exacerbate rising inequality. And Trump is no different in this regard than any other Republican; he promised more populist economic policy, but did not deliver it in any way. Do people somehow not see that?

I think we must face up to the fact that racism and sexism are clearly a major part of what motivates supporters of Trump. Trump’s core base consists of old, uneducated White men. Women are less likely to support him, and young people, educated people, and people of color are far less likely to support him. The race gap is staggering: A mere 8% of Black people support Trump, while 54% of White people do. While Asian and Hispanic voters are not quite so univocal, still it’s clear that if only non-White people had voted, Biden would have won an utter landslide and might have taken every state—yes, likely even Florida, where Cuban-Americans did actually lean slightly toward Trump. The age and education gaps are also quite large: Among those under 30, only 30% support Trump, while among those over 65, 52% do. Among White people without a college degree, 64% support Trump, while among White people with a college degree, only 38% do. The gender gap is smaller, but still significant: 48% of men but only 42% of women support Trump. (The gender gap was smaller this year than in 2016, which could reflect the fact that Clinton was running for President while Harris was only running for Vice President.)

We shouldn’t ignore the real suffering and discontent that rising inequality has wrought, nor should we dismiss the significance of right-wing propaganda. Yet when it comes right down to it, I don’t see how we can explain Trump’s popularity without recognizing that an awful lot of White men in America are extremely racist and sexist. The most terrifying thing about Trump is that millions of Americans do know what he’s like—and they’re okay with that.

Trump will soon be gone. But many others like him remain. We need to find a way to fix this, or the next racist, misogynist, corrupt, authoritarian psychopath may turn out to be a lot less foolish and incompetent.

What meritocracy trap?

Nov 1 JDN 2459155

So I just finished reading The Meritocracy Trap by Daniel Markovits.

The basic thesis of the book is that America’s rising inequality is not due to a defect in our meritocratic ideals, but is in fact their ultimate fruition. Markovits implores us to reject the very concept of meritocracy, and replace it with… well, something, and he’s never very clear about exactly what.

The most frustrating thing about reading this book is trying to figure out where Markovits draws the line for “elite”. He rapidly jumps between talking about the upper quartile, the upper decile, the top 1%, and even the top 0.1% or top 0.01% while weaving his narrative. The upper quartile of the US contains about 75 million people, roughly the population of Germany; the top 0.1% contains only about 300,000, fewer people than Iceland (which itself has fewer people than Long Beach); and the top 0.01% a mere 30,000. Inequality which concentrates wealth in the top quartile of Americans is a much less serious problem than inequality which concentrates wealth in the top 0.01%. It could still be a problem—those lower three quartiles are people too—but it is definitely not nearly as bad.

I think it’s particularly frustrating to me personally, because I am an economist, which means both that such quantitative distinctions are important to me, and also that whether or not I myself am in this “elite” depends upon which line you are drawing. Do I have a post-graduate education? Yes. Was I born into the upper quartile? Not quite, but nearly. Was I raised by married parents in a stable home? Certainly. Am I in the upper decile and working as a high-paid professional? Hopefully I will be soon. Will I enter the top 1%? Maybe, maybe not. Will I join the top 0.1%? Probably not. Will I ever be in the top 0.01% and a captain of industry? Almost certainly not.

So, am I one of the middle class who are suffering alienation and stagnation, or one of the elite who are devouring themselves with cutthroat competition? Based on BLS statistics for economists and the jobs I’ve been applying for, my long-term household income is likely to be about 20-50% higher than my parents’; this seems like neither the painful stagnation he attributes to the middle class nor the unsustainable skyrocketing of elite incomes. (Even 50% in 30 years is only 1.4% per year, about our average rate of real GDP growth.) Marxists would no doubt call me petit bourgeoisie; but isn’t that sort of the goal? Don’t we want as many people as possible to live comfortable upper-middle-class lives in white-collar careers?

Markovits characterizes—dare I say caricatures—the habits of the middle-class versus the elite, and once again I and most people I know cross-cut them: I spend more time with friends than family (elite), but I cook familiar foods, not fancy dinners (middle); I exercise fairly regularly and don’t watch much television (elite) but play a lot of video games and sleep a lot as well (middle). My web searches involve technology and travel (elite), but also chronic illness (middle). I am a donor to Amnesty International (elite) but also play tabletop role-playing games (middle). I have a functional, inexpensive car (middle) but a top-of-the-line computer (elite)—then again that computer is a few years old now (middle). Most of the people I hang out with are well-educated (elite) but struggling financially (middle), civically engaged (elite) but pessimistic (middle). I rent my apartment and have a lot of student debt (middle) but own stocks (elite). (The latter seemed like a risky decision before the pandemic, but as stock prices have risen and student loan interest was put on moratorium, it now seems positively prescient.) So which class am I, again?

I went to public school (middle) but have a graduate degree (elite). I grew up in Ann Arbor (middle) but moved to Irvine (elite). Then again my bachelor’s was at a top-10 institution (elite) but my PhD will be at only a top-50 (middle). The beautiful irony there is that the top-10 institution is the University of Michigan and the top-50 institution is the University of California, Irvine. So I can’t even tell which class each of those events is supposed to represent! Did my experience of Ann Arbor suddenly shift from middle class to elite when I graduated from public school and started attending the University of Michigan—even though about a third of my high school cohort did exactly that? Was coming to UCI an elite act because it’s a PhD in Orange County, or a middle-class act because it’s only a top-50 university?

If the gap between these two classes is such a wide chasm, how am I straddling it? I honestly feel quite confident in characterizing myself as precisely the upwardly-mobile upper-middle class that Markovits claims no longer exists. Perhaps we’re rarer than we used to be; perhaps our status is more precarious; but we plainly aren’t gone.

Markovits keeps talking about “radical differences” “not merely in degree but in kind” between “subordinate” middle-class workers and “superordinate” elite workers, but if the differences are really that stark, why is it so hard to tell which group I’m in? From what I can see, the truth seems less like a sharp divide between middle-class and upper-class, and more like an increasingly steep slope from middle-class to upper-middle class to upper-class to rich to truly super-rich. If I had to put numbers on this, I’d say the slope runs through annual household incomes of about $50,000, $100,000, $200,000, $400,000, $1 million, and $10 million. (And yet perhaps I should add more categories: Even someone who makes $10 million a year has only pocket change next to Elon Musk or Jeff Bezos.) The slope has gotten steeper over time, but it hasn’t (yet?) turned into a sharp cliff the way Markovits describes. America’s Lorenz curve is clearly too steep, but it doesn’t have a discontinuity as far as I can tell.
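
To illustrate what I mean by a steep slope without a discontinuity, here is a minimal sketch using a hypothetical lognormal income distribution (the parameters are made up for illustration, not fitted to US data): the top end is very steep, yet the Lorenz curve is perfectly smooth.

```python
import numpy as np

# Hypothetical lognormal income distribution: steep at the top,
# but its Lorenz curve has no discontinuity anywhere.
rng = np.random.default_rng(0)
incomes = np.sort(rng.lognormal(mean=11.0, sigma=1.0, size=100_000))

cum_people = np.arange(1, incomes.size + 1) / incomes.size
cum_income = np.cumsum(incomes) / incomes.sum()  # Lorenz curve values

# Gini coefficient = 1 - 2 * (area under the Lorenz curve)
gini = 1 - 2 * np.trapz(cum_income, cum_people)
print(f"Gini: {gini:.2f}")  # ~0.52 for sigma = 1
```

For reference, estimates of the US household income Gini are typically in the neighborhood of 0.48, so a distribution like this one is in roughly the right ballpark without any cliff between classes.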

Some of the inequalities Markovits discusses are genuine, but don’t seem to be particularly related to meritocracy. The fact that students from richer families go to better schools indeed seems unjust, but the problem is clearly not that the rich schools are too good (except maybe at the very top, where truly elite schools seem a bit excessive—five-figure preschool tuition?), but that the poor schools are not good enough. So it absolutely makes sense to increase funding for poor schools and implement various reforms, but this is hardly a radical notion—nor is it in any way anti-meritocratic. Providing more equal opportunities for the poor to raise their own station is what meritocracy is all about.

Other inequalities he objects to seem, if not inevitable, far too costly to remove: Educated people are better parents, who raise their children in ways that make them healthier, happier, and smarter? No one is going to apologize for being a good parent, much less stop doing so because you’re concerned about what it does to inequality. If you have some ideas for how we might make other people into better parents, by all means let’s hear them. But I believe I speak for the entire upper-middle class when I say: when I have kids of my own, I’m going to read to them, I’m not going to spank them, and there’s not a damn thing you can do to change my mind on either front. Quite frankly, this seems like a heavy-handed satire of egalitarianism, right out of Harrison Bergeron: Let’s make society equal by forcing rich people to neglect and abuse their kids as much as poor people do! My apologies to Vonnegut: I thought you were ridiculously exaggerating, but apparently some people actually think like this.

This is closely tied with the deepest flaw in the argument: The meritocratic elite are actually more qualified. It’s easy to argue that someone like Donald Trump shouldn’t rule the world; he’s a deceitful, narcissistic, psychopathic, incompetent buffoon. (The only baffling part is that 40% of American voters apparently disagree.) But it’s a lot harder to see why someone like Bill Gates shouldn’t be in charge of things: He’s actually an extremely intelligent, dedicated, conscientious, hard-working, ethical, and competent individual. Does he deserve $100 billion? No, for reasons I’ve talked about before. But even he knows that! He’s giving most of it away to highly cost-effective charities! Bill Gates alone has saved several million lives by his philanthropy.

Markovits tries to argue that the merits of the meritocratic elite are arbitrary and contextual, like the alleged virtues of the aristocratic class: “The meritocratic virtues, that is, are artifacts of economic inequality in just the fashion in which the pitching virtues are artifacts of baseball.” (p. 264) “The meritocratic achievement commonly celebrated today, no less than the aristocratic virtue acclaimed in the ancien regime, is a sham.” (p. 268)

But it’s pretty hard for me to see how things like literacy, knowledge of history and science, and mathematical skill are purely arbitrary. Even the highly specialized skills of a quantum physicist, software engineer, or geneticist are clearly not arbitrary. Not everyone needs to know how to solve the Schrodinger equation or how to run a polymerase chain reaction, but our civilization greatly benefits from the fact that someone does. Software engineers aren’t super-productive because of high inequality; they are super-productive because they speak the secret language of the thinking machines. I suppose some of the skills involved in finance, consulting, and law are arbitrary and contextual; but he makes it sound like the only purpose graduate school serves is in teaching us table manners.

Precisely by attacking meritocracy, Markovits renders his own position absurd. So you want less competent people in charge? You want people assigned to jobs they’re not good at? You think businesses should go out of their way to hire employees who will do their jobs worse? Had he instead set out to show how American society fails at achieving its meritocratic ideals—indeed, failing to provide equality of opportunity for the poor is probably the clearest example of this—he might have succeeded. But instead he tries to attack the ideals themselves, and fails miserably.

Markovits avoids the error that David Graeber made: Graeber sees that there are many useless jobs but doesn’t seem to have a clue why these jobs exist (and turns to quite foolish Marxian conspiracy theories to explain it). Markovits understands that these jobs are profitable for the firms that create them, but unproductive for society as a whole. He is right; this is precisely what virtually the entire fields of finance, sales, advertising, and corporate law consist of. Most people in our elite work very hard with great skill and competence, and produce great profits for the corporations that employ them, all while producing very little of genuine societal value. But I don’t see how this is a flaw in meritocracy per se.

Nor does Markovits stop at accusing employment of being rent-seeking; he takes aim at education as well: “when the rich make exceptional investments in schooling, this does reduce the value of ordinary, middle-class training and degrees. […] Meritocratic education inexorably engenders a wasteful and destructive educational arms race, which ultimately benefits no one, not even the victors.” (p. 153) I don’t doubt that education is in part such a rent-seeking arms race, and it’s worthwhile to try to minimize that. But education is not entirely rent-seeking! At the very least, is there not genuine value in teaching children to read and write and do arithmetic? Perhaps by the time we get to calculus or quantum physics or psychopathology we have reached diminishing returns for most students (though clearly at least some people get genuine value out of such things!), but education is not entirely comprised of signaling or rent-seeking (and nor do “sheepskin effects” prove otherwise).

My PhD may be less valuable to me than it would be to someone in my place 40 years ago, simply because there are more people with PhDs now and thus I face steeper competition. Then again, perhaps not, as the wage premium for college and postgraduate education has been increasing, not decreasing, over that time period. (How much of that wage premium is genuine social benefit and how much is rent-seeking is difficult to say.) In any case it’s definitely still valuable. I have acquired many genuine skills, and will in fact be able to be genuinely more productive as well as compete better in the labor market than I would have without it. Some parts of it have felt like a game where I’m just trying to stay ahead of everyone else, but it hasn’t all been that. A world where nobody had PhDs would be a world with far fewer good scientists and far slower technological advancement.

Abandoning meritocracy entirely would mean that we no longer train people to be more productive or match people to the jobs they are most qualified to do. Do you want a world where surgery is not done by the best surgeons, where airplanes are not flown by the best pilots? This necessarily means less efficient production and an overall lower level of prosperity for society as a whole. The most efficient way may not be the best way, but it’s still worth noting that it’s the most efficient way.

Really, is meritocracy the problem, or is it something else?

Markovits is clearly right that something is going wrong with American society: Our inequality is much too high, and our job market is much too cutthroat. I can’t even relate to his description of what the job market was like in the 1960s (“Old Economy Steve” has it right): “Even applicants for white-collar jobs received startlingly little scrutiny. For most midcentury workers, getting a job did not involve any application at all, in the competitive sense of the term.” (p. 203)

In fact, if anything he seems to understate the difference across time, perhaps because it lets him overstate the difference across class (p. 203):

Today, by contrast, the workplace is methodically arranged around gradations of skill. Firms screen job candidates intensively at hiring, and they then sort elite and non-elite workers into separate physical spaces.

Only the very lowest-wage employers, seeking unskilled workers, hire casually. Middle-class employers screen using formal cognitive tests and lengthy interviews. And elite employers screen with urgent intensity, recruiting from only a select pool and spending millions of dollars to probe applicants over several rounds of interviews, lasting entire days.

Today, not even the lowest-wage employers hire casually! Have you ever applied to work at Target? There is a personality test you have to complete, which I presume is designed to test your reliability as an obedient corporate drone. Never in my life have I gotten a job that didn’t involve either a lengthy application process or some form of personal connection—and I hate to admit it, but usually the latter. It is literally now harder to get a job as a cashier at Target than it was to get a job as an engineer at Ford 60 years ago.

But I still can’t shake the feeling that meritocracy is not exactly what’s wrong here. The problem with the sky-high compensation packages at top financial firms isn’t that they are paid to people who are really good at their jobs; it’s that those jobs don’t actually accomplish anything beneficial for society. Where elite talent and even elite compensation is combined with genuine productivity, such as in science and engineering, it seems unproblematic (and I note that Markovits barely even touches on these industries, perhaps because he sees they would undermine his argument). The reason our economic growth seems to have slowed as our inequality has massively surged isn’t that we are doing too good a job of rewarding people for being productive.

Indeed, it seems like the problem may be much simpler: Labor supply exceeds labor demand.

Take a look at this graph from the Federal Reserve Bank of San Francisco:

[Figure: Beveridge curve data from the Federal Reserve Bank of San Francisco, showing job vacancies plotted against unemployment over time]

This graph shows the relationship over time between unemployment and job vacancies. As you can see, they are generally inversely related: More vacancies means less unemployment. I have drawn in a green line which indicates the cutoff between having more vacancies than unemployment—upper left—and having more unemployment than vacancies—lower right. We have almost always been in the state of having more unemployment than we have vacancies; notably, the mid-1960s were one of the few periods in which we had significantly more vacancies than unemployment.
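
If you want to recreate a plot of this kind yourself, here is a rough sketch pulling the relevant series from FRED (this assumes the pandas-datareader package; JTSJOL is the FRED series for total nonfarm job openings and UNEMPLOY is the unemployment level, both in thousands; note that the JOLTS openings series only begins in December 2000, so it cannot reproduce the full historical figure):

```python
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

# Fetch monthly job openings and unemployment level from FRED.
df = pdr.DataReader(["UNEMPLOY", "JTSJOL"], "fred",
                    start="2001-01-01").dropna()

plt.scatter(df["UNEMPLOY"], df["JTSJOL"], s=8)

# The 45-degree line: points above it have more vacancies than
# unemployed people; points below have more unemployed than vacancies.
lim = max(df["UNEMPLOY"].max(), df["JTSJOL"].max())
plt.plot([0, lim], [0, lim], color="green")

plt.xlabel("Unemployed persons (thousands)")
plt.ylabel("Job openings (thousands)")
plt.title("Vacancies vs. unemployment (Beveridge curve)")
plt.show()
```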

For decades we’ve been instituting policies to try to give people “incentives to work”; but there is no shortage of labor in this country. We seem to have plenty of incentives to work—what we need are incentives to hire people and pay them well.

Indeed, perhaps we need incentives not to work—like a basic income or an expanded social welfare system. Thanks to automation, productivity is now astonishingly high, and yet we work ourselves to death instead of enjoying leisure.

And of course there are various other policy changes that have made our inequality worse—chiefly the dramatic drops in income tax rates at the top brackets that occurred under Reagan.

In fact, many of the specific suggestions Markovits makes—which, much to my chagrin, he waits nearly 300 pages to even mention—are quite reasonable, or even banal: He wants to end tax deductions for alumni donations to universities and require universities to enroll more people from lower income brackets; I could support that. He wants to regulate finance more stringently, eliminate most kinds of complex derivatives, harmonize capital gains tax rates to ordinary income rates, and remove the arbitrary cap on payroll taxes; I’ve been arguing for all of those things for years. What about any of these policies is anti-meritocratic? I don’t see it.

More controversially, he wants to try to re-organize production to provide more opportunities for mid-skill labor. In some industries I’m not sure that’s possible: The 10X programmer is a real phenomenon, and even mediocre programmers and engineers can make software and machines that are a hundred times as productive as doing the work by hand would be. But some of his suggestions make sense, such as policies favoring nurse practitioners over specialist doctors and legal secretaries instead of bar-certified lawyers. (And please, please reform the medical residency system! People die from the overwork it causes.)

But I really don’t see how not educating people or assigning people to jobs they aren’t good at would help matters—which means that meritocracy, as I understand the concept, is not to blame after all.

What if we cared for everyone equally?

Oct 11 JDN 2459134

Imagine for a moment a hypothetical being who was a perfect utilitarian, who truly felt at the deepest level an equal caring for all human beings—or even all life.

We often imagine that such a being would be perfectly moral, and sometimes chide ourselves for failing so utterly to live up to its ideal. Today I’d like to take a serious look at how such a being would behave, and ask whether it is really such a compelling ideal after all.

I cannot feel sadness at your grandmother’s death, for over 150,000 people die every day. By far the largest QALY losses come from the deaths of children in the poorest countries, and I feel sad for them in the aggregate, but I cannot feel particularly saddened by any individual one.

I cannot feel happiness at your wedding or the birth of your child, for 50,000 couples marry every day, and another 30,000 divorce. 350,000 children are born every day, so why should I care about yours?

My happiness does not change from hour to hour or day to day, except as a slow, steady increase over time that is occasionally interrupted briefly by sudden disasters like hurricanes or tsunamis. 2020 was the saddest year I’ve had in a while, as for once there was strongly correlated suffering across the globe sufficient to break through the trend of steadily increasing prosperity.

Should we go out with friends for drinks or dinner or games, I’ll be ever-so-slightly happier, by some barely perceptible degree, provided that no coincidental event causes more than the baseline rate of global suffering that day. And I’d be just as happy to learn that someone else I’d never met went out to dinner with someone else I’d also never met.

Of course I love you, my dear: Precisely as much as I love the other eight billion people on Earth.

I hope now that you can see how flat, how bleak, how inhuman such a being’s experience would be. We might sometimes wish some respite from the roller coaster ride of our own emotional experiences, but in its place this creature feels almost nothing at all, just a vague sense of gradually increasing contentment which is occasionally interrupted by fleeting deviations from the trend.

Such a being is incapable of feeling love as we would recognize it—for a mind such as ours could not possibly feel so intensely for a billion people at once. To love all the people of the world equally, and still have anything resembling a human mind, is to love no one at all.

Perhaps we should not feel so bad that we are not such creatures, then?

Of course I do not mean to say that we should care nothing for distant strangers in foreign lands, or even that the tiny amount most people seem to care is adequate. We should care—and we should care more, and do more, than most people do.

But I do mean to say that it is possible to care too much about other people far away, an idea that probably seems obvious to some but radical to others. The human capacity for caring is not simply zero-sum—there are those who care more overall and less overall—but I do believe that it is limited: At some point you begin to sacrifice so much for those you have no attachments to that you begin to devalue your own attachments.

There is an interior optimum: We should care enough, but not too much. We should sacrifice some things, but not everything. Those closest to us should matter more than those further away—but both should matter. Where exactly to draw that line is a very difficult question, which has stumped far greater philosophers than I; but at least we can narrow the space and exclude the endpoints.

This may even make a certain space for morally justifying selfishness. Surely it does not justify total, utter selfishness with no regard for the suffering of others. But it defends self-care at the very least, and perhaps can sweep away some of the feelings of guilt we may have from being fortunate or prevailing in fair competition. Yes, much of what you have was gained by sheer luck, and even much of what you have earned, you earned by out-competing someone else nearly as deserving. But this is true of everyone, and as long as you played fair, you’ve not done wrong by doing better. There’s even good reason to think that a system which allocates its privileges by fair competition is a particularly efficient one, one which ultimately raises the prosperity of all.

If nothing else, reflecting on this has made me feel better about giving 8% of my gross income to charity instead of 20% or 50% or even 80%. And if even 8% is too much for you, try 2% or even 1%.