The paperclippers are already here

Jan 24 JDN 2459239

Imagine a powerful artificial intelligence, composed of many parts distributed over a vast area so that it has no particular location. It is incapable of feeling any emotion: Neither love nor hate, neither joy nor sorrow, neither hope nor fear. It has no concept of ethics or morals, only its own programmed directives. It has one singular purpose, which it seeks out at any cost. Any who aid its purpose are generously rewarded. Any who resist its purpose are mercilessly crushed.

The Less Wrong community has come to refer to such artificial intelligences as “paperclippers”; the metonymous singular directive is to maximize the number of paperclips produced. There’s even an online clicker game where you can play as one called “Universal Paperclips”. The concern is that we might one day invent such artificial intelligences, and they could get out of control. The paperclippers won’t kill us because they hate us, but simply because we can be used to make more paperclips. This is a far more plausible scenario for the “AI apocalypse” than the more conventional sci-fi version where AIs try to kill us on purpose.

But I would say that the paperclippers are already here. Slow, analog versions perhaps. But they are already getting out of control. We call them corporations.

A corporation is probably not what you visualized when you read the first paragraph of this post, so try reading it again. Which parts are not true of corporations?

Perhaps you think a corporation is not an artificial intelligence? But clearly it’s artificial, and doesn’t it behave in ways that seem intelligent? A corporation has purpose beyond its employees in much the same way that a hive has purpose beyond its bees. A corporation is a human superorganism (and not the only kind either).

Corporations are absolutely, utterly amoral. Their sole directive is to maximize profit. Now, you might think that an individual CEO, or a board of directors, could decide to do something good, or refrain from something evil, for reasons other than profit; and to some extent this is true. But particularly when a corporation is publicly-traded, that CEO and those directors are beholden to shareholders. If shareholders see that the corporation is acting in ways that benefit the community but hurt their own profits, shareholders can rebel by selling their shares or even suing the company. In 1919, Dodge successfully sued Ford for the “crime” of setting wages too high and prices too low.

Humans are altruistic. We are capable of feeling, emotion, and compassion. Corporations are not. Corporations are made of human beings, but they are specifically structured to minimize the autonomy of human choices. They are designed to provide strong incentives to behave in a particular way so as to maximize profit. Even the CEO of a corporation, especially one that is publicly traded, has their hands tied most of the time by the desires of millions of shareholders and customers—so-called “market forces”. Corporations are entirely the result of human actions, but they feel like impersonal forces because they are the result of millions of independent choices, almost impossible to coordinate; so one individual has very little power to change the outcome.

Why would we create such entities? It almost feels as though we were conquered by some alien force that sought to enslave us to its own purposes. But no, we created corporations ourselves. We intentionally set up institutions designed to limit our own autonomy in the name of maximizing profit.

Part of the answer is efficiency: There are genuine gains in economic efficiency due to the corporate structure. Corporations can coordinate complex activity on a vast scale, with thousands or even millions of employees each doing what they are assigned without ever knowing—or needing to know—the whole of which they are a part.

But a publicly-traded corporation is far from the only way to do that. Even for-profit businesses are not the only way to organize production. And empirically, worker co-ops actually seem to be about as productive as corporations, while producing far less inequality and far more satisfied employees.

Thus, in order to explain the primacy of corporations, particularly those that are traded on stock markets, we must turn to ideology: The extreme laissez-faire concept of capitalism and its modern expression in the ideology of “shareholder value”. Somewhere along the way enough people—or at least enough policymakers—became convinced that the best way to run an economy was to hand over as much as possible to entities that exist entirely to maximize their own profits.

This is not to say that corporations should be abolished entirely. I am certainly not advocating a shift to central planning; I believe in private enterprise. But I should note that private enterprise can also include co-ops, partnerships, and closely-held businesses, rather than publicly traded corporations, and perhaps that’s all we need. Yet there do seem to be significant advantages to the corporate structure: Corporations seem to be spectacularly good at scaling up the production of goods and providing them to a large number of customers. So let’s not get rid of corporations just yet.

Instead, let us keep corporations on a short leash. When properly regulated, corporations can be very efficient at producing goods. But corporations can also cause tremendous damage when given the opportunity. Regulations aren’t just “red tape” that gets in the way of production. They are a vital lifeline that protects us against countless abuses that corporations would otherwise commit.

These vast artificial intelligences are useful to us, so let’s not get rid of them. But never for a moment imagine that their goals are the same as ours. Keep them under close watch at all times, and compel them to use their great powers for good—for, left to their own devices, they can just as easily do great evil.

A new chapter in my life, hopefully

Jan 17 JDN 2459232

My birthday is coming up soon, and each year around this time I try to step back and reflect on how the previous year has gone and what I can expect from the next one.

Needless to say, 2020 was not a great year for me. The pandemic and its consequences made this quite a bad year for almost everyone. Months of isolation and fear have made us all stressed and miserable, and even with the vaccines coming out the end is still all too far away. Honestly I think I was luckier than most: My work could be almost entirely done remotely, and my income is a fixed stipend, so financially I faced no hardship at all. But isolation still takes its toll.

Most of my energy this past year has been spent on the job market. I applied to over 70 different job postings, and from that I received 6 interviews, all but one of which I’ve already finished. If they like how I did in those interviews, I’ll be invited to another phase, which in normal times would be a flyout where candidates visit the campus; but due to COVID it’s all being done remotely now. And then, finally, I may actually get some job offers. Statistically I think I will probably get some kind of offer at this point, but I can’t be sure—and that uncertainty is quite nerve-wracking. I may get a job and move somewhere new, or I may not and have to stay here for another year and try again. Both outcomes are still quite probable, and I really can’t plan on either one.

If I do actually get a job, this will open a new chapter in my life—and perhaps I will finally be able to settle down with a permanent career, buy a house, start a family. One downside of graduate school I hadn’t really anticipated is how it delays adulthood: You don’t really feel like you are a proper adult, because you are still in the role of a student for several additional years. I am all too ready to be done with being a student. I feel as though I’ve spent all my life preparing to do things instead of actually doing them, and I am now so very tired of preparing.

I don’t even know for sure what I want to do—I feel disillusioned with academia, I haven’t been able to snare any opportunities in government or nonprofits, and I need more financial security than I could get if I leapt headlong into full-time writing. But I am quite certain that I want to actually do something, and no longer simply be trained and prepared (and continually evaluated on that training and preparation).

I’m even reluctant to do a postdoc, because that also likely means packing up and moving again in a few years (though I would prefer it to remaining here another year).

I have to keep reminding myself that all of this is temporary: The pandemic will eventually be quelled by vaccines, and quarantine procedures will end, and life for most of us will return to normal. Even if I don’t get a job I like this year, I probably will next year; and then I can finally tie off my education with a bow and move on. Even if the first job isn’t permanent, eventually one will be, and at last I’ll be able to settle into a stable adult life.

Much of this has already dragged on longer than I thought it would. Not the job market, which has gone more or less as expected. (More accurately, my level of optimism has jumped up and down like a roller coaster, and on average what I thought would happen has been something like what actually happened so far.) But the pandemic certainly has; the early attempts at lockdown were ineffective, the virus kept spreading worse and worse, and now there are more COVID cases in the US than ever before. Southern California in particular has been hit especially hard, and hospitals here are now overwhelmed just as we feared they might be.

Even the removal of Trump has been far more arduous than I expected. First there was the slow counting of ballots because so many people had (wisely) voted absentee. Then there were the frivolous challenges to the counts—and yes, I mean frivolous in a legal sense, as 61 out of 62 lawsuits were thrown out immediately and the 1 that made it through was a minor technical issue.

And then there was an event so extreme I can barely even fathom that it actually happened: An armed mob stormed the Capitol building, forced Congress to evacuate, and made it inside with minimal resistance from the police. The stark difference in how the police reacted to this attempted insurrection and how they have responded to the Black Lives Matter protests underscores the message of Black Lives Matter better than they ever could have by themselves.

In one sense it feels like so much has happened: We have borne witness to historic events in real-time. But in another sense it feels like so little has happened: Staying home all the time under lockdown has meant that days are always much the same, and each day blends into the next. I feel somehow unhinged from time, at once marveling that a year has passed already, and marveling that so much happened in only a year.

I should soon hear back from these job interviews and have a better idea what the next chapter of my life will be. But I know for sure that I’ll be relieved once this one is over.

I dislike overstatement

Jan 10 JDN 2459225

I was originally planning on titling this post “I hate overstatement”, but I thought that might be itself an overstatement; then I considered leaning into the irony with something like “Overstatement is the worst thing ever”. But no, I think my point best comes across if I exemplify it, rather than present it ironically.

It’s a familiar formula: “[Widespread belief] is wrong! [Extreme alternative view] is true! [Obvious exception]. [Further qualifications]. [Revised, nuanced view that is only slightly different from the widespread belief].”

Here are some examples of the formula (these are not direct quotes but paraphrases of their general views). Note that these are all people I basically agree with, and yet I still find their overstatement annoying:

Bernie Sanders: “Capitalism is wrong! Socialism is better! Well, not authoritarian socialism like the Soviet Union. And some industries clearly function better when privatized. Scandinavian social democracy seems to be the best system.”

Richard Dawkins: “Religion is a delusion! Only atheists are rational! Well, some atheists are also pretty irrational. And most religious people are rational about most things most of the time, and don’t let their religious beliefs interfere too greatly with their overall behavior. Really, what I mean to say is that God doesn’t exist and organized religion is often harmful.”

Black Lives Matter: “Abolish the police! All cops are bastards! Well, we obviously still need some kind of law enforcement system for dealing with major crimes; we can’t just let serial killers go free. In fact, while there are deep-seated flaws in police culture, we could solve a lot of the most serious problems with a few simple reforms like changing the rules of engagement.”

Sam Harris is particularly fond of this formula, so here is a direct quote that follows the pattern precisely:

“The link between belief and behavior raises the stakes considerably. Some propositions are so dangerous that it may even be ethical to kill people for believing them. This may seem an extraordinary claim, but it merely enunciates an ordinary fact about the world in which we live. Certain beliefs place their adherents beyond the reach of every peaceful means of persuasion, while inspiring them to commit acts of extraordinary violence against others. There is, in fact, no talking to some people. If they cannot be captured, and they often cannot, otherwise tolerant people may be justified in killing them in self-defense. This is what the United States attempted in Afghanistan, and it is what we and other Western powers are bound to attempt, at an even greater cost to ourselves and to innocents abroad, elsewhere in the Muslim world. We will continue to spill blood in what is, at bottom, a war of ideas.”

Somehow in a single paragraph he started with the assertion “It is permissible to punish thoughtcrime with death” and managed to qualify it down to “The Afghanistan War was largely justified”. This is literally the difference between a proposition fundamentally antithetical to everything America stands for, and an utterly uncontroversial statement most Americans agree with. Harris often complains that people misrepresent his views, and to some extent this is true, but honestly I think he does this on purpose because he knows that controversy sells. There’s taking things out of context—and then there’s intentionally writing in a style that will maximize opportunities to take you out of context.

I think the idea behind overstating your case is that you can then “compromise” toward your actual view, and thereby seem more reasonable.

If there is some variable X that we want to know the true value of, and I currently believe that it is some value x1 while you believe that it is some larger value x2, and I ask you what you think, you may not want to tell me x2. Instead you might want to report some number even larger than x2, chosen so that my partial adjustment lands exactly where you want my belief to end up.

For instance, suppose I think the probability of your view being right is p and the probability of my view being right is 1-p. But you think that the probability of your view being right is q > p and the probability of my view being right is 1-q < 1-p.

I tell you that my view is x1. Then I ask you what your view is. What answer should you give?


Well, you can expect that I’ll revise my belief to a new value px + (1-p)x1, where x is whatever answer you give me. The belief you want me to hold is qx2 + (1-q)x1. So your optimal choice is as follows:

qx2 + (1-q)x1 = px + (1-p)x1

x = x1 + (q/p)(x2 - x1)

Since q > p, q/p > 1 and the x you report to me will be larger than your true value x2. You will overstate your case to try to get me to adjust my beliefs more. (Interestingly, if you were less confident in your own beliefs, you’d report a smaller difference. But this seems like a rare case.)
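If you want to check that algebra, here’s a quick sketch in Python (the numbers are invented purely for illustration):

```python
# Numerical check of the optimal-overstatement formula derived above.
# All numbers here are made up for illustration.

x1, x2 = 10.0, 14.0  # my view and your true view
p, q = 0.4, 0.6      # my credence in your view, and yours (q > p)

# If you report x, I update to p*x + (1-p)*x1.
# You want my final belief to be q*x2 + (1-q)*x1.
target = q * x2 + (1 - q) * x1

# The reported value from the formula:
x_report = x1 + (q / p) * (x2 - x1)

# My updated belief after hearing x_report lands exactly on the target:
assert abs(p * x_report + (1 - p) * x1 - target) < 1e-9
print(x_report)  # 16.0 -- an overstatement of your true view x2 = 14
```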

In a simple negotiation over dividing some resource (e.g. over a raise or a price), this is quite reasonable. When you’re a buyer and I’m a seller, our intentions are obvious enough: I want to sell high and you want to buy low. Indeed, the Nash Equilibrium of this game seems to be that we both make extreme offers then compromise on a reasonable offer, all the while knowing that this is exactly what we’re doing.

But when it comes to beliefs about the world, things aren’t quite so simple.

In particular, we have reasons for our beliefs. (Or at least, we’re supposed to!) And evidence isn’t linear. Even when propositions can be placed on a one-dimensional continuum in this way (and quite frankly we shoehorn far too many complex issues onto a simple “left/right” continuum!), evidence that X = x isn’t partial evidence that X = 2x. A strong argument that the speed of light is 3*10^8 m/s isn’t a weak argument that the speed of light is 3*10^9 m/s. A compelling reason to think that taxes should be over 30% isn’t even a slight reason to think that taxes should be over 90%.

To return to my specific examples: Seeing that Norway is a very prosperous country doesn’t give us reasons to like the Soviet Union. Recognizing that religion is empirically false doesn’t justify calling all religious people delusional. Reforming the police is obviously necessary, and diverting funds to other social services is surely a worthwhile goal; but law enforcement is necessary and cannot simply be abolished. And defending against the real threat of Islamist terrorism in no way requires us to institute the death penalty for thoughtcrime.

I don’t know how most people respond to overstatement. Maybe it really does cause them to over-adjust their beliefs. Hyperbole is a very common rhetorical tactic, and for all I know perhaps it is effective on many people.

But personally, here is my reaction: At the very start, you stated something implausible. That has reduced your overall credibility.

If I continue reading and you then deal with various exceptions and qualifications, resulting in a more reasonable view, I do give you some credit for that; but now I am faced with a dilemma: Either (1) you were misrepresenting your view initially, or (2) you are engaging in a motte-and-bailey doctrine, trying to get me to believe the strong statement while you can only defend the weak statement. Either way I feel like you are being dishonest and manipulative. I trust you less. I am less interested in hearing whatever else you have to say. I am in fact less likely to adopt your nuanced view than I would have been if you’d simply presented it in the first place.

And that’s assuming I have the opportunity to hear your full nuanced version. If all I hear is the sound-bite overstatement, I will come away with an inaccurate assessment of your beliefs. I will have been presented with an implausible claim and evidence that doesn’t support that claim. I will reject your view out of hand, without ever actually knowing what your view truly was.

Furthermore, I know that many others who are listening are not as thoughtful as I am about seeking out detailed context, so even if I know the nuanced version I know—and I think you know—that some people are going to only hear the extreme version.

Maybe what it really comes down to is a moral question: Is this a good-faith discussion where we are trying to reach the truth together? Or is this a psychological manipulation to try to get me to believe what you believe? Am I a fellow rational agent seeking knowledge with you? Or am I a behavior machine that you want to control by pushing the right buttons?

I won’t say that overstatement is always wrong—because that would be an overstatement. But please, make an effort to avoid it whenever you can.

Signaling and the Curse of Knowledge

Jan 3 JDN 2459218

I received several books for Christmas this year, and the one I was most excited to read first was The Sense of Style by Steven Pinker. Pinker is exactly the right person to write such a book: He is both a brilliant linguist and cognitive scientist and also an eloquent and highly successful writer. There are two other books on writing that I rate at the same tier: On Writing by Stephen King, and The Art of Fiction by John Gardner. Don’t bother with style manuals from people who only write style manuals; if you want to learn how to write, learn from people who are actually successful at writing.

Indeed, I knew I’d love The Sense of Style as soon as I read its preface, containing some truly hilarious takedowns of Strunk & White. And honestly Strunk & White are among the best standard style manuals; they at least actually manage to offer some useful advice while also being stuffy, pedantic, and often outright inaccurate. Most style manuals only do the second part.

One of Pinker’s central focuses in The Sense of Style is on The Curse of Knowledge, an all-too-common bias in which knowing something makes us unable to appreciate the fact that other people don’t already know it. I think I succumbed to this failing most greatly in my first book, Special Relativity from the Ground Up, in which my concept of “the ground” was above most people’s ceilings. I was trying to write for high school physics students, and I think the book ended up mostly being read by college physics professors.

The problem is surely a real one: After years of gaining expertise in a subject, we are all liable to forget the difficulty of reaching our current summit and automatically deploy concepts and jargon that only a small group of experts actually understand. But I think Pinker underestimates the difficulty of escaping this problem, because it’s not just a cognitive bias that we all suffer from time to time. It’s also something that our society strongly incentivizes.

Pinker points out that a small but nontrivial proportion of published academic papers are genuinely well written, using this to argue that obscurantist jargon-laden writing isn’t necessary for publication; but he didn’t seem to even consider the fact that nearly all of those well-written papers were published by authors who already had tenure or even distinction in the field. I challenge you to find a single paper written by a lowly grad student that could actually get published without being full of needlessly technical terminology and awkward passive constructions: “A murine model was utilized for the experiment, in an acoustically sealed environment” rather than “I tested using mice and rats in a quiet room”. This is not because grad students are more thoroughly entrenched in the jargon than tenured professors (quite the contrary), nor that grad students are worse writers in general (that one could really go either way), but because grad students have more to prove. We need to signal our membership in the tribe, whereas once you’ve got tenure—or especially once you’ve got an endowed chair or something—you have already proven yourself.

Pinker seems to briefly touch on this insight (p. 69), without fully appreciating its significance: “Even when we have an inkling that we are speaking in a specialized lingo, we may be reluctant to slip back into plain speech. It could betray to our peers the awful truth that we are still greenhorns, tenderfoots, newbies. And if our readers do know the lingo, we might be insulting their intelligence while spelling it out. We would rather run the risk of confusing them while at least appearing to be sophisticated than take a chance at belaboring the obvious while striking them as naive or condescending.”

What we are dealing with here is a signaling problem. The fact that one can write better once one is well-established is the phenomenon of countersignaling, where one who has already established their status stops investing in signaling.

Here’s a simple model for you. Suppose each person has a level of knowledge x, which they are trying to demonstrate. They know their own level of knowledge, but nobody else does.

Suppose that when we observe someone’s knowledge, we get two pieces of information: We have an imperfect observation of their true knowledge which is x+e, the real value of x plus some amount of error e. Nobody knows exactly what the error is. To keep the model as simple as possible I’ll assume that e is drawn from a uniform distribution between -1 and 1.

Finally, assume that we are trying to select people above a certain threshold: Perhaps we are publishing in a journal, or hiring candidates for a job. Let’s call that threshold z. If x < z-1, then since e can never be larger than 1, we will immediately observe that they are below the threshold and reject them. If x > z+1, then since e can never be smaller than -1, we will immediately observe that they are above the threshold and accept them.

But when z-1 < x < z+1, we may think they are above the threshold when they actually are not (if e is positive), or think they are not above the threshold when they actually are (if e is negative).

So then let’s say that they can invest in signaling by putting in some amount of visible work y (like citing obscure papers or using complex jargon). This additional work may be costly and provide no real value in itself, but it can still be useful so long as one simple condition is met: It’s easier to do if your true knowledge x is high.

In fact, for this very simple model, let’s say that you are strictly limited by the constraint that y <= x. You can’t show off what you don’t know.

If your true value x > z, then you should choose y = x. Then, upon observing your signal, we know immediately that you must be above the threshold.

But if your true value x < z, then you should choose y = 0, because there’s no point in signaling that you were almost at the threshold. You’ll still get rejected.

Yet remember before that only those with z-1 < x < z+1 actually need to bother signaling at all. Those with x > z+1 can actually countersignal, by also choosing y = 0. Since you already have tenure, nobody doubts that you belong in the club.

This means we’ll end up with three groups: Those with x < z, who don’t signal and don’t get accepted; those with z < x < z+1, who signal and get accepted; and those with x > z+1, who don’t signal but get accepted. Then life will be hardest for those who are just above the threshold, who have to spend enormous effort signaling in order to get accepted—and that sure does sound like grad school.

You can make the model more sophisticated if you like: Perhaps the error isn’t uniformly distributed, but some other distribution with wider support (like a normal distribution, or a logistic distribution); perhaps the signaling isn’t perfect, but itself has some error; and so on. With such additions, you can get a result where the least-qualified still signal a little bit so they get some chance, and the most-qualified still signal a little bit to avoid a small risk of being rejected. But it’s a fairly general phenomenon that those closest to the threshold will be the ones who have to spend the most effort in signaling.
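To make this concrete, here is a toy simulation in Python of the simple uniform-error version; the acceptance rule for non-signalers is my own rendering of the noisy-observation story above, so treat it as a sketch rather than anything canonical:

```python
import random

z = 5.0  # the acceptance threshold

def equilibrium_signal(x):
    """Equilibrium signaling effort y for true knowledge x."""
    if x < z:
        return 0.0  # below the threshold, signaling can't save you
    if x > z + 1:
        return 0.0  # far above it, you can countersignal
    return x        # near the threshold, signal at full strength

def accepted(x, y):
    """Accept on a sufficient signal, else judge the noisy observation x+e."""
    if y >= z:
        return True  # since y <= x, a signal y >= z proves x >= z
    e = random.uniform(-1, 1)
    return x + e > z

for x in [3.0, 4.5, 5.5, 7.0]:
    y = equilibrium_signal(x)
    rate = sum(accepted(x, y) for _ in range(100_000)) / 100_000
    print(f"x = {x:.1f}: signal {y:.1f}, accepted {rate:.0%} of the time")
```

As expected, only the applicant just above the threshold (x = 5.5 here) pays the full signaling cost; the clearly unqualified and the comfortably overqualified both send no signal at all.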

This reveals a disturbing overlap between the Curse of Knowledge and Impostor Syndrome: We write in impenetrable obfuscationist jargon because we are trying to conceal our own insecurity about our knowledge and our status in the profession. We’d rather you not know what we’re talking about than have you realize that we don’t know what we’re talking about.

For the truth is, we don’t know what we’re talking about. And neither do you, and neither does anyone else. This is the agonizing truth of research that nearly everyone doing research knows, but one must be either very brave, very foolish, or very well-established to admit out loud: It is in the nature of doing research on the frontier of human knowledge that there is always far more that we don’t understand about our subject than that we do understand.

I would like to be more open about that. I would like to write papers saying things like “I have no idea why it turned out this way; it doesn’t make sense to me; I can’t explain it.” But to say that the profession disincentivizes speaking this way would be a grave understatement. It’s more accurate to say that the profession punishes speaking this way to the full extent of its power. You’re supposed to have a theory, and it’s supposed to work. If it doesn’t actually work, well, maybe you can massage the numbers until it seems to, or maybe you can retroactively change the theory into something that does work. Or maybe you can just not publish that paper and write a different one.

Here is a graph of one million published z-scores in academic journals:

It looks like a bell curve, except that almost all the values between -2 and 2 are mysteriously missing.

If we were actually publishing all the good science that gets done, it would in fact be a very nice bell curve. All those missing values are papers that never got published, or results that were excluded from papers, or statistical analyses that were massaged, in order to get a p-value less than the magical threshold for publication of 0.05. (For the statistically uninitiated, a z-score less than -2 or greater than +2 generally corresponds to a p-value less than 0.05, so these are effectively the same constraint.)
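You can reproduce the shape of that graph in a few lines of Python. This sketch simply assumes a bell-shaped distribution of true z-scores and a hard |z| > 2 publication filter, which is of course a cartoon of the real process:

```python
import random

# Simulate selective publication: run many studies, publish only p < 0.05.
all_z = [random.gauss(0, 1.5) for _ in range(1_000_000)]  # every study run
published = [z for z in all_z if abs(z) > 2]              # the 0.05 filter

print(f"published: {len(published) / len(all_z):.0%} of studies")

# Crude text histogram of published z-scores:
# a bell curve with the middle hollowed out.
for lo in range(-5, 5):
    count = sum(lo <= z < lo + 1 for z in published)
    print(f"[{lo:+d}, {lo+1:+d}): " + "#" * (count // 2000))
```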

I have literally never read a single paper published in an academic journal in the last 50 years that said in plain language, “I have no idea what’s going on here.” And yet I have read many papers—probably most of them, in fact—where that would have been an appropriate thing to say. It’s actually quite a rare paper, at least in the social sciences, that has a theory good enough to really precisely fit the data and not require any special pleading or retroactive changes. (Often the bar for a theory’s success is lowered to “the effect is usually in the right direction”.) Typically results from behavioral experiments are bizarre and baffling, because people are a little screwy. It’s just that nobody is willing to stake their career on being that honest about the depth of our ignorance.

This is a deep shame, for the greatest advances in human knowledge have almost always come from people recognizing the depth of their ignorance. Paradigms never shift until people recognize that the one they are using is defective.

This is why it’s so hard to beat the Curse of Knowledge: You need to signal that you know what you’re talking about, and the truth is you probably don’t, because nobody does. So you need to sound like you know what you’re talking about in order to get people to listen to you. You may be making nothing more than educated guesses based on extremely limited data, but that’s actually the best anyone can do; those other people saying they have it all figured out are either doing the same thing, or they’re doing something even less reliable than that. So you’d better sound like you have it all figured out, and that’s a lot more convincing when you “utilize a murine model” than when you “use rats and mice”.

Perhaps we can at least push a little bit toward plainer language. It helps to be addressing a broader audience: it is both blessing and curse that whatever I put on this blog is what you will read, without any gatekeepers in my path. I can use plainer language here if I so choose, because no one can stop me. But of course there’s a signaling risk here as well: The Internet is a public place, and potential employers can read this as well, and perhaps decide they don’t like me speaking so plainly about the deep flaws in the academic system. Maybe I’d be better off keeping my mouth shut, at least for a while. I’ve never been very good at keeping my mouth shut.

Once we get established in the system, perhaps we can switch to countersignaling, though even this doesn’t always happen. I think there are two reasons this can fail: First, you can almost always try to climb higher. Once you have tenure, aim for an endowed chair. Once you have that, try to win a Nobel. Second, once you’ve spent years of your life learning to write in a particular stilted, obscurantist, jargon-ridden way, it can be very difficult to change that habit. People have been rewarding you all your life for writing in ways that make your work unreadable; why would you want to take the risk of suddenly making it readable?

I don’t have a simple solution to this problem, because it is so deeply embedded. It’s not something that one person or even a small number of people can really fix. Ultimately we will need to, as a society, start actually rewarding people for speaking plainly about what they don’t know. Admitting that you have no clue will need to be seen as a sign of wisdom and honesty rather than a sign of foolishness and ignorance. And perhaps even that won’t be enough: For the fact will still remain that knowing which of the things you know other people don’t know is itself a very difficult thing to do.

2020 is almost over

Dec 27 JDN 2459211

I don’t think there are many people who would say that 2020 was their favorite year. Even if everything else had gone right, the 1.7 million deaths from the COVID pandemic would already make this a very bad year.

As if that weren’t bad enough, shutdowns in response to the pandemic, resulting unemployment, and inadequate fiscal policy responses have in a single year thrown nearly 150 million people back into extreme poverty. Unemployment in the US this year spiked to nearly 15%, its highest level since World War 2. Things haven’t been this bad for the US economy since the Great Depression.

And this Christmas season certainly felt quite different, with most of us unable to safely travel and forced to interact with our families only via video calls. New Year’s this year won’t feel like a celebration of a successful year so much as relief that we finally made it through.

Many of us have lost loved ones. Fortunately none of my immediate friends and family have died of COVID, but I can now count half a dozen acquaintances, friends-of-friends or distant relatives who are no longer with us. And I’ve been relatively lucky overall; both I and my partner work in jobs that are easy to do remotely, so our lives haven’t had to change all that much.

Yet 2020 is nearly over, and already there are signs that things really will get better in 2021. There are many good reasons for hope.


Joe Biden won the election by a substantial margin in both the popular vote and the Electoral College.

There are now multiple vaccines for COVID that have been successfully fast-tracked, and they are proving to be remarkably effective. Current forecasts suggest that we’ll have most of the US population vaccinated by the end of next summer.

Maybe the success of these vaccines will finally convince some of the folks who have been doubting the safety and effectiveness of vaccines in general. (Or maybe not; it’s too soon to tell.)

Perhaps the greatest reason to be hopeful about the future is the fact that 2020 is a sharp deviation from the long-term trend toward a better world. That 150 million people thrown back into extreme poverty needs to be compared against the over 1 billion people who have been lifted out of extreme poverty in just the last 30 years.

Those 1.7 million deaths need to be compared against the fact that global life expectancy has increased from 45 to 73 since 1950. The world population is 7.8 billion people. The global death rate has fallen from over 20 deaths per 1000 people per year to only 7.6 deaths per 1000 people per year. Multiplied over 7.8 billion people, that’s nearly 100 million lives saved every single year by advances in medicine and overall economic development. Indeed, if we were to sustain our current death rate indefinitely, our life expectancy would rise to over 130 (in a stationary population, life expectancy is just the reciprocal of the death rate, and 1000/7.6 is about 132 years). There are various reasons to think that probably won’t happen, mostly related to age demographics, but in fact there are medical breakthroughs we might make that would make it possible. Even according to current forecasts, world life expectancy is expected to exceed 80 years by the end of the 21st century.

There have also been some significant environmental milestones this year: Global carbon emissions fell an astonishing 7% in 2020, though much of that was from reduced economic activity in response to the pandemic. (If we could sustain that, we’d cut global emissions in half each decade!) But many other milestones were the product of hard work, not silver linings of a global disaster: Whales returned to the Hudson river, Sweden officially terminated their last coal power plant, and the Great Barrier Reef is showing signs of recovery.

Yes, it’s been a bad year for most of us—most of the world, in fact. But there are many reasons to think that next year will be much better.

The evolution of cuteness

Dec 20 JDN 2459204

I thought I’d go for something a little more light-hearted for this week’s post. It’s been a very difficult year for a lot of people, though with Biden winning the election and the recent FDA approval of a COVID vaccine for emergency use, the light at the end of the tunnel is now visible. I’ve also had some relatively good news in my job search; I now have a couple of job interviews lined up for tenure-track assistant professor positions.

So rather than the usual economic and political topics, I thought I would focus today on cuteness. First of all, this allows me the opportunity to present you with a bunch of photos of cute animals (free stock photos brought to you by pexels.com):

Beyond the joy I hope this brings you in a dark time, I have a genuine educational purpose here, which is to delve into the surprisingly deep evolutionary question: Why does cuteness exist?

Well, first of all, what is cuteness? We evaluate a person or animal (or robot, or alien) as cute based on certain characteristics like wide eyes, a large head, a posture or expression that evokes innocence. We feel positive feelings toward that which we identify as cute, and we want to help them rather than harm them. We often feel protective toward them.

It’s not too hard to provide an evolutionary rationale for why we would find our own offspring cute: We have good reasons to want to protect and support our own offspring, and given the substantial amounts of effort involved in doing so, it behooves us to have a strong motivation for committing to doing so.

But it’s less obvious why we would feel this way about so many other things that are not human. Dogs and cats have co-evolved along with us as they became domesticated, dogs starting about 40,000 years ago and cats starting around 8,000 years ago. So perhaps it’s not so surprising that we find them cute as well: Becoming domesticated is, in many ways, simply the process of maximizing your level of cuteness so that humans will continue to feed and protect you.

But why are non-domesticated animals also often quite cute? That red panda, penguin, owl, and hedgehog are not domesticated; this is what they look like in the wild. And yet I personally find the red panda to be probably the cutest among an already very cute collection.

Some animals we do not find cute, or at least most people don’t. Here’s a collection of “cute snakes” that I honestly am not getting much cuteness reaction from. These “cute snails” work a little better, but they’re assuredly not as cute as kittens or red pandas. But honestly these “cute spiders” are doing a remarkably good job of it, despite the general sense I have (and I think I share with most people) that spiders are not generally cute. And while tentacles are literally the stuff of Lovecraftian nightmares, this “adorable octopus” lives up to the moniker.

The standard theory is that animals that we find cute are simply those that most closely resemble our own babies, but I don’t really buy it. Naked mole rats have their moments, but they are certainly not as cute as puppies or kittens, despite clearly bearing a closer resemblance to the naked wrinkly blob that most human infants look like. Indeed, I think it’s quite striking that babies aren’t really that cute; yes, some are, but many are not, and even the cutest babies are rarely as cute as the average kitten or red panda.

It actually seems to me more that we have some idealized concept of what a cute creature should look like, and maybe it evolved to reflect some kind of “optimal baby” of perfect health and vigor—but most of our babies don’t quite manage to meet that standard. Perhaps the cuteness of penguins or red pandas is sheer coincidence; out of the millions of animal species out there, some of them were bound to send our cuteness-detectors into overdrive. Dogs and cats, then, started as such coincidence—and then through domestication they evolved to fit our cuteness standard better and better, because this was in fact the primary determinant of their survival. That’s how you can get the adorable abomination that is a pug:

Such a creature would never survive in the wild, but we created it because we liked it (or enough of us did, anyway).

There are actually important reasons why having such a strong cuteness response could be maladaptive—we’re apex predators, after all. If finding animals cute prevents us from killing and eating them, that’s an important source of nutrition we are passing up. So whatever evolutionary pressure molded our cuteness response, it must be strong enough to overcome that risk.

Indeed, perhaps the cuteness of cats and dogs goes beyond not only coincidence but also the co-opting of an impulse to protect our offspring. Perhaps it is something that co-evolved in us for the direct purpose of incentivizing us to care for cats and dogs. It has been long enough for that kind of effect—we evolved our ability to digest wheat and milk in roughly the same time period. Indeed, perhaps the very cuteness response that makes us hesitant to kill a rabbit ourselves actually made us better at hunting rabbits, by making us care for dogs who could do the hunting even better than we could. Perhaps the cuteness of a mouse is less relevant to how we relate to mice than the cuteness of the cat who will have that mouse for dinner.

This theory is much more speculative, and I admit I don’t have very clear evidence of it; but let me at least say this: A kitten wouldn’t get cuter by looking more like a human baby. The kitten already seems quite well optimized for us to see it as cute, and any deviation from that optimum is going to be downward, not upward. Any truly satisfying theory of cuteness needs to account for that.

I also think it’s worth noting that behavior is an important element of cuteness; while a kitten will pretty much look cute no matter what it’s doing, whether or not a snail or a bird looks cute often depends on the pose it is in.


There is an elegance and majesty to the tiger below, but I wouldn’t call it cute; indeed, should you encounter one in the wild, the correct response is to run for your life.

Cuteness is playful, innocent, or passive; aggressive and powerful postures rapidly undermine cuteness. A lion may look cute as it rubs against a tree—but not once it turns to you and roars.

The truth is, I’m not sure we fully grasp what is going on in our brains when we identify something as cute. But it does seem to brighten our days.

Hyper-competition

Dec 13 JDN 2459197

This phenomenon has been particularly salient for me the last few months, but I think it’s a common experience for most people in my generation: Getting a job takes an awful lot of work.

Over the past six months, I’ve applied to over 70 different positions and so far gone through 4 interviews (2 by video, 2 by phone). I’ve done about 10 hours of test work. That so far has gotten me no offers, though I have yet to hear from 50 employers. Ahead of me I probably have about another 10 interviews, then perhaps 4 of what would have been flyouts and in-person presentations but instead will be “comprehensive interviews” and presentations conducted online, likely several more hours of test work, and then finally, maybe, if I’m lucky, I’ll get a good offer or two. If I’m unlucky, I won’t, and I’ll have to stick around for another year and do all this over again next year.

Aside from the limitations imposed by the pandemic, this is basically standard practice for PhD graduates. And this is only the most extreme end of a continuum of intensive job search efforts, for which even applying to be a cashier at Target requires a formal application, references, and a personality test.

This wasn’t how things used to be. Just a couple of generations ago, low-wage employers would more or less hire you on the spot, with perhaps a resume or a cursory interview. More prestigious employers would almost always require a CV with references and an interview, but it more or less stopped there. I discussed in an earlier post how much of the difference actually seems to come from our chronic labor surplus.

Is all of this extra effort worthwhile? Are we actually fitting people to better jobs this way? Even if the matches are better, are they enough better to justify all this effort?

It is a commonly-held notion among economists that competition in markets is good, that it increases efficiency and improves outcomes. I think that this is often, perhaps usually, the case. But the labor market has become so intensely competitive, particularly for high-paying positions, that the costs of this competitive effort likely outweigh the benefits.

How could this happen? Shouldn’t the free market correct for such an imbalance? Not necessarily. Here is a simple formal model of how this sort of intensive competition can result in significant waste.

Note that this post is about a formal mathematical model, so it’s going to use a lot of algebra. If you are uninterested in such things, you can read the next two paragraphs and then skip to the conclusions at the end.

The overall argument is straightforward: If candidates are similar in skill level, a complicated application process can make sense from a firm’s perspective, but be harmful from society’s perspective, due to the great cost to the applicants. This can happen because the difficult application process imposes an externality on the workers who don’t get the job.

All right, here is where the algebra begins.

I’ve included each equation as both formatted text and LaTeX.

Consider a competition between two applicants, X and Z.

They are each asked to complete a series of tasks in an application process. The amount of effort X puts into the application is x, and the amount of effort Z puts into the application is z. Let’s say each additional bit of effort has a fixed cost, normalized to 1.

Let’s say that their skills are similar, but not identical; this seems quite realistic. X has skill level h_x, and Z has skill level h_z.

Getting hired has a payoff for each worker of V. This includes all the expected benefits of the salary, benefits, and working conditions. I’ll assume that these are essentially the same for both workers, which also seems realistic.

The benefit to the employer is proportional to the worker’s skill: Letting h be the skill level of the worker who is actually hired, and Y the value of the job to the firm per unit of skill, the benefit of hiring that worker is h Y. The reason they are requiring this application process is precisely because they want to get the worker with the highest h. Let’s say that this application process has a cost to implement, c.

Who will get hired? Well, presumably whoever does better on the application. The skill level will amplify the quality of their output, let’s say proportionally to the effort they put in; so X’s expected output will be h_x x and Z’s expected output will be h_z z.

Let’s also say there’s a certain amount of error in the process; maybe the more-qualified candidate will sleep badly the day of the interview, or make a glaring and embarrassing typo on their CV. And quite likely the quality of application output isn’t perfectly correlated with the quality of actual output once hired. To capture all this, let’s say that having more skill and putting in more effort only increases your probability of getting the job, rather than actually guaranteeing it.

In particular, let’s say that the probability of X getting hired is P[X] = h_x x/(h_x x + h_z z).

\[ P[X] = \frac{h_x x}{h_x x + h_z z} \]

This results in a contest function, a type of model that I’ve discussed in some earlier posts in a rather different context.


The expected payoff for worker X is:

E[U_x] = h_x x/(h_x x + h_z z) V - x

\[ E[U_x] = \frac{h_x x}{h_x x + h_z z} V - x \]

Maximizing this with respect to the choice of effort x (which is all that X can control at this point) yields:

h_x h_z z V = (h_x x + h_z z)^2

\[ h_x h_z z V = (h_x x + h_z z)^2 \]

A similar maximization for worker Z yields:

h_x h_z x V = (h_x x + h_z z)^2

\[ h_x h_z x V = (h_x x + h_z z)^2 \]

It follows that x=z, i.e. X and Z will exert equal efforts in Nash equilibrium. Their probability of success will then be contingent entirely on their skill levels:

P[X] = h_x/(h_x + h_z).

\[ P[X] = \frac{h_x}{h_x + h_z} \]

Substituting that back in, we can solve for the actual amount of effort:

h_x h_z x V = (h_x + h_z)^2 x^2

\[ h_x h_z x V = (h_x + h_z)^2 x^2 \]

x = h_x h_z V/(h_x + h_z)^2

\[ x = \frac{h_x h_z}{(h_x + h_z)^2} V \]
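Here is a quick numerical sanity check of that equilibrium, a Python sketch using the example numbers that appear later in this post: if Z plays the closed-form effort, no deviation by X should pay more.

```python
hx, hz, V = 10.0, 8.0, 180.0
x_star = hx * hz * V / (hx + hz) ** 2  # the closed-form equilibrium effort

def payoff_x(x, z):
    """Worker X's expected payoff given efforts x and z."""
    return hx * x / (hx * x + hz * z) * V - x

# Try scaled deviations from x_star while Z holds x_star fixed:
for s in [0.5, 0.9, 0.99, 1.0, 1.01, 1.1, 2.0]:
    print(f"effort {s:.2f} * x_star: payoff {payoff_x(s * x_star, x_star):.3f}")
# The payoff peaks at s = 1.00, confirming x_star is a best response.
```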

Now let’s see what that gives for the expected payoffs of the firm and the workers. This is worker X’s expected payoff:

E[U_x] = h_x/(h_x + h_z) V - h_x h_z V/(h_x + h_z)^2 = (h_x/(h_x + h_z))^2 V

\[ E[U_x] = \frac{h_x}{h_x + h_z} V - \frac{h_x h_z}{(h_x + h_z)^2} V = \left( \frac{h_x}{h_x + h_z}\right)^2 V \]

Worker Z’s expected payoff is the same, with h_x and h_z exchanged:

E[U_z] = (h_z/(h_x + h_z))^2 V

\[ E[U_z] = \left( \frac{h_z}{h_x + h_z}\right)^2 V \]

What about the firm? Their expected payoff is the probability of hiring X, times the value of hiring X, plus the probability of hiring Z, times the value of hiring Z, all minus the cost c:

E[U_f] = h_x/(h_x + h_z) h_x Y + h_z/(h_x + h_z) h_z Y - c = (h_x^2 + h_z^2)/(h_x + h_z) Y - c

\[ E[U_f] = \frac{h_x}{h_x + h_z} h_x Y + \frac{h_z}{h_x + h_z} h_z Y - c = \frac{h_x^2 + h_z^2}{h_x + h_z} Y - c \]

To see whether the application process was worthwhile, let’s compare against the alternative of simply flipping a coin and hiring X or Z at random. The probability of getting hired is then 1/2 for each candidate.

Expected payoffs for X and Z are now equal:

E[U_x] = E[U_z] = V/2

\[ E[U_x] = E[U_z] = \frac{V}{2} \]

The expected payoff for the firm can be computed the same as before, but now without the cost c:

E[U_f] = 1/2 h_x Y + 1/2 h_z Y = (h_x + h_z)/2 Y

\[ E[U_f] = \frac{1}{2} h_x Y + \frac{1}{2} h_z Y = \frac{h_x + h_z}{2} Y \]

This has a very simple interpretation: The expected value to the firm is just the average quality of the two workers, times the overall value of the job.

Which of these two outcomes is better? Well, that depends on the parameters, of course. But in particular, it depends on the difference between h_x and h_z.

Consider two extremes: In one case, the two workers are indistinguishable, and h_x = h_z = h. In that case, the payoffs for the hiring process reduce to the following:

E[U_x] = E[U_z] = V/4

\[ E[U_x] = E[U_z] = \frac{V}{4} \]

E[U_f] = h Y - c

\[ E[U_f] = h Y - c \]

Compare this against the payoffs for hiring randomly:

E[U_x] = E[U_z] = V/2

\[ E[U_x] = E[U_z] = \frac{V}{2} \]

E[U_f] = h Y

\[ E[U_f] = h Y \]

Both the workers and the firm are strictly better off if the firm just hires at random. This makes sense, since the workers have identical skill levels.

Now consider the other extreme, where one worker is far better than the other; in fact, one is nearly worthless, so h_z ≈ 0. (I can’t make it exactly zero because I’d be dividing by zero, but let’s say one is 100 times better or something.)

In that case, the payoffs for the hiring process reduce to the following:

E[U_x] = V

E[U_z] = 0

\[ E[U_x] = V \]

\[ E[U_z] = 0 \]

X will definitely get the job, so X is much better off.

E[U_f] = h_x Y - c

\[ E[U_f] = h_x Y - c \]

If the firm had hired randomly, this would have happened instead:

E[U_x] = E[U_z] = V/2

\[ E[U_x] = E[U_z] = \frac{V}{2} \]

E[U_f] = h_x Y/2

\[ E[U_f] = \frac{h_x}{2} Y \]

As long as c < h_x Y/2, both the firm and the higher-skill worker are better off with the hiring process. (The lower-skill worker is worse off, but that’s not surprising.) The total expected benefit for everyone is also higher with the hiring process.


Thus, the difference in skill level between the applicants is vital. If candidates are very different in skill level, in a way that the application process can accurately measure, then a long and costly application process can be beneficial, not only for the firm but also for society as a whole.

In these extreme examples, it was either not worth it for the firm, or worth it for everyone. But there is an intermediate case worth looking at, where the long and costly process can be worth it for the firm, but not for society as a whole. I will call this case hyper-competition—a system that is so competitive it makes society overall worse off.

This inefficient result occurs precisely when the process is profitable for the firm, but the firm’s gain is smaller than the workers’ combined loss of 2 h_x h_z/(h_x + h_z)^2 V (which is exactly the total effort the two of them expend). That is:

c < (h_x^2 + h_z^2)/(h_x + h_z) Y - (h_x + h_z)/2 Y < c + 2 h_x h_z/(h_x + h_z)^2 V

\[ c < \frac{h_x^2 + h_z^2}{h_x + h_z} Y - \frac{h_x + h_z}{2} Y < c + \frac{2 h_x h_z}{(h_x + h_z)^2} V \]

This simplifies to:

c < (h_x - h_z)^2/(2(h_x + h_z)) Y < c + 2 h_x h_z/(h_x + h_z)^2 V

\[ c < \frac{(h_x - h_z)^2}{2 (h_x + h_z)} Y < c + \frac{2 h_x h_z}{(h_x + h_z)^2} V \]

If c is small, then we are interested in the case where:

(h_x - h_z)^2 Y/2 < 2 h_x h_z/(h_x + h_z) V

\[ \frac{(h_x - h_z)^2}{2} Y < \frac{2 h_x h_z}{h_x + h_z} V \]

This is true precisely when the difference h_x - h_z is small compared to the overall size of h_x or h_z—that is, precisely when candidates are highly skilled but similar. This is pretty clearly the typical case in the real world. If the candidates were obviously different, you wouldn’t need a competitive process.

For instance, suppose that h_x = 10 and h_z = 8, while V = 180, Y = 20 and c = 1.

Then, if we hire randomly, these are the expected payoffs:

E[U_f] = (h_x + h_z)/2 Y = 180

E[U_x] = E[U_z] = V/2 = 90

If we use the complicated hiring process, these are the expected payoffs:

E[U_x] = (h_x/(h_x + h_z))^2 V ≈ 55.6

E[U_z] = (h_z/(h_x + h_z))^2 V ≈ 35.6

E[U_f] = (h_x^2 + h_z^2)/(h_x + h_z) Y - c ≈ 181.2

The firm gets a net benefit of about 1.2, quite small; while the workers face a far larger total expected loss of about 89. And these candidates aren’t that similar: One is 25% better than the other. Yet because the effort expended in applying was so large, even this improvement in quality wasn’t worth it from society’s perspective.
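If you’d like to verify those numbers, here is a short Python sketch recomputing them from the formulas above:

```python
hx, hz, V, Y, c = 10.0, 8.0, 180.0, 20.0, 1.0

# Hiring at random:
Uf_random = (hx + hz) / 2 * Y              # 180
Ux_random = Uz_random = V / 2              # 90 each

# The costly application process:
Ux = (hx / (hx + hz)) ** 2 * V             # ~55.6
Uz = (hz / (hx + hz)) ** 2 * V             # ~35.6
Uf = (hx**2 + hz**2) / (hx + hz) * Y - c   # ~181.2

print(f"firm's gain from the process: {Uf - Uf_random:.1f}")             # ~1.2
print(f"workers' combined loss: {Ux_random + Uz_random - Ux - Uz:.1f}")  # ~88.9
```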

This concludes the algebra for today; if you’ve been skipping it, you can resume reading here.

In this model I’ve only considered the case of exactly two applicants, but this can be generalized to more applicants, and the effect only gets stronger: Seemingly-large differences in each worker’s skill level can be outweighed by the massive cost of making so many people work so hard to apply and get nothing to show for it.

Thus, hyper-competition can exist despite apparently large differences in skill. Indeed, it is precisely in the typical real-world scenario, with many applicants of similar skill, that we expect to see the greatest inefficiencies. In the absence of intervention, we should expect markets to get this wrong.

Of course, we don’t actually want employers to hire randomly, right? We want people who are actually qualified for their jobs. Yes, of course; but you can probably assess that with nothing more than a resume and maybe a short interview. Most employers are not actually trying to find qualified candidates; they are trying to sift through a long list of qualified candidates to find the one that they think is best qualified. And my suspicion is that most of them honestly don’t have good methods of determining that.

This means that it could be an improvement for society to simply ban long hiring processes like these—indeed, perhaps ban job interviews altogether, as I can hardly think of a more efficient mechanism for allowing employers to discriminate based on race, gender, age, or disability than a job interview. Just collect a resume from each applicant, remove the ones that are unqualified, and then roll a die to decide which one you hire.

This would probably make the fit of workers to their jobs somewhat worse than the current system. But most jobs are learned primarily through experience anyway, so once someone has been in a job for a few years it may not matter much who was hired originally. And whatever cost we might pay in less efficient job matches could be made up several times over by the much faster, cheaper, easier, and less stressful process of applying for jobs.

Indeed, think for a moment of how much worse it feels being turned down for a job after a lengthy and costly application process that is designed to assess your merit (but may or may not actually do so particularly well), as opposed to simply finding out that you lost a high-stakes die roll. Employers could even send out letters saying one of two things: “You were rejected as unqualified for this position.” versus “You were qualified, but you did not have the highest die roll.” Applying for jobs already feels like a crapshoot; maybe it should literally be one.

People would still have to apply for a lot of jobs—actually, they’d probably end up applying for more, because the lower cost of applying would attract more applicants. But since the cost is so much lower, it would still almost certainly be easier to do a job search than it is in the current system. In fact, it could largely be automated: simply post your resume on a central server and the system matches you with employers’ requirements and then randomly generates offers. Employers and prospective employees could fill out a series of forms just once indicating what they were looking for, and then the system could do the rest.
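Here is a minimal sketch of what that central server might do, with invented firm names and requirement sets purely for illustration:

    import random

    postings = {"firm1": {"BA", "2yr"}, "firm2": {"BA"}}
    resumes = {"alice": {"BA", "2yr"}, "bob": {"BA"}}

    def generate_offers(postings, resumes):
        offers = {}
        for firm, requirements in postings.items():
            # Anyone whose credentials cover the requirements is qualified...
            qualified = [w for w, creds in resumes.items() if requirements <= creds]
            # ...and the offer among the qualified is a literal die roll.
            if qualified:
                offers[firm] = random.choice(qualified)
        return offers

    print(generate_offers(postings, resumes))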

What I find most interesting about this policy idea is that it is in an important sense anti-meritocratic. We are in fact reducing the rewards for high levels of skill—at least a little bit—in order to improve society overall and especially for those with less skill. This is exactly the kind of policy proposal that I had hoped to see from a book like The Meritocracy Trap, but never found there. Perhaps it’s too radical? But the book was all about how we need fundamental, radical change—and then its actual suggestions were simple, obvious, and almost uncontroversial.

Note that this simplified process would not eliminate the incentives to get major, verifiable qualifications like college degrees or years of work experience. In fact, it would focus the incentives so that only those things matter, instead of whatever idiosyncratic or even capricious preferences HR agents might have. There would be no more talk of “culture fit” or “feeling right for the job”, just: “What is their highest degree? How many years have they worked in this industry?” I suppose this is credentialism, but in a world of asymmetric information, I think credentialism may be our only viable alternative to nepotism.

Of course, it’s too late for me. But perhaps future generations may benefit from this wisdom.

The necessitization of American consumption

Dec 6 JDN 2459190

Why do we feel poorer than our parents?

Over the last 20 years, real per-capita GDP has risen from $46,000 to $56,000 (in 2012 dollars):

It’s not just increasing inequality (though it is partly that); real median household income has increased over the same period from $62,500 to $68,700 (in 2019 dollars):

The American Enterprise Institute has utterly the wrong interpretation of what’s going on here, but their graph is actually quite informative if you can read it without their ideological blinders:

Over the past 20 years, some industries have seen dramatic drops in prices, such as televisions, cellphones, toys, and computer software. Other industries have seen roughly constant prices, such as cars, clothing, and furniture. Still other industries have seen modest increases in prices that tracked overall inflation, such as housing and food. And then there are some industries where prices have exploded to staggering heights, such as childcare, college education, and hospital services.

Since wages basically kept up with inflation, this is the relevant comparison: A product or service is more expensive in real terms if its price grew faster than inflation.
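Concretely, the real change in a price over a period is its nominal change deflated by cumulative inflation:

\[ \text{real change} = \frac{1 + \text{nominal change}}{1 + \text{inflation}} - 1 \]

So, to use made-up round numbers: over a period with 50% cumulative inflation, a service whose sticker price rose 60% got about 7% more expensive in real terms, while a gadget whose sticker price merely held steady got about 33% cheaper.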

It’s not inherently surprising that some prices would rise faster than inflation and some would rise slower; indeed, it would be shocking if that were not the case, since inflation essentially just is an average of all price changes over time. But if you look closely at the kinds of things that got cheaper versus more expensive, you can begin to see why the statistics keep saying we are getting richer but we don’t feel any richer.

The things that increased the most in price are things you basically can’t do without: Education, childcare, and healthcare. Yes, okay, theoretically you could do without these things, but the effects on your life would be catastrophic—indeed, going without healthcare could literally kill you. They are necessities.

The things that decreased the most in price are things that people have done without for most of human history: Cellphones, televisions, and computer software. They are newfangled high-tech goods that are now ubiquitous, but not long ago they didn’t even exist. Going without these goods would be inconvenient, but hardly catastrophic. Indeed, they largely only feel necessary because everyone else already has them. They are luxuries.

This even explains why older generations can be convinced that we are richer than the statistics say: We have all these fancy new high-tech toys that they never had. But what good does that do us when we can’t afford our health insurance?

Housing is also an obvious necessity, and while it has not on average increased in price faster than inflation, this average washes out important geographic variation.

San Francisco has seen housing prices nearly triple in the last 20 years:

Over the same period, Detroit’s housing prices plummeted, then returned to normal, and are now only 30% higher than they were 20 years ago (comparable to inflation):

It’s hardly surprising that the cities where the most people are moving to are the most expensive to live in; that’s basic supply and demand. But the magnitude of the difference is so large that most of us are experiencing rising housing prices, even though on average housing prices aren’t really rising.

Put this all together, and we can see that while by the usual measures our “standard of living” is increasing, our financial situation feels ever more precarious, because more and more of our spending is immediately captured by things we can’t do without. I suggest we call this effect necessitization; our consumption has been necessitized.

Healthcare is the most extreme example: In 1960, healthcare spending was only 5% of US GDP. As recently as 2000, it was 13%. Today, it is 18%. Medical technology has greatly improved over that time period, increasing our life expectancy from 70 years in 1960 to 76 years in 2000 to 78 years today, so perhaps this additional spending is worth it? But if we compare 2000 to 2020, we can see that an additional 5% of GDP in the last 20 years has only bought us two years of life. So we have spent an additional 5% of our income to gain 2.6% more life—that doesn’t sound like such a great deal to me. (Also, if you look closely at the data, most of the gains in life expectancy seem to be from things like antibiotics and vaccines that aren’t a large part of our healthcare spending, while most of the increased spending seems to be on specialists, testing, high-tech equipment, and administrative costs that don’t seem to contribute much to life expectancy.)
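Spelled out, the comparison in that last step is:

\[ 18\% - 13\% = 5\% \text{ of GDP} \quad \text{versus} \quad \frac{78 - 76}{76} \approx 2.6\% \text{ more life} \]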

Moreover, even if we decide that all this healthcare spending is worth it, it doesn’t make us richer in the usual sense. We have better health, but we don’t have greater wealth or financial security.

AEI sees that the industries with the largest price increases have the most government intervention, and blames the government; this is clearly confusing cause with effect. The reason the government intervenes so much in education and healthcare is that these are necessities and they are getting so expensive. Removing those interventions wouldn’t stop prices from rising; it would just remove the programs, like Medicaid and federal student loans, that currently allow most people to (barely) afford them.

But they are right about one thing: Prices have risen much faster in some industries than others, and the services that have gotten the most expensive are generally the services that are most important.

Why have these services gotten so expensive? A major reason seems to be that they are difficult to automate. Manufacturing electronics is very easy to automate—indeed, there’s even a positive feedback loop there: the better you get at automating making electronics, the better you get at automating everything, including making electronics. But automating healthcare and education is considerably more difficult. Yes, there are MOOCs, and automated therapy software, and algorithms will soon be outperforming the average radiologist; but there are a lot of functions that doctors, nurses, and teachers provide that are very difficult to replace with machines or software.

Suppose we do figure out how to automate more functions of education and healthcare; would that solve the problem? Maybe—but only if we really do manage to automate the important parts.

Right now, MOOCs are honestly terrible. The sales pitch is that you can get taught by a world-class professor from anywhere in the world, but the truth is that the things that make someone a world-class professor don’t translate over when you are watching recorded video lectures and doing multiple-choice quizzes. Really good teaching requires direct interaction between teacher and student. Of course, a big lecture hall full of hundreds of students often lacks such interaction—but so much the worse for big lecture halls. If indeed that’s the only way colleges know how to teach, then they deserve to be replaced by MOOCs. But there are better ways of teaching that online courses currently cannot provide, and if college administrators were wise, they would be focusing on pressing that advantage. If this doesn’t happen, and education does become heavily automated, it will be cheaper—but it will also be worse.

Similarly, some aspects of healthcare provision can be automated, but there are clearly major benefits to having actual doctors and nurses physically there to interact with patients. If we want to make healthcare more affordable, we will probably have to find other ways (a single-payer health system comes to mind).

For now, it is at least worth recognizing that there are serious limitations in our usual methods of measuring standard of living; due to effects like necessitization, the statistics can say that we are much richer even as we hardly feel richer at all.

Adversity is not a gift

Nov 29 JDN 2459183

For the last several weeks I’ve been participating in a program called “positive intelligence” (which they abbreviate “PQ” even though that doesn’t make sense); it’s basically a self-help program that is designed to improve mood and increase productivity. I am generally skeptical of such things, and I could tell from the start that it was being massively oversold, but I had the opportunity to participate for free, and I looked into the techniques involved and most of them seem to be borrowed from cognitive-behavioral therapy and mindfulness meditation.

Overall, I would say that the program has had small but genuine benefits for me. I think the most helpful part was actually getting the chance to participate in group sessions (via Zoom of course) with others also going through the program. That kind of mutual social support can make a big difference. The group I joined was made up entirely of fellow economists (some other grad students, some faculty), so we had a lot of shared experiences.

Some of the techniques feel very foolish, and others just don’t seem to work for me; but I did find at least some of the meditation techniques (which they annoyingly insist on calling by the silly name “PQ reps”) have helped me relax.

But there’s one part of the PQ program in particular that I just can’t buy into, and this is the idea that adversity is a gift and an opportunity.

They call it the “Sage perspective”: You observe the world without judging what is good or bad, and any time you think something is bad, you find a way to transform it into a gift and an opportunity. The claim is that everything—or nearly everything—that happens to you can make you better off. There’s a lot of overlap here with the attitude “Everything happens for a reason”.

I don’t doubt that sincerely believing this would make you happier. Nevertheless, it is obviously false.

If indeed adversity were a gift, we would seek it out. If getting fired or going bankrupt or getting sick were a gift and an opportunity, we’d work to make these things happen.

Yes, it’s true that sometimes an event which seems bad at the time can turn out to have good consequences in the long run. This is simply because we are unable to foresee all future ramifications. Sometimes things turn out differently than you think they will. But most of the time, when something seems bad, it is actually bad.

There might be some small amount of discomfort or risk that would be preferable to a life of complete safety and complacency; but we are perfectly capable of seeking out whatever discomfort or risk we choose. Most of us live with far more discomfort and risk than we would prefer, and simply have no choice in the matter.

If adversity were a gift, people would thank you for giving it to them. “Thanks for dumping me!” “Thanks for firing me!” “Thanks for punching me!” These aren’t the sort of thing we hear very often (at least not sincerely).

I think this is fairly obvious, honestly, so I won’t belabor it any further. But it raises a question: Is there a way to salvage the mental health benefits of this attitude while abandoning its obvious falsehood?

“Everything happens for a reason” doesn’t work; we live in a universe of deep randomness, ruled by the blind idiot gods of natural law.

“Every cloud has a silver lining” is better; but clearly not every bad thing has an upside, or if it does the upside can be so small as to be utterly negligible. (What was the upside of the Rwandan genocide?) Restricted to ordinary events like getting fired, this one works pretty well; but it obviously fails for the most extreme traumas, and it doesn’t seem particularly helpful for the death of a loved one either.

“What doesn’t kill me makes me stronger” is better still, but clearly not true in every case; some bad events that don’t actually kill us can traumatize us and make the rest of our lives harder. Perhaps “What doesn’t permanently damage me makes me stronger”?

I think the version of this attitude that I have found closest to the truth is “Everything is raw material”. Sometimes bad things just happen: Bad luck, or bad actions, can harm just about anyone at just about any time. But it is within our power to decide how we will respond to what happens to us, and wallowing in despair is almost never the best response.

Thus, while it is foolish to see adversity as a gift, it is not so foolish to see it as an opportunity. Don’t try to pretend that bad things aren’t bad. There’s no sense in denying that we would prefer some outcomes over others, and we feel hurt or disappointed when things don’t turn out how we wanted. Yet even what is bad can still contain within it chances to learn or make things better.

How men would benefit from a less sexist world

Nov 22 JDN 2459176

November 19 is International Men’s Day, so this week seemed an appropriate time for this post.

It’s obvious that a less sexist world would benefit women. But there are many ways in which it would benefit men as well.

First, there is the overwhelming pressure of conforming to norms of masculinity. I don’t think most women realize just how oppressive this is, how nearly every moment of our lives we are struggling to conform to a particular narrow vision of what it is to be a man, from which even small deviations can be severely punished. A less sexist world would mean a world where these pressures are greatly reduced.

Second, there is the fact that men are subjected to far more violence than women. Men are three times as likely to be murdered as women. This violence has many causes—indeed, the fact that men are much more likely to be both victims and perpetrators of violence nearly everywhere in the world suggests genetic causes—but a less sexist world could be a world with less violence in general, and men would benefit most from that.

Third, a less sexist world is a world where men and women feel more equal and more comfortable with one another, a world in which relationships between men and women can be deeper and more authentic. Another part of the male experience that most women don’t seem to understand is how incredibly painful it is to be treated as “Schrodinger’s Rapist”, where you are considered a potential predator by default and have to constantly signal that you are not threatening. To be clear, the problem isn’t that women are trying to protect themselves from harm; it’s that their risk of being harmed is high enough that they have to do this. I’m not saying women should stop trying to play it safe around men; I’m saying that we should be trying to find ways to greatly reduce the risk of harm that they face—and that doing so would benefit both women, who would be safer, and men, who wouldn’t have to be treated as potential predators at all times.

Feminists have actually done a lot of things that directly benefit men, including removing numerous laws that discriminate against men.

Are there some men who stand to be harmed by a less sexist society? Sure. Rapists clearly stand to be harmed. Extremely misogynist men will be pressured to change, which could be harmful to them. And, to be clear, it won’t all be benefits even for the rest of us. We will have to learn new things, change how we behave, challenge some of our most deep-seated norms and attitudes. But overall, I think that most men are already better off because of feminism, and would continue to be even better off still if the world became more feminist.

Why does this matter? Wouldn’t the benefits to women be a sufficient reason to make a less sexist world, even if it did end up harming most men?

Well, yes and no: It actually depends on how much it would harm men. If those harms were actually large enough, they would present a compelling reason not to make a more feminist world. That is clearly not the case, and this should be obvious to just about anyone; but it’s not a logical impossibility. Indeed, even knowing that the harms are not enough to justify abandoning the entire project, they could still be large enough to justify slowing it down or seeking other approaches to solving the problems feminism was intended to solve.

But yes, clearly feminism would be worth doing even if it had no net benefit to men. Yet, the fact that it does have a net benefit to most men is useful information.

First, it tells us that the world is nonzero-sum, that we can make some people better off without making others equally worse off. This is a deep and important insight that I think far too few people have really internalized.

Second, it provides numerous strategic benefits for recruiting men to the cause. Consider the following two potential sales pitches for feminism:

“You benefit from this system, but women are harmed by it. You should help us change it, even though that would harm you! If you don’t, you’re a bad person!”

“Women are harmed most by this system, but you are harmed by it too. You can help us change it, and we’ll make almost everyone better off, including you!”

Which of those two sales pitches seems more likely to convince someone who is on the fence?

Consider in particular men who aren’t particularly well-off themselves. If you are an unemployed, poor Black man, you probably find that the phrase “male privilege” rings a little hollow. Yes, perhaps you would be even worse off if you were a woman, but you’re not doing great right now, and you probably aren’t thrilled with the idea of risking being made even worse off, even by changes that you would otherwise agree are beneficial to society as a whole.

Similar reasoning applies to other “privileged” groups: Poor White men dying from treatable diseases because they can’t afford healthcare probably aren’t terribly moved by the phrase “White privilege”. Emphasizing the ways that your social movement will harm people seems like a really awful way of recruiting support, doesn’t it?

Yes, sometimes things that are overall good will harm some people, and we have to accept that. But the world is not always this way, and in fact some of the greatest progress in human civilization has been of the sort that benefits nearly everyone. Indeed, perhaps we should focus our efforts on the things that will benefit the most people, and then maybe come back later for things that benefit some at the expense of others?