You call this a hobby?

Nov 9 JDN 2460989

A review of Politics Is for Power by Eitan Hersh

This week, there was an election. It’s a minor off-year election—since it’s an odd-numbered year, many places don’t even have any candidates on the ballot—and as a result, turnout will surely be low. Eitan Hersh has written a book about why that’s a bad thing, and how it is symptomatic of greater problems in our civic culture as a whole.

Buried somewhere in this book, possible to find through committed, concerted effort, there is a book that could have had a large positive effect on our political system, our civic discourse, and our society as a whole. Sadly, Dr. Hersh buried it so well that most people will never find it.

In particular, he starts the book—not even on the first page, but on the cover—by actively alienating his core audience with what seems to be the utmost effort he can muster.


Yes, even the subtitle is condescending and alienating:

How to Move Beyond Political Hobbyism, Take Action, and Make Real Change

And of course it’s not just there; on page after page he drives the dagger deeper and twists it as hard as he can, repeating the accusation over and over:

This is just a hobby for you. It doesn’t really mean anything.

Today’s hobbyists possess the negative qualities of the amateurs—hyperemotional engagement, obsession with national politics, an insatiable appetite for debate—and none of the amateur’s positive qualities—the neighborhood meetings, the concrete goals, the leadership.

– p. 9

You hear that? You’re worse than an amateur. This is on page 9. Page 9.

[…] Much of the time we spend on politics is best described as an inward-focused leisure activity for people who like politics.

We may not easily concede that we are doing politics for fun.[…]

– p. 14

See? You may say it’s not really just for fun, but you’re lying. You’re failing to concede the truth.

To the political hobbyist, news is a form of entertainment and needs to be fun.

– p. 19

You hear me? This is fun for you. You’re enjoying this. You’re doing it for yourself.

The real explanation for the dynamics of voter turnout is that we treat politics like a game and follow the spectacle. Turnout is high in presidential elections compared to other US elections in the same way that football viewership is high when the Super Bowl is on. Many people who do not like football or even know the rules of the game end up at a Super Bowl party. They’re there for the commercials, the guacamole, and to be part of a cultural moment. That’s why turnout is high in presidential elections. Without the spectacle, even people who say they care about voting don’t show up.

– p. 48

This is all a game. It’s not real. You don’t really care.

I could go on; he keeps repeating this message—this insult, this accusation—throughout the book. He tells you, over and over, that if you are not already participating in politics in the very particular way he wants you to (and he may even be right that it would be better!), you are a selfish liar, and you are treating what should be vitally important as just meaningless entertainment.

This made it honestly quite painful to get through the book. Several times, I was tempted to just give up and put it back on the shelf. But I’m glad I didn’t, because there are valuable insights about effective grassroots political activism buried within this barrage of personal accusations.

I guess Hersh must not see this as a personal accusation; at one point, he acknowledges that people might find it insulting, but (1) doesn’t seem to care and (2) makes no effort to inquire as to why we might feel that way; in fact, he manages to twist the knife just a little deeper in that very same passage:

For the non-self-identifying junkies, the term political hobbyist can be insulting. Given how important politics is, it doesn’t feel good to call one’s political activity a hobby. The term is also insulting, I have learned, to real hobbyists, who see hobbies as activities with much more depth than the online bickering or addictive news consumption I’m calling a hobby.

– p. 88

You think calling it a “hobby” is insulting? Yeah, well, it’s worse than that, so ha!

But let me tell you something about my own experience of politics. (Indeed, one of Hersh’s central messages is that sharing personal experiences is one of the most powerful political tools there is.)

How do most people I know feel about politics, since, oh, say… November 2016?

ABSOLUTE HORROR AND DESPAIR.

For every queer person I know, every trans person, every immigrant, every woman, every person of color, and for plenty of White cishet liberal guys too, the election of President Donald Trump was traumatic. It felt like a physical injury. People who had recovered from depression were thrust back into it. People felt physically nauseated. And especially for immigrants and trans people, people literally feared for their lives and were right to do so.

WHATEVER THIS IS, IT IS NOT A HOBBY.

I’ve had to talk people down from psychotic episodes and suicidal ideation because of this, and you have the fucking audacity to tell me that we’re doing this for fun!?

If someone feared for their life because their team lost the Super Bowl, we would rightfully recognize that as an utterly pathological response. But I know a whole bunch of folks on student visas who are constantly afraid of being kidnapped and taken away by masked men with guns, because that is a thing that has actually happened to other people who were in this country on student visas. I know a whole bunch of trans folks who are afraid of being assaulted or even killed for using the wrong bathroom, because that is a thing that actually happens to trans people in this country.

I wish I could tell these people—many of them dear friends of mine—that they are wrong to fear, that they are safe, that everything will be all right. But as long as Donald Trump is in power and the Republicans in Congress and the right-wing Supreme Court continue to enable him, I can’t tell them that, because I would be lying; the danger is real. All I can do is tell them that it is probably not as great a danger as they fear, and that if there is any way I can help them, I am willing to do so.

Indeed, politics for me and those closest to me is so obviously so much not a hobby that repeatedly insisting that I admit that it is starts to feel like gaslighting. I feel like I’m in a struggle session or something: “Admit you are a hobbyist! Repent!”

I don’t know; maybe there are people for whom politics is just a hobby. Maybe the privileged cishet White kids at Tufts that Dr. Hersh lectures to are genuinely so removed from the consequences of public policy that they can engage with politics at their leisure and for their own entertainment. (A lot of the studies he cites are specifically about undergrads; I know this is a thing in pretty much all social science… but maybe undergrads are in fact not a very representative sample of political behavior?) But even so, some of the international students in those lecture halls (11% of Tufts undergrads and 17% of Tufts grads) presumably feel rather differently.

In fact, maybe genuine political hobbyism is a widespread phenomenon, and its existence explains a lot of otherwise really baffling things about the behavior of our electorate (like how the same districts could vote for both Donald Trump and Alexandria Ocasio-Cortez). I don’t find that especially plausible given my own experience, but I’m an economist, not a political scientist, so I do feel like I should offer some deference to the experts on this matter. (And I’m well aware that my own social network is nothing like a representative sample of the American electorate.)

But I can say this for sure:

The target audience of this book is not doing this as a hobby.

Someone who picks up a book by a political scientist hoping for guidance as to how to make their own political engagement more effective is not someone who thinks this is all a game. They are not someone who is engaging with politics as a fun leisure activity. They are someone who cares. They are someone who thinks this stuff matters.

By construction, the person who reads this book to learn about how to make change wants to make change.

So maybe you should acknowledge that at some point in your 200 pages of text? Maybe after spending all these words talking about how having empathy is such an important trait in political activism, you should have some empathy for your audience?

Hersh does have some useful advice to give, buried in all this.

His core message is basically that we need more grassroots activism: Small groups of committed people, acting in their communities. Not regular canvassing, which he acknowledges as terrible (and as well he should; I’ve done it, and it is), but deep canvassing, which also involves going door to door but is really a fundamentally different process.

Actually, he seems to love grassroots organizing so much that he’s weirdly nostalgic for the old days of party bosses. Several times, he acknowledges that these party bosses were corrupt, racist, and utterly unaccountable, but after every such acknowledgment he always follows it up with some variation on “but at least they got things done”.

He’s honestly weirdly dismissive of other forms of engagement, though. Like, I expected him to be dismissive of “slacktivism” (though I am not), if for no other reason than the usual generational curmudgeonry. But he’s also weirdly dismissive of donations and even… honestly… voting? He doesn’t even seem interested in encouraging people to vote more. He doesn’t seem to think that get-out-the-vote campaigns are valuable.

I guess as a political scientist, he’s probably very familiar with the phenomenon of “low information voters”, who frequently swing elections despite being either clueless or actively misled. And okay, maybe turning out those people isn’t all that useful, at least if it’s not coupled with also educating them and correcting their misconceptions. But surely it’s not hobbyism to vote? Surely doing the one most important thing in a democratic system isn’t treating this like a game?

In his section on donations, he takes two tacks against them:

The first is to say that rich donors who pay $10,000 a plate for fancy dinners really just want access to politicians for photo ops. I don’t think that’s right, but the truth is admittedly not much better: I think they want access to politicians to buy influence. This is “political engagement” in some sense—you’re acting to exert power—but it’s corrupt, and it’s the source of an enormous amount of damage to our society—indeed to our planet itself. But I think Hersh has to deny that the goal is influence, because that would in fact be “politics for power”, and in order to remain fiercely non-partisan throughout (which, honestly, probably is a good strategic move), he carefully avoids ever saying that anyone exerting political power is bad.

Actually the closest he gets to admitting his own political beliefs (surprise, the Massachusetts social science professor is a center-left liberal!) comes in a passage where he bemoans the fact that… uh… Democrats… aren’t… corrupt enough? If you don’t believe me, read it for yourself:

The hobbyist motivation among wealthy donors is also problematic for a reason that doesn’t have a parallel in the nonprofit world: Partisan asymmetry. Unlike Democratic donors, Republican donors typically support politicians whose policy priorities align with a wealthy person’s financial interests. The donors can view donations as an investment. When Schaffner and I asked max-out donors why they made their contribution, many more Republicans than Democrats said that a very or extremely important reason for their gift was that the politician could affect the donor’s own industry (37 percent of Republicans versus 22 percent of Democrats).

This asymmetry puts Democrats at a disadvantage. Not motivated by their own bottom line, Democratic donors instead have to be motivated by ideology, issues, or even by the entertainment value that a donation provides.

– p. 80

Yes, God forbid they be motivated by issues or ideology. That would involve caring about other people. Clearly only naked self-interest and the profit motive could ever be a good reason for political engagement! (Quick question: You haven’t been, uh, reading a lot of… neoclassical economists lately, have you? Why? Oh, no reason.) Oh why can’t Democrats just be more like Republicans, and use their appallingly vast hoards of money to make sure that we cut social services and deregulate everything until the polluted oceans flood the world!?

The second is to say that the much broader population who makes small donations of $25 or $50 is “ideologically extreme” compared to the rest of the population, which is true, but seems to me utterly unsurprising. The further the world is from how you’d like to see it, the greater the value is to you of changing the world, and therefore the more you should be willing to invest into making that change—or even into a small probability of possibly making that change. If you think things are basically okay, why would you pay money to try to make them different? (I guess maybe you’d try to pay money to keep them the same? But even so-called “conservatives” never actually seem to campaign on that.)

I also don’t really see “ideologically extreme” as inherently a bad thing.

Sure, some extremists are very bad: Nazis are extreme and bad (weird that this seems controversial these days), Islamists are extreme and bad, Christian nationalists are extreme and bad, tankie leftists are extreme and bad.

But vegetarians—especially vegans—are also “ideologically extreme”, but quite frankly we are objectively correct, and maybe don’t even go far enough (I only hope that future generations will forgive me for my cheese). Everyone knows that animals can suffer, and everyone who is at all informed knows that factory farms make them suffer severely. The “moderate” view that all this horrible suffering is justifiable in the name of cheap ground beef and chicken nuggets is a fundamentally immoral one. (Maybe I could countenance a view that free-range humane meat farming is acceptable, but even that is far removed from our current political center.)

Trans activism is in some sense “ideologically extreme”—and frequently characterized as such—but it basically amounts to saying that the human rights of free expression, bodily autonomy, and even just personal safety outweigh other people’s narrow, blinkered beliefs about sex and gender. Okay, maybe we can make some sort of compromise on trans kids in sports (because why should I care about sports?), and I’m okay with gender-neutral bathrooms instead of letting trans women in women’s rooms (because gender-neutral bathrooms give more privacy and safety anyway!), and the evidence on the effects of puberty blockers and hormones is complicated (which is why it should be decided by doctors and scientists, not by legislators!), but in our current state, trans people die to murder and suicide at incredibly alarming rates. The only “moderate” position here is to demand, at minimum, enforced laws against discrimination and hate crimes. (Also, calling someone by the name and pronouns they ask you to costs you basically nothing. Failing to do that is not a brave ideological stand; it’s just you being rude and obnoxious. Indeed, since it can trigger dysphoria, it’s basically like finding out someone’s an arachnophobe and immediately putting a spider in their hair.)

Open borders is regarded as so “ideologically extreme” that even the progressive Democrats won’t touch it, despite the fact that I literally am not aware of a single ethical philosopher in the 21st century who believes that our current system of immigration control is morally justifiable. Even the ones who favor “closed borders” in principle are almost unanimous that our current system is cruel and racist. The Lifeboat Theory is ridiculous; allowing immigrants in wouldn’t kill us, it would just maybe—maybe—make us a little worse off. Their lives may be at stake, but ours are not. We are not keeping people out of a lifeboat so it doesn’t sink; we are keeping them out of a luxury cruise liner so it doesn’t get dirty and crowded.

Indeed, even so-called “eco-terrorists”, who are not just ideologically extreme but behaviorally extreme as well, don’t even really seem that bad. They are really mostly eco-vandals; they destroy property, they don’t kill people. There is some risk to life and limb involved in tree spiking or blowing up a pipeline, but the goal is clearly not to terrorize people; it’s to get them to stop doing a particular thing—a particular thing that they in fact probably should stop doing. I guess I understand why this behavior has to be illegal and punished as such; but morally, I’m not even sure it’s wrong. We may not be able to name or even precisely count the children saved who would have died if that pipeline had been allowed to continue pumping oil and thus spewing carbon emissions, but that doesn’t make them any less real.

So really, if anything, the problem is not “extremism” in some abstract sense, but particular beliefs and ideologies, some of which are not even regarded as extreme. A stronger vegan lobby would not be harmful to America, however “extreme” they might be, and a strong Republican lobby, however “mainstream” it is perceived to be, is rapidly destroying our nation on a number of different levels.

Indeed, in parts of the book, it almost seems like Hersh is advocating in some Nietzschean sense for power for its own sake. I don’t think that’s really his intention; I think he means to empower the currently disempowered, for the betterment of society as a whole. But his unwillingness to condemn rich Republicans who donate the maximum allowed in order to get their own industry deregulated is at least… problematic, as both political activists and social scientists are wont to say.

I’m honestly not even sure that empowering the disempowered is what we need right now. I think a lot of the disempowered are also terribly misinformed, and empowering them might actually make things worse. In fact, I think the problem with the political effect of social media isn’t that it has failed to represent the choices of the electorate, but that it has represented them all too well and most people are really, really bad—just, absolutely, shockingly, appallingly bad—at making good political choices. They have wildly wrong beliefs about really basic policy questions, and often think that politicians’ platforms are completely different from what they actually are. I don’t go quite as far as this article by Dan Williams in Conspicuous Cognition, but it makes some really good points I can’t ignore. Democracy is currently failing to represent the interests of a great many Americans, but a disturbingly large proportion of this failure must be blamed on a certain—all too large—segment of the American populace itself.

I wish this book had been better.

More grassroots organizing does seem like a good thing! And there is some advice in this book about how to do it better—though in my opinion, not nearly enough. A lot of what Hersh wants to see happen would require tremendous coordination between huge numbers of people, which almost seems like saying “politics would be better if enough people were better about politics”. What I wanted to hear more about was what I can do; if voting and donating and protesting and blogging isn’t enough, what should I be doing? How do I make it actually work? It feels like Hersh spent so long trying to berate me for being a “hobbyist” that he forgot to tell me what he actually thinks I should be doing.

I am fully prepared to believe that online petitions and social media posts don’t accomplish much politically. (Indeed, I am fully prepared to believe that blogging doesn’t accomplish much politically.) I am open to hearing what other options are available, and eager for guidance about how to have the most effective impact.

But could you please, please not spend half the conversation repeatedly accusing me of not caring!?

In Nozicem

Nov 2 JDN 2460982

(I wasn’t sure how to convert Robert Nozick’s name into Latin. I decided it’s a third-declension noun, Nozix, Nozicis. But my name already is Latin, so if one of his followers ever wants to write a response to this post that also references In Catalinam, they’ll know how to decline it; the accusative is Julium, if you please.)

This post is not at all topical. I have been too busy working on video game jams (XBOX Game Camp Detroit, and then the Epic Mega Jam, for which you can view my submission, The Middle of Nowhere, here!) to keep up with the news, and honestly I think I am psychologically better off for it.

Rather, this is a post I’ve been meaning to write for a long time, but never quite got around to.

It is about Robert Nozick, and why he was a bad philosopher, a bad person, and a significant source of harm to our society as a whole.

Nozick had a successful career at Harvard, and even became president of the American Philosophical Association. So it may seem that I am going out on quite a limb by saying he’s a bad philosopher.

But the philosophy for which he is best known, the thing that made his career, is not simply obviously false—it is evil. It is the sort of thing that one can only write if one is either a complete psychopath, utterly ignorant of history, or arguing in bad faith (or some combination of these).

It is summarized in this pithy quote that makes less moral sense than the philosophy of the Joker in The Dark Knight:

Taxation of earnings from labor is on a par with forced labor. Seizing the results of someone’s labor is equivalent to seizing hours from him and directing him to carry on various activities.

– Anarchy, State, and Utopia, p. 169

I apologize in advance for my language, but I must say it:

NO IT FUCKING ISN’T.

At worst—at the absolute worst, when a government is utterly corrupt and tyrannical, provides no legitimate services whatsoever, contributes in no way to public goods, offers no security, and exists entirely to enrich its ruling class—which by the way is worse than almost any actual government that has ever existed, even including totalitarian dictators and feudal absolute monarchies—at worst, taxation is like theft.

Taxation, like theft, takes your wealth, not your labor.


Wealth is not labor.

Even wealth earned by wage income is not labor—and most wealth isn’t earned by wage income. Elon Musk is now halfway to a trillion dollars, and it’s not because he works a million times harder than you. (Nor is he a million times smarter than you, or even ten—perhaps not even one.) The majority of wealth—and the vast majority of top 1%, top 0.1%, and top 0.01% wealth—is capital that begets more capital, continuously further enriching those who could live just fine without ever working another day in their lives. Billionaire wealth is honestly so pathological at this point that it would be pathetic if it weren’t so appalling.

Even setting aside the historical brutality of slavery as it was actually implemented—especially in the United States, where slaves were racialized and commodified in a way that historically slaves usually weren’t—there is a very obvious, very bright, very hard line between taking someone’s wealth and forcing them to work.

Even a Greek prisoner of war who was bought by a Roman patrician to tutor his children—the sort of slave that actually had significant autonomy and lived better than an average person in Roman society—was fundamentally unfree in a way that no one has ever been made unfree by having to pay income tax. (And the Roman patrician who owned him and (ahem) paid taxes was damn well aware of how much more free he was than his slave.)

Whether you are taxed at 2% or 20% or 90%, you are still absolutely free to use your time however you please. Yes, if you assume a fixed amount of work at a fixed wage, and there are no benefits to you from the taxation (which is really not something we can assume, because having a good or bad government radically affects what your economy as a whole will be like), you will have less stuff, and if you insist for some reason that you must have the same amount of stuff, then you would have to work more.

But even then, you would merely have to work more somewhere—anywhere—in order to make up the shortfall. You could keep your current job, or get another one, or start your own business. And you could at any time decide that you don’t need all that extra stuff and don’t want to work more, and simply choose to not work more. You are, in other words, still free.
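The arithmetic behind this point is simple enough to write out. This is a minimal sketch under the same unrealistic assumptions the argument grants Nozick (a fixed hourly wage, no benefits flowing back from the taxation); the wage and income figures are made up purely for illustration:

```python
# How many hours must you work to keep the same after-tax income
# at different tax rates? Assumes a fixed wage and no benefit from
# the taxation -- the most Nozick-friendly assumptions possible.
# The numbers here are hypothetical, chosen only for illustration.

wage = 25.0          # hypothetical hourly wage
target_net = 800.0   # hypothetical weekly after-tax income you insist on keeping

for tax_rate in (0.02, 0.20, 0.90):
    # Net income = wage * hours * (1 - tax_rate), solved for hours:
    hours_needed = target_net / (wage * (1 - tax_rate))
    print(f"tax {tax_rate:.0%}: {hours_needed:.1f} hours/week")
```

Even at a 90% rate, what changes is how much stuff you end up with (or how much you must work to keep the same amount of stuff)—never who controls your hours.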

At worst, the government has taken your stuff. It has made you poorer. But absolutely not, in no way, shape or form, has it made you a slave.

Yes, there is the concept of “wage slavery”, but “wage slavery” isn’t actually slavery, and the notion that people aren’t really, truly free unless they can provide for basic needs entails the need for a strong, redistributive government, which is the exact opposite of what Robert Nozick and his shockingly large body of followers have been arguing for since the 1970s.

I could have been sympathetic to Nozick if his claim had been this:

Taxation of earnings from labor is on a par with [theft]. Seizing the results of someone’s labor is equivalent to seizing [goods he has purchased with his own earnings].

Or even this:

[Military conscription] is on a par with forced labor. [After all, you are] seizing hours from him and directing him to carry on various activities.

Even then, there are some very clear reasons why we might be willing to accept taxation or even conscription from a legitimate liberal democratic government even though a private citizen doing the same fundamental activity would obviously be illegal and immoral.

Indeed, it’s not clear that theft is always immoral; there is always the Les Misérables exception, where someone desperately poor steals food to feed themselves, and a liberal democratic government taxing its citizens in order to provide food stamps seems even more ethically defensible than that.

And that, my friends, is precisely why Nozick wasn’t satisfied with it.

Precisely because there is obvious nuance here that can readily justify at least some degree of not only taxation for national security and law enforcement, but also taxation for public goods and even redistribution of wealth, Nozick could not abide the analogies that actually make sense. He had to push beyond them to an analogy that is transparently absurd, in order to argue for his central message that government is justifiable for national security and law enforcement only, and all other government functions are inherently immoral. Forget clean water and air. Forget safety regulations in workplaces—or even on toys. Forget public utilities—all utilities must be privatized and unregulated. And above all—above all—forget ever taking any money from the rich to help the poor, because that would be monstrous.

If you support food stamps, in Nozick’s view, there should be a statue of you in Mississippi, because you are a defender of slavery.

Indeed, many of his followers have gone beyond that, and argued using the same core premises that all government is immoral, and the only morally justifiable system is anarcho-capitalism—which, I must confess, I have always had trouble distinguishing from feudalism with extra steps.

Nozick’s response to this kind of argument seemed to be that anarcho-capitalism would (somehow, magically) automatically transition into his favored kind of minarchist state, and so it’s actually a totally fine intermediate goal. (A fully privatized military and law enforcement system! What could possibly go wrong? It’s not like private prisons are already unconscionably horrible even in an otherwise mostly-democratic system or anything!)

Nozick wanted to absolve himself—and the rich, especially the rich, whom he seemed to love more than life itself—from having to contribute to society, from owing anything to any other human being.

Rather than be moved by our moral appeals that millions of innocent people are suffering and we could so easily alleviate that suffering by tiny, minuscule, barely-perceptible harms to those who are already richer than anyone could possibly deserve to be, he tried to turn the tables: “No, you are immoral. What you want is slavery.”

And in so doing, he created a thin, but shockingly resilient, intellectual veneer for the most craven selfishness and the most ideologically blinkered hyper-capitalism. He made it respectable to oppose even the most basic ways that governments can make human life better; by verbal alchemy he transmuted plain evil into its own new moral crusade.

Indeed, perhaps the only reason his philosophy was ever taken seriously is that the rich and powerful found it very, very useful.

The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: this is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be on the time horizon the most optimistic investors are assuming. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.


The headline figure here is that, based on current projections, US corporations will have spent $560 billion on AI capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payoff rate would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry that is dependent upon cutting-edge technology that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.
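To make the arithmetic explicit, here is the back-of-the-envelope calculation using the figures quoted above; it deliberately ignores operating costs, depreciation, and any revenue growth, so it is the most generous possible reading of those numbers:

```python
# Back-of-the-envelope payback period for AI capital expenditure,
# using the $560B capex and $35B revenue figures cited above.
# Ignores operating costs, depreciation, and revenue growth.

capex_billions = 560     # projected US corporate AI capital expenditure
revenue_billions = 35    # anticipated AI revenue

payback_years = capex_billions / revenue_billions
print(payback_years)  # 16.0
```

And 16 years assumes every dollar of revenue is pure profit—the real payback period would be far longer.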

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change on our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, and roughly $500 billion of that is AI investment. That’s over 1.6% of GDP—and last quarter our annualized GDP growth rate was 3.3%—so if most of that investment is new spending, roughly half of our measured GDP growth was just due to building more data centers that probably won’t even be profitable.

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem (though I cannot seem to find anyone naming the theorem; it’s like a folk theorem, I guess?) of Bayesian logic that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them.

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect to encounter evidence that will change my beliefs—if I did expect to see such evidence, I should change my beliefs right now!
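In symbols, this is just the law of total expectation applied to the posterior mean (it is sometimes described as the martingale property of Bayesian beliefs); a sketch:

```latex
% Let \theta be the quantity of interest and X the evidence yet to come.
% The posterior mean E[\theta | X] is itself a random variable (it depends
% on which evidence X you end up observing). By the law of total
% expectation, its prior expectation equals the prior mean:
\mathbb{E}_X\!\left[\,\mathbb{E}[\theta \mid X]\,\right] \;=\; \mathbb{E}[\theta].
```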

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)
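The rain example can be checked numerically; this sketch just encodes the probabilities above:

```python
# Numerical check of the rain example (probabilities from the post).
prior = 0.30  # P(rain at 5pm tomorrow)

# Posterior after observing the outcome: 1.0 if it rains, 0.0 if not,
# weighted by how likely I currently think each observation is.
outcomes = [(prior, 1.0), (1 - prior, 0.0)]  # (probability, posterior)

expected_posterior = sum(p * post for p, post in outcomes)
print(expected_posterior)  # 0.3 -- exactly the prior

# Variance: the prior is Bernoulli(0.3); the posterior variance is 0
# either way, since I'll know the answer.
prior_variance = prior * (1 - prior)
expected_posterior_variance = sum(p * post * (1 - post) for p, post in outcomes)
print(round(prior_variance, 2), expected_posterior_variance)  # 0.21 0.0
```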

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then I should, on average, expect to see just as much evidence making me believe the hypothesis more as I see evidence that will make me believe the hypothesis less. If that is not what I expect, I should really change how much I believe the hypothesis right now!
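To see the same thing with evidence that won't produce certainty, here is a sketch with a hypothetical noisy test (the 0.8/0.2 accuracy numbers are made up purely for illustration):

```python
# Hypothesis H has prior 0.3; a noisy test reports "positive" with
# probability 0.8 if H is true and 0.2 if H is false (made-up numbers).
prior = 0.30
p_pos_given_h, p_pos_given_not_h = 0.8, 0.2

p_pos = prior * p_pos_given_h + (1 - prior) * p_pos_given_not_h
posterior_if_pos = prior * p_pos_given_h / p_pos
posterior_if_neg = prior * (1 - p_pos_given_h) / (1 - p_pos)

# Weight each posterior by the probability of seeing that evidence:
expected_posterior = p_pos * posterior_if_pos + (1 - p_pos) * posterior_if_neg
print(round(expected_posterior, 10))  # 0.3 -- again equal to the prior
```

Positive evidence pulls me up (to about 0.63) and negative evidence pulls me down (to about 0.10), but weighted by how likely each is, the pulls exactly cancel.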

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.


Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.


To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

Wage-matching and the collusion under our noses

Jul 20 JDN 2460877

It was a minor epiphany for me when I learned, over the course of studying economics, that price-matching policies, while they seem like they benefit consumers, actually are a brilliant strategy for maintaining tacit collusion.

Consider a (Bertrand) market, with some small number n of firms in it.

Each firm announces a price, and then customers buy from whichever firm charges the lowest price. Firms can produce as much as they need to in order to meet this demand. (This makes the most sense for a service industry rather than for literal manufactured goods.)

In Nash equilibrium, all firms will charge the same price, because anyone who charged more would sell nothing. But what will that price be?

In the absence of price-matching, it will be just above the marginal cost of the service. Otherwise, it would be advantageous to undercut all the other firms by charging slightly less, and you could still make a profit. So the equilibrium price is basically the same as it would be in a perfectly-competitive market.

But now consider what happens if the firms can announce a price-matching policy.

If you were already planning on buying from firm 1 at price P1, and firm 2 announces that you can buy from them at some lower price P2, then you still have no reason to switch to firm 2, because you can still get price P2 from firm 1 as long as you show them the ad from the other firm. Under the very reasonable assumption that switching firms carries some cost (if nothing else, the effort of driving to a different store), people won’t switch—which means that any undercut strategy will fail.

Now, firms don’t need to set such low prices! They can set a much higher price, confident that if any other firm tries to undercut them, it won’t actually work—and thus, no one will try to undercut them. The new Nash equilibrium is now for the firms to charge the monopoly price.
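The undercutting logic can be sketched as a toy calculation. All numbers below are made up for illustration; this is not a full equilibrium model, just a check of whether undercutting pays:

```python
# Toy Bertrand duopoly: two firms split the market at equal prices,
# constant marginal cost, each customer buys one unit from the cheapest
# *effective* price. With price-matching, customers can claim a rival's
# lower price from their current firm, so undercutting wins nobody over.

MC = 10           # marginal cost (made-up)
N_CUSTOMERS = 100

def undercut_gain(rival_price, matching):
    """Profit from undercutting a rival charging rival_price by 1 unit."""
    my_price = rival_price - 1
    if matching:
        # Customers just show the ad and pay my_price where they already
        # shop: I capture no one and keep only my half of the market.
        my_share = N_CUSTOMERS / 2
    else:
        my_share = N_CUSTOMERS  # I capture the whole market
    return (my_price - MC) * my_share

monopoly_price = 50
# Without matching, undercutting the monopoly price is hugely profitable:
print(undercut_gain(monopoly_price, matching=False))   # 3900
# Status quo profit at the monopoly price (half the market):
print((monopoly_price - MC) * N_CUSTOMERS / 2)         # 2000.0
# With matching, undercutting actually *lowers* my profit:
print(undercut_gain(monopoly_price, matching=True))    # 1950.0
```

Since the deviation no longer pays, the high price is self-sustaining: exactly the tacit-collusion result.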

In the real world, it’s a bit more complicated than that; for various reasons they may not actually be able to sustain collusion at the monopoly price. But there is considerable evidence that price-matching schemes do allow firms to charge a higher price than they would in perfect competition. (Though the literature is not completely unanimous; there are a few who argue that price-matching doesn’t actually facilitate collusion—but they are a distinct minority.)

Thus, a policy that on its face seems like it’s helping consumers by giving them lower prices actually ends up hurting them by giving them higher prices.

Now I want to turn things around and consider the labor market.

What would price-matching look like in the labor market?

It would mean that whenever you are offered a higher wage at a different firm, you can point this out to the firm you are currently working at, and they will offer you a raise to that new wage, to keep you from leaving.

That sounds like a thing that happens a lot.

Indeed, pretty much the best way to get a raise, almost anywhere you may happen to work, is to show your employer that you have a better offer elsewhere. It’s not the only way to get a raise, and it doesn’t always work—but it’s by far the most reliable way, because it usually works.

This for me was another minor epiphany:

The entire labor market is full of tacit collusion.

The very fact that firms can afford to give you a raise when you have an offer elsewhere basically proves that they weren’t previously paying you all that you were worth. If they had actually been paying you your value of marginal product as they should in a competitive labor market, then when you showed them a better offer, they would say: “Sorry, I can’t afford to pay you any more; good luck in your new job!”

This is not a monopoly price but a monopsony price (or at least something closer to it); people are being systematically underpaid so that their employers can make higher profits.

And since the phenomenon of wage-matching is so ubiquitous, it looks like this is happening just about everywhere.

This simple model doesn’t tell us how much higher wages would be in perfect competition. It could be a small difference, or a large one. (It likely varies by industry, in fact.) But the simple fact that nearly every employer engages in wage-matching implies that nearly every employer is in fact colluding on the labor market.

This also helps explain another phenomenon that has sometimes puzzled economists: Why doesn’t raising the minimum wage increase unemployment? Well, it absolutely wouldn’t, if all the firms paying minimum wage are colluding in the labor market! And we already knew that most labor markets were shockingly concentrated.

What should be done about this?

Now there we have a thornier problem.

I actually think we could implement a law against price-matching on product and service markets relatively easily, since these are generally applied to advertised prices.

But a law against wage-matching would be quite tricky indeed. Wages are generally not advertised—a problem unto itself—and we certainly don’t want to ban raises in general.

Maybe what we should actually do is something like this: Offer a cash bonus (refundable tax credit?) to anyone who changes jobs in order to get a higher wage. Make this bonus large enough to offset the costs of switching jobs—which are clearly substantial. Then, the “undercut” (“overcut”?) strategy will become more effective; employers will have an easier time poaching workers from each other, and a harder time sustaining collusive wages.

Businesses would of course hate this policy, and lobby heavily against it. This is precisely the reaction we should expect if they are relying upon collusion to sustain their profits.

A knockdown proof of social preferences

Apr 27 JDN 2460793

In economics jargon, social preferences basically just means that people care about what happens to people other than themselves.

If you are not an economist, it should be utterly obvious that social preferences exist:

People generally care the most about their friends and family, less but still a lot about their neighbors and acquaintances, less but still moderately about other groups they belong to such as those delineated by race, gender, religion, and nationality (or for that matter alma mater), and less still but not zero about any randomly-selected human being. Most of us even care about the welfare of other animals, though we can be curiously selective about this: Abuse that would horrify most people if done to cats or dogs passes more or less ignored when it is committed against cows, pigs, and chickens.

For some people, there are also groups for which there seem to be negative social preferences, sometimes called “spiteful preferences”, but that doesn’t really seem to capture it: I think we need a stronger word like hatred for whatever emotion human beings feel when they are willing and eager to participate in genocide. Yet even that is still a social preference: If you want someone to suffer or die, you do care about what happens to them.

But if you are an economist, you’ll know that the very idea of social preferences remains controversial, even after it has been clearly and explicitly demonstrated by numerous randomized controlled experiments. (I will never forget the professor who put “altruism” in scare quotes in an email reply he sent me.)

Indeed, I have realized that the experimental evidence is so clear, so obvious, that it surprises me that I haven’t seen anyone present the really overwhelming knockdown evidence that ought to convince any reasonable skeptic. So that is what I have decided to do today.

Consider the following four economics experiments:

Dictator 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Whatever allocation Participant 1 chooses, Participant 2 must accept. Both participants get their allocated amounts.
Dictator 2: Participant 1 chooses an allocation of $20, choosing how much they get. Participant 1 gets their allocated amount. The rest of the money is burned.
Ultimatum 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, both participants get nothing.
Ultimatum 2: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, Participant 2 gets nothing, but Participant 1 still gets the allocated amount.
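The four games can be written as explicit payoff rules, which makes the key point easy to verify mechanically. This is just a sketch; the dollar amounts come from the descriptions above:

```python
# The four games as explicit payoff rules, returning (P1 payoff, P2 payoff).
PIE = 20  # dollars to allocate

def dictator_1(keep):
    return keep, PIE - keep            # the remainder goes to Participant 2

def dictator_2(keep):
    return keep, 0                     # the remainder is burned

def ultimatum_1(keep, accept):
    return (keep, PIE - keep) if accept else (0, 0)

def ultimatum_2(keep, accept):
    # Rejection denies Participant 2 their share but no longer punishes P1.
    return (keep, PIE - keep) if accept else (keep, 0)

# The decider's own payoff is identical across each pair of variants:
for keep in range(PIE + 1):
    # Participant 1 is the decider in the Dictator games...
    assert dictator_1(keep)[0] == dictator_2(keep)[0]
    for accept in (True, False):
        # ...and Participant 2 decides accept/reject in the Ultimatum games.
        assert ultimatum_1(keep, accept)[1] == ultimatum_2(keep, accept)[1]
print("only the other player's payoff differs between variants")
```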

Dictator 1 and Ultimatum 1 are the standard forms of the Dictator Game and Ultimatum Game, which are experiments that have been conducted dozens if not hundreds of times and are the subject of a huge number of papers in experimental economics.

These experiments clearly demonstrate the existence of social preferences. But I think even most behavioral economists don’t quite seem to grasp just how compelling that evidence is.

This is because they have generally failed to compare against my other two experiments, Dictator 2 and Ultimatum 2.

If social preferences did not exist, Participant 1 would be completely indifferent about what happened to the money that they themself did not receive.

In that case, Dictator 1 and Dictator 2 should show the same result: Participant 1 chooses to get $20.

Likewise, Ultimatum 1 and Ultimatum 2 should show the same result: Participant 1 chooses to get $19, offering only $1 to Participant 2, and Participant 2 accepts. This is the outcome that is “rational” in the hyper-selfish neoclassical sense.

Much ink has already been spilled over the fact that these are not the typical outcomes of Dictator 1 and Ultimatum 1. Far more likely is that Participant 1 offers something close to $10, or even $10 exactly, in both games; and in Ultimatum 1, in the unlikely event that Participant 1 should offer only $1 or $2, Participant 2 will typically reject.

But what I’d like to point out today is that the “rational” neoclassical outcome is what would happen in Dictator 2 and Ultimatum 2, and that this is so obvious we probably don’t even need to run the experiments (but we might as well, just to be sure).

In Dictator 1, the money that Participant 1 doesn’t keep goes to Participant 2, and so they are deciding how to weigh their own interests against those of another. But in Dictator 2, Participant 1 is literally just deciding how much free money they will receive. The other money doesn’t go to anyone—not even back to the university conducting the experiment. It’s just burned. It provides benefit to no one. So the rational choice is in fact obvious: Take all of the free money. (Technically, burning money and thereby reducing the money supply would have a minuscule effect of reducing future inflation across the entire economy. But even the full $20 would be several orders of magnitude too small for anyone to notice—and even a much larger amount like $10 billion would probably end up being compensated by the actions of the Federal Reserve.)

Likewise, in both Ultimatum 1 and Ultimatum 2, the money that Participant 1 doesn’t keep will go to Participant 2. Their offer will thus probably be close to $10. But what I really want to focus in on is Participant 2’s choice: If they are offered only $1 or $2, will they accept? Neoclassical theory says that the “rational” choice is to accept it. But in Ultimatum 1, most people will reject it. Are they being irrational?

If they were simply being irrational—failing to maximize their own payoff—then they should reject just as often in Ultimatum 2. But I contend that they would in fact accept far more offers in Ultimatum 2 than they did in Ultimatum 1. Why? Because rejection doesn’t stop Participant 1 from getting what they demanded. There is no way to punish Participant 1 for an unfair offer in Ultimatum 2: It is literally just a question of whether you get $1 or $0.

Like I said, I haven’t actually run these experiments. I’m not sure anyone has. But these results seem very obvious, and I would be deeply shocked if they did not turn out the way I expect. (Perhaps as shocked as so many neoclassical economists were when they first saw the results of experiments on Dictator 1 and Ultimatum 1!)

Thus, Dictator 2 and Ultimatum 2 should have outcomes much more like what neoclassical economics predicts than Dictator 1 and Ultimatum 1.

Yet the only difference—the only difference—between Dictator 1 and Dictator 2, and between Ultimatum 1 and Ultimatum 2, is what happens to someone else’s payoff when you make your decision. Your own payoff is exactly identical.

Thus, behavior changes when we change only the effects on the payoffs of other people; therefore people care about the payoffs of others; therefore social preferences exist.

QED.

Of course this still leaves the question of what sort of social preferences people have, and why:

  • Why are some people more generous than others? Why are people sometimes spiteful—or even hateful?
  • Is it genetic? Is it evolutionary? Is it learned? Is it cultural? Likely all of the above.
  • Are people implicitly thinking of themselves as playing in a broader indefinitely iterated game called “life” and using that to influence their decisions? Quite possibly.
  • Is maintaining a reputation of being a good person important to people? In general, I’m sure it is, but I don’t think it can explain the results of these economic experiments by itself—especially in versions where everything is completely anonymous.

But given the stark differences between Dictator 1 versus Dictator 2 and Ultimatum 1 versus Ultimatum 2 (and really, feel free to run the experiments!), I don’t think anyone can reasonably doubt that social preferences do, in fact, exist.

If you ever find someone who does doubt social preferences, point them to this post.

Extrapolating the INE

Apr 6 JDN 2460772

I was only able to find sufficient data to calculate the Index of Necessary Expenditure back to 1990. But I found a fairly consistent pattern that the INE grew at a rate about 20% faster than the CPI over that period, so I decided to take a look at what longer-term income growth looks like if we extrapolate that pattern back further in time.

The result is this graph:

Using the CPI, real per-capita GDP in the US (in 2024 dollars) has grown from $25,760 in 1950 to $85,779 today—increasing by a factor of 3.33. Even accounting for increased inequality and the fact that more families have two income earners, that’s still a substantial increase.

But using the extrapolated INE, real per-capita GDP has only grown from $43,622 in 1950 to $85,779 today—increasing by only a factor of 1.97. This is a much smaller increase, especially when we adjust for increased inequality and increased employment for women.

Even without the extrapolation, it’s still clear that real INE-adjusted incomes were basically stagnant in the 2000s, increased rather slowly in the 2010s, and then actually dropped in 2022 after a bunch of government assistance ended. What looked, under the CPI, like steadily increasing real income was actually more like treading water.

Should we trust this extrapolation? It’s a pretty simplistic approach, I admit. But I think it is plausible when we consider this graph of the ratio between median housing price and median income:

This ratio was around 6 in the 1950s, then began to fall until in the 1970s it stabilized around 4. It began to slowly creep back up, but then absolutely skyrocketed in the 2000s before the 2008 crash. Now it has been rising again, and is now above 7, the highest it has been since the Second World War. (Does this mean we’re due for another crash? I’d bet as much.)

What does this mean? It means that a typical family used to be able to afford a typical house with only four years of their total income—and now would require seven. In that sense, homes are now 75% more expensive, relative to income, than they were in the 1970s.

Similar arguments can be made for the rising costs of education and healthcare; while many prices have not grown much (gasoline) or even fallen (jewelry and technology), these necessities have continued to grow more and more expensive, not simply in nominal terms, but even compared to the median income.

This is further evidence that our standard measures of “inflation” and “real income” are fundamentally inadequate. They simply aren’t accurately reflecting the real cost of living for most American families. Even in many times when it seemed “inflation” was low and “real income” was growing, in fact it was growing harder and harder to afford vital necessities such as housing, education, and healthcare.

This economic malaise may have been what contributed to the widespread low opinion of Biden’s economy. While the official figures looked good, people’s lives weren’t actually getting better.

Yet this is still no excuse for those who voted for Trump; even the policies he proudly announced he would do—like tariffs and deportations—have clearly made these problems worse, and this was not only foreseeable but actually foreseen by the vast majority of the world’s economists. Then there are all the things he didn’t even say he would do but is now doing, like cozying up to Putin, alienating our closest allies, and discussing “methods” for achieving an unconstitutional third term.

Indeed, it honestly feels quite futile to even reflect upon what was wrong with our economy even when things seemed to be running smoothly, because now things are rapidly getting worse, and showing no sign of getting better in any way any time soon.

A new theoretical model of co-ops

Mar 30 JDN 2460765

A lot of economists seem puzzled by the fact that co-ops are just as efficient as corporate firms, since they have this idea that profit-sharing inevitably results in lower efficiency due to perverse incentives.

I think they’ve been modeling co-ops wrong. Here I present a new model, a very simple one, with linear supply and demand curves. Of course one could make a more sophisticated model, but this should be enough to make the point (and this is just a blog post, not a research paper, after all).

Demand curve is p = a – b q

Marginal cost is f q

There are n workers, who would hold equal shares of the co-op.

Competitive market

First, let’s start with the traditional corporate firm in a competitive market.

Since the market is competitive, price would equal marginal cost would equal wage:

a – b q = f q

q = a/(b+f)

w = f (a/(b+f)) = (a f)/(b+f)

Total profit will be

(p – w)q = 0.

Monopoly firm

In a monopoly, marginal revenue would equal marginal cost:
d[pq]/dq = a – 2 b q

If they are also a monopsonist in the labor market, this marginal cost would be marginal cost of labor, not wage:

d[f q²]/dq = 2 f q

a – 2 b q = 2 f q

q = a/(2b + 2f)

p = a – b q = a (1 – b/(2b + 2f)) = (a (b + 2f))/(2b + 2f)

w = f q = (a f)/(2b + 2f)

Total profit will be

(p – w) q = ((a (b + 2f))/(2b + 2f) – (a f)/(2b + 2f)) a/(2b + 2f) = a²/(4b + 4f)
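(If you’d like to check that algebra, here’s a quick numerical sanity check in Python; the parameter values are my own, chosen purely for illustration.)

```python
# Monopoly/monopsony solution, checked at illustrative parameters.
a, b, f = 10.0, 1.0, 1.0   # arbitrary demand and cost parameters

q = a / (2*b + 2*f)        # output: a/(2b + 2f)
p = a - b*q                # price from the demand curve
w = f*q                    # wage: marginal cost of labor at q
profit = (p - w) * q

# These should match the closed-form expressions derived above:
assert abs(p - a*(b + 2*f)/(2*b + 2*f)) < 1e-12
assert abs(w - a*f/(2*b + 2*f)) < 1e-12
assert abs(profit - a**2/(4*(b + f))) < 1e-12
print(q, p, w, profit)     # → 2.5 7.5 2.5 12.5
```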

Now consider the co-op.

First, suppose that instead of working for a wage, I work for profit sharing.

If our product market is competitive, we’ll be price-takers, and we will produce until price equals marginal cost:

p = f q

a – b q = f q

q = a/(b+f)

But will we, really? I only get 1/n share of the profits. So let’s see here. My marginal cost of production is still f q, but the marginal benefit I get from more sales may only be p/n.

In that case I would work until:

p/n = f q

(a – b q)/n = f q

a – b q = n f q

q = a/(b + nf)

Thus I would under-produce. This is the usual argument against co-ops and similar shared ownership.
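(Here’s that result numerically, again with arbitrary illustrative parameters: as the number of worker-owners n grows, the naive profit-sharing co-op produces less and less.)

```python
# Under-production under pure profit-sharing: each worker equates
# p/n with marginal cost f*q, giving q = a/(b + n*f).
a, b, f = 10.0, 1.0, 1.0             # arbitrary illustrative parameters

q_competitive = a / (b + f)          # competitive benchmark: p = f*q
for n in [1, 2, 5, 10]:
    q_coop = a / (b + n*f)
    assert q_coop <= q_competitive   # never exceeds the benchmark
    print(n, q_coop)                 # shrinks as n grows

# With a single worker (n = 1), the two coincide exactly:
assert a / (b + 1*f) == q_competitive
```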

Co-ops with wages

But that’s not actually how co-ops work. They pay wages. Why do they do that? Well, consider what happens if I am offered a wage as a worker-owner of the co-op.

Is there any reason for the co-op to vote on a wage that is less than the competitive market? No, because owners are workers, so any additional profit from a lower wage would simply be taken from their own wages.

Is there any reason for the co-op to vote on a wage that is more than the competitive market? No, because workers are owners, and any surplus lost by paying higher wages would simply be taken from their own profits.

So if the product market is competitive, the co-op will produce the same amount and charge the same price as a firm in perfect competition, even if they have market power over their own wages.

Monopoly co-ops

The argument above doesn’t depend on the co-op lacking market power in the labor market. Thus even if they are a monopoly in the product market and a monopsony in the labor market, they will still pay a competitive wage.

Thus they would set marginal revenue equal to marginal cost:

a – 2 b q = f q

q = a/(2b + f)

The co-op will produce more than the monopoly firm, since a/(2b + f) > a/(2b + 2f).

This is the new price:

p = a – b q = a(1 – b/(2b+f)) = a(b+f)/(2b + f)

It’s not obvious that this is lower than the price charged by the monopoly firm, but it is.

(a (b + 2f))/(2b + 2f) – a(b+f)/(2b + f) = (a (2b + f)(b + 2f) – 2 a(b+f)²)/(2(b+f)(2b+f))

This is proportional to:

(2b + f)(b + 2f) – 2(b+f)2

2b² + 5bf + 2f² – (2b² + 4bf + 2f²) = bf

So it’s not a large difference, but it’s there. In the presence of market power in the labor market, the co-op is better for consumers, because they get more goods and pay a lower price.
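(This is just a spot-check, not a proof, but sweeping several arbitrary parameter values confirms the gap works out to a b f/(2(b+f)(2b+f)) and is always positive.)

```python
# Spot-check: monopoly price minus monopoly-co-op price should equal
# a*b*f/(2*(b+f)*(2b+f)) > 0, per the algebra above. The parameter
# grids swept here are arbitrary illustrative values.
import itertools

for a, b, f in itertools.product([1.0, 5.0, 10.0], [0.5, 1.0, 2.0], [0.5, 1.0, 2.0]):
    p_monopoly = a*(b + 2*f)/(2*b + 2*f)
    p_coop = a*(b + f)/(2*b + f)
    predicted_gap = a*b*f/(2*(b + f)*(2*b + f))
    assert abs((p_monopoly - p_coop) - predicted_gap) < 1e-12
    assert p_monopoly > p_coop
print("monopoly price exceeds co-op price in every case tested")
```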

Thus, there is actually no lost efficiency from being a co-op. There is simply much lower inequality, and potentially higher efficiency.

But that’s just in theory.

What do we see in practice?

Exactly that.

Co-ops have the same productivity and efficiency as corporate firms, but they pay higher wages, provide better benefits, and offer collateral benefits to their communities. In fact, they are sometimes more efficient than corporate firms.

Since they’re just as efficient—if not more so—and produce much lower inequality, switching more firms over to co-ops would clearly be a good thing.

Why, then, aren’t co-ops more common?

Because the people who have the money don’t like them.

The biggest barrier facing co-ops is their inability to get financing, because they don’t pay shareholders (so no IPOs) and banks don’t like to lend to them. They tend to make less profit than corporate firms, which offers investors a lower return—instead that money goes to the worker-owners. This lower return isn’t due to inefficiency; it’s just a different distribution of income, more to labor and less to capital.

We will need new financial institutions to support co-ops, such as the Cooperative Fund of New England. And general redistribution of wealth would also help, because if middle class people had more wealth they could afford to finance co-ops. (It would also be good for many other reasons, of course.)

The Index of Necessary Expenditure

Mar 16 JDN 2460751

I’m still reeling from the fact that Donald Trump was re-elected President. He seemed obviously horrible at the time, and he still seems horrible now, for many of the same reasons as before (we all knew the tariffs were coming, and I think deep down we knew he would sell out Ukraine because he loves Putin), as well as some brand new ones (I did not predict DOGE would gain access to all the government payment systems, nor that Trump would want to start a “crypto fund”). Kamala Harris was not an ideal candidate, but she was a good candidate, and the comparison between the two could not have been starker.

Now that the dust has cleared and we have good data on voting patterns, I am less convinced than I was that racism and sexism were decisive against Harris. I think they probably hurt her some, but given that she actually lost the most ground among men of color, racism seems like it really couldn’t have been a big factor. Sexism seems more likely to be a significant factor, but the fact that Harris greatly underperformed Hillary Clinton among Latina women at least complicates that view.

A lot of voters insisted that they voted on “inflation” or “the economy”. Setting aside for a moment how absurd it was—even at the time—to think that Trump (he of the tariffs and mass deportations!) was going to do anything beneficial for the economy, I would like to better understand how people could be so insistent that the economy was bad even though standard statistical measures said it was doing fine.

Krugman believes it was a “vibecession”, where people thought the economy was bad even though it wasn’t. I think there may be some truth to this.


But today I’d like to evaluate another possibility, that what people were really reacting against was not inflation per se but necessitization.

I first wrote about necessitization in 2020; as far as I know, the term is my own coinage. The basic notion is that while prices overall may not have risen all that much, prices of necessities have risen much faster, and the result is that people feel squeezed by the economy even as CPI growth remains low.

In this post I’d like to more directly evaluate that notion, by constructing an index of necessary expenditure (INE).

The core idea here is this:

What would you continue to buy, in roughly the same amounts, even if it doubled in price, because you simply can’t do without it?

For example, this is clearly true of housing: You can rent or you can own, but you can’t not have a house. Nor are most families going to buy multiple houses—and they can’t buy partial houses.

It’s also true of healthcare: You need whatever healthcare you need. Yes, depending on your conditions, you maybe could go without, but not without suffering, potentially greatly. Nor are you going to go out and buy a bunch of extra healthcare just because it’s cheap. You need what you need.

I think it’s largely true of education as well: You want your kids to go to college. If college gets more expensive, you might—of necessity—send them to a worse school or not allow them to complete their degree, but this would feel like a great hardship for your family. And in today’s economy you can’t not send your kids to college.

But this is not true of technology: While there is a case to be made that in today’s society you need a laptop in the house, the fact is that not that long ago most people didn’t have one, and if laptops suddenly got a lot cheaper you very well might buy another one.

Well, it just so happens that housing, healthcare, and education have all gotten radically more expensive over time, while technology has gotten radically cheaper. So prima facie, this is looking pretty plausible.

But I wanted to get more precise about it. So here is the index I have constructed. I consider a family of four, two adults, two kids, making the median household income.

To get the median income, I’ll use this FRED series for median household income, then use this table of median federal tax burden to get an after-tax wage. (State taxes vary too much for me to usefully include them.) Since the tax table ends in 2020, which was an anomalous year, I’m going to extrapolate that 2021-2024 should be about the same as 2019.

I assume the kids go to public school, but the parents are saving up for college; to make the math simple, I’ll assume the family is saving enough for each kid to graduate with a four-year degree from a public university, and that saving is spread over 16 years of the child’s life. 2*4/16 = 0.5; this means that each year the family needs to come up with 0.5 years of cost of attendance. (I had to get the last few years from here, but the numbers are comparable.)

I assume the family owns two cars—both working full time, they kinda have to—which I amortize over 10 year lifetimes; 2*1/10 = 0.2, so each year the family pays 0.2 times the value of an average midsize car. (The current average new car price is $33226; I then use the CPI for cars to figure out what it was in previous years.)

I assume they pay a 30-year mortgage on the median home; they would pay interest on this mortgage, so I need to factor that in. I’ll assume they pay the average mortgage rate in that year, but I don’t want to have to do a full mortgage calculation (including PMI, points, down payment etc.) for each year, so I’ll say that the amount they pay is (1/30 + 0.5 (interest rate))*(home value) per year, which seems to be a reasonable approximation over the relevant range.
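(To gauge how rough that shortcut is, here’s a quick comparison against the exact fixed-rate annuity formula; the home value and rates below are arbitrary illustrative numbers. The shortcut runs several percent low at higher rates, but stays in the right ballpark.)

```python
# Compare the shortcut (1/30 + 0.5*rate)*(home value) to the exact
# 30-year fixed-rate annuity payment (annual payments; no PMI, points,
# or down payment). Home value and rates here are illustrative only.
def exact_annual_payment(value, rate, years=30):
    """Standard fixed-rate annuity formula, annual compounding."""
    if rate == 0:
        return value / years
    return value * rate / (1 - (1 + rate)**-years)

def approx_annual_payment(value, rate, years=30):
    """The shortcut used in the post."""
    return value * (1/years + 0.5*rate)

value = 300_000
for rate in [0.03, 0.05, 0.07]:
    exact = exact_annual_payment(value, rate)
    approx = approx_annual_payment(value, rate)
    print(rate, round(exact), round(approx))
```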

I assume that both adults have a 15-mile commute (this seems roughly commensurate with the current mean commute time of 26 minutes), both adults work 5 days per week, 50 weeks per year, and their cars get the median level of gas mileage. This means that they consume 2*15*2*5*50/(median MPG) = 15000/(median MPG) gallons of gasoline per year. I’ll use this BTS data for gas mileage. I’m intentionally not using median gasoline consumption, because when gas is cheap, people might take more road trips, which is consumption that could be avoided without great hardship when gas gets expensive. I will also assume that the kids take the bus to school, so that doesn’t contribute to the gasoline cost.
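(For concreteness, the commute arithmetic looks like this; the 25 MPG figure below is a placeholder of my own, not the actual BTS value for any given year.)

```python
# Annual gasoline consumption from the assumptions above:
# 2 adults x 15 miles each way x 2 trips/day x 5 days x 50 weeks.
annual_miles = 2 * 15 * 2 * 5 * 50
assert annual_miles == 15000

median_mpg = 25  # placeholder; the actual series comes from BTS data
gallons = annual_miles / median_mpg
print(gallons)  # → 600.0
```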

That I will multiply by the average price of gasoline in June of that year, which I have from the EIA since 1993. (I’ll extrapolate 1990-1992 as the same as 1993, which is conservative.)

I will assume that the family owns 2 cell phones, 1 computer, and 1 television. This is tricky, because the quality of these tech items has dramatically increased over time.

If you try to measure with equivalent buying power (e.g. a 1 MHz computer, a 20-inch CRT TV), then you’ll find that these items have gotten radically cheaper; $1000 in 1950 would only buy as much TV as $7 today, and a $50 Raspberry Pi’s 2.4 GHz processor is 150 times faster than the 16 MHz offered by an Apple PowerBook in 1991—despite the latter selling for $2500 nominally. So in dollars per gigahertz, the price of computers has fallen by an astonishing 7,500 times just since 1990.

But I think that’s an unrealistic comparison. The standards for what was considered necessary have also increased over time. I actually think it’s quite fair to assume that people have spent a roughly constant nominal amount on these items: about $500 for a TV, $1000 for a computer, and $500 for a cell phone. I’ll also assume that the TV and phones are good for 5 years while the computer is good for 2 years, which makes the total annual expenditure for 2 phones, a TV, and a computer equal to 2/5*500 + 1/5*500 + 1/2*1000 = 800. This is about what a family must spend every year to feel like they have an adequate amount of digital technology.
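(The amortization arithmetic, spelled out:)

```python
# Annual tech budget: 2 phones and a TV on 5-year lifetimes,
# a computer on a 2-year lifetime (nominal prices as assumed above).
phones   = 2 * 500 / 5    # two $500 phones over 5 years
tv       = 1 * 500 / 5    # one $500 TV over 5 years
computer = 1 * 1000 / 2   # one $1000 computer over 2 years

annual_tech = phones + tv + computer
print(annual_tech)  # → 800.0
```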

I will also assume that the family buys clothes with this equivalent purchasing power, with an index that goes from 166 in 1990 to 177 in 2024—also nearly constant in nominal terms. I’ll multiply that index by $10 because the average annual household spending on clothes is about $1700 today.

I will assume that the family buys the equivalent of five months of infant care per year; they surely spend more than this (in either time or money) when they have actual infants, but less as the kids grow. This amounts to about $5000 today, but was only $1600 in 1990—a 214% increase, or 3.42% per year.

For food expenditure, I’m going to use the USDA’s thrifty plan for June of that year. I’ll use the figures assuming that one child is 6 and the other is 9. I don’t have data before 1994, so I’ll extrapolate that with the average growth rate of 3.2%.

Food expenditures have been at a fairly consistent 11% of disposable income since 1990; so I’m going to include them as 2*11%*40*50*(after-tax median wage) = 440*(after-tax median wage).

The figures I had the hardest time getting were for utilities. It’s also difficult to know what to include: Is Internet access a necessity? Probably, nowadays—but not in 1990. Should I separate electric and natural gas, even though they are partial substitutes? But using these figures I estimate that utility costs rise at about 0.8% per year in CPI-adjusted terms, so what I’ll do is benchmark to $3800 in 2016 and assume that utility costs have risen by (0.8% + inflation rate) per year each year.

Healthcare is also a tough one; pardon the heteronormativity, but for simplicity I’m going to use the mean personal healthcare expenditures for one man and woman (aged 19-44) and one boy and one girl (aged 0-18). Unfortunately I was only able to find that for two-year intervals in the range from 2002 to 2020, so I interpolated and extrapolated both directions assuming the same average growth rate of 3.5%.

So let’s summarize what all is included here:

  • Estimated payment on a mortgage
  • 0.5 years of college tuition
  • amortized cost of 2 cars
  • 15000/(median MPG) gallons of gasoline
  • amortized cost of 2 phones, 1 computer, and 1 television
  • average spending on clothes
  • 11% of income on food
  • Estimated utilities spending
  • Estimated childcare equivalent to five months of infant care
  • Healthcare for one man, one woman, one boy, one girl

There are obviously many criticisms you could make of these choices. If I were writing a proper paper, I would search harder for better data and run robustness checks over the various estimation and extrapolation assumptions. But for these purposes I really just want a ballpark figure, something that will give me a sense of what rising cost of living feels like to most people.

What I found absolutely floored me. Over the range from 1990 to 2024:

  1. The Index of Necessary Expenditure rose by an average of 3.45% per year, almost a full percentage point higher than the average CPI inflation of 2.62% per year.
  2. Over the same period, after-tax income rose at a rate of 3.31%, faster than CPI inflation, but slightly slower than the growth rate of INE.
  3. The Index of Necessary Expenditure was over 100% of median after-tax household income every year except 2020.
  4. Since 2021, the Index of Necessary Expenditure has risen at an average rate of 5.74%, compared to CPI inflation of only 2.66%. In that same time, after-tax income has only grown at a rate of 4.94%.

Point 3 is the one that really stunned me. The only time in the last 34 years that a family of four has been able to actually pay for all necessities—just necessities—on a typical household income was during the COVID pandemic, and that in turn was only because the federal tax burden had been radically reduced in response to the crisis. This means that every single year, a typical American family has been either going further and further into debt, or scrimping on something really important—like healthcare or education.

No wonder people feel like the economy is failing them! It is!

In fact, I can even make sense now of how Trump could convince people with “Are you better off than you were four years ago?” in 2024 looking back at 2020—while the pandemic was horrific and the disruption to the economy was massive, thanks to the US government finally actually being generous to its citizens for once, people could just about actually make ends meet. That one year. In my entire life.

This is why people felt betrayed by Biden’s economy. For the first time most of us could remember, we actually had this brief moment when we could pay for everything we needed and still have money left over. And then, when things went back to “normal”, it was taken away from us. We were back to no longer making ends meet.

When I went into this, I expected to see that the INE had risen faster than both inflation and income, which was indeed the case. But I expected to find that INE was a large but manageable proportion of household income—maybe 70% or 80%—and slowly growing. Instead, I found that INE was greater than 100% of income in every year but one.

And the truth is, I’m not sure I’ve adequately covered all necessary spending! My figures for childcare and utilities are the most uncertain; those could easily go up or down by quite a bit. But even if I exclude them completely, the reduced INE is still greater than income in most years.

Suddenly the way people feel about the economy makes a lot more sense to me.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week must be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week, but we hit the upper limit due to these regulations.

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people are currently living on disability who could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% of the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.