The housing affordability crisis in one graph

Mar 8 JDN 2461108

The graph below, constructed from FRED data, provides a simple measure of housing affordability: How many years of median earnings does it take to afford the median home?

From a low of 4.4 in 1982, this rose to about 5.5 and was relatively stable in the 1990s. Then in the 2000s, it began to rise, peaked at 7.2 just before the housing crisis, and then rapidly dropped back to 5.5 again.

Then in the 2010s it began to rise again, peaked even higher at 7.6 in 2017, and then dropped down to 6.0 in 2020 before beginning to rise anew. In 2023 it reached a yet higher peak of 8.0, and then has been slowly declining ever since—but is still about 6.5, well above its 1990s level.

I honestly expected worse than this, but I think part of what’s happening is that new homes have gotten a bit smaller in the past few years: median square footage of homes sold has fallen from a peak of 1,997 square feet in 2019 to 1,788 today. (Unfortunately, FRED doesn’t have this data series going back any earlier than 2016.)

If we adjust for that, the price of a typical 2019-sized home today would be about 7.2 years of median earnings, which is about what it was at the peak of the housing crisis in 2007.
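The arithmetic behind that adjustment is simple enough to check. The numbers below are the post's own approximations, not the underlying FRED series:

```python
# Illustrative calculation using the figures quoted in the post.
median_price_years = 6.5   # current median home price / median annual earnings
sqft_2019 = 1997           # median sq ft of homes sold at the 2019 peak
sqft_today = 1788          # median sq ft of homes sold today

# Holding price per square foot fixed, a 2019-sized home today would cost:
adjusted_years = median_price_years * (sqft_2019 / sqft_today)
print(round(adjusted_years, 1))  # close to the "about 7.2" quoted above
```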

Note, of course, that this isn’t actually how many years you need to save up to buy a house. You clearly can’t save your entire earnings, but you also don’t need to come up with the full price, only the down payment. And what you can afford also depends upon interest rates and such. But still, it’s a pretty clear sign that housing is radically more expensive now than it was in the 1980s or even 1990s.

In my view, this is the affordability crisis.

Gas prices really aren’t that important. Car prices are relatively stable. Food prices are volatile but don’t have a bad long-term trend. We do still have serious problems with affordability in education and healthcare, but we have obvious solutions available (that several other countries are already doing successfully); we’re just not doing them because Republicans don’t like them. But housing? We have no clear solutions on the table, certainly not anything that would be politically viable. Fundamentally, we need to build more housing in places people want to live—a lot more housing—and force the price of housing down.

And with our society structured the way it is, when you price people out of housing, you price them out of adulthood. Millennials are not having kids at anywhere near the rate of previous generations, because raising kids requires living space. Especially with immigration collapsing after Trump, this housing affordability crisis is going to turn into a population crisis.

I guess what I’m hoping for at the moment is just consciousness-raising, making people see that this is actually a problem. For some reason, everyone agrees that rising prices of goods are a bad thing, except when it comes to housing.

Inflation in food? An urgent crisis that must be immediately resolved.

Inflation in gas prices? So terrible it’s worth invading other countries over.

Inflation in housing? No, somehow that’s good actually, because it makes homeowners feel richer (even though they actually owe more in property taxes). We treat housing like an asset instead of a good, which is something we should absolutely never, ever do with a good that people need to live.

How could we make job search less of a nightmare?

Mar 1 JDN 2461101

This has been my “career” for the last two years:

I search through thousands of job postings, almost none of which—despite various filters and tags on my searches—are actually good fits for me, in part because the search engines simply do not contain a great deal of information that would be vital, like “LGBT friendly”, “supportive of neurodivergent employees”, or “good at accommodating disabilities”. Instead it’s all sorted by “job title”, which at this point is clearly an arms race of search-engine optimization, because I keep getting listings called “tutor” which are actually some sort of interactive training of yet another large language model nobody actually needs. (Actual tutoring of actual human students often is a good fit for me—though it pays much better if you’re freelance than if you work for a company, because the companies take a huge cut of what the customers pay.)

But, after an hour or two of searching, I find a few that seem like they might be worth applying to. They’re never a perfect fit, but beggars can’t be choosers, so I decide I’ll go ahead and apply to them.

They ask for a resume. No problem. Perfectly sensible, I have one handy; maybe I’ll tweak it a bit, but if it’s an industry I often apply to, I may already have a tweaked version ready to go.

They ask for a cover letter. Okay, I guess. There usually isn’t much I can really say there that isn’t already in my resume, but occasionally there’s something worth adding, and it’s only maybe half an hour of work to update an existing cover letter for a new application.

Then, they ask me to input my work history in their proprietary format on their website. WHAT!? WHY!? I just gave you a resume! You aren’t even willing to read it? You want to be able to automate the reading of my resume, so I have to enter it into your proprietary database? But okay, fine; beggars can’t be choosers, I remind myself. So I enter everything that’s in my resume again.

Then, they ask me what salary I want. I know this game. You’re trying to make me reveal my preference in this bargaining game so you can gain bargaining power. So I look up what kind of salaries companies like them usually offer for jobs like this, and then I hike it up a bit as the opening bid in a negotiation.

Then, they ask me to fill out some questions that are supposed to assess… something. Some kind of personality test, or “culture fit”, or something similarly fuzzy. I try to interpolate my answers between my genuine feelings and the kind of hyper-obedient corporate drone they’re probably looking for, because I’m not an idiot who would answer honestly (I’m not that autistic), but I wouldn’t actually want to work for anyone who required the very topmost corporate-drone answers.

And then, what happens?

Absolutely nothing.

No response. Weeks pass. At some point, I have to assume that they’ve filled the position or closed it, or maybe that the vacancy was never real at all and they posted it for some other reason—likely to give some sense of searching when they in fact already have someone in mind. (Apparently over a third of online job postings are fake.)

I have done this process over two hundred times.

And in doing so, I have chipped off pieces of my soul. I feel like a shell of the person I was. And I have absolutely nothing to show for it all.

I am not even unusual in this regard: Recruiters often complain that they are swamped because they get 200 applicants per posting—but that means, mathematically, that the average job-seeker must apply to 200 postings before they can expect to get hired. (And which is more work, do you think: Writing a cover letter, or reading one?)
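That expectation follows from a simple model; a back-of-the-envelope check, assuming every applicant is equally likely to be hired:

```python
# If each posting draws ~200 applicants and hires exactly one, an average
# applicant succeeds with probability 1/200 per application.
p = 1 / 200
expected_apps = 1 / p            # mean of a geometric distribution
print(expected_apps)             # 200.0

# And even after 200 applications, success is not guaranteed:
prob_still_unhired = (1 - p) ** 200
print(round(prob_still_unhired, 2))  # ~0.37, i.e. about 1/e
```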

How could we make this better?

There are a lot of problems to fix here, but I have one very simple intervention that would only slightly inconvenience recruiters, while making life dramatically better for applicants. Here goes:

Require them to show you the resume of the person they actually hired.

There should be a time window: Maybe 30 days after you applied; or if it’s a position like in academia where they don’t do interviews for a long time after the application deadline, within 7 days of them starting interviews.

Anonymize the resume appropriately, of course; no photos, no names, no contact information. We don’t want the new hire to get harassed by their competitors. (And this takes, what, 5 minutes to do?)

But having to send that resume solves several problems simultaneously:

  1. It means they have to actually respond—they cannot ghost you. It can be a two-line form letter email with a one-page attachment that’s the same for all 200 applicants—but they have to send you something.
  2. It means they have to actually hire someone—the posting cannot be completely fake. If they are for some reason unable to fill the vacancy and have to close it, they should have to tell you that, and give a reason—and that reason should be legally binding such that if you ever find out it’s not true, you can sue them.
  3. It means that person had to actually apply—they couldn’t have been someone’s nephew who was automatically given the job and the posting was only made to make it look like there was a hiring process. At the very least, said nephew had to actually cough up a resume like the rest of us.
  4. It allows you to compare qualifications—you can see how you stack up against the new hire. If they are genuinely far more qualified? Well, fair enough; perhaps this job was a stretch for you, or it’s a very rough market. If they are about as qualified, or better in some ways, worse in others? Well, you were surely right to apply, but you can’t win ’em all. But if they are far less qualified? You now have the basis for a lawsuit, because that looks like nepotism at best and discrimination at worst—and they had to give you that evidence, in writing, in a timely fashion.

The penalty for failing to comply with this regulation could be a small fine, perhaps $100—per applicant. The more people you ghost, the more you have to pay up.

This is clearly a very small amount of extra effort for the recruiters. They already have the resume—hopefully—and all they need to do is anonymize it, grab a standard form letter rejection email, BCC all the applicants for this position (whose addresses are—again, hopefully—already stored in one place in the company’s database), attach the anonymized resume, and click Send. We’re talking 15 minutes of work here, regardless of the number of applicants. In fact, it could probably be automated so as to require almost zero marginal effort for each new job: Just check the box next to the name of the person who was hired in the applicant tracking system, and it does the rest. (And if the person you hired wasn’t in the applicant tracking system? That sounds like a you problem, because you’re clearly not treating the other applicants fairly.)
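To show just how little work this is, here is a minimal sketch of what the automated notice could look like, using only Python's standard library. The function name, field values, and filename are all hypothetical illustrations, not any real applicant-tracking system's API:

```python
from email.message import EmailMessage

def build_closing_notice(applicant_emails, anonymized_resume_pdf, from_addr):
    """Build the one-shot 'position filled' notice: a form letter plus the
    anonymized resume of the new hire, BCC'd to every applicant at once."""
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["Bcc"] = ", ".join(applicant_emails)
    msg["Subject"] = "Position filled - anonymized resume of hire attached"
    msg.set_content(
        "Thank you for applying. The position has been filled.\n"
        "The anonymized resume of the successful candidate is attached."
    )
    msg.add_attachment(anonymized_resume_pdf, maintype="application",
                       subtype="pdf", filename="hire_resume_anonymized.pdf")
    return msg

# One message serves all 200 applicants; handing it to an SMTP server
# is one further call, so the marginal cost per applicant is near zero.
notice = build_closing_notice(
    ["applicant1@example.com", "applicant2@example.com"],
    b"%PDF-1.4 (dummy placeholder bytes)",
    "hr@company.example",
)
print(notice["Subject"])
```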

What if we just banned banks?

Feb 22 JDN 2461094

I got a mailer from Wells Fargo today offering me a new credit card. The offer seemed decent, but the first thing that came to my mind was: Why is this company still allowed to exist?

In case you didn’t know, Wells Fargo was caught in 2016 creating millions of fraudulent accounts. They paid a fine of $185 million—which likely was less than the revenue they earned via this massive fraud scheme. How am I supposed to trust them ever again? How is anyone?

It’s hardly just them, of course. Almost every major bank has been implicated in some heinous crime.

JP Morgan Chase helped Jeffrey Epstein conceal assets, rigged municipal bond transactions, and of course misrepresented thousands of mortgages in a way that directly contributed to the 2008 crisis.

Bank of America also committed mass fraud that contributed to the 2008 crisis.

Citi is currently on trial for failing to protect its customers against fraud.

Capital One is being sued for failing to pay the interest rates it promised on savings accounts.

And let’s not forget HSBC, which laundered money for terrorists.

If these were individuals committing these crimes, they would be in prison, probably for the rest of their lives. But because they are corporations, they get slapped with a fine, or pay a settlement—typically less than what they made in the criminal activity—and then they get to go right back to work as if nothing had happened.

I think it’s time to do something much more radical.

Let’s ban banks.

This might sound crazy at first: Don’t we need banks? Doesn’t our whole financial system rest upon them?

But in fact, we do not need banks at all. We need loans, we need deposits, we need mortgages. But we already have a fully-functional alternative system for providing those services which is not implicated in crime after crime after heinous crime:

They are called credit unions.

Credit unions already provide almost all the services currently provided by banks—and most of the ones they don’t provide, we probably didn’t actually need anyway. There are already nearly 5,000 credit unions in the US with over 130 million customers.

Credit unions almost always fare better in financial crises, because they don’t overleverage themselves. They are far less likely to be involved in fraud. They don’t get involved in high-risk speculation. They offer higher yields on savings and lower rates on loans and credit cards. Basically they are better than banks in every way.

Why are credit unions so much better-behaved?

Because they are co-ops instead of for-profit corporations.

Customers of credit unions are also owners of credit unions, so there are no extra profits being siphoned off somewhere to greedy shareholders whose only goal in life is number go up.

Free markets are genuinely more efficient than centrally-planned systems. But there’s nothing about free markets that requires the owners of capital to be their own class of people who aren’t workers or customers and make their money by buying, selling, and owning things. That’s what’s wrong with capitalism—not too little central planning, but too concentrated ownership.

As I’ve written about before, co-ops are just as efficient as corporations, and produce much lower inequality.

For many industries, transitioning to co-ops would be a major change, and require lots of new organization that isn’t there. But for banking, the co-ops already exist. All we need to do is ban the alternative and force everyone to use the better, safer system. Come up with some way to transfer all the accounts fairly to credit unions, and—very intentionally—leave the shareholders of these criminal enterprises with absolutely nothing.

In fact, since credit unions are more likely to support other co-ops, forcing the financial system to transition to credit unions might actually make the process of transitioning our entire economy to co-ops easier.

It may seem extreme, but please, take a look again at all those crimes that all these major, highly-successful, market-dominating banks have committed. They’ve had their chance to prove that they can be honest and law-abiding, and they have failed.

Get rid of them.

How to be a deontological consequentialist

Dec 7 JDN 2461017

As is commonly understood, there are two main branches of normative ethics:

  • Deontology, on which morality consists in following rules and fulfilling obligations, and
  • Consequentialism, on which morality consists in maximizing good consequences.

The conflict between them has raged for centuries, with Kantians leading the deontologists and utilitarians leading the consequentialists. Both theories seem to have a lot of good points, but neither can decisively defeat the other.

I think this is because they are both basically correct.

In their strongest forms, deontology and consequentialism are mutually contradictory; but it turns out that you can soften each of them a little bit, and the results become compatible.

To make deontology a little more consequentialist, let’s ask a simple question:

What makes a rule worth following?

I contend that the best answer we have is “because following that rule would make the world better off than not following that rule”. (Even Kantians pretty much have to admit this: What maxim could you will to be an absolute law? Only a law that would yield good outcomes.)

That is, the ultimate justification of a sound deontology would be fundamentally consequentialist.

But lest the consequentialists get too smug, we can also ask them another question, which is a bit subtler:

How do you know which actions will ultimately have good consequences?

Sure, if we were omniscient beings who could perfectly predict the consequences of our actions across the entire galaxy on into the indefinite future, we could be proper act utilitarians who literally choose every single action according to a calculation of the expected utility.

But in practice, we have radical uncertainty about the long-term consequences of our actions, and can generally only predict the immediate consequences.

That leads to the next question:

Would you really want to live in a world where people optimized immediate consequences?

I contend that you would not, that such a world actually sounds like a dystopian nightmare.

Immediate consequences say that if a healthy person walks into a hospital and happens to have compatible organs for five people who need donations, we should kill that person, harvest their organs, and give them to those five patients. (This is the organ transplant variant of the Trolley Problem.)

Basically everyone recognizes that this is wrong. But why is it wrong? That’s thornier. One pretty convincing case is that a systematic policy of this kind would undermine trust in hospitals and destroy the effectiveness of healthcare in general, resulting in disastrous consequences far outweighing the benefit of saving those five people. But those aren’t immediate consequences, and indeed, it’s quite difficult to predict exactly how many crazy actions like this it would take to undermine people’s trust in hospitals, just how much it would undermine that trust, or exactly what the consequences of that lost trust would be.

So it seems like it’s actually better to have a rule about this.

This makes us into rule utilitarians: instead of trying to optimize literally every single action—which requires information we do not have and never will—we develop a system of rules that we can follow, heuristics that will allow us to get better outcomes generally, even if they can’t be guaranteed to produce the best possible outcome in any particular case.

That is, the output of a sophisticated consequentialism is fundamentally deontological.

We have come at the question of normative ethics from two very different directions, but the results turned out basically the same:

We should follow the rules that would have the best consequences.

The output of our moral theory is rules, like deontology; but its fundamental justification is based on outcomes, like consequentialism.

In my experience, when I present this account to staunch deontologists, they are pretty much convinced by it. They’re prepared to give up the fundamental justification to consequences if it allows them to have their rules.

The resistance I get is mainly from staunch consequentialists, who insist that it’s not so difficult to optimize individual actions, and so we should just do that instead of making all these rules.

So it is to those consequentialists, particularly those who say “rule utilitarianism collapses into act utilitarianism”, that the rest of this post is addressed.

First, let me say that I agree.

In the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents, rule utilitarianism mathematically collapses into act utilitarianism. That is a correct theorem.

However, we do not live in the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents. We are not even close to that ideal case; we will never be close to that ideal case. Indeed, I think part of the problem here is that you fail to fully grasp the depth and width of the chasm between here and there. Even a galactic civilization of a quintillion superhuman AIs would still not be close to that ideal case.

Quite frankly, humans aren’t even particularly good at forecasting what will make them happy.

There are massive errors and systematic biases in human affective forecasting.

One of the most important biases is impact bias: People systematically overestimate the impact of individual events on their long-term happiness. Some of this seems to be just due to focus: Paying attention to a particular event exaggerates its importance in your mind, and makes it harder for you to recall other events that might push your emotions in a different direction. Another component is called immune neglect: people fail to account for their own capacity to habituate to both pleasant and unpleasant experiences. (This effect is often overstated: It’s a common misconception that lottery winners are no happier than they were before. No, they absolutely are happier, on average; they’re just not as much happier as they predicted themselves to be.)

People also use inconsistent time discounting: $10 today is judged as better than $11 tomorrow, but $10 in 364 days is not regarded as better than $11 in 365 days—so if I made a decision a year ago, I’d want to change it now. (The correct answer, by the way, is to take the $11; a discount rate of 10% per day compounds to a staggering 130,000,000,000,000,000% APR—seriously; check it yourself—so you’d better not be discounting at that rate, unless you’re literally going to die before tomorrow.)
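You can check that compounding yourself in two lines:

```python
# Preferring $10 today to $11 tomorrow implies a discount factor of at
# most 10/11 per day, i.e. a discount rate of 10% per day. Compounded
# daily over a year, that is:
daily_rate = 0.10
apr_percent = ((1 + daily_rate) ** 365 - 1) * 100
print(f"{apr_percent:.3g}%")  # roughly 1.28e+17 percent
```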

Now, compound that with the fact that different human beings come at the world from radically different perspectives and with radically different preferences.

How good do you think we are at predicting what will make other people happy?

Damn right: We’re abysmal.

Basically everyone assumes that what they want and what they would feel is also what other people will want and feel—which, honestly, explains a lot about politics. As a result, my prediction of your feelings is more strongly correlated with my prediction of my feelings than it is with your actual feelings.

The impact bias is especially strong when forecasting other people’s feelings in response to our own actions: We tend to assume that other people care more about what we do than they actually care—and this seems to be a major source of social anxiety.

People also tend to overestimate the suffering of others, and are generally willing to endure more pain than they are willing to inflict upon others. (This one seems like it might be a good thing!)

Even when we know people well, we can still be totally blindsided by their emotional reactions. We’re just really awful at this.

Does this just mean that morality is hopeless? We have no idea what we’re doing?

Fortunately, no. Because while no individual can correctly predict or control the outcomes of particular actions, the collective action of well-designed institutions can in fact significantly improve the outcomes of policy.

This is why we have things like the following:

  • Laws
  • Courts
  • Regulations
  • Legislatures
  • Constitutions
  • Newspapers
  • Universities

These institutions—which form the backbone of liberal democracy—aren’t simply arbitrary. They are the result of centuries of hard-fought struggle: a frothing, volatile, battle-tested mix of intentional design and historical evolution.

Are these institutions optimal? Good heavens, no!

But we have no idea what optimal institutions look like, and probably never will. (Those galaxy-spanning AIs will surely have a better system than this; but even theirs probably won’t be optimal.) Instead, what we are stuck with are the best institutions we’ve come up with so far.

Moreover, we do have very clear empirical evidence at this point that some form of liberal democracy with a mixed economy is the best system we’ve got so far. One can reasonably debate whether Canada is doing better or worse than France, or whether the system in Denmark could really be scaled to the United States, or just what the best income tax rates are; but there is a large, obvious, and important difference between life in a country like Canada or Denmark and life in a country like Congo or Afghanistan.

Indeed, perhaps there is no better pair to compare than North and South Korea: Those two countries are right next to each other, speak the same language, and started in more or less the same situation; but the south got good institutions and the north got bad ones, and now the difference between them couldn’t be more stark. (Honestly, this is about as close as we’re ever likely to get to a randomized controlled experiment in macroeconomics.)

People in South Korea now live about as well as some of the happiest places in the world; their GDP per capita PPP is about $65,000 per year, roughly the same as Canada. People in North Korea live about as poorly as it is possible for humans to live, subject to totalitarian oppression and living barely above subsistence; their GDP per capita PPP is estimated to be $600 per year—less than 1% as much.

The institutions of South Korea are just that much better.

Indeed, there’s one particular aspect of good institutions that seems really important, yet is actually kind of hard to justify in act-utilitarian terms:

Why is freedom good?

A country’s level of freedom is almost perfectly correlated with its overall level of happiness and development. (Yes, even on this measure, #ScandinaviaIsBetter.)

But why? In theory, letting people do whatever they want could actually lead to really bad outcomes—and indeed, occasionally it does. There’s even a theorem that liberty is incompatible with full Pareto-efficiency. But all the countries with the happiest people seem to have a lot of liberty, and indeed the happiest ones seem to have the most. How come?

My answer:

Personal liberty is a technology for heuristic utility maximization.

In the ideal case, we wouldn’t really need personal liberty; you could just compel everyone to do whatever is optimal all the time, and that would—by construction—be optimal. It might even be sort of nice: You don’t need to make any difficult decisions, you can just follow the script and know that everything will turn out for the best.

But since we don’t know what the optimal choice is—even in really simple cases, like what you should eat for lunch tomorrow—we can’t afford to compel people in this way. (It would also be incredibly costly to implement such totalitarian control, but that doesn’t stop some governments from trying!)

Then there are disagreements: What I think is optimal may not be what you think is optimal, and in truth we’re probably both wrong (but one of us may be less wrong).

And that’s not even getting into conflicts of interest: We aren’t just lacking in rationality, we’re also lacking in benevolence. Some people are clearly much more benevolent than others, but none of us are really 100% selfless. (Sadly, I think some people are 100% selfish.)

In fact, this is a surprisingly deep question:

Would the world be better if we were selfless?

Could there actually be some advantage in aggregate to having some degree of individual self-interest?

Here are some ways that might hold, just off the top of my head:

  • Partial self-interest supports an evolutionary process of moral and intellectual development that otherwise would be stalled or overrun by psychopaths—see my post on Rousseaus and Axelrods
  • Individuals have much deeper knowledge of their own preferences than anyone else’s, and thus can optimize them much better. (Think about it: This is true even of people you know very well. Otherwise, why would we ever need to ask our spouses one of the most common questions in any marriage: “Honey, what do you want for dinner tonight?”)
  • Self-interest allows for more efficient economic incentives, and thus higher overall productivity.

Of course, total selfishness is clearly not optimal—that way lies psychopathy. But some degree of selfishness might actually be better for long-term aggregate outcomes than complete altruism, and this is to some extent an empirical question.

Personal liberty solves a lot of these problems: Since people are best at knowing their own preferences, let people figure out on their own what’s good for them. Give them the freedom to live the kind of life they want to live, within certain reasonable constraints to prevent them from causing great harm to others or suffering some kind of unrecoverable mistake.

This isn’t exactly a new idea; it’s basically the core message of John Stuart Mill’s On Liberty (which I consider a good candidate for the best book ever written—seriously, it beats the Bible by a light-year). But by putting it in more modern language, I hope to show that deontology and consequentialism aren’t really so different after all.

And indeed, for all its many and obvious flaws, freedom seems to work pretty well—at least as well as anything we’ve tried.

What we still have to be thankful for

Nov 30 JDN 2461010

This post has been written before, but will go live after, Thanksgiving.

Thanksgiving is honestly a very ambivalent holiday.

The particular events it celebrates don’t seem quite so charming in their historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, even actual cures for some. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were various herbal remedies for various ailments, some of which do actually work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or Medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we could surely stand to get more of.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that had only suffered 0.3%—or even ten times that, 3%—losses from the Black Death would have been hailed as a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
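The escalation from "knew somebody" to "lost close family" can be sanity-checked with a simple independence model. This is only a back-of-the-envelope sketch: it assumes deaths strike independently, and the circle sizes (roughly 150 acquaintances, per Dunbar's number, and roughly 15 close family and friends) are my assumptions, not figures from the text.

```python
def p_know_victim(p: float, n: int) -> float:
    """Chance that a circle of n people includes at least one death,
    if each person independently dies with probability p."""
    return 1 - (1 - p) ** n

# ~150 acquaintances (Dunbar's number, an assumption here)
print(p_know_victim(0.003, 150))  # COVID-like 0.3%: roughly 0.36
print(p_know_victim(0.03, 150))   # 3%: roughly 0.99
# ~15 close family and friends (also an assumption)
print(p_know_victim(0.30, 15))    # Black Death-like 30%: over 0.99
```

On this toy model, a 0.3% death rate leaves roughly a third of people with a victim among their acquaintances, 3% makes it near-certain, and 30% makes a death among close family and friends all but inevitable, consistent with the paragraph above.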

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%—so we can expect one hundred ninety-nine out of two hundred to survive. This is at the level of “most families don’t even know someone who has lost a child.”
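Those "x out of y" framings follow directly from the mortality rates in the text; the conversion is just one death per (1/rate) births, rounded. A quick sketch:

```python
def survivors_out_of(mortality_rate: float) -> tuple[int, int]:
    """Express a mortality rate as 'k out of n children survive'."""
    n = round(1 / mortality_rate)  # roughly one death per n births
    return n - 1, n

print(survivors_out_of(1 / 3))   # historical ~1/3 died: (2, 3)
print(survivors_out_of(0.025))   # global today, 2.5%: (39, 40)
print(survivors_out_of(0.005))   # US today, 0.5%: (199, 200)
```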

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 at purchasing power parity per day—that’s just under $700 per year, less than 2% of the median personal income in the United States—the number of people has fallen from 1.9 billion in 1990 to about 700 million today. That’s a drop from 36% of the world’s population to under 9%.
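Those figures are easy to verify from the numbers in the text. The median-income and world-population values below are my rough assumptions (about $42,000 for US median personal income; about 5.3 billion people in 1990 and 8 billion today), not figures from the text:

```python
daily_ppp = 1.90
annual = daily_ppp * 365        # about $694: "just under $700 per year"
median_income = 42_000          # rough assumption for US median personal income
print(annual, annual / median_income)  # about 1.7%, i.e. "less than 2%"

poor_1990, pop_1990 = 1.9e9, 5.3e9  # population figures are rough assumptions
poor_now, pop_now = 0.7e9, 8.0e9
print(poor_1990 / pop_1990)  # about 36%
print(poor_now / pop_now)    # under 9%
```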

Now, there are good reasons to doubt that “purchasing power parity” really can be estimated as accurately as we would like, and thus it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% the standard of living of a typical American (honestly to me that just sounds like… dead); but they are definitely living at a much worse standard of living, and there are a lot fewer people living at such a low standard of living today than there used to be not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine—and that used to include over a third of humanity and no longer does. (And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life, and this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.


Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year—this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than we did the last.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.

You call this a hobby?

Nov 9 JDN 2460989

A review of Politics Is for Power by Eitan Hersh

This week, there was an election. It’s a minor off-year election—since it’s an odd-numbered year, many places don’t even have any candidates on the ballot—and as a result, turnout will surely be low. Eitan Hersh has written a book about why that’s a bad thing, and how it is symptomatic of greater problems in our civic culture as a whole.

Buried somewhere in this book, possible to find through committed, concerted effort, there is a book that could have had a large positive effect on our political system, our civic discourse, and our society as a whole. Sadly, Dr. Hersh buried it so well that most people will never find it.

In particular, he starts the book—not even on the first page, but on the cover—by actively alienating his core audience with what seems to be the very utmost effort he can muster.


Yes, even the subtitle is condescending and alienating:

How to Move Beyond Political Hobbyism, Take Action, and Make Real Change

And of course it’s not just there; on page after page he drives the dagger deeper and twists it as hard as he can, repeating the accusation over and over:

This is just a hobby for you. It doesn’t really mean anything.

Today’s hobbyists possess the negative qualities of the amateurs—hyperemotional engagement, obsession with national politics, an insatiable appetite for debate—and none of the amateur’s positive qualities—the neighborhood meetings, the concrete goals, the leadership.

-p. 9

You hear that? You’re worse than an amateur. This is on page 9. Page 9.

[…] Much of the time we spend on politics is best described as an inward-focused leisure activity for people who like politics.

We may not easily concede that we are doing politics for fun.[…]

-p. 14

See? You may say it’s not really just for fun, but you’re lying. You’re failing to concede the truth.

To the political hobbyist, news is a form of entertainment and needs to be fun.

-p. 19

You hear me? This is fun for you. You’re enjoying this. You’re doing it for yourself.

The real explanation for the dynamics of voter turnout is that we treat politics like a game and follow the spectacle. Turnout is high in presidential elections compared to other US elections in the same way that football viewership is high when the Super Bowl is on. Many people who do not like football or even know the rules of the game end up at a Super Bowl party. They’re there for the commercials, the guacamole, and to be part of a cultural moment. That’s why turnout is high in presidential elections. Without the spectacle, even people who say they care about voting don’t show up.

-p. 48

This is all a game. It’s not real. You don’t really care.

I could go on; he keeps repeating this message—this insult, this accusation—throughout the book. He tells you, over and over, that if you are not already participating in politics in the very particular way he wants you to (and he may even be right that it would be better!), you are a selfish liar, and you are treating what should be vitally important as just meaningless entertainment.

This made it honestly quite painful to get through the book. Several times, I was tempted to just give up and put it back on the shelf. But I’m glad I didn’t, because there are valuable insights about effective grassroots political activism buried within this barrage of personal accusations.

I guess Hersh must not see this as a personal accusation; at one point, he acknowledges that people might find it insulting, but (1) doesn’t seem to care and (2) makes no effort to inquire as to why we might feel that way; in fact, he manages to twist the knife just a little deeper in that very same passage:

For the non-self-identifying junkies, the term political hobbyist can be insulting. Given how important politics is, it doesn’t feel good to call one’s political activity a hobby. The term is also insulting, I have learned, to real hobbyists, who see hobbies as activities with much more depth than the online bickering or addictive news consumption I’m calling a hobby.

-p. 88

You think calling it a “hobby” is insulting? Yeah, well, it’s worse than that, so ha!

But let me tell you something about my own experience of politics. (Actually, one of Hersh’s central messages is that sharing personal experiences is one of the most powerful political tools I know.)

How do most people I know feel about politics, since, oh, say… November 2016?

ABSOLUTE HORROR AND DESPAIR.

For every queer person I know, every trans person, every immigrant, every woman, every person of color, and for plenty of White cishet liberal guys too, the election of President Donald Trump was traumatic. It felt like a physical injury. People who had recovered from depression were thrust back into it. People felt physically nauseated. And especially for immigrants and trans people, people literally feared for their lives and were right to do so.

WHATEVER THIS IS, IT IS NOT A HOBBY.

I’ve had to talk people down from psychotic episodes and suicidal ideation because of this, and you have the fucking audacity to tell me that we’re doing this for fun!?

If someone feared for their life because their team lost the Super Bowl, we would rightfully recognize that as an utterly pathological response. But I know a whole bunch of folks on student visas who are constantly afraid of being kidnapped and taken away by masked men with guns, because that is a thing that has actually happened to other people who were in this country on student visas. I know a whole bunch of trans folks who are afraid of being assaulted or even killed for using the wrong bathroom, because that is a thing that actually happens to trans people in this country.

I wish I could tell these people—many of them dear friends of mine—that they are wrong to fear, that they are safe, that everything will be all right. But as long as Donald Trump is in power and the Republicans in Congress and the right-wing Supreme Court continue to enable him, I can’t tell them that, because I would be lying; the danger is real. All I can do is tell them that it is probably not as great a danger as they fear, and that if there is any way I can help them, I am willing to do so.

Indeed, politics for me and those closest to me is so obviously so much not a hobby that repeatedly insisting that I admit that it is starts to feel like gaslighting. I feel like I’m in a struggle session or something: “Admit you are a hobbyist! Repent!”

I don’t know; maybe there are people for whom politics is just a hobby. Maybe the privileged cishet White kids at Tufts that Dr. Hersh lectures to are genuinely so removed from the consequences of public policy that they can engage with politics at their leisure and for their own entertainment. (A lot of the studies he cites are specifically about undergrads; I know this is a thing in pretty much all social science… but maybe undergrads are in fact not a very representative sample of political behavior?) But even so, some of the international students in those lecture halls (11% of Tufts undergrads and 17% of Tufts grads) probably feel pretty differently, I have to imagine.

In fact, maybe genuine political hobbyism is a widespread phenomenon, and its existence explains a lot of otherwise really baffling things about the behavior of our electorate (like how the same districts could vote for both Donald Trump and Alexandria Ocasio-Cortez). I don’t find that especially plausible given my own experience, but I’m an economist, not a political scientist, so I do feel like I should offer some deference to the experts on this matter. (And I’m well aware that my own social network is nothing like a representative sample of the American electorate.)

But I can say this for sure:

The target audience of this book is not doing this as a hobby.

Someone who picks up a book by a political scientist hoping for guidance as to how to make their own political engagement more effective is not someone who thinks this is all a game. They are not someone who is engaging with politics as a fun leisure activity. They are someone who cares. They are someone who thinks this stuff matters.

By construction, the person who reads this book to learn about how to make change wants to make change.

So maybe you should acknowledge that at some point in your 200 pages of text? Maybe after spending all these words talking about how having empathy is such an important trait in political activism, you should have some empathy for your audience?

Hersh does have some useful advice to give, buried in all this.

His core message is basically that we need more grassroots activism: Small groups of committed people, acting in their communities. Not regular canvassing, which he acknowledges as terrible (and as well he should; I’ve done it, and it is), but deep canvassing, which also involves going door to door but is really a fundamentally different process.

Actually, he seems to love grassroots organizing so much that he’s weirdly nostalgic for the old days of party bosses. Several times, he acknowledges that these party bosses were corrupt, racist, and utterly unaccountable, but after every such acknowledgment he always follows it up with some variation on “but at least they got things done”.

He’s honestly quite dismissive of other forms of engagement, though. Like, I expected him to be dismissive of “slacktivism” (though I am not), if for no other reason than the usual generational curmudgeonry. But he’s also weirdly dismissive of donations and even… honestly… voting? He doesn’t even seem interested in encouraging people to vote more. He doesn’t seem to think that get-out-the-vote campaigns are valuable.

I guess as a political scientist, he’s probably very familiar with the phenomenon of “low information voters”, who frequently swing elections despite being either clueless or actively misled. And okay, maybe turning out those people isn’t all that useful, at least if it’s not coupled with also educating them and correcting their misconceptions. But surely it’s not hobbyism to vote? Surely doing the one most important thing in a democratic system isn’t treating this like a game?

In his section on donations, he takes two tacks against them:

The first is to say that rich donors who pay $10,000 a plate for fancy dinners really just want access to politicians for photo ops. I don’t think that’s right, but the truth is admittedly not much better: I think they want access to politicians to buy influence. This is “political engagement” in some sense—you’re acting to exert power—but it’s corrupt, and it’s the source of an enormous amount of damage to our society—indeed to our planet itself. But I think Hersh has to deny that the goal is influence, because that would in fact be “politics for power”, and in order to remain fiercely non-partisan throughout (which, honestly, probably is a good strategic move), he carefully avoids ever saying that anyone exerting political power is bad.

Actually the closest he gets to admitting his own political beliefs (surprise, the Massachusetts social science professor is a center-left liberal!) comes in a passage where he bemoans the fact that… uh… Democrats… aren’t… corrupt enough? If you don’t believe me, read it for yourself:

The hobbyist motivation among wealthy donors is also problematic for a reason that doesn’t have a parallel in the nonprofit world: Partisan asymmetry. Unlike Democratic donors, Republican donors typically support politicians whose policy priorities align with a wealthy person’s financial interests. The donors can view donations as an investment. When Schaffner and I asked max-out donors why they made their contribution, many more Republicans than Democrats said that a very or extremely important reason for their gift was that the politician could affect the donor’s own industry (37 percent of Republicans versus 22 percent of Democrats).

This asymmetry puts Democrats at a disadvantage. Not motivated by their own bottom line, Democratic donors instead have to be motivated by ideology, issues, or even by the entertainment value that a donation provides.

-p. 80

Yes, God forbid they be motivated by issues or ideology. That would involve caring about other people. Clearly only naked self-interest and the profit motive could ever be a good reason for political engagement! (Quick question: You haven’t been, uh, reading a lot of… neoclassical economists lately, have you? Why? Oh, no reason.) Oh why can’t Democrats just be more like Republicans, and use their appallingly vast hoards of money to make sure that we cut social services and deregulate everything until the polluted oceans flood the world!?

The second is to say that the much broader population who makes small donations of $25 or $50 is “ideologically extreme” compared to the rest of the population, which is true, but seems to me utterly unsurprising. The further the world is from how you’d like to see it, the greater the value is to you of changing the world, and therefore the more you should be willing to invest into making that change—or even into a small probability of possibly making that change. If you think things are basically okay, why would you pay money to try to make them different? (I guess maybe you’d try to pay money to keep them the same? But even so-called “conservatives” never actually seem to campaign on that.)

I also don’t really see “ideologically extreme” as inherently a bad thing.

Sure, some extremists are very bad: Nazis are extreme and bad (weird that this seems controversial these days), Islamists are extreme and bad, Christian nationalists are extreme and bad, tankie leftists are extreme and bad.

But vegetarians—especially vegans—are also “ideologically extreme”, but quite frankly we are objectively correct, and maybe don’t even go far enough (I only hope that future generations will forgive me for my cheese). Everyone knows that animals can suffer, and everyone who is at all informed knows that factory farms make them suffer severely. The “moderate” view that all this horrible suffering is justifiable in the name of cheap ground beef and chicken nuggets is a fundamentally immoral one. (Maybe I could countenance a view that free-range humane meat farming is acceptable, but even that is far removed from our current political center.)

Trans activism is in some sense “ideologically extreme”—and frequently characterized as such—but it basically amounts to saying that the human rights of free expression, bodily autonomy, and even just personal safety outweigh other people’s narrow, blinkered beliefs about sex and gender. Okay, maybe we can make some sort of compromise on trans kids in sports (because why should I care about sports?), and I’m okay with gender-neutral bathrooms instead of letting trans women in women’s rooms (because gender-neutral bathrooms give more privacy and safety anyway!), and the evidence on the effects of puberty blockers and hormones is complicated (which is why it should be decided by doctors and scientists, not by legislators!), but in our current state, trans people die by murder and suicide at incredibly alarming rates. The only “moderate” position here is to demand, at minimum, enforced laws against discrimination and hate crimes. (Also, calling someone by the name and pronouns they ask you to costs you basically nothing. Failing to do that is not a brave ideological stand; it’s just you being rude and obnoxious. Indeed, since it can trigger dysphoria, it’s basically like finding out someone’s an arachnophobe and immediately putting a spider in their hair.)

Open borders is regarded as so “ideologically extreme” that even the progressive Democrats won’t touch it, despite the fact that I literally am not aware of a single ethical philosopher in the 21st century who believes that our current system of immigration control is morally justifiable. Even the ones who favor “closed borders” in principle are almost unanimous that our current system is cruel and racist. The Lifeboat Theory is ridiculous; allowing immigrants in wouldn’t kill us, it would just maybe—maybe—make us a little worse off. Their lives may be at stake, but ours are not. We are not keeping people out of a lifeboat so it doesn’t sink; we are keeping them out of a luxury cruise liner so it doesn’t get dirty and crowded.

Indeed, even so-called “eco-terrorists”, who are not just ideologically extreme but behaviorally extreme as well, don’t even really seem that bad. They are really mostly eco-vandals; they destroy property, they don’t kill people. There is some risk to life and limb involved in tree spiking or blowing up a pipeline, but the goal is clearly not to terrorize people; it’s to get them to stop doing a particular thing—a particular thing that they in fact probably should stop doing. I guess I understand why this behavior has to be illegal and punished as such; but morally, I’m not even sure it’s wrong. We may not be able to name or even precisely count the children saved who would have died if that pipeline had been allowed to continue pumping oil and thus spewing carbon emissions, but that doesn’t make them any less real.

So really, if anything, the problem is not “extremism” in some abstract sense, but particular beliefs and ideologies, some of which are not even regarded as extreme. A stronger vegan lobby would not be harmful to America, however “extreme” they might be, and a strong Republican lobby, however “mainstream” it is perceived to be, is rapidly destroying our nation on a number of different levels.

Indeed, in parts of the book, it almost seems like Hersch is advocating in some Nietzschean sense for power for its own sake. I don’t think that’s really his intention; I think he means to empower the currently disempowered, for the betterment of society as a whole. But his unwillingness to condemn rich Republicans who donate the maximum allowed in order to get their own industry deregulated is at least… problematic, as both political activists and social scientists are wont to say.

I’m honestly not even sure that empowering the disempowered is what we need right now. I think a lot of the disempowered are also terribly misinformed, and empowering them might actually make things worse. In fact, I think the problem with the political effect of social media isn’t that it has failed to represent the choices of the electorate, but that it has represented them all too well and most people are really, really bad—just, absolutely, shockingly, appallingly bad—at making good political choices. They have wildly wrong beliefs about really basic policy questions, and often think that politicians’ platforms are completely different from what they actually are. I don’t go quite as far as this article by Dan Williams in Conspicuous Cognition, but it makes some really good points I can’t ignore. Democracy is currently failing to represent the interests of a great many Americans, but a disturbingly large proportion of this failure must be blamed on a certain—all too large—segment of the American populace itself.

I wish this book had been better.

More grassroots organizing does seem like a good thing! And there is some advice in this book about how to do it better—though in my opinion, not nearly enough. A lot of what Hersh wants to see happen would require tremendous coordination between huge numbers of people, which almost seems like saying “politics would be better if enough people were better about politics”. What I wanted to hear more about was what I can do; if voting and donating and protesting and blogging isn’t enough, what should I be doing? How do I make it actually work? It feels like Hersh spent so long trying to berate me for being a “hobbyist” that he forgot to tell me what he actually thinks I should be doing.

I am fully prepared to believe that online petitions and social media posts don’t accomplish much politically. (Indeed, I am fully prepared to believe that blogging doesn’t accomplish much politically.) I am open to hearing what other options are available, and eager for guidance about how to have the most effective impact.

But could you please, please not spend half the conversation repeatedly accusing me of not caring!?

What is the real impact of AI on the environment?

Oct 19 JDN 2460968

The conventional wisdom is that AI is consuming a huge amount of electricity and water for very little benefit, but when I delved a bit deeper into the data, the results came out a lot more ambiguous. I still agree with the “very little benefit” part, but the energy costs of AI may not actually be as high as many people believe.

So how much energy does AI really use?

This article in MIT Technology Review estimates that by 2028, AI will account for 50% of data center energy usage and 6% of all US energy. But two things strike me about that:

  1. This is a forecast. It’s not what’s currently happening.
  2. 6% of all US energy doesn’t really sound that high, actually.

Note that transportation accounts for 37% of US energy consumed. Clearly we need to bring that down; but it seems odd to panic about a forecast of something that uses one-sixth of that.

Currently, AI is only 14% of data center energy usage. That forecast has it rising to 50%. Could that happen? Sure. But it hasn’t happened yet. Data centers are being rapidly expanded, but that’s not just for AI; it’s for everything the Internet does, as more and more people get access to the Internet and use it for more and more demanding tasks (like cloud computing and video streaming).

Indeed, a lot of the worry really seems to be related to forecasts. Here’s an even more extreme forecast suggesting that AI will account for 21% of global energy usage by 2030. What’s that based on? I have no idea; they don’t say. The article just basically says it “could happen”; okay, sure, a lot of things could happen. And I feel like this sort of forecast comes from the same wide-eyed people who say that the Singularity is imminent and AI will soon bring us to a glorious utopia. (And hey, if it did, that would obviously be worth 21% of global energy usage!)

Even more striking to me is the fact that a lot of other uses of data centers are clearly much more demanding. YouTube uses about 50 times as much energy as ChatGPT; yet nobody seems to be panicking that YouTube is an environmental disaster.

What is a genuine problem is that data centers have strong economies of scale, and so it’s advantageous to build a few very large ones instead of a lot of small ones; and when you build a large data center in a small town it puts a lot of strain on the local energy grid. But that’s not the same thing as saying that data centers in general are wastes of energy; on the contrary, they’re the backbone of the Internet and we all use them almost constantly every day. We should be working on ways to make sure that small towns aren’t harmed by building data centers near them; but we shouldn’t stop building data centers.

What about water usage?

Well, here’s an article estimating that training ChatGPT-3 evaporated hundreds of thousands of liters of fresh water. Once again I have a few notes about that:

  1. Evaporating water is just about the best thing you could do to it aside from leaving it there. It’s much better than polluting it (which is what most water usage does); it’s not even close. That water will simply rain back down later.
  2. Total water usage in the US is estimated at over 300 billion gallons (1.1 trillion liters) per day. Most of that is due to power generation and irrigation. (The best way to save water as a consumer? Become vegetarian—then you’re getting a lot more calories per irrigated acre.)
  3. A typical US household uses about 100 gallons (380 liters) of water per person per day.

So this means that training ChatGPT-3 cost about 4 seconds of US water consumption, or the same as what a single small town uses each day. Once again, that doesn’t seem like something worth panicking over.
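The small-town comparison can be reproduced from the per-person figure above. The 700,000-liter training estimate below is my stand-in for the article's "hundreds of thousands of liters", so treat the result as an order-of-magnitude sketch, not a precise figure:

```python
training_liters = 700_000    # assumed stand-in for "hundreds of thousands of liters"
liters_per_person_day = 380  # household figure from the text

# One day's water for this many people equals the whole training run:
person_days = training_liters / liters_per_person_day
print(round(person_days))  # about 1,800, i.e. a single small town's daily use
```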

A lot of this seems to be that people hear big-sounding numbers and don’t really have the necessary perspective on those numbers. Of course any service that is used by millions of people is going to consume what sounds like a lot of electricity. But in terms of usage per person, or compared to other services with similar reach, AI really doesn’t seem to be uniquely demanding.

This is not to let AI off the hook.

I still agree that the benefits of AI have so far been small, and the risks—both in the relatively short term, of disrupting our economy and causing unemployment, and in the long term, even endangering human civilization itself—are large. I would in fact support an international ban on all for-profit and military research and development of AI; a technology this powerful should be under the control of academic institutions and civilian governments, not corporations.

But I don’t think we need to worry too much about the environmental impact of AI just yet. If we clean up our energy grid (which has just gotten much easier thanks to cheap renewables) and transportation systems, the additional power draw from data centers really won’t be such a big problem.

The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: This is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be in the time horizon on which the most optimistic investors have assumed it will be. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.


The headline figure here is that, based on current projections, US corporations will have spent $560 billion on capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payoff rate would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry that is dependent upon cutting-edge technology that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.
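The arithmetic behind that figure is just a naive payback period, sketched below using the $560 billion and $35 billion figures above (and ignoring operating costs, depreciation, and discounting, all of which would make it look worse):

```python
# Naive payback period for the AI buildout: years of revenue needed to
# recoup the capital expenditure. Ignores operating costs, depreciation,
# and discounting, which would all lengthen it further.
capex = 560e9    # projected capital expenditure, $560 billion
revenue = 35e9   # anticipated annual revenue, $35 billion

payback_years = capex / revenue
print(payback_years)  # 16.0
```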

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change in our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, and about $500 billion of that is AI investment. That’s over 1.6% of GDP, and last quarter our annualized GDP growth rate was 3.3%—so roughly half of our GDP growth was just due to building more data centers that probably won’t even be profitable.
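The back-of-envelope arithmetic, using the rounded figures quoted above (not precise BEA data):

```python
# Back-of-envelope version of the argument above; figures are the rounded
# ones quoted in the text, not precise BEA data.
gdp = 30e12            # US GDP, ~$30 trillion
ai_investment = 500e9  # annual AI investment, ~$500 billion

share_of_gdp = ai_investment / gdp * 100
print(round(share_of_gdp, 2))  # 1.67 (% of GDP)

# Comparing that share to the 3.3% annualized growth rate
# (the rough argument in the text): about half.
growth_rate = 3.3
print(round(share_of_gdp / growth_rate, 2))  # 0.51
```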

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

Solving the student debt problem

Aug 24 JDN 2460912

A lot of people speak about student debt as a “crisis”, which makes it sound like the problem is urgent and will have severe consequences if we don’t soon intervene. I don’t think that’s right. While it’s miserable to be unable to pay your student loans, student loans don’t seem to be driving people to bankruptcy or homelessness the way that medical bills do.

Instead I think what we have here is a long-term problem, something that’s been building for a long time and will slowly but surely continue getting worse if we don’t change course. (I guess you can still call it a “crisis” if you want; climate change is also like this, and arguably a crisis.)

But there is a problem here: Student loan balances are rising much faster than other kinds of debt, and the burden falls hardest on Black women and students who went to for-profit schools. A big part of the problem seems to be predatory schools that charge high prices and make big promises but deliver poor results.

Making all this worse is the fact that some of the most important income-based repayment plans were overturned by a federal court, forcing everyone who was on them into forbearance. Income-based repayment was a big reason why student loans actually weren’t as bad a burden as their high balances might suggest; unlike a personal loan or a mortgage, if you didn’t have enough income to repay your student loans at the full amount, you could get on a plan that let you make smaller payments, and if you paid on that plan for long enough—even if it didn’t add up to the full balance—your loans would be forgiven.

Now the forbearance is ending for a lot of borrowers, and so they are going into default; and most of that loan forgiveness has been ruled illegal. (Supposedly this is because Congress didn’t approve it. I’ll believe that was the reason when the courts overrule Trump’s tariffs, which clearly have just as thin a legal justification and will cause far more harm to us and the rest of the world.)

In theory, student loans don’t really seem like a bad idea.

College is expensive, because it requires highly-trained professors, who demand high salaries. (The tuition money also goes other places, of course….)

College is valuable, because it provides you with knowledge and skills that can improve your life and also increase your long-term earnings. It’s a big difference: Median salary for someone with a college degree is about $60k, while median salary for someone with only a high school diploma is about $34k.

Most people don’t have enough liquidity to pay for college.

So, we provide loans, so that people can pay for college, and then when they make more money after graduating, they can pay the loans back.

That’s the theory, anyway.

The problem is that average or even median salaries obscure a lot of variation. Some college graduates become doctors, lawyers, or stockbrokers and make huge salaries. Others can’t find jobs at all. In the absence of income-based repayment plans, all students have to pay back their loans in full, regardless of their actual income after graduation.

There is inherent risk in trying to build a career. Our loan system—especially with the recent changes—puts most of this risk on the student. We treat it as their fault they can’t get a good job, and then punish them with loans they can’t afford to repay.

In fact, right now the job market is pretty bad for recent graduates—while usually unemployment for recent college grads is lower than that of the general population, since about 2018 it has actually been higher. (It’s no longer sky-high like it was during COVID; 4.8% is not bad in the scheme of things.)

Actually the job market may be even worse than it looks, because the hiring rate is the lowest it’s been since 2020. Our relatively low unemployment currently seems to reflect a lack of layoffs, not a healthy churn of people entering and leaving jobs. People seem to be locked into their jobs, and if they do leave, finding another is quite difficult.

What I think we need is a system that makes the government take on more of the risk, instead of the students.

There are lots of ways to do this. Actually, the income-based repayment systems we used to have weren’t too bad.

But there is actually a way to do it without student loans at all. College could be free, paid for by taxes.


Now, I know what you’re thinking: Isn’t this unfair to people who didn’t go to college? Why should they have to pay?

Who said they were paying?

There could simply be a portion of the income tax that you only pay if you have a bachelor’s degree. Then you would only pay this tax if you both graduated from college and make a lot of money.

I don’t think this would create a strong incentive not to get a bachelor’s degree; the benefits of doing so remain quite large, even if your taxes were a bit higher as a result.

It might create incentives to major in subjects that aren’t as closely linked to higher earnings—liberal arts instead of engineering, medicine, law, or business. But this I see as fundamentally a public good: The world needs people with liberal arts education. If the market fails to provide for them, the government should step in.

This plan is not as progressive as Elizabeth Warren’s proposal to use wealth taxes to fund free college; but it might be more politically feasible. The argument that people who didn’t go to college shouldn’t have to pay for people who did actually seems reasonable to me; but this system would ensure that in fact they don’t.

The transfer of wealth here would be from people who went to college and make a lot of money to people who went to college and don’t make a lot of money. It would be the government bearing some of the financial risk of taking on a career in an uncertain world.

On foxes and hedgehogs, part I

Aug 3 JDN 2460891

Today I finally got around to reading Expert Political Judgment by Philip E. Tetlock, more or less in a single sitting because I’ve been sick the last week with some pretty tight limits on what activities I can do. (It’s mostly been reading, watching TV, or playing video games that don’t require intense focus.)

It’s really an excellent book; I now understand why it came so highly recommended to me, and I pass that recommendation on to you: Read it.

The central thesis of the book really boils down to three propositions:

  1. Human beings, even experts, are very bad at predicting political outcomes.
  2. Some people, who use an open-minded strategy (called “foxes”), perform substantially better than other people, who use a more dogmatic strategy (called “hedgehogs”).
  3. When rewarding predictors with money, power, fame, prestige, and status, human beings systematically favor (over)confident “hedgehogs” over (correctly) humble “foxes”.

I decided I didn’t want to make this post about current events, but I think you’ll probably agree with me when I say:

That explains a lot.

How did Tetlock determine this?

Well, he studies the issue several different ways, but the core experiment that drives his account is actually a rather simple one:

  1. He gathered a large group of subject-matter experts: Economists, political scientists, historians, and area-studies professors.
  2. He came up with a large set of questions about politics, economics, and similar topics, which could all be formulated as a set of probabilities: “How likely is this to get better/get worse/stay the same?” (For example, this was in the 1980s, so he asked about the fate of the Soviet Union: “By 1990, will they become democratic, remain as they are, or collapse and fragment?”)
  3. Each respondent answered a subset of the questions, some about their own particular field, some about another, more distant field; they assigned probabilities on an 11-point scale, from 0% to 100% in increments of 10%.
  4. A few years later, he compared the predictions to the actual results, scoring them using a Brier score, which penalizes you for assigning high probability to things that didn’t happen or low probability to things that did happen.
  5. He compared the resulting scores between people with different backgrounds, on different topics, with different thinking styles, and a variety of other variables. He also benchmarked them using some automated algorithms like “always say 33%” and “always give ‘stay the same’ 100%”.
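To make step 4 concrete, here is a minimal sketch of a multi-category Brier score (the example forecasts are mine, purely illustrative; Tetlock’s actual scoring involves further adjustments):

```python
def brier_score(forecast, outcome):
    """Multi-category Brier score: sum of squared differences between the
    forecast probabilities and the actual outcome (1 for what happened,
    0 for everything else). Lower is better; 0 is a perfect forecast."""
    return sum((p - o) ** 2 for p, o in zip(forecast, outcome))

# Three outcomes: get better / stay the same / get worse.
# Suppose "get worse" is what actually happened.
outcome = [0, 0, 1]

hedgehog = [0.0, 0.1, 0.9]   # confident, and right this time
fox      = [0.2, 0.3, 0.5]   # hedged
base     = [1/3, 1/3, 1/3]   # the "always say 33%" benchmark

for name, f in [("hedgehog", hedgehog), ("fox", fox), ("base rate", base)]:
    print(name, round(brier_score(f, outcome), 3))
# hedgehog 0.02, fox 0.38, base rate 0.667
```

Note that the confident forecast scores best here only because it happened to be right; averaged over many questions, overconfidence is exactly what the Brier score punishes.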

I’ll show you the key results of that analysis momentarily, but to help them make more sense, let me elaborate a bit more on the “foxes” and “hedgehogs”. The notion was first popularized by Isaiah Berlin in an essay called, simply, The Hedgehog and the Fox.

“The fox knows many things, but the hedgehog knows one very big thing.”

That is, someone who reasons as a “fox” combines ideas from many different sources and perspectives, and tries to weigh them all together into some sort of synthesis that then yields a final answer. This process is messy and complicated, and rarely yields high confidence about anything.

Whereas, someone who reasons as a “hedgehog” has a comprehensive theory of the world, an ideology, that provides clear answers to almost any possible question, with the surely minor, insubstantial flaw that those answers are not particularly likely to be correct.

He also considered “hedge-foxes” (people who are mostly fox but also a little bit hedgehog) and “fox-hogs” (people who are mostly hedgehog but also a little bit fox).

Tetlock has decomposed the scores into two components: calibration and discrimination. (Both very overloaded words, but they are standard in the literature.)

Calibration is how well your stated probabilities matched up with the actual probabilities; that is, if you predicted 10% probability on 20 different events, you have very good calibration if precisely 2 of those events occurred, and very poor calibration if 18 of those events occurred.

Discrimination more or less describes how useful your predictions are, what information they contain above and beyond the simple base rate. If you just assign equal probability to all events, you probably will have reasonably good calibration, but you’ll have zero discrimination; whereas if you somehow managed to assign 100% to everything that happened and 0% to everything that didn’t, your discrimination would be perfect (and we would have to find out how you cheated, or else declare you clairvoyant).
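The calibration idea can be sketched in a few lines (the data here are toy numbers, invented purely to show the computation):

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (stated probability, happened?) pairs by stated probability,
    and report the observed frequency for each group. A well-calibrated
    forecaster's observed frequencies match their stated probabilities."""
    groups = defaultdict(list)
    for p, happened in predictions:
        groups[p].append(happened)
    return {p: sum(events) / len(events) for p, events in sorted(groups.items())}

# Toy data: 10 events predicted at 10%, of which 1 happened (well calibrated),
# and 10 events predicted at 90%, of which only 5 happened (overconfident).
preds = ([(0.1, True)] + [(0.1, False)] * 9
         + [(0.9, True)] * 5 + [(0.9, False)] * 5)
print(calibration_table(preds))  # {0.1: 0.1, 0.9: 0.5}
```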

For both measures, higher is better. The ideal for each is 100%, but it’s virtually impossible to get 100% discrimination and actually not that hard to get 100% calibration if you just use the base rates for everything.


There is a bit of a tradeoff between these two: It’s not too hard to get reasonably good calibration if you just never go out on a limb, but then your predictions aren’t as useful; we could have mostly just guessed them from the base rates.

On the graph, you’ll see downward-sloping lines that are meant to represent this tradeoff: Two prediction methods that yield the same overall score but different levels of calibration and discrimination will be on the same line. In a sense, two points on the same line are equally good methods that trade off usefulness and accuracy differently.

All right, let’s see the graph at last:

The pattern is quite clear: The more foxy you are, the better you do, and the more hedgehoggy you are, the worse you do.

I’d also like to point out the other two regions here: “Mindless competition” and “Formal models”.

The former includes really simple algorithms like “always return 33%” or “always give ‘stay the same’ 100%”. These perform shockingly well. The most sophisticated of these, “case-specific extrapolation” (35 and 36 on the graph), which basically assumes that each country will continue doing what it’s been doing, actually performs as well as, if not better than, even the foxes.

And what’s that at the upper-right corner, absolutely dominating the graph? That’s “Formal models”. This describes basically taking all the variables you can find and shoving them into a gigantic logit model, and then outputting the result. It’s computationally intensive and requires a lot of data (which is why he didn’t feel it deserved to be called “mindless”), but it’s really not very complicated, and it’s the best prediction method, in every way, by far.

This has made me feel quite vindicated about a weird nerd thing I do: When I have a big decision to make (especially a financial decision), I create a spreadsheet and assemble a linear utility model to determine which choice will maximize my utility, under different parameterizations based on my past experiences. Whichever result seems to win the most robustly, I choose. This is fundamentally similar to the “formal models” prediction method, where the thing I’m trying to predict is my own happiness. (It’s a bit less formal, actually, since I don’t have detailed happiness data to feed into the regression.) And it has worked for me, astonishingly well. It definitely beats going by my own gut. I highly recommend it.
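In case that sounds abstract, a stripped-down version of such a model might look like this (the options, attributes, and weights are all hypothetical illustrations, not my actual spreadsheet):

```python
from collections import Counter

# A toy linear utility model for a decision, in the spirit described above.
# Every option, attribute, and weight here is a hypothetical illustration.
options = {
    "apartment A": {"cost": -1800, "commute": -20, "space": 800},
    "apartment B": {"cost": -1400, "commute": -45, "space": 650},
}

# Weights convert each attribute into a common utility scale. Trying several
# parameterizations checks which choice wins most robustly.
weight_profiles = [
    {"cost": 1.0, "commute": 10.0, "space": 0.5},
    {"cost": 1.0, "commute": 30.0, "space": 0.3},
    {"cost": 1.0, "commute": 20.0, "space": 0.4},
]

winners = []
for weights in weight_profiles:
    utilities = {
        name: sum(weights[k] * v for k, v in attrs.items())
        for name, attrs in options.items()
    }
    winners.append(max(utilities, key=utilities.get))

print(winners)  # ['apartment B', 'apartment A', 'apartment A']
robust_choice = Counter(winners).most_common(1)[0][0]
print(robust_choice)  # 'apartment A' wins under most parameterizations
```

Note that the two options split the first two profiles; it’s the vote across all the parameterizations that picks the robust winner.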

What does this mean?

Well first of all, it means humans suck at predicting things. At least for this data set, even our experts don’t perform substantially better than mindless models like “always assume the base rate”.

Nor do experts perform much better in their own fields than in other fields; they do all perform better than undergrads or random people (who somehow perform worse than the “mindless” models).

But Tetlock also investigates further, trying to better understand this “fox/hedgehog” distinction and why it yields different performance. He really bends over backwards to try to redeem the hedgehogs, in the following ways:

  1. He allows them to make post-hoc corrections to their scores, based on “value adjustments” (assigning higher probability to events that would be really important) and “difficulty adjustments” (assigning higher scores to questions where the three outcomes were close to equally probable) and “fuzzy sets” (giving some leeway on things that almost happened or things that might still happen later).
  2. He demonstrates a different, related experiment, in which certain manipulations can cause foxes to perform a lot worse than they normally would, and even yield really crazy results like probabilities that add up to 200%.
  3. He has a whole chapter that is a Socratic dialogue (seriously!) between four voices: A “hardline neopositivist”, a “moderate neopositivist”, a “reasonable relativist”, and an “unrelenting relativist”; and all but the “hardline neopositivist” agree that there is some legitimate place for the sort of post-hoc corrections that the hedgehogs make to keep themselves from looking so bad.

This post is already getting a bit long, so that will conclude part I. Stay tuned for part II, next week!