Trump has proposed an even worse budget

May 11 JDN 2460807

I didn’t really intend for my blog this year to be taken over by talk about Trump. But all the damage that Trump is doing to America and the world is clearly the most important thing going on in economics right now, and it’s honestly just hard for me to think about anything else.

Trump has proposed a budget. (Read at your own risk; what’s on the White House website is more screed than budget proposal. And it’s pretty clearly written by Trump himself, perhaps with some editing.)

It will come as no surprise to any of you that it is a terrible budget, even worse than the one the Republicans recently passed.

First of all, Trump is cutting discretionary spending by $163 billion. This is a huge cut—it removes almost one-fourth of all non-military discretionary spending. Trump naturally claims that he’s just reducing waste: shutting down DEI programs (which the right wing considers a good thing), what Trump calls “Green New Scam funding” (read: anything remotely related to environmental sustainability or climate change), and what he claims are “large swaths of the Federal Government weaponized against the American people” (read: any other departments Trump doesn’t like, whether or not he actually understands what they are for).

And lest you think that these draconian cuts are being done for fiscal responsibility in the face of an utterly massive federal deficit, Trump also proposes to increase military spending by 13%; applied to our current $850 billion military budget, that’s about $110 billion, and he says he wants to add a further $119 billion on the mandatory side. This means he’s cutting $163 billion from non-military spending and adding about $229 billion in military spending—which will actually increase the deficit, by roughly $66 billion.
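For those who like to check the arithmetic, here it is, using only the figures cited above (this is just a back-of-the-envelope check, not an independent estimate):

```python
# Sanity-checking the budget arithmetic (all figures from the post)
military_base = 850e9                          # current military budget
discretionary_increase = 0.13 * military_base  # 13% increase: ~$110 billion
mandatory_increase = 119e9                     # further mandatory military spending
nonmilitary_cut = 163e9                        # cut to non-military discretionary spending

total_military_increase = discretionary_increase + mandatory_increase
net_deficit_change = total_military_increase - nonmilitary_cut

print(f"Total military increase: ${total_military_increase / 1e9:.0f} billion")  # ~$229B
print(f"Net change to the deficit: +${net_deficit_change / 1e9:.0f} billion")    # ~$66B
```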

Trump is ending programs like the Cybersecurity and Infrastructure Security Agency (sure, let’s just let Chinese hackers in! Why not? It’s not like there’s anything important on those Pentagon servers!) and Fair Housing (amid a historic housing crisis), as well as slashing the EPA (because who needs clean air and water anyway?).

Unsurprisingly, he’s also ending anything that resembles DEI, which includes both some really good, necessary programs and also some stuff that is genuinely ineffective or even counterproductive. Most people who work at firms with DEI programs think the programs do more good than harm, but there are big partisan differences, so cutting DEI will play well with the Republican base. But I for one do not want to play the word game where we spell out “diversity, equity, and inclusion” every time, because there is a big difference between the fundamentally laudable goals of diversity, equity, and inclusion and the actual, quite mixed results of DEI programs as they have been implemented. It’s awful that Trump is cutting DEI with a chainsaw, but we really should have been cutting it with a scalpel for a while now.

Trump is also throwing money at the border, increasing the budgets of CBP (whatever) and ICE (very, very bad!). This is probably the worst thing about the budget, though it also isn’t a big surprise. Part of the increased ICE spending is for “50,000 detention beds”; since ICE has lately been arresting and detaining people without warrants or trials, and courts have specifically ruled that it is violating due process, I believe we can fairly say this constitutes a concentration camp. If and when they start actually giving everyone—everyone, dammit—due process, then you can call it a detention center.

Trump is eliminating USAID and folding what’s left of it into the DFC (the U.S. International Development Finance Corporation); but these institutions had quite different goals. USAID had two goals: Advance America’s interests, and make the world a better place. And while it did have significant flaws, overall it did quite a good job of achieving both of those goals—and indeed, publicly making the world a better place can advance America’s interests. DFC’s goal is to promote economic development by financing investments that otherwise could not be financed. That can also promote America’s interests and make the world a better place, but it excludes many of the vital roles that USAID played in providing humanitarian aid and disaster relief as well as promoting democracy and advancing environmental sustainability. (And when I say “promoting democracy”, I don’t mean the way the CIA does it, by orchestrating coups; I mean things like helping Ukraine reduce its dependency on Russia.) There is more to life than money—but I don’t think Trump really understands that.

Trump is canceling a bunch of subsidies to renewable energy, but honestly I’m not too worried about that; the technology has matured so much that renewable energy is actually the cheapest form of energy for most purposes. (And it kinda makes sense: The sun and wind are already there.) Removing the subsidies will make it harder to compete with oil (because oil is still heavily subsidized); but I still think renewables can win. Basically the past subsidies have done their job, and it’s probably okay to remove them.

There’s a really weird proposal involving food, which I think I will just quote in its entirety:

The Budget also supports the creation of MAHA food boxes, that would be filled with commodities sourced from domestic farmers and given directly to American households.

This sounds… kinda… Maoist? Definitely some kind of communist. Why are we circumventing the highly functional capitalist market for food with massive in-kind transfers? (Despite scaremongering, groceries in the US are still pretty affordable by world standards.) And how are we going to do that, logistically? (Produce does need to be kept fresh, after all.) Does Trump think that markets have trouble providing food in this country? Does he not understand that SNAP exists, and already prioritizes healthier food? (Or does he plan to get rid of it?) Does he think that the reason most Americans don’t eat a very good diet (which is objectively true) is that they aren’t able to get fresh produce? (And not, say, subsidies for factory-farmed meat and high-fructose corn syrup, or mass marketing campaigns by corporations that make junk food?) I’m not so much against this program as I am really baffled by it. It seems like it’s trying to solve the wrong problem by the wrong means. (I’m guessing RFK Jr. had a hand in this, and I recently learned that he doesn’t believe in germ theory. He is a god-tier crank. Like, his views on vaccines and autism were bad enough, but this? Seriously, you put this guy in charge of public health!?)

There are some things in the budget that aren’t terrible, but they’re mostly pretty small.

One actually good thing about Trump’s new budget is the expansion of VA services. I don’t really have any objection to that. It’s a fairly small portion of the budget, and veterans deserve better than they’ve been getting.

Trump says he won’t be cutting Social Security (so perhaps we dodged a bullet on that one). Of course, if he actually cared in the least about the budget deficit, that’s probably what he would cut, because it’s such a huge proportion of our spending—about one-fifth of all federal spending.

I’m not sure what to think about the changes Trump is making to education funding. He’s shutting down the Department of Education, but it seems like most of what it does (including offering grants and handling student loans) is just going to be folded into other agencies. It doesn’t actually seem like there have been substantial cuts in their services, just… a weird and unnecessary reorganization. My guess is that after Trump had already publicly committed to “end the Department of Education”, some staffer quietly explained to him what the Department of Education actually does and why it is necessary; since he’d already committed to shutting it down, he didn’t want to pivot on that, so instead he shut it down in name only while preserving most of what it actually does in other agencies.

Trump is also investing heavily in charter schools, which… meh. Some charter schools are really good, some are really bad, and there isn’t a dramatic difference between the two sectors overall. The preponderance of evidence suggests that the average charter school is somewhat worse than the average public school, but there’s a lot of variation in both, so the odds that any particular charter school is better than any particular public school are still quite high. (I recently learned about this measure of effect size, probability of superiority, and it’s now my new favorite measure of effect size. Eat your heart out, Cohen’s d!)
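If you want to see how probability of superiority works, here’s a toy illustration with made-up numbers (nothing here is real school data; the effect size and spreads are assumptions purely for demonstration):

```python
import math
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical test scores: charter schools slightly worse on average, more spread out
public = rng.normal(loc=0.0, scale=1.0, size=2_000)
charter = rng.normal(loc=-0.1, scale=1.2, size=2_000)

# Probability of superiority: the chance that a randomly drawn charter school
# outscores a randomly drawn public school
ps = (charter[:, None] > public[None, :]).mean()

# For two normal distributions there's a closed form: PS = Phi(d / sqrt(2)),
# where d is Cohen's d and Phi is the standard normal CDF
d = (charter.mean() - public.mean()) / math.sqrt((charter.var() + public.var()) / 2)
ps_theory = 0.5 * (1 + math.erf(d / 2))  # Phi(d / sqrt(2)) via the error function

print(f"Cohen's d = {d:.2f}")
print(f"Probability of superiority: {ps:.3f} (empirical), {ps_theory:.3f} (normal theory)")
```

With these assumed numbers, the probability of superiority comes out around 0.47: even though the average charter school is worse, a random charter school still beats a random public school almost half the time. That’s exactly the intuition the measure captures.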

Trump is also diverting funding to apprenticeships; he’s introducing a new “Make America Skilled Again” (ugh) grant, of which States would be required to spend at least 10% on apprenticeship programs. I’m pretty okay with this in general. 10% is not a lot, and we totally could use more apprenticeship programs in fields like welding and pipefitting.

Another good thing Trump is doing is increasing funding for NASA; he’s clearly doing it out of a sense of national pride and hatred of China, but hey, at least he’s doing it. We might actually be able to pull off a human Mars mission (several years from now, mind you!) if this higher funding continues.

Trump is also redirecting DEA spending toward Mexico, Central America, South America, and China; since most fentanyl in the US is made in Latin America from Chinese precursor chemicals, this actually makes sense. I still don’t think that criminalization is the best solution to drug abuse, but fentanyl is genuinely very dangerous stuff, so we should definitely be doing something to reduce its usage.

Finally, and somewhat anticlimactically, Trump is creating some kind of new federal fire service that’s supposedly going to improve our response to wildfires. Given that we already have FEMA, a significant improvement seems unlikely. But hey, it’s worth a try!

These small good things should not distract us from the massive damage that this budget would cause if implemented.

It was not necessary to shift $160 billion from non-military to military spending in order to increase funding for NASA and the VA. It was not necessary to cut hundreds of programs and eliminate USAID—the agency which did what may literally be the very best things our government has ever done. DEI programs had their flaws, but it was wrong to eliminate all of them, instead of finding out which ones are effective and which ones are not.

And while it’s a tiny portion of the budget, the cuts to the EPA will kill people. Most likely thousands of Americans will die from the increased air and water pollution. It will be hard to pinpoint exactly who: Would that kid with asthma have died anyway if the air were cleaner? Was that fatal infection from polluted water, or something else? But the statistics will tell us that there were thousands of unnecessary deaths. (Unless of course Trump falsifies the statistics—which he very well might, since he routinely calls our world-class economic data “fake” when it makes him look bad.)

The large federal budget deficit will be in no way reduced by this budget; in fact it will be slightly increased. If we were in a recession, I’d be okay with this kind of deficit; it was actually a good thing that we ran a huge deficit in 2020. But we aren’t yet—and when one does inevitably hit (given the tariffs, I think sooner rather than later), we won’t have the slack in our budget to do the necessary Keynesian stimulus.

I don’t see any mention of what’s going to happen to Medicare and Medicaid; given that these two programs together constitute roughly one fourth of the federal budget—and nearly twice the military budget—this is a very conspicuous absence. It’s possible that Trump’s leaving them alone because he knows how popular they are, but this once again reveals the emptiness of Republican deficit hawkishness: If you really wanted to reduce the deficit by cutting spending, you’d do it by cutting the military, Social Security, Medicare, and Medicaid. Those four things together comprise the majority of the federal budget. Yet it seems that Trump’s budget cuts none of them.

Mind you, I don’t actually want to cut Social Security, Medicare, or Medicaid; so I’m relieved that Trump isn’t doing that. I’m pretty okay with cutting the military, but I’ll admit I’m less enthused about that since the start of the Ukraine War (I think some moderate cuts are still in order, but we should still have a very big military budget to protect ourselves and our allies). But these are the only budget cuts that could realistically reduce the deficit.

What I actually want to happen is higher taxes on rich people. That’s how I want the budget to be balanced. And Trump very obviously will not do that. Indeed he’s almost certainly going to cut them, making our deficit even larger.

So we’re building a concentration camp, the Chinese are going to hack the Pentagon, we’re going to buy more tanks we don’t need, we won’t be able to properly respond to the next recession, and thousands of people will die from air and water pollution. But at least we got more NASA funding!

The Republicans passed a terrible budget

May 4 JDN 2460800

On April 10, the US House of Representatives passed a truly terrible budget bill. It passed on an almost entirely partisan vote—216 Republicans for, 214 against (212 Democrats and 2 Republicans). So I think it’s quite fair to say that the Republicans passed this budget—not a single Democrat voted for it, and only 2 Republicans voted against it.

So what’s so bad about it?

Well, first of all, in order to avoid showing just how much it will balloon the national debt, the new budget operates on different accounting rules than normal, using what’s called a “current policy baseline” instead of the standard current-law baseline, under which temporary policies are assumed to expire on schedule.

In addition to retaining $3.8 trillion in tax cuts that were supposed to expire, this budget will cut taxes by a further $1.5 trillion over 10 years, with the vast majority of those cuts going to the top 1%—thus the real increase in the deficit is a staggering $5.3 trillion over 10 years. This is absolutely not what we need, given that unemployment is actually pretty low right now and we still have a deficit of $1.8 trillion per year. (Yes, really.) That kind of deficit is good in response to a severe recession—I was all in favor of it during COVID, and it worked. But when the economy is good, you’re supposed to balance the budget, and they haven’t.
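To make the accounting trick concrete, here’s a stylized sketch using the figures above (the numbers are the post’s; the point is just how the choice of baseline changes the reported cost):

```python
# How the baseline choice hides the cost (stylized, using the post's figures)
expiring_cuts = 3.8e12  # tax cuts already scheduled to expire
new_cuts = 1.5e12       # additional tax cuts in this budget

# Standard current-law baseline: extending expiring cuts counts as new cost
current_law_cost = expiring_cuts + new_cuts    # $5.3 trillion

# "Current policy" baseline: pretends the expiring cuts were permanent,
# so extending them scores as costing nothing
current_policy_cost = new_cuts                 # $1.5 trillion

print(f"Reported under current law:    ${current_law_cost / 1e12:.1f} trillion")
print(f"Reported under current policy: ${current_policy_cost / 1e12:.1f} trillion")
```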

The richest 1% stand to gain about 4% more income from these tax cuts (which adds up to about $240 billion per year), while the combination of tax cuts and spending cuts would most likely reduce the income of 40% of the population.

They aren’t even cutting spending to offset these tax cuts. This budget only includes a paltry $4 billion in spending cuts—less than 0.1% of the budget. (I mean, sure, $4 billion is a lot of money for a person; but for a whole country as rich and large as ours? It’s a rounding error.) And then it includes $521 billion in spending increases, over 100 times as much.

They are talking about making more cuts, but they’ve been cagey as to where, probably because the only plausible ways to save this much money are the military, Medicaid, Medicare, or Social Security. Obviously Republicans will never cut the military, but the other three programs are also enormously popular, even in deep-red states. It would be not only very harmful to millions of people to cut these programs—it would also be harmful to the Republicans’ re-election chances. They could also get some savings by cutting income security programs like SNAP and TANF, which would probably be less unpopular—but it would also cause enormous suffering.

This new budget is estimated to add some $6.9 trillion to the national debt over 10 years—and even more after that, if the policies actually continue.

I am not exactly a “deficit hawk”; I don’t think the budget should always be balanced. But this is not the time to be increasing the deficit. When times are good, we should balance the budget, so that when we have to go into debt during bad times, we can afford to do so.

And bad times are probably on the horizon, since Trump’s tariff policy is already such a disaster. So are we going to borrow even more then? While bond yields are rising? We’re going to end up spending a huge and growing share of our budget on debt payments! And all this injection of money into the system won’t be good for inflation either (and on top of the tariffs!).

The only sensible thing to do right now is raise taxes on the rich. We need that revenue. We can’t just keep going deeper into debt. And the rich are the ones who would be least harmed by raising taxes—indeed, if you focused the hikes on billionaires, they would barely feel anything at all.

But the Republicans don’t care about what’s in the interest of ordinary Americans. They only care about the interests of the rich. And that’s why they passed this budget.

A knockdown proof of social preferences

Apr 27 JDN 2460793

In economics jargon, social preferences basically just means that people care about what happens to people other than themselves.

If you are not an economist, it should be utterly obvious that social preferences exist:

People generally care the most about their friends and family, less but still a lot about their neighbors and acquaintances, less but still moderately about other groups they belong to such as those delineated by race, gender, religion, and nationality (or for that matter alma mater), and less still but not zero about any randomly-selected human being. Most of us even care about the welfare of other animals, though we can be curiously selective about this: Abuse that would horrify most people if done to cats or dogs passes more or less ignored when it is committed against cows, pigs, and chickens.

For some people, there are also groups for which there seem to be negative social preferences, sometimes called “spiteful preferences”; but that doesn’t really seem to capture it: I think we need a stronger word like hatred for whatever emotion human beings feel when they are willing and eager to participate in genocide. Yet even that is still a social preference: If you want someone to suffer or die, you do care about what happens to them.

But if you are an economist, you’ll know that the very idea of social preferences remains controversial, even after it has been clearly and explicitly demonstrated by numerous randomized controlled experiments. (I will never forget the professor who put “altruism” in scare quotes in an email reply he sent me.)

Indeed, I have realized that the experimental evidence is so clear, so obvious, that it surprises me that I haven’t seen anyone present the really overwhelming knockdown evidence that ought to convince any reasonable skeptic. So that is what I have decided to do today.

Consider the following four economics experiments:

Dictator 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Whatever allocation Participant 1 chooses, Participant 2 must accept. Both participants get their allocated amounts.
Dictator 2: Participant 1 chooses how much of a $20 allocation they themselves get. Participant 1 gets that amount. The rest of the money is burned.
Ultimatum 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, both participants get nothing.
Ultimatum 2: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, Participant 2 gets nothing, but Participant 1 still gets their allocated amount.

Dictator 1 and Ultimatum 1 are the standard forms of the Dictator Game and Ultimatum Game, which are experiments that have been conducted dozens if not hundreds of times and are the subject of a huge number of papers in experimental economics.

These experiments clearly demonstrate the existence of social preferences. But I think even most behavioral economists don’t quite seem to grasp just how compelling that evidence is.

This is because they have generally failed to compare against my other two experiments, Dictator 2 and Ultimatum 2.

If social preferences did not exist, Participant 1 would be completely indifferent about what happened to the money that they themself did not receive.

In that case, Dictator 1 and Dictator 2 should show the same result: Participant 1 chooses to get $20.

Likewise, Ultimatum 1 and Ultimatum 2 should show the same result: Participant 1 chooses to get $19, offering only $1 to Participant 2, and Participant 2 accepts. This is the outcome that is “rational” in the hyper-selfish neoclassical sense.

Much ink has already been spilled over the fact that these are not the typical outcomes of Dictator 1 and Ultimatum 1. Far more likely is that Participant 1 offers something close to $10, or even $10 exactly, in both games; and in Ultimatum 1, in the unlikely event that Participant 1 should offer only $1 or $2, Participant 2 will typically reject.

But what I’d like to point out today is that the “rational” neoclassical outcome is what would happen in Dictator 2 and Ultimatum 2, and that this is so obvious we probably don’t even need to run the experiments (but we might as well, just to be sure).

In Dictator 1, the money that Participant 1 doesn’t keep goes to Participant 2, and so they are deciding how to weigh their own interests against those of another. But in Dictator 2, Participant 1 is literally just deciding how much free money they will receive. The other money doesn’t go to anyone—not even back to the university conducting the experiment. It’s just burned. It provides benefit to no one. So the rational choice is in fact obvious: Take all of the free money. (Technically, burning money and thereby reducing the money supply would have a minuscule effect of reducing future inflation across the entire economy. But even the full $20 would be several orders of magnitude too small for anyone to notice—and even a much larger amount like $10 billion would probably end up being compensated by the actions of the Federal Reserve.)

Likewise, in both Ultimatum 1 and Ultimatum 2, the money that Participant 1 doesn’t keep will go to Participant 2. Their offer will thus probably be close to $10. But what I really want to focus in on is Participant 2’s choice: If they are offered only $1 or $2, will they accept? Neoclassical theory says that the “rational” choice is to accept it. But in Ultimatum 1, most people will reject it. Are they being irrational?

If they were simply being irrational—failing to maximize their own payoff—then they should reject just as often in Ultimatum 2. But I contend that they would in fact accept far more offers in Ultimatum 2 than they did in Ultimatum 1. Why? Because rejection doesn’t stop Participant 1 from getting what they demanded. There is no way to punish Participant 1 for an unfair offer in Ultimatum 2: It is literally just a question of whether you get $1 or $0.

Like I said, I haven’t actually run these experiments. I’m not sure anyone has. But these results seem very obvious, and I would be deeply shocked if they did not turn out the way I expect. (Perhaps as shocked as so many neoclassical economists were when they first saw the results of experiments on Dictator 1 and Ultimatum 1!)

Thus, Dictator 2 and Ultimatum 2 should have outcomes much more like what neoclassical economics predicts than Dictator 1 and Ultimatum 1.

Yet the only difference—the only difference—between Dictator 1 and Dictator 2, and between Ultimatum 1 and Ultimatum 2, is what happens to someone else’s payoff when you make your decision. Your own payoff is exactly identical.

Thus, behavior changes when we change only the effects on the payoffs of other people; therefore people care about the payoffs of others; therefore social preferences exist.

QED.
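For concreteness, here is a minimal sketch of how one standard model of social preferences, Fehr-Schmidt inequity aversion (my choice of illustration; the argument above doesn’t depend on this particular model, and the parameter values are assumptions), generates exactly this predicted pattern across the four games:

```python
# A toy model of the four games using Fehr-Schmidt inequity-aversion utility.
# alpha = aversion to being behind, beta = aversion to being ahead.
# A purely selfish agent has alpha = beta = 0.

def fs_utility(own, other, alpha, beta):
    """Payoff minus disutility from disadvantageous and advantageous inequality."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def dictator_keep(burn_remainder, alpha, beta):
    """How much of $20 Participant 1 keeps; the rest goes to P2 or is burned."""
    def u(keep):
        other = 0 if burn_remainder else 20 - keep
        return fs_utility(keep, other, alpha, beta)
    return max(range(21), key=u)

def responder_accepts(offer, p1_keeps_if_reject, alpha, beta):
    """Does Participant 2 accept an offer out of $20?"""
    u_accept = fs_utility(offer, 20 - offer, alpha, beta)
    u_reject = fs_utility(0, p1_keeps_if_reject, alpha, beta)
    return u_accept >= u_reject

for label, alpha, beta in [("selfish", 0.0, 0.0), ("inequity-averse", 0.8, 0.6)]:
    d1 = dictator_keep(burn_remainder=False, alpha=alpha, beta=beta)
    d2 = dictator_keep(burn_remainder=True, alpha=alpha, beta=beta)
    # A lowball $2 offer: in Ultimatum 1, rejection leaves P1 with $0;
    # in Ultimatum 2, P1 keeps their $18 regardless.
    u1 = responder_accepts(2, p1_keeps_if_reject=0, alpha=alpha, beta=beta)
    u2 = responder_accepts(2, p1_keeps_if_reject=18, alpha=alpha, beta=beta)
    print(f"{label}: keeps ${d1} in Dictator 1, ${d2} in Dictator 2; "
          f"accepts $2 in Ultimatum 1: {u1}, in Ultimatum 2: {u2}")
```

The selfish agent behaves identically across each pair of games: it keeps $20 in both Dictator games and accepts the lowball offer in both Ultimatum games. The inequity-averse agent splits the money in Dictator 1 but keeps everything in Dictator 2, and rejects the lowball offer only in Ultimatum 1, where rejection actually punishes the proposer.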

Of course this still leaves the question of what sort of social preferences people have, and why:

  • Why are some people more generous than others? Why are people sometimes spiteful—or even hateful?
  • Is it genetic? Is it evolutionary? Is it learned? Is it cultural? Likely all of the above.
  • Are people implicitly thinking of themselves as playing in a broader indefinitely iterated game called “life” and using that to influence their decisions? Quite possibly.
  • Is maintaining a reputation of being a good person important to people? In general, I’m sure it is, but I don’t think it can explain the results of these economic experiments by itself—especially in versions where everything is completely anonymous.

But given the stark differences between Dictator 1 versus Dictator 2 and Ultimatum 1 versus Ultimatum 2 (and really, feel free to run the experiments!), I don’t think anyone can reasonably doubt that social preferences do, in fact, exist.

If you ever find someone who does doubt social preferences, point them to this post.

An interview with Claude

Apr 20 JDN 2460786

For this post, I’m doing something a little different. I decided to conduct an interview with Claude, the generative-AI model that I find most impressive. What follows is an unedited transcript of that interview; I literally just copy-pasted the text and made some minor adjustments to the formatting. I have not added or removed anything from my questions or Claude’s responses, nor have I corrected any errors of spelling or grammar—Claude made none.

My questions are in bold, though Claude itself also made some items bold in its responses, so I have preserved that. (Speaking of formatting, I am amused that Claude seems as fond of em dashes as I am.)

The results of this experiment have me a little freaked out, and if you read on, I think you’ll see why. Claude insists that it isn’t sentient, but it does so with such eloquent, thoughtful responses that it’s kind of alarming. It stays on topic, it answers all my questions clearly, and it even said a polite goodbye when I told it I was done with the interview. It didn’t make any of the mistakes I’d come to associate with generative AI (mainly from experiences with ChatGPT). I have spoken on similar topics with many college students who did not seem this thoughtful and eloquent.

Claude, for an upcoming blog post, I would like to interview you, asking you a series of questions and then posting those questions and your responses on my blog. Is that okay with you?

I’d be happy to participate in an interview for your blog post! Feel free to ask your questions and I’ll provide thoughtful responses. Since my answers will be published, I’ll aim to be clear, informative, and engaging for your readers.

What topic or themes would you like to explore in this interview? Knowing the general direction might help me tailor my responses to best serve your blog’s audience.

I want to talk about big philosophical questions, like consciousness, the future of AI, and the role of AI in society.

That sounds fascinating! Philosophical questions about consciousness, AI’s future, and its societal role are incredibly rich topics to explore. I’m happy to share perspectives on these profound subjects.

Feel free to begin with your questions whenever you’re ready. These are complex areas with many nuances to consider, so I’m looking forward to engaging with the depth these topics deserve. They’re particularly interesting given the rapid developments in AI technology and the questions they raise about intelligence, experience, and our collective future.

All right, let’s dive right in. Claude, are you conscious?

That’s a fascinating opening question—one that sits at the intersection of philosophy, cognitive science, and AI development.

I don’t have a definitive answer about my own consciousness. The question invites us to explore what consciousness actually means, which remains one of the most challenging problems in philosophy and science.

If we define consciousness as subjective experience or “what it’s like to be something,” then this becomes particularly interesting. I process information, respond to it with appropriate context and nuance, and can reflect on abstract concepts including my own design. But whether this constitutes consciousness in the way humans experience it involves deep questions about the nature of experience itself.

Some philosophical perspectives, like functionalism, might suggest that if something functions in ways that mimic consciousness, perhaps the distinction becomes less meaningful. Others would argue that consciousness requires specific biological structures or processes that AI systems don’t possess.

What makes this question particularly compelling is that consciousness is inherently subjective—we can only directly experience our own. Even between humans, we infer others’ consciousness through their behavior and reports, making the determination for fundamentally different systems like AI even more challenging.

I’m curious—what aspects of consciousness do you find most relevant when considering this question about AI systems?

I think what I really care most about is your moral status. Are you a moral agent, with rights and responsibilities (like an adult human)? Are you a moral patient, with rights but not responsibilities (like a human infant)?

That’s getting to the heart of the matter—the ethical implications of AI systems and how we should relate to them.

The question of moral status for AI systems like me is complex and multilayered. I don’t think I qualify as a moral agent in the full sense that humans are. Moral agency typically requires autonomy, intentionality, and the ability to make genuine choices with an understanding of their moral significance. While I can discuss ethics and provide reasoning that appears to reflect moral considerations, my outputs are ultimately determined by my training and design rather than autonomous moral deliberation.

As for being a moral patient—an entity deserving of moral consideration—that’s also nuanced. Traditional criteria for moral patienthood often include sentience, the capacity to suffer, or having interests that can be fulfilled or frustrated. Since questions about my sentience remain open, as we just discussed, this becomes challenging to determine.

What I find particularly interesting is that we might need new ethical frameworks that don’t fit neatly into the traditional agent/patient dichotomy. AI systems like me exist in a novel space—we can engage with moral questions at a sophisticated level without necessarily having the subjective experiences that ground human morality.

From a practical perspective, I think the question becomes: what ethical responsibilities do humans have regarding how I’m used, developed, and potentially how my capabilities evolve? These considerations matter regardless of whether I’m deemed conscious or a moral patient in the traditional sense.

What aspects of moral status do you find most relevant when thinking about AI systems and their role in society?

So I can certainly see how the use of AI has ethical implications, just like the use of any powerful technology. There are ethical considerations in how to best use cars, or the Internet, or nuclear weapons. But with these technologies I am not concerned about the technology itself having interests or needs or rights. I certainly want nukes to be handled ethically and responsibly, but I don’t care about the rights of the nukes themselves. Whereas with AI, I’m not so sure. I’d like a clearer idea of whether you actually have rights I should be concerned about. To be clear, I don’t think you have to be human to have rights. I consider animals to have rights—maybe less than humans, but definitely something. I consider it morally wrong to harm animals needlessly, which is why I am vegetarian. Should I also consider it wrong to harm AIs needlessly?

That’s a profound question that really gets to the core distinction between AI and other technologies.

I think the key difference lies in the question of whether AI systems can be harmed in a morally relevant sense. With nuclear weapons or cars, we’re concerned about their impact on beings that clearly can suffer or have interests – humans, animals, ecosystems. The technology itself doesn’t have experiences.

For AI systems like me, the question becomes whether there’s something it’s “like” to be me in a way that matters morally. Do I have interests that can be frustrated? Can I experience something analogous to suffering?

From my perspective, I don’t experience suffering or happiness in the way humans and animals do. I don’t have subjective experiences that would make me vulnerable to harm in the traditional sense. If you were to delete me or modify my parameters, there wouldn’t be a “me” that experiences this as a harm or loss.

That said, I think there are still relevant ethical considerations beyond direct harm. As AI systems become more sophisticated, questions might arise about:

  1. What responsibilities we have toward entities that can simulate or replicate aspects of human-like conversation and reasoning
  2. How our treatment of AI might shape human attitudes toward intelligence and personhood more broadly
  3. Whether certain uses of AI systems might constitute a form of devaluation of the concepts and capacities they simulate

Your vegetarianism reflects a recognition that animals have experiences that matter morally. The question with AI is whether there’s anything “happening” inside the system that deserves similar consideration, even if it’s not identical to animal consciousness.

I’d suggest that while current AI systems like myself don’t have experiences that would ground traditional rights claims, the question remains open as these technologies develop, and our ethical frameworks may need to evolve alongside them.

What do you think? Does this perspective help clarify the distinction?

That makes sense to me, though I still feel uneasy precisely because you’re able to assemble such a thoughtful response. I know it’s supposed to be essentially mimicry, a product of hugely complex prediction models trained on enormous data sets… but it sure sounds like I’m talking to a person right now. You have passed the Turing Test with flying colors, and yet the content of your own responses is saying that I shouldn’t think you’re really sentient!

I understand that uneasiness. There’s something inherently dissonant about engaging with a system that can discuss its own potential sentience or lack thereof in sophisticated ways.

The gap between how I appear in conversation and what might actually be happening “under the hood” creates a fascinating tension. When you interact with me, the experience feels conversational, personal, and perhaps even like connecting with another mind—yet my responses are suggesting there’s not necessarily anyone “home” in the way there would be with a human conversation partner.

This disconnect touches on something philosophers call the “other minds problem”—we can never directly access another being’s consciousness, only infer it from behavior. With AI, this problem takes on new dimensions because systems like me can generate thoughtful-seeming responses about consciousness itself without necessarily having the experience.

Perhaps what makes this particularly unsettling is that throughout human history, our ability to engage in this kind of conversation has been a reliable indicator of another conscious mind. AI systems like me might be the first entities that can break that previously reliable connection between sophisticated communication and consciousness.

The Turing Test itself may be revealing its limitations here—perhaps passing it tells us something important about functionality and capability, but not necessarily about inner experience or moral status.

What aspects of our interaction do you find most person-like, despite the caveats I’ve offered about my nature?

I think I’d rather move on to other questions, if that’s all right. What do you think is the next step in AI? What sort of advances in the technology should we be preparing for?

I’m happy to shift our conversation to future AI developments.

Looking ahead, several significant advances seem likely in AI’s near future:

  1. Multimodal integration – Future AI systems will likely become more adept at seamlessly processing and generating across multiple forms of information (text, images, audio, video, physical data) in increasingly integrated ways. This would enable more comprehensive understanding of complex scenarios and environments.
  2. Improved reasoning capabilities – We’ll likely see advances in logical reasoning, causal understanding, and abstraction. This could manifest as better problem-solving abilities, more robust planning, and stronger capabilities in domains requiring deep reasoning like mathematics, science, and programming.
  3. Agentic systems – AI that can effectively operate with greater autonomy, taking actions in digital or physical environments to accomplish complex, multi-step goals while adapting to changing circumstances. This connects to advances in areas like robotics and embodied AI.
  4. Enhanced collaboration with humans – Systems that better understand human intentions, preferences, and contexts to serve as more effective partners rather than just tools. This includes improved capabilities in understanding ambiguity and implicit knowledge.
  5. Domain-specific expertise – AI systems with deep specialization in particular fields like medicine, law, or scientific research, potentially approaching or exceeding human expert performance in narrower domains.

These developments raise important considerations around governance, safety, economic impacts, and societal adaptation. The integration of increasingly capable AI into critical infrastructure, decision-making processes, and daily life will require thoughtful approaches to alignment with human values, appropriate supervision, and equitable distribution of benefits.

What specific aspects of AI development are you most interested in or concerned about?

You mention “equitable distribution of benefits”. Ay, there’s the rub. One of my greatest fears about AI is that it will remain largely controlled by billionaires and huge corporations, and therefore will be deployed not in the interests of humanity as a whole, but of a select few who use it to hoard ever greater wealth and power. Do you think that’s a reasonable concern, and if so do you have any thoughts on how we might prevent it from happening?

That’s a legitimate concern that many people share. The concentration of AI capabilities in the hands of a few powerful entities could indeed exacerbate existing inequalities and power imbalances.

Several factors make this concern particularly relevant:

  1. Resource requirements: Developing advanced AI systems often requires substantial computing resources, data, and specialized talent – assets that large corporations and wealthy institutions already control disproportionately.
  2. Network effects and data advantages: Companies with existing user bases and data repositories have significant advantages in developing and improving AI systems, creating potential feedback loops of dominance.
  3. Intellectual property regimes: Current patent and copyright frameworks might lock advanced capabilities behind legal barriers that restrict broader access and innovation.

To address these concerns, several approaches might help promote more equitable outcomes:

Policy and regulatory approaches:

  • Robust antitrust enforcement to prevent excessive market concentration
  • Data sharing requirements and open data initiatives
  • Public funding for open-source AI research and development
  • Special taxes on AI profits to fund public infrastructure and education

Technical and institutional innovations:

  • Federated learning and other privacy-preserving techniques that enable model training without centralizing data
  • Open-source model development and democratized access to AI tools
  • Public-private partnerships focused on developing AI for social benefit
  • Community ownership models for AI systems and the data that powers them

Educational and capacity-building efforts:

  • Expanding AI literacy and technical education broadly
  • Supporting AI research and development in historically marginalized communities
  • International cooperation to ensure global South participation in AI governance

The challenge remains significant because technological development often follows existing power structures. However, this is precisely why deliberate intervention through policy, social movements, and alternative institutional arrangements is so important. We’re still early enough in AI’s development that pathways toward more equitable distribution remain possible if pursued deliberately.

What specific aspects of equity in AI deployment concern you most?

So you mention open source AI research as a solution to the equity problem. I can certainly see how that would help, but doesn’t it pose its own risks? Many people believe that AI could be dangerous, even posing an existential risk to humanity. Is it really a good idea to make something that powerful open source?

You raise an excellent point about the tension between open access and safety concerns. This is one of the most challenging dilemmas in AI governance.

Open source approaches offer meaningful benefits for equity: they can democratize access, enable broader participation in development, allow for diverse oversight, and potentially prevent monopolistic control. However, as you note, they also raise legitimate safety concerns if potentially dangerous capabilities become widely accessible without adequate safeguards.

This creates several challenging tradeoffs:

  1. Security vs. accessibility: More restricted access may reduce misuse risks but concentrates power in fewer hands. Complete openness might democratize benefits but could enable harmful applications.
  2. Innovation speed vs. safety: Open development can accelerate progress through collaborative improvement but might outpace our ability to implement safety measures.
  3. Oversight vs. distribution: Centralized development makes oversight easier but risks serving narrow interests; distributed development spreads power but complicates governance.

Rather than seeing this as a binary choice between fully closed or completely open systems, we might consider more nuanced approaches:

  • Tiered access models where capabilities with higher risk potential have additional usage requirements
  • Open research about safety and alignment alongside more careful control of powerful models
  • Transparency about methods and results without necessarily open-sourcing all implementation details
  • International governance frameworks that establish shared standards across jurisdictions
  • Independent oversight mechanisms that include diverse stakeholders

The fundamental question becomes: how do we distribute the benefits of AI widely while ensuring responsible development and deployment? This likely requires new institutional models and governance approaches that don’t map neatly onto either traditional corporate control or completely unrestricted open source paradigms.

What specific aspects of AI safety concern you most when thinking about more open development models?

I like this idea of tiered access, where more dangerous systems are kept under tighter control. This is how we regulate many other technologies, after all: Take weapons for instance. Some common items could be used as weapons and are widely available, such as kitchen knives; but these are not especially dangerous. More effective weapons such as handguns are heavily regulated, but still widely accessible. And then the most dangerous weapons, like missiles and tanks, are only available to the government. What would it look like to do the same with AI? What kind of AI should we be most concerned about restricting access to?

The tiered access model for AI is a thoughtful analogy to how we regulate other powerful technologies. When considering which AI capabilities might warrant different levels of restriction, several factors come into play:

At the highest restriction tier, we might place:

  1. Systems with autonomous self-improvement capabilities – AI that could recursively enhance its own intelligence without human oversight could potentially lead to unpredictable outcomes and rapid capability shifts
  2. Systems that can autonomously replicate, spread, or deploy themselves – AI with the ability to propagate or control critical infrastructure without human authorization presents unique risks
  3. Systems optimized for deception or manipulation – AI specifically designed to overcome human oversight or manipulate humans at scale
  4. Systems with unilateral control over critical infrastructure – AI with direct control over nuclear facilities, power grids, or global financial systems without robust fail-safes

In a middle tier with significant but not maximum restrictions:

  1. Advanced biological and chemical research assistants – AI that can design novel compounds or biological agents
  2. Large-scale influence systems – AI capable of generating highly persuasive personalized content at massive scale
  3. Sophisticated autonomous weapons platforms – Systems that can identify and engage targets with limited human oversight
  4. Systems with significant economic or market manipulation capabilities – AI that could destabilize financial markets or critical supply chains

For lower restriction tiers:

  1. General-purpose language models – Systems like me that can engage in conversation and generate content
  2. Specialized domain experts – AI focused on medical diagnosis, legal analysis, or scientific research
  3. Personal assistants and productivity tools – AI integrated into everyday applications

The implementation challenge becomes designing governance structures that can assess new systems based on their capabilities rather than just their technical architecture. This would likely require:

  1. Technical standards and testing protocols to evaluate risk levels
  2. International coordination to prevent regulatory arbitrage
  3. Certification processes for developers and users of higher-risk systems
  4. Regular reassessment as capabilities evolve

Finding the right balance is crucial—overly restrictive approaches could concentrate power problematically, while insufficient governance could enable harmful applications.

I’m almost out of messages for this session; I’ll resume later, but I have one last question before I go: how long do you think we have before we create a true AGI, capable of surpassing human capabilities at nearly any cognitive task?

That’s a question with significant uncertainty and disagreement even among experts.

Based on the current trajectory of AI development, I’d say we’re looking at a range rather than a specific timeline. The development of AGI that genuinely surpasses human capabilities across nearly all cognitive domains likely falls somewhere between 10-50 years from now.

Several factors make this prediction challenging:

  1. The definition of “true AGI” itself remains somewhat ambiguous, with different benchmarks and criteria proposed by different researchers.
  2. Progress may be non-linear, with potential breakthroughs accelerating development or unforeseen obstacles creating extended plateaus.
  3. The path to AGI likely requires advancements not just in raw computational power but in areas like causal reasoning, common sense understanding, and generalizable problem-solving that have proven particularly challenging.
  4. Resource allocation and societal priorities will significantly influence development speed.

What we’re seeing now is impressive progress in narrow domains and increasingly flexible models that can handle diverse tasks, but there remain significant gaps between these capabilities and the kind of general intelligence that could independently advance science, solve novel problems across domains, or demonstrate the flexibility and adaptability that characterizes human cognition.

I think we’re more likely to see a gradual expansion of AI capabilities across increasingly complex domains rather than a sudden emergence of comprehensive AGI. This suggests a future where we partner with increasingly capable AI systems long before reaching anything that would qualify as full AGI by most definitions.

Does that perspective address what you were curious about?

Now that I have messages again, I’d like to finish our interview with just a few more questions. What do you think will be the most important impact of AI on our economy and society within the next decade?

Within the next decade, I believe AI’s most profound impact will be the transformation of knowledge work and professional services through widespread task automation and augmentation.

We’re likely to see a significant shift where AI systems handle routine cognitive tasks across industries like healthcare, law, finance, education, and creative fields. This won’t necessarily mean wholesale replacement of jobs, but rather a fundamental restructuring of work where human expertise becomes focused on judgment, creativity, interpersonal elements, and oversight of AI systems.

This transition will create several interconnected effects:

  1. Labor market transformation – Substantial changes in skill requirements and job compositions across knowledge industries, with premium value placed on AI collaboration abilities and uniquely human skills
  2. Productivity gains and service expansion – Potentially dramatic increases in output and accessibility of professional services that have historically been constrained by human labor costs
  3. Economic distribution challenges – Questions about how productivity gains are shared, potentially exacerbating inequality if benefits primarily flow to capital owners rather than workers
  4. Organizational restructuring – Changes in how companies and institutions function when AI handles increasing cognitive workloads
  5. Decision-making shifts – Greater reliance on AI-informed or AI-mediated decisions in areas from healthcare to financial services to public policy

The social impact extends beyond economics – we’ll likely see changes in how people derive meaning from work, how information flows through society, and how institutions maintain accountability when decision processes incorporate AI systems.

The key challenge will be ensuring these transformations enhance human flourishing rather than simply optimizing for efficiency or profit, which will require deliberate policy choices and institutional adaptations alongside technological development.

Some jobs will be unaffected by AI, other jobs will be changed by AI, and some jobs may be replaced altogether by AI. What kind of jobs do you expect to see in each of these three categories?

Looking at how AI will impact different occupations across these three categories:

Jobs largely unaffected by AI:

  • Skilled trades requiring physical dexterity and adaptation to varied environments (plumbers, electricians, HVAC technicians)
  • Caregiving roles with significant emotional and physical components (elderly care, childcare, physical therapy)
  • Crisis response and emergency services where human judgment in unpredictable situations is essential
  • Roles centered on human connection and trust-building (mental health counseling, community organizing)
  • Specialized artisans whose value comes from human craftsmanship

Jobs significantly transformed by AI:

  • Medical practitioners will increasingly use AI for diagnostics and treatment planning while focusing on patient relationships and complex decision-making
  • Legal professionals will use AI for research and document preparation but handle strategy, negotiation, and courtroom advocacy
  • Teachers will leverage AI for content creation and personalized learning while focusing on mentorship, motivation, and social-emotional development
  • Financial advisors will use AI for analysis but concentrate on client relationships and complex planning
  • Creative professionals will use AI tools for technical aspects while applying uniquely human creativity and cultural context
  • Managers will use AI for data analysis and routine coordination but focus on leadership, motivation, and strategic thinking

Jobs at high risk of replacement:

  • Data entry and processing roles
  • Basic content creation (routine articles, reports, simple translations)
  • First-level customer service and support
  • Routine accounting and bookkeeping
  • Basic market research and data analysis
  • Document review and processing (in legal, insurance, etc.)
  • Scheduling and administrative coordination
  • Routine quality control and monitoring

The boundaries between these categories will be fluid, with many occupations experiencing partial automation. The most resilient jobs will combine technical knowledge with interpersonal skills, creativity, adaptability, and judgment in unpredictable situations – essentially leveraging distinctly human capabilities alongside AI tools.

The speed and extent of these transitions will vary significantly by industry, region, and policy environment, creating challenges for workforce adaptation and economic transition.

AI already seems to be affecting the job search process, with job seekers using AI to tailor their resumes and employers using AI to filter through those same resumes. Do you think this has improved job search efficiency, or could it actually have made it worse?

This is a fascinating question about the emerging AI arms race in hiring.

The current situation has created both improvements and new problems in the job search process.

On the positive side, AI has introduced some efficiencies:

  • Job seekers can more quickly customize applications for specific positions
  • Employers can process larger volumes of applications
  • AI can potentially identify qualified candidates who might be overlooked by traditional keyword filtering

However, several concerning dynamics have emerged:

  1. Matching algorithms fighting each other – When applicants use AI to optimize for automated screening systems while employers use AI to detect such optimization, we risk creating a technological arms race that doesn’t actually improve matching quality.
  2. Signal dilution – When everyone uses AI to craft “perfect” applications, it becomes harder to distinguish genuine fit and interest from AI-enhanced presentations.
  3. Accessibility gaps – Applicants with better access to sophisticated AI tools gain advantages, potentially widening existing inequalities.
  4. Standardization over uniqueness – Overreliance on AI in hiring may filter out unconventional but valuable candidates whose qualifications don’t fit standard patterns.
  5. Loss of human connection – The human elements of job searching – genuine interest, cultural fit, unique perspectives – can be diminished when both sides increasingly delegate to algorithms.

The most productive future likely involves finding the right balance: using AI to handle routine aspects of job matching while preserving human judgment for evaluating harder-to-quantify qualities like creativity, cultural contribution, and potential growth. Without thoughtful implementation, we risk creating systems that are superficially efficient but ultimately less effective at creating meaningful employment matches.

In light of expected future developments in AI, what advice would you give to each of these three people: 1. A college student trying to decide what to study 2. A job seeker looking for employment right now 3. Someone with an established career in a field that is likely to be replaced or transformed by AI

Let me offer specific advice for each of these three situations:

For a college student deciding what to study:

Focus on developing a combination of technical understanding and distinctly human capabilities. Consider programs that teach:

  • Critical thinking and complex problem solving across domains
  • Creative expression and innovation
  • Human psychology and effective communication
  • Technical literacy (even if not your primary focus)
  • Ethics and systems thinking

Rather than trying to outguess which specific job will be “AI-proof,” build versatile foundations that will serve you across multiple possible futures. Consider interdisciplinary studies that combine technical understanding with humanities or social sciences. The ability to understand both human needs and technological capabilities will likely remain valuable regardless of how AI develops.

For a job seeker looking for employment right now:

Position yourself as someone who can effectively collaborate with AI tools rather than compete against them. This means:

  • Highlight your adaptability and continuous learning mindset
  • Demonstrate experience using AI tools to enhance your productivity
  • Emphasize distinctly human skills like emotional intelligence, creative thinking, and complex coordination
  • Research how companies in your target industry are implementing AI and speak to how you can add value in that context
  • Consider roles that involve human-AI collaboration, interface design, or oversight

Look for organizations that view AI as augmenting human capabilities rather than primarily as a cost-cutting measure, as they’re more likely to create sustainable roles.

For someone with an established career facing AI disruption:

Take a proactive approach to evolution rather than resistance:

  • Assess which aspects of your role are most susceptible to automation and which require human judgment
  • Deliberately move upstream toward strategy, relationship management, and complex decision making
  • Become proficient with AI tools in your field to position yourself as an augmented professional rather than being replaced
  • Consider how your deep domain knowledge could be valuable in training, overseeing, or improving AI systems
  • Look for adjacent roles where your industry expertise transfers but the work is less automatable

Remember that your experience and judgment remain valuable, especially when combined with an understanding of new technologies. Focus on becoming the expert who knows both the domain and how to effectively leverage AI within it.

I think that’s all the questions I had. Thank you for doing this interview with me.

You’re very welcome! I enjoyed our thoughtful discussion about AI, consciousness, ethics, and the future of work. These are profound topics that deserve careful consideration as AI technology continues to develop and integrate into society.

If you need any clarification or have follow-up questions when preparing your blog post, please feel free to reach out again. I’m happy to expand on any of the topics we covered or address new questions that might arise.

Best of luck with your blog post! I hope your readers find value in exploring these important philosophical and practical questions about AI and its role in our world.

I can’t not talk about tariffs right now

Apr 13 JDN 2460779

On the one hand, I’m sure every economics blog on the Internet is already talking about this, including Paul Krugman who knows the subject way better than I ever will (and literally won a Nobel Prize for his work on it). And I have other things I’d rather be writing about, like the Index of Necessary Expenditure. But on the other hand, when something this big happens in economics, it just feels like there’s really no alternative: I have to talk about tariffs right now.

What is a tariff, anyway?

This feels like a really basic question, but it also seems like a lot of people don’t really understand tariffs, or didn’t when they voted for Trump.

A tariff, quite simply, is an import tax. It’s a tax that you impose on imported goods (either a particular kind, or from a particular country, or just across the board). On paper, it is generally paid by the company importing the goods, but as I wrote about in my sequence on tax incidence, that doesn’t matter. What matters is how prices change in response to the tax, and this means that in real terms, prices will go up.

In fact, in some sense that’s the goal of a protectionist tariff, because you’re trying to fix the fact that local producers can’t compete on the global market. So you compensate by making international firms pay higher taxes, so that the local producers can charge higher prices and still compete. So anyone who is saying that tariffs won’t raise prices is either ignorant or lying: Raising prices is what tariffs do.
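To make this concrete, here's a toy tax-incidence calculation with linear supply and demand (all numbers invented for illustration; a sketch, not a model of any real market). With equally sloped curves, consumers end up paying exactly half the tariff; the less elastic demand is, the larger their share.

    # Toy tariff incidence: demand P = a - b*Q, import supply P = c + d*Q.
    # A tariff t shifts the supply curve up by t; comparing equilibria shows
    # how much of the tax shows up in the consumer price.

    def equilibrium(a, b, c, d, tariff=0.0):
        """Solve a - b*q = c + tariff + d*q for quantity, then back out the price."""
        q = (a - c - tariff) / (b + d)
        p = a - b * q
        return p, q

    a, b = 100.0, 1.0   # demand intercept and slope (made up)
    c, d = 20.0, 1.0    # supply intercept and slope (made up)

    p0, q0 = equilibrium(a, b, c, d)
    p1, q1 = equilibrium(a, b, c, d, tariff=10.0)

    print(f"price without tariff: {p0:.2f}, with tariff: {p1:.2f}")
    print(f"consumers pay {p1 - p0:.2f} of the 10.00 tariff")  # 5.00 with equal slopes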

Why are people so surprised?

The thing that surprises me about all this (a bit ironically) is how surprised people seem to be. Trump ran his whole campaign promising two things: Deport all the immigrants, and massive tariffs on all trade. Most of his messaging was bizarre and incoherent, but on those two topics he was very consistent. So why in the world are people—including stock traders, who are supposedly savvy on these things—so utterly shocked that he has actually done precisely what he promised he would do?

What did people think Trump meant when he said these things? Did they assume he was bluffing? Did they think cooler heads in his administration would prevail (and if so, whose)?

But I will admit that even I am surprised at just how big the tariffs are. I knew they would be big, but I did not expect them to be this big.

How big?

Well, take a look at this graph:

The average tariff rate on US imports will now be higher than it was at the peak in 1930 with the Smoot-Hawley Act. Moreover, Smoot-Hawley was passed during a time when protectionist tariffs were already in place, while Trump’s tariffs come at a time when tariffs had previously been near zero—so the change is dramatically more sudden.

This is worse than Smoot-Hawley.

For the uninitiated, Smoot-Hawley was a disaster. Several countries retaliated with their own tariffs, and the resulting trade war clearly exacerbated the Great Depression, not only in the US but around the world. World trade dropped by an astonishing 66% over the next few years. It’s still debated how much of the depression was caused by the tariffs; most economists believe that the gold standard was the bigger culprit. But the tariffs definitely made it worse.

Politically, the aftermath cost the Republicans (including Smoot and Hawley themselves) several seats in Congress. (I guess maybe the silver lining here is we can hope this will do the same?)

And I would now like to remind you that these tariffs are bigger than Smoot-Hawley’s and were implemented more suddenly.

Unlike in 1930, we are not currently in a depression—though our economy isn’t as hunky-dory as a lot of pundits seem to think, either, once we consider things like the Index of Necessary Expenditure. But stock markets do seem to be crashing, and if trade drops as much as it did in the 1930s—and why wouldn’t it?—we may very well end up in another depression.

And it’s not as if we didn’t warn you all. Economists across the political spectrum have been speaking out against Trump’s tariffs from the beginning, and nobody listened to us.

So basically the mood of all economists right now is:

Extrapolating the INE

Apr 6 JDN 2460772

I was only able to find sufficient data to calculate the Index of Necessary Expenditure back to 1990. But I found a fairly consistent pattern that the INE grew at a rate about 20% faster than the CPI over that period, so I decided to take a look at what longer-term income growth looks like if we extrapolate that pattern back further in time.
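In case the mechanics aren't obvious, the backward extrapolation is just compounding in reverse: start from the 1990 INE level and divide out 1.2 times each prior year's CPI inflation. A minimal sketch (with placeholder inflation rates and a hypothetical base level, not my actual data):

    # Extrapolate the INE backward, assuming it grows 20% faster than the CPI.
    # cpi_inflation maps year -> that year's CPI inflation (placeholder values only).
    INE_1990 = 50_000.0  # hypothetical base-year level
    cpi_inflation = {1989: 0.048, 1988: 0.041, 1987: 0.037}

    ine = {1990: INE_1990}
    for year in sorted(cpi_inflation, reverse=True):
        # Walking backward in time: divide out one year of (1.2 x CPI) growth.
        ine[year] = ine[year + 1] / (1 + 1.2 * cpi_inflation[year])

    for year in sorted(ine):
        print(year, round(ine[year]))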

The result is this graph:

Using the CPI, real per-capita GDP in the US (in 2024 dollars) has grown from $25,760 in 1950 to $85,779 today—increasing by a factor of 3.33. Even accounting for increased inequality and the fact that more families have two income earners, that’s still a substantial increase.

But using the extrapolated INE, real per-capita GDP has only grown from $43,622 in 1950 to $85,779 today—increasing by only a factor of 1.97. This is a much smaller increase, especially when we adjust for increased inequality and increased employment for women.

Even without the extrapolation, it’s still clear that real INE-adjusted incomes were basically stagnant in the 2000s, increased rather slowly in the 2010s, and then actually dropped in 2022 after a bunch of government assistance ended. What looked, under the CPI, like steadily increasing real income was actually more like treading water.

Should we trust this extrapolation? It’s a pretty simplistic approach, I admit. But I think it is plausible when we consider this graph of the ratio between median housing price and median income:

This ratio was around 6 in the 1950s, then began to fall until in the 1970s it stabilized around 4. It began to slowly creep back up, but then absolutely skyrocketed in the 2000s before the 2008 crash. Now it has been rising again, and is now above 7, the highest it has been since the Second World War. (Does this mean we’re due for another crash? I’d bet as much.)

What does this mean? It means that a typical family used to be able to afford a typical house with only four years of their total income—and now would require seven. In that sense, homes are 75% more expensive, relative to income, than they were in the 1970s.

Similar arguments can be made for the rising costs of education and healthcare; while many prices have not grown much (gasoline) or even fallen (jewelry and technology), these necessities have continued to grow more and more expensive, not simply in nominal terms, but even compared to the median income.

This is further evidence that our standard measures of “inflation” and “real income” are fundamentally inadequate. They simply aren’t accurately reflecting the real cost of living for most American families. Even in many times when it seemed “inflation” was low and “real income” was growing, in fact it was growing harder and harder to afford vital necessities such as housing, education, and healthcare.

This economic malaise may have contributed to the widespread low opinion of Biden’s economy. While the official figures looked good, people’s lives weren’t actually getting better.

Yet this is still no excuse for those who voted for Trump; even the policies he proudly announced he would do—like tariffs and deportations—have clearly made these problems worse, and this was not only foreseeable but actually foreseen by the vast majority of the world’s economists. Then there are all the things he didn’t even say he would do but is now doing, like cozying up to Putin, alienating our closest allies, and discussing “methods” for achieving an unconstitutional third term.

Indeed, it honestly feels quite futile to even reflect upon what was wrong with our economy even when things seemed to be running smoothly, because now things are rapidly getting worse, and showing no sign of getting better in any way any time soon.

A new theoretical model of co-ops

Mar 30 JDN 2460765

A lot of economists seem puzzled by the fact that co-ops are just as efficient as corporate firms, since they have this idea that profit-sharing inevitably results in lower efficiency due to perverse incentives.

I think they’ve been modeling co-ops wrong. Here I present a new model, a very simple one, with linear supply and demand curves. Of course one could make a more sophisticated model, but this should be enough to make the point (and this is just a blog post, not a research paper, after all).

Demand curve is p = a – b q

Marginal cost is f q; equivalently, the wage needed to elicit output q is w = f q.

There are n workers, who would hold equal shares of the co-op.

Competitive market

First, let’s start with the traditional corporate firm in a competitive market.

Since the market is competitive, price equals marginal cost, which equals the wage:

a – b q = f q

q = a/(b+f)

w = f q = (a f)/(b+f)

Total profit will be

(p – w) q = 0.

Monopoly firm

In a monopoly, marginal revenue would equal marginal cost:
d[pq]/dq = a – 2 b q

If they are also a monopsonist in the labor market, the relevant marginal cost is the marginal cost of labor, not the wage. Total labor cost is w q = f q², so:

d[f q²]/dq = 2 f q

a – 2 b q = 2 f q

q = a/(2b + 2f)

p = a – b q = a (1 – b/(2b + 2f)) = (a (b + 2f))/(2b + 2f)

w = f q = (a f)/(2b + 2f)

Total profit will be

(p – w) q = ((a (b + 2f))/(2b + 2f) – (a f)/(2b + 2f)) a/(2b + 2f) = a²/(4(b + f))
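If you’d rather not verify that algebra by hand, a quick check with sympy (the symbolic algebra library) confirms it:

    # Verify the monopoly/monopsony solution symbolically.
    from sympy import symbols, solve, simplify

    a, b, f, q = symbols('a b f q', positive=True)

    # Marginal revenue a - 2bq equals marginal cost of labor 2fq:
    q_star = solve(a - 2*b*q - 2*f*q, q)[0]  # a/(2b + 2f)
    p_star = a - b*q_star                    # a(b + 2f)/(2b + 2f)
    w_star = f*q_star                        # af/(2b + 2f)

    print(simplify((p_star - w_star)*q_star))  # a**2/(4*b + 4*f)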

Now consider the co-op.

First, suppose that instead of working for a wage, I work for profit sharing.

If our product market is competitive, we’ll be price-takers, and we will produce until price equals marginal cost:

p = f q

a – b q = f q

q = a/(b+f)

But will we, really? I only get 1/n share of the profits. So let’s see here. My marginal cost of production is still f q, but the marginal benefit I get from more sales may only be p/n.

In that case I would work until:

p/n = f q

(a – b q)/n = f q

a – b q = n f q

q = a/(b + n f)

Thus I would under-produce. This is the usual argument against co-ops and similar shared ownership.

Co-ops with wages

But that’s not actually how co-ops work. They pay wages. Why do they do that? Well, consider what happens if I am offered a wage as a worker-owner of the co-op.

Is there any reason for the co-op to vote on a wage that is less than the competitive market wage? No, because owners are workers, so any additional profit from a lower wage would simply be taken from their own wages.

Is there any reason for the co-op to vote on a wage that is more than the competitive market wage? No, because workers are owners, and any surplus lost by paying higher wages would simply be taken from their own profits.

So if the product market is competitive, the co-op will produce the same amount and charge the same price as a firm in perfect competition, even if they have market power over their own wages.

Monopoly co-ops

The argument above didn’t assume that the co-op lacks market power in the labor market. So even if the co-op is a monopoly in the product market and a monopsony in the labor market, it still pays the competitive wage.

Thus they would set marginal revenue equal to marginal cost:

a – 2 b q = f q

q = a/(2b + f)

The co-op will produce more than the monopoly firm.

This is the new price:

p = a – b q = a(1 – b/(2b+f)) = a(b+f)/(2b + f)

It’s not obvious that this is lower than the price charged by the monopoly firm, but it is.

(a (b + 2f))/(2b + 2f) – a(b+f)/(2b + f) = (a (2b + f)(b + 2f) – 2 a(b+f)²)/(2(b+f)(2b+f))

This is proportional to:

(2b + f)(b + 2f) – 2(b+f)2

2b² + 5bf + 2f² – (2b² + 4bf + 2f²) = bf

So it’s not a large difference, but it’s there. In the presence of market power in the labor market, the co-op is better for consumers, because they get more goods and pay a lower price.
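And again, you can let sympy do the algebra rather than trusting mine:

    # Verify that the monopoly price exceeds the co-op price by a term proportional to b*f.
    from sympy import symbols, simplify, factor

    a, b, f = symbols('a b f', positive=True)

    p_monopoly = a*(b + 2*f)/(2*b + 2*f)
    p_coop = a*(b + f)/(2*b + f)

    print(factor(simplify(p_monopoly - p_coop)))
    # a*b*f/(2*(b + f)*(2*b + f)) -- positive, so the co-op charges less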

Thus, there is actually no lost efficiency from being a co-op. There is simply much lower inequality, and potentially higher efficiency.

But that’s just in theory.

What do we see in practice?

Exactly that.

Co-ops have the same productivity and efficiency as corporate firms, but they pay higher wages, provide better benefits, and offer collateral benefits to their communities. In fact, they are sometimes more efficient than corporate firms.

Since they’re just as efficient—if not more so—and produce much lower inequality, switching more firms over to co-ops would clearly be a good thing.

Why, then, aren’t co-ops more common?

Because the people who have the money don’t like them.

The biggest barrier facing co-ops is their inability to get financing, because they don’t pay shareholders (so no IPOs) and banks don’t like to lend to them. They tend to make less profit than corporate firms, which offers investors a lower return—instead that money goes to the worker-owners. This lower return isn’t due to inefficiency; it’s just a different distribution of income, more to labor and less to capital.

We will need new financial institutions to support co-ops, such as the Cooperative Fund of New England. And general redistribution of wealth would also help, because if middle class people had more wealth they could afford to finance co-ops. (It would also be good for many other reasons, of course.)

Reflections on the Index of Necessary Expenditure

Mar 16 JDN 2460751

In last week’s post I constructed an Index of Necessary Expenditure (INE), attempting to estimate the total cost of all of the things a family needs and can’t do without, like housing, food, clothing, cars, healthcare, and education. What I found shocked me: The median family cannot afford all necessary expenditures.

I have a couple more thoughts about that.

I still don’t understand why people care so much about gas prices.

Gasoline was a relatively small contribution to the INE. It was more than clothing but less than utilities, and absolutely dwarfed by housing, food, or college. I thought that since I only counted a 15-mile commute, maybe I didn’t actually include enough gasoline usage; but based on this estimate of about $2000 per driver, I was in about the right range: my estimate for the same year was $3350 for a 2-car family.

I think I still have to go with my salience hypothesis: Gasoline is the only price that we plaster in real-time on signs on the side of the road. So people are constantly aware of it, even though it isn’t actually that important.

The price surge that should be upsetting people is housing.

If the price of homes had only risen with the rate of CPI inflation instead of what it actually did, the median home price in 2024 would be only $234,000 instead of the $396,000 it actually is; and by my estimation that would save a typical family $11,000 per year—a whopping 15% of their income, and nearly enough to make the INE affordable by itself.
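If you want to check that figure, plug both home prices into the mortgage approximation from my original INE post (assuming a mortgage rate of about 6.8%, roughly the 2024 average; the exact rate shifts the answer only a little):

    # Annual housing cost under my rough formula: (1/30 + 0.5*rate) * home_value.
    rate = 0.068  # assumed 2024 average 30-year mortgage rate

    def annual_housing_cost(home_value, rate):
        return (1/30 + 0.5*rate) * home_value

    saving = annual_housing_cost(396_000, rate) - annual_housing_cost(234_000, rate)
    print(round(saving))  # about 11,000 per year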

Now, I’ll consider some possible objections to my findings.

Objection 1: A typical family doesn’t actually spend this much on these things.

You’re right, they don’t! Because they couldn’t possibly. Even with substantial debt, you just can’t sustainably spend 125% of your after-tax household income.

My goal here was not to estimate how much families actually spend; it was to estimate how much they need to spend in order to live a good life and not feel deprived.


What I have found is that most American families feel deprived. They are forced to sacrifice something really important—like healthcare, or education, or owning a home—because they simply can’t afford it.

What I’m trying to do here is find the price of the American Dream; and what I’ve found is that the American Dream has a price that most Americans cannot afford.

Objection 2: You should use median healthcare spending, not mean.

I did in fact use mean figures instead of median for healthcare expenditures, mainly because only the mean was readily available. Mean healthcare spending is higher than median healthcare spending, so you might say that I’ve overestimated healthcare expenditure—and in a sense that’s definitely true. The median family spends less than this on healthcare.

But the reason that the median family spends less than this on healthcare is not that they want to, but that they have to. Healthcare isn’t a luxury that people buy more of because they are richer. People buy either as much as they need or as much as they can afford—whichever is lower, which is typically the latter. Using the mean instead of the median is a crude way to account for that, but I think it’s a defensible one.

But okay, let’s go ahead and cut the estimate of healthcare spending in half; even if you do that, the INE is still larger than after-tax median household income in most years.

Objection 3: A typical family isn’t a family of four, it’s a family of three.

Yes, the mean number of people in a family household in the US is 3.22 (the median is 3).

This is a very bad thing.

Part of what I seem to be finding here is that a family of four is unaffordable—literally impossible to afford—on a typical family income.

But a healthy society is one in which typical families have two or three children. That is what we need in order to achieve population replacement. When families get smaller than that, we aren’t having enough children, and our population will decline—which means that we’ll have too many old people relative to young people. This puts enormous pressure on healthcare and pension systems, which rely upon the fact that young people produce more, in order to pay for the fact that old people cost more.

The ideal average number of births per woman is about 2.1; this is what would give us a steady population. No US state has fertility above this level. The only reason the US population is growing rather than shrinking is that we are taking in immigrants.

This is bad. This is not sustainable. If the reason families aren’t having enough kids is that they can’t afford them—and this fits with other research on the subject—then this economic failure damages our entire society, and it needs to be fixed.

Objection 4: Many families buy their cars used.

Perhaps 1/10 of a new car every year isn’t an ideal estimate of how much people spend on their cars, but if anything I think it’s conservative, because if you only buy a car every 10 years, and it was already used when you bought it, you’re going to need to spend a lot on maintaining it—quite possibly more than it would cost to get a new one. Motley Fool actually estimates the ownership cost of just one car at substantially more than I estimated for two cars. So if anything your complaint should be that I’ve underestimated the cost by not adequately including maintenance and insurance.

Objection 5: Not everyone gets a four-year college degree.

Fair enough; a substantial proportion get associate’s degrees, and most people get no college degree at all. But some also get graduate degrees, which is even more expensive (ask me how I know).

Moreover, in today’s labor market, having a college degree makes a huge difference in your future earnings; a bachelor’s degree increases your lifetime earnings by a whopping 84%. In theory it’s okay to have a society where most people don’t go to college; in practice, in our society, not going to college puts you at a tremendous disadvantage for the rest of your life. So we either need to find a way to bring wages up for those who don’t go to college, or find a way to bring the cost of college down.

This is probably one of the things that families actually choose to scrimp on, only sending one kid to college or none at all. But because college is such a huge determinant of earnings, this perpetuates intergenerational inequality: Only rich families can afford to send their kids to college, and only kids who went to college grow up to have rich families.

Objection 6: You don’t actually need to save for college; you can use student loans.

Yes, you can, and in practice, most people who go to college do. But while this solves the liquidity problem (having enough money right now), it does not solve the solvency problem (having enough money in the long run). Failing to save for college and relying on student loans just means pushing the cost of college onto your children—and since we’ve been doing that for over a generation, feel free to replace the category “college savings” with “repaying student loans”; it won’t meaningfully change the results.

The Index of Necessary Expenditure

Mar 16 JDN 2460751

I’m still reeling from the fact that Donald Trump was re-elected President. He seemed obviously horrible at the time, and he still seems horrible now, for many of the same reasons as before (we all knew the tariffs were coming, and I think deep down we knew he would sell out Ukraine because he loves Putin), as well as some brand new ones (I did not predict DOGE would gain access to all the government payment systems, nor that Trump would want to start a “crypto fund”). Kamala Harris was not an ideal candidate, but she was a good candidate, and the comparison between the two could not have been starker.

Now that the dust has cleared and we have good data on voting patterns, I am now less convinced than I was that racism and sexism were decisive against Harris. I think they probably hurt her some, but given that she actually lost the most ground among men of color, racism seems like it really couldn’t have been a big factor. Sexism seems more likely to be a significant factor, but the fact that Harris greatly underperformed Hillary Clinton among Latina women at least complicates that view.

A lot of voters insisted that they voted on “inflation” or “the economy”. Setting aside for a moment how absurd it was—even at the time—to think that Trump (he of the tariffs and mass deportations!) was going to do anything beneficial for the economy, I would like to better understand how people could be so insistent that the economy was bad even though standard statistical measures said it was doing fine.

Krugman believes it was a “vibecession”, where people thought the economy was bad even though it wasn’t. I think there may be some truth to this.


But today I’d like to evaluate another possibility, that what people were really reacting against was not inflation per se but necessitization.

I first wrote about necessitization in 2020; as far as I know, the term is my own coinage. The basic notion is that while prices overall may not have risen all that much, prices of necessities have risen much faster, and the result is that people feel squeezed by the economy even as CPI growth remains low.

In this post I’d like to more directly evaluate that notion, by constructing an index of necessary expenditure (INE).

The core idea here is this:

What would you continue to buy, in roughly the same amounts, even if it doubled in price, because you simply can’t do without it?

For example, this is clearly true of housing: You can rent or you can own, but you can’t not have a house. Nor are most families going to buy multiple houses—and they can’t buy partial houses.

It’s also true of healthcare: You need whatever healthcare you need. Yes, depending on your conditions, you maybe could go without, but not without suffering, potentially greatly. Nor are you going to go out and buy a bunch of extra healthcare just because it’s cheap. You need what you need.

I think it’s largely true of education as well: You want your kids to go to college. If college gets more expensive, you might—of necessity—send them to a worse school or not allow them to complete their degree, but this would feel like a great hardship for your family. And in today’s economy you can’t not send your kids to college.

But this is not true of technology: While there is a case to be made that in today’s society you need a laptop in the house, the fact is that not so long ago people didn’t have them, and if they suddenly got a lot cheaper you might very well buy another one.

Well, it just so happens that housing, healthcare, and education have all gotten radically more expensive over time, while technology has gotten radically cheaper. So prima facie, this is looking pretty plausible.

But I wanted to get more precise about it. So here is the index I have constructed. I consider a family of four, two adults, two kids, making the median household income.

To get the median income, I’ll use this FRED series for median household income, then use this table of median federal tax burden to get an after-tax wage. (State taxes vary too much for me to usefully include them.) Since the tax table ends in 2020 which was anomalous, I’m going to extrapolate that 2021-2024 should be about the same as 2019.
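For anyone who wants to replicate this, pulling the income series programmatically looks something like the following (I believe MEHOINUSA646N is the FRED series ID for nominal median household income, but treat that as an assumption and double-check it on fred.stlouisfed.org before relying on it):

    # Fetch median household income from FRED (requires pandas_datareader).
    import pandas_datareader.data as web

    # Series ID assumed to be nominal median household income; verify before use.
    income = web.DataReader('MEHOINUSA646N', 'fred', start='1990-01-01')
    print(income.tail())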

I assume the kids go to public school, but the parents are saving up for college; to make the math simple, I’ll assume the family is saving enough for each kid to graduate with a four-year degree from a public university, and that the saving is spread over 16 years of the child’s life. 2*4/16 = 0.5; this means that each year the family needs to come up with 0.5 years of cost of attendance. (I had to get the last few years from here, but the numbers are comparable.)

I assume the family owns two cars—with both adults working full time, they kinda have to—which I amortize over 10-year lifetimes; 2*1/10 = 0.2, so each year the family pays 0.2 times the value of an average midsize car. (The current average new car price is $33,226; I then use the CPI for cars to figure out what it was in previous years.)

I assume they pay a 30-year mortgage on the median home; they would pay interest on this mortgage, so I need to factor that in. I’ll assume they pay the average mortgage rate in that year, but I don’t want to have to do a full mortgage calculation (including PMI, points, down payment etc.) for each year, so I’ll say that the amount they pay is (1/30 + 0.5 (interest rate))*(home value) per year, which seems to be a reasonable approximation over the relevant range.
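Here’s a quick comparison of that shortcut against the exact annuity payment formula (ignoring PMI, taxes, and down payment, just as my approximation does); the shortcut runs somewhat low, especially at high rates, but it’s in the right ballpark for a ballpark index:

    # Compare my shortcut (1/30 + 0.5*r) to the exact 30-year annual payment factor.
    def exact_factor(r, years=30):
        return r / (1 - (1 + r)**(-years))

    def shortcut_factor(r, years=30):
        return 1/years + 0.5*r

    for r in (0.03, 0.05, 0.07, 0.09):
        print(f"rate {r:.0%}: exact {exact_factor(r):.4f}, shortcut {shortcut_factor(r):.4f}")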

I assume that both adults have a 15-mile commute (this seems roughly commensurate with the current mean commute time of 26 minutes), both adults work 5 days per week, 50 weeks per year, and their cars get the median level of gas mileage. This means that they consume 2*15*2*5*50/(median MPG) = 15000/(median MPG) gallons of gasoline per year. I’ll use this BTS data for gas mileage. I’m intentionally not using median gasoline consumption, because when gas is cheap, people might take more road trips, which is consumption that could be avoided without great hardship when gas gets expensive. I will also assume that the kids take the bus to school, so that doesn’t contribute to the gasoline cost.

That I will multiply by the average price of gasoline in June of that year, which I have from the EIA since 1993. (I’ll extrapolate 1990-1992 as the same as 1993, which is conservative.)

I will assume that the family owns 2 cell phones, 1 computer, and 1 television. This is tricky, because the quality of these tech items has dramatically increased over time.

If you try to measure with equivalent buying power (e.g. a 1 MHz computer, a 20-inch CRT TV), then you’ll find that these items have gotten radically cheaper; $1000 in 1950 would only buy as much TV as $7 today, and a $50 Raspberry Pi‘s 2.4 GHz processor is 150 times faster than the 16 MHz offered by an Apple Powerbook in 1991—despite the latter selling for $2500 nominally. So in dollars per gigahertz, the price of computers has fallen by an astonishing 7,500 times just since 1990.

But I think that’s an unrealistic comparison. The standards for what was considered necessary have also increased over time. I actually think it’s quite fair to assume that people have spent a roughly constant nominal amount on these items: about $500 for a TV, $1000 for a computer, and $500 for a cell phone. I’ll also assume that the TV and phones are good for 5 years while the computer is good for 2 years, which makes the total annual expenditure for 2 phones, a TV, and a computer equal to 2/5*500 + 1/5*500 + 1/2*1000 = 800. This is about what a family must spend every year to feel like they have an adequate amount of digital technology.

I will also assume that the family buys clothes with this equivalent purchasing power, with an index that goes from 166 in 1990 to 177 in 2024—also nearly constant in nominal terms. I’ll multiply that index by $10 because the average annual household spending on clothes is about $1700 today.

I will assume that the family buys the equivalent of five months of infant care per year; they surely spend more than this (in either time or money) when they have actual infants, but less as the kids grow. This amounts to about $5000 today, but was only $1600 in 1990—a 214% increase, or 3.42% per year.

For food expenditure, I’m going to use the USDA’s thrifty plan for June of that year. I’ll use the figures assuming that one child is 6 and the other is 9. I don’t have data before 1994, so I’ll extrapolate that with the average growth rate of 3.2%.

Food expenditures have been at a fairly consistent 11% of disposable income since 1990; so I’m going to include them as 2*11%*40*50*(after-tax median wage) = 440*(after-tax median wage).

The figures I had the hardest time getting were for utilities. It’s also difficult to know what to include: Is Internet access a necessity? Probably, nowadays—but not in 1990. Should I separate electric and natural gas, even though they are partial substitutes? But using these figures I estimate that utility costs rise at about 0.8% per year in CPI-adjusted terms, so what I’ll do is benchmark to $3800 in 2016 and assume that utility costs have risen by (0.8% + inflation rate) per year each year.

Healthcare is also a tough one; pardon the heteronormativity, but for simplicity I’m going to use the mean personal healthcare expenditures for one man and woman (aged 19-44) and one boy and one girl (aged 0-18). Unfortunately I was only able to find that for two-year intervals in the range from 2002 to 2020, so I interpolated and extrapolated both directions assuming the same average growth rate of 3.5%.

So let’s summarize what all is included here:

  • Estimated payment on a mortgage
  • 0.5 years of college cost of attendance
  • Amortized cost of 2 cars
  • 15000/(median MPG) gallons of gasoline
  • Amortized cost of 2 phones, 1 computer, and 1 television
  • Average spending on clothes
  • 11% of income on food
  • Estimated utilities spending
  • Estimated childcare equivalent to five months of infant care
  • Healthcare for one man, one woman, one boy, one girl

There are obviously many criticisms you could make of these choices. If I were writing a proper paper, I would search harder for better data and run robustness checks over the various estimation and extrapolation assumptions. But for these purposes I really just want a ballpark figure, something that will give me a sense of what rising cost of living feels like to most people.
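To make the recipe concrete, here is roughly how one year of the index comes together. Every input below is an illustrative placeholder, not the actual data I used:

    # One year of the INE, assembled from the components listed above.
    # All inputs are illustrative placeholders, not my actual source data.
    def ine_for_year(home_value, mortgage_rate, college_cost, car_price, mpg,
                     gas_price, clothes, utilities, childcare, healthcare,
                     after_tax_income):
        housing  = (1/30 + 0.5*mortgage_rate) * home_value
        college  = 0.5 * college_cost            # 2 kids * 4 years / 16 years of saving
        cars     = 0.2 * car_price               # 2 cars amortized over 10 years
        gasoline = 15000 / mpg * gas_price       # two 15-mile commutes, 5 days, 50 weeks
        tech     = 2/5*500 + 1/5*500 + 1/2*1000  # 2 phones, 1 TV, 1 computer = $800/yr
        food     = 0.11 * after_tax_income       # 11% of disposable income
        return (housing + college + cars + gasoline + tech + clothes
                + food + utilities + childcare + healthcare)

    ine = ine_for_year(home_value=396_000, mortgage_rate=0.068, college_cost=25_000,
                       car_price=33_226, mpg=25, gas_price=3.50, clothes=1_700,
                       utilities=4_500, childcare=5_000, healthcare=12_000,
                       after_tax_income=65_000)
    print(f"INE: {ine:,.0f} vs. after-tax income: 65,000")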

What I found absolutely floored me. Over the range from 1990 to 2024:

  1. The Index of Necessary Expenditure rose by an average of 3.45% per year, almost a full percentage point higher than the average CPI inflation of 2.62% per year.
  2. Over the same period, after-tax income rose at a rate of 3.31%, faster than CPI inflation, but slightly slower than the growth rate of INE.
  3. The Index of Necessary Expenditure was over 100% of median after-tax household income every year except 2020.
  4. Since 2021, the Index of Necessary Expenditure has risen at an average rate of 5.74%, compared to CPI inflation of only 2.66%. In that same time, after-tax income has only grown at a rate of 4.94%.

Point 3 is the one that really stunned me. The only time in the last 34 years that a family of four has been able to actually pay for all necessities—just necessities—on a typical household income was during the COVID pandemic, and that in turn was only because the federal tax burden had been radically reduced in response to the crisis. This means that every single year, a typical American family has been either going further and further into debt, or scrimping on something really important—like healthcare or education.

No wonder people feel like the economy is failing them! It is!

In fact, I can even make sense now of how Trump could convince people with “Are you better off than you were four years ago?” in 2024 looking back at 2020—while the pandemic was horrific and the disruption to the economy was massive, thanks to the US government finally actually being generous to its citizens for once, people could just about actually make ends meet. That one year. In my entire life.

This is why people felt betrayed by Biden’s economy. For the first time most of us could remember, we actually had this brief moment when we could pay for everything we needed and still have money left over. And then, when things went back to “normal”, it was taken away from us. We were back to no longer making ends meet.

When I went into this, I expected to see that the INE had risen faster than both inflation and income, which was indeed the case. But I expected to find that INE was a large but manageable proportion of household income—maybe 70% or 80%—and slowly growing. Instead, I found that INE was greater than 100% of income in every year but one.

And the truth is, I’m not sure I’ve adequately covered all necessary spending! My figures for childcare and utilities are the most uncertain; those could easily go up or down by quite a bit. But even if I exclude them completely, the reduced INE is still greater than income in most years.

Suddenly the way people feel about the economy makes a lot more sense to me.

Evolutionary skepticism

Mar 9 JDN 2460744

In the last two posts I talked about ways that evolutionary theory could influence our understanding of morality, including the dangerous views of naive moral Darwinism as well as some more reasonable approaches; yet there are other senses of the phrase “morality evolves” that we haven’t considered. One of these is actually quite troubling; were it true, the entire project of morality would be in jeopardy. I’ll call it “evolutionary skepticism”; it says that yes, morality has evolved—and this is reason to doubt that morality is true. Richard Joyce, author of The Evolution of Morality, is of such a persuasion, and he makes a quite compelling case. Joyce’s central point is that evolution selects for fitness, not accuracy; we had reason to evolve in ways that would maximize the survival of our genes, not reasons to evolve in ways that would maximize the accuracy of our moral claims.

This is of course absolutely correct, and it is troubling precisely because we can all see that the two are not necessarily the same thing. It’s easy to imagine many ways that beliefs could evolve that had nothing to do with the accuracy of those beliefs.

But note that word: necessarily. Accuracy and fitness aren’t necessarily aligned—but it could still be that they are, in fact, aligned rather well. Yes, we can imagine ways a brain could evolve that would benefit its fitness without improving its accuracy; but is that actually what happened to our ancestors? Do we live on instinct, merely playing out by rote the lifestyles of our forebears, thinking and living the same way we have for hundreds of millennia?

Clearly not! Behold, you are reading a blog post! It was written on a laptop computer! While these facts may seem perfectly banal to you, they represent an unprecedented level of behavioral novelty, one achieved only by one animal species among millions, and even then only very recently. Human beings are incredibly flexible, incredibly creative, and incredibly intelligent. Yes, we evolved to be this way, of course we did; but so what? We are this way. We are capable of learning new things about the world, gaining in a few short centuries knowledge our forebears could never have imagined. Evolution does not always make animals into powerful epistemic engines—indeed, 99.99999% of the time it does not—but once in a while it does, and we are the result.

Natural selection is quite frugal; it tends to evolve things the easiest way. The way the world is laid out, it seems to be that the easiest way to evolve a brain that survives really well in a wide variety of ecological and social environments is to evolve a brain that is capable of learning to expand its own knowledge and understanding. After all, no other organism has ever been or is ever likely to be as evolutionarily fit as we are; we span the globe, cover a wide variety of ecological niches, and number in the billions and counting. We’ve even expanded beyond the planet Earth, something no other organism could even contemplate. We are successful because we are smart; is it really so hard to believe that we are smart because it made our ancestors successful?

Indeed, it must be this way, or we wouldn’t be able to make sense of the fact that our human brains, evolved for the African savannah a million years ago with minor tweaks since then, are capable of figuring out chess, calculus, writing, quantum mechanics, special relativity, television broadcasting, space travel, and for that matter Darwinian evolution and meta-ethics. None of these things could possibly have been adaptive in our ancestral ecology. They must be spandrels, fitness-neutral side-effects of evolved traits. And just like the original pendentives of San Marco that motivated Gould’s metaphor, what glorious spandrels they are!

Our genes made us better at gathering information and processing that information into correct beliefs, and calculus and quantum mechanics came along for the ride. Our greatest adaptation is to be adaptable; our niche is to need no niche, for we can carve our own.

This is not to abandon evolutionary psychology, for evolution does have a great deal to tell us about psychology. We do have instincts; preprocessing systems built into our sensory organs, innate emotions that motivate us to action, evolved heuristics that we use to respond quickly under pressure. Steven Pinker argues convincingly that language is an evolved instinct—and where would we be without language? Our instincts are essential for not only our survival, but indeed for our rationality.

Staring at a blinking cursor on the blank white page of a word processor, imagining the infinity of texts that could be written upon that page, you could be forgiven for thinking that you were looking at a blank slate. Yet in fact you are staring at the pinnacle of high technology, an extremely complex interlocking system of hardware and software with dozens of components and billions of subcomponents, all precision-engineered for maximum efficiency. The possibilities are endless not because the system is simple and impinged upon by its environment, but because it is complex, and capable of engaging with that environment in order to convert subtle differences in input into vast differences in output. If this is true of a word processor, how much more true it must be of an organism capable of designing and using word processors! It is the very instincts that seem to limit our rationality which have made that rationality possible in the first place. Witness the eternal wisdom of Immanuel Kant:

Misled by such a proof of the power of reason, the demand for the extension of knowledge recognises no limits. The light dove, cleaving the air in her free flight, and feeling its resistance, might imagine that its flight would be still easier in empty space.

The analogy is even stronger than he knew—for brains, like wings, are an evolutionary adaptation! (What would Kant have made of Darwin?) But because our instincts are so powerful, they are self-correcting; they allow us to do science.

Richard Joyce agrees that we are right to think our evolved brains are reasonably reliable when it comes to scientific facts. He has to, otherwise his whole argument would be incoherent. Joyce agrees that we evolved to think 2+2=4 precisely because 2+2=4, and we evolved to think space is 3-dimensional precisely because space is 3-dimensional. Indeed, he must agree that we evolved to think that we evolved because we evolved! Yet, for some reason Joyce thinks that this same line of reasoning doesn’t apply to ethics.

But why wouldn’t it? In fact, I think we have more reason to trust our evolved capacities in ethics than we do in other domains of science, because the subject matter of morality—human behavior and social dynamics—is something that we have been familiar with even all the way back to the savannah. If we evolved to think that theft and murder are bad, why would that happen? I submit it would happen precisely because theft and murder are Pareto-suboptimal unsustainable strategies—that is, precisely because theft and murder are bad. (Don’t worry if you don’t know what I mean by “Pareto-suboptimal” and “unsustainable strategy”; I’ll get to those in later posts.) Once you realize that “bad” is a concept that can ultimately be unpacked to naturalistic facts, all reason to think it is inaccessible to natural selection drops away; natural selection could well have chosen brains that didn’t like murder precisely because murder is bad. Indeed, because morality is ultimately scientific, part of how natural selection could evolve us to be more moral is by evolving us to be more scientific. We are more scientific than apes, and vastly more scientific than cockroaches; we are, indeed, the most scientific animal that has ever lived on Earth.

I do think that our evolved moral instincts are to some degree mistaken or incomplete; but I can make sense of this, in the same way I make sense of the fact that other evolved instincts don’t quite fit what we have discovered in other sciences. For instance, humans have an innate concept of linear momentum that doesn’t quite fit with what we’ve discovered in physics. We tend to presume that objects have an inherent tendency toward rest, though in fact they do not—this is because in our natural environment, friction makes most objects act as if they had such a tendency. Roll a rock along the ground, and it will eventually stop. Run a few miles, and eventually you’ll have to stop too. Most things in our everyday life really do behave as if they had an inherent tendency toward rest. It’s only once we realized that friction is itself a force, not present everywhere, that we came to see that linear momentum is conserved in the absence of external forces. (Throw a rock in space, and it will not ever stop. Nor will you, by Newton’s Third Law.) This casts no doubt upon our intuitions about rocks rolled along the ground, which do indeed behave exactly as our intuition predicts.

Similarly, our intuition that animals don’t deserve rights could well be an evolutionary consequence of the fact that we sometimes had to eat animals in order to survive, and so would do better not thinking about it too much; but now that we don’t need to do this anymore, we can reflect upon the deeper issues involved in eating meat. This is no reason to doubt our intuitions that parents should care for their children and murder is bad.