The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: This is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be on the time horizon the most optimistic investors have assumed. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.

The headline figure here is that based on current projections, US corporations will have spent $560 billion on capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payback period would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry that is dependent upon cutting-edge technology that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.
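For the arithmetic-minded, here’s that payback figure as a quick back-of-the-envelope sketch, using only the numbers cited above (treating the anticipated revenue as a flat annual figure, which is of course a simplification):

```python
# Naive payback period for the AI buildout, using the projected
# figures cited above: $560 billion of capex, $35 billion of revenue.
capex = 560e9    # projected capital expenditure
revenue = 35e9   # anticipated annual revenue

payback_years = capex / revenue
print(f"Naive payback period: {payback_years:.0f} years")  # -> 16 years
```

And that’s before accounting for operating costs, or for the hardware depreciating long before year 16.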

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change on our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, but $500 billion of that is just AI investment. That’s over 1.6%, and last quarter our annualized GDP growth rate was 3.3%—so roughly half of our GDP growth was just due to building more data centers that probably won’t even be profitable.
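The same arithmetic, spelled out (counting the whole investment flow against the growth rate, as above, is a loose approximation, but it conveys the scale):

```python
gdp = 30e12           # US GDP, roughly $30 trillion
ai_capex = 500e9      # annual AI investment, roughly $500 billion
growth_rate = 0.033   # last quarter's annualized GDP growth rate

share = ai_capex / gdp
print(f"AI investment as a share of GDP: {share:.2%}")              # -> 1.67%
print(f"...which is {share / growth_rate:.0%} of the growth rate")  # -> 51%
```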

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

Conflict without shared reality

Aug 17 JDN 2460905

Donald Trump has federalized the police in Washington D.C. and deployed the National Guard. He claims he is doing this in response to a public safety emergency and crime that is “out of control”.

Crime rates in Washington, D.C. are declining and overall at their lowest level in 30 years. Its violent crime rate has not been this low since the 1960s.

By any objective standard, there is no emergency here. Crime in D.C. is not by any means out of control.

Indeed, across the United States, homicide rates are as low as they have been in 60 years.

But we do not live in a world where politics is based on objective truth.

We live in a world where the public perception of reality itself is shaped by the political narrative.

One of the first things that authoritarians do to control these narratives is try to make their followers distrust objective sources. I watch in disgust as not simply the Babylon Bee (which is a right-wing satire site that tries really hard to be funny but never quite manages it) but even the Atlantic (a mainstream news outlet generally considered credible) feeds—in multiple articles—into this dangerous lie that crime is increasing and the official statistics are somehow misleading us about that.

Of course the Atlantic’s take is much more nuanced; but quite frankly, now is not the time for nuance. A fascist is trying to take over our government, and he needs to be resisted at every turn by every means possible. You need to be calling him out on every single lie he tells—yes, every single one, I know there are a lot of them, and that’s kind of the point—rather than trying to find alternative framings on which maybe part of what he said could somehow be construed as reasonable from a certain point of view. Every time you make Trump sound more reasonable than he is—and mainstream news outlets have done this literally hundreds of times—you are pushing America closer to fascism.

I really don’t know what to do here.

It is impossible to resolve conflicts when they are not based on shared reality.

No policy can solve a crime wave that doesn’t exist. No trade agreement can stop unfair trading practices that aren’t happening. Nothing can stop vaccines from causing autism that they already don’t cause. There is no way to fix problems when those problems are completely imaginary.

I used to think that political conflict was about different values which had to be balanced against one another: Liberty versus security, efficiency versus equality, justice versus mercy. I thought that we all agreed on the basic facts and even most of the values, and were just disagreeing about how to weigh certain values over others.

Maybe I was simply naive; maybe it’s never been like that. But it certainly isn’t right now. We aren’t disagreeing about what should be done; we are disagreeing about what is happening in front of our eyes. We don’t simply have different priorities or even different values; it’s like we are living in different worlds.

I have read, e.g. by Jonathan Haidt, that conservatives largely understand what liberals want, but liberals don’t really understand what conservatives want. (I would like to take one of the tests they use in these experiments, to see how I actually do; but I’ve never been able to find one.)

Haidt’s particular argument seems to be that liberals don’t “understand” the “moral dimensions” of loyalty, authority, and sanctity, because we only “understand” harm and fairness as the basis of morality. But just because someone says something is morally relevant, that doesn’t mean it is morally relevant! And indeed, based on more or less the entirety of ethical philosophy, I can say that harm and fairness are morality, and the others simply aren’t. They are distortions of morality, they are inherently evil, and we are right to oppose them at every turn. Loyalty, authority, and sanctity are what fed Nazi Germany and the Spanish Inquisition.

This claim that liberals don’t understand conservatives has always seemed very odd to me: I feel like I have a pretty clear idea what conservatives want, it’s just that what they want is terrible: Kick out the immigrants, take money from the poor and give it to the rich, and put rich straight Christian White men back in charge of everything. (I mean, really, if that’s not what they want, why do they keep voting for people who do it? Revealed preferences, people!)

Or, more sympathetically: They want to go back to a nostalgia-tinted vision of the 1950s and 1960s in which it felt like things were going well for our country—because they were blissfully ignorant of all the violence and injustice in the world. No, thank you, Black people and queer people do not want to go back to how we were treated in the 1950s—when segregation was legal and Alan Turing was chemically castrated. (And they also don’t seem to grasp that among the things that did make some things go relatively well in that period were unions, antitrust law and progressive taxes, which conservatives now fight against at every turn.)

But I think maybe part of what’s actually happening here is that a lot of conservatives actually “want” things that literally don’t make sense, because they rest upon assumptions about the world that simply aren’t true.

They want to end “out of control” crime that is the lowest it’s been in decades.

They want to stop schools from teaching things that they already aren’t teaching.

They want the immigrants to stop bringing drugs and crime that they aren’t bringing.

They want LGBT people to stop converting their children, which we already don’t and couldn’t. (And then they want to do their own conversions in the other direction—which also don’t work, but cause tremendous harm.)

They want liberal professors to stop indoctrinating their students in ways we already aren’t and can’t. (If we could indoctrinate our students, don’t you think we’d at least make them read the syllabus?)

They want to cut government spending by eliminating “waste” and “fraud” that are trivial amounts, without cutting the things that are actually expensive, like Social Security, Medicare, and the military. They think we can balance the budget without cutting these things or raising taxes—which is just literally mathematically impossible.

They want to close off trade to bring back jobs that were sent offshore—but those jobs weren’t sent offshore, they were replaced by robots. (US manufacturing output is near its highest ever, even though manufacturing employment is half what it once was.)

And meanwhile, there’s a bunch of real problems that aren’t getting addressed: Soaring inequality, a dysfunctional healthcare system, climate change, the economic upheaval of AI—and they either don’t care about those, aren’t paying attention to them, or don’t even believe they exist.

It feels a bit like this:

You walk into a room and someone points a gun at you, shouting “Drop the weapon!” but you’re not carrying a weapon. And you show your hands, and try to explain that you don’t have a weapon, but they just keep shouting “Drop the weapon!” over and over again. Someone else has already convinced them that you have a weapon, and they expect you to drop that weapon, and nothing you say can change their mind about this.

What exactly should you do in that situation?

How do you avoid getting shot?

Do you drop something else and say it’s the weapon (make some kind of minor concession that looks vaguely like what they asked for)? Do you try to convince them that you have a right to the weapon (accept their false premise but try to negotiate around it)? Do you just run away (leave the country?)? Do you double down and try even harder to convince them that you really, truly, have no weapon?

I’m not saying that everyone on the left has a completely accurate picture of reality; there are clearly a lot of misconceptions on this side of the aisle as well. But at least among the mainstream center left, there seems to be a respect for objective statistics and a generally accurate perception of how the world works—the “reality-based community”. Sometimes liberals make mistakes, have bad ideas, or even tell lies; but I don’t hear a lot of liberals trying to fix problems that don’t exist or asking for the government budget to be changed in ways that violate basic arithmetic.

I really don’t know what to do here, though.

How do you change people’s minds when they won’t even agree on the basic facts?

And just like that, we’re at war.

Jun 29 JDN 2460856

Israel attacked Iran. Iran counter-attacked. Then Israel requested US support.

President Trump waffled about giving that support, then, late Jun 21 (US time—early June 22 Iran time), without any authorization from anyone else, he ordered an attack, using B-2 stealth bombers to drop GBU-57 MOP bombs on Iranian nuclear enrichment facilities.

So apparently we’re at war now, because Donald Trump decided we would be.

We could talk about the strategic question of whether that attack was a good idea. We could talk about the moral question of whether that attack was justified.

But I have in mind a different question: Why was he allowed to do that?

In theory, the United States Constitution grants Congress the authority to declare war. The President is the Commander-in-Chief of our military forces, but only once war has actually been declared. What’s supposed to happen is that if a need for military action arises, Congress makes a declaration of war, and then the President orders the military into action.

Yet in fact we haven’t actually done that since 1942. Despite combat in Korea, Vietnam, Afghanistan, Iraq, Bosnia, Libya, Kosovo, and more, we have not officially declared war since World War II. In some of these wars, there was a UN resolution and/or Congressional approval, which is sort of like getting a formal declaration of war. But in others, there was no such thing; the President just ordered our troops to fight, and they fought.

This is not what the Constitution says, nor is it what the War Powers Act says. The President isn’t supposed to be able to do this. And yet Presidents have done it over a dozen times.

How did this happen? Why have we, as a society, become willing to accept this kind of unilateral authority on such vitally important matters?

Part of the problem seems to be that Congress is (somewhat correctly) perceived as slow and dysfunctional. But that doesn’t seem like an adequate explanation, because surely if we were actually under imminent threat, even a dysfunctional Congress could find it in itself to approve a declaration of war. (And if we’re not under imminent threat, then it isn’t so urgent!)

I think the more important reason may be that Congress consistently fails to hold the President accountable for overstepping his authority. It doesn’t even seem to matter which party is in which branch; they just never actually seem to remove a President from office for overstepping his authority. (Indeed, while three Presidents have been impeached—Trump twice—not one has ever actually been removed from office for any reason.) The checks and balances that are supposed to rein in the President simply are not ever actually deployed.

As a result, the power of the Executive Branch has gradually expanded over time, as Presidents test the waters by asserting more authority—and then are literally never punished for doing so.

I suppose we have Congress to blame for this: They could be asserting their authority, and aren’t doing so. But voters bear some share of the blame as well: We could vote out representatives who fail to rein in the President, and we haven’t been doing that.

Surely it would also help to elect better Presidents (and almost literally anyone would have been better than Donald Trump), but part of the point of having a Constitution is that the system is supposed to be able to defend against occasionally putting someone awful in charge. But as we’ve seen, in practice those defenses seem to fall apart quite easily.

So now we live in a world where a maniac can simply decide to drop a bunch of bombs wherever he wants and nobody will stop him.

Toward a positive vision of the future

Jun 22 JDN 2460849

Things look pretty bleak right now. Wildfires rage across Canada, polluting the air across North America. Russia is still at war with Ukraine, and Israel seems to be trying to start a war with Iran. ICE continues sending agents without badges to kidnap people in unmarked vehicles and sending them to undisclosed locations. Climate change is getting worse, and US policy is pivoting from subsidizing renewables back to subsidizing fossil fuels. And Trump, now revealed to be a literal fascist, is still President.

But things can get better.

I can’t guarantee that they will, nor can I say when; but there is still hope that a better future is possible.

It has been very difficult to assemble a strong coalition against the increasingly extreme far-right in this country (epitomized by Trump). This seems odd, when most Americans hold relatively centrist views. Yes, more Americans identify as conservative than as liberal, but Trump isn’t a conservative; he’s a radical far-right fascist. Trump recently gave a speech endorsing ethnic cleansing, for goodness’ sake! I’m liberal, but I’d definitely vote for a conservative like Mitt Romney rather than a Stalinist! So why are “conservatives” voting for a fascist?

But setting aside the question of why people voted for Trump, we still have the question of why the left has not been able to assemble a strong coalition against him.

I think part of the problem is that the left really has two coalitions within it: The center left, who were relatively happy with the status quo before Trump and want to go back to that; and the far left, who were utterly unhappy with that status quo and want radical change. So while we all agree that Trump is awful, we don’t really agree on what he’s supposed to be replaced with.

It’s of course possible to be in between, and indeed I would say that I am. While clearly things were better under Obama and Biden than they have been under Trump, there were still a lot of major problems in this country that should have been priorities for national policy but weren’t:

  1. Above all, climate change—the Democrats at least try to do something against it, but not nearly enough. Our carbon emissions are declining, but it’s very unclear if we’ll actually hit our targets. The way we have been going, we’re in for a lot more hurricanes and wildfires and droughts.
  2. Housing affordability is still an absolute crisis; half of renters spend more than the targeted 30% of their income on housing, and a fourth spend more than 50%. Homelessness is now at a record high.
  3. Healthcare is still far too expensive in this country; we continue to spend far more than other First World countries without getting meaningfully better care.
  4. While rights and protections for LGB people have substantially improved in the last 30 years, rights and protections for trans people continue to lag behind.
  5. Racial segregation in housing remains the de facto norm, even though it is de jure illegal.
  6. Livestock remain exempted from the Animal Welfare Act and in 2002 laboratory rats and mice were excluded as well, meaning that cruel or negligent treatment which would be illegal for cats and dogs is still allowed on livestock and lab rats.
  7. Income and wealth inequality in this country remains staggeringly high, and the super-rich continue to gain wealth at a terrifying rate.
  8. Our voting system is terrible—literally the worst possible system that can technically still be considered democracy.

This list is by no means exhaustive, but these are the issues that seem most salient to me.

2 and 3 both clearly showed up in my Index of Necessary Expenditure; these costs were the primary reason why raising a family of 4 was unaffordable on a median household income.

So it isn’t right to say that I was completely happy with how things were going before. But I still think of myself as center left, because I don’t believe we need to tear everything down and start over.

I have relatively simple recommendations that would go a long way toward solving all 8 of these problems:

Climate change could be greatly mitigated if we’d just tax carbon already, or implement a cap-and-trade system like California’s nationwide. If that’s too politically unpalatable, subsidize nuclear power, fusion research, and renewables instead. That’s way worse from a budget perspective, but for some reason Americans are just fanatically opposed to higher gas prices.

Housing affordability is politically thorny, but economically quite simple: Build more housing. Whatever we have to do to make that happen, we should do it. Maybe this involves changes to zoning or other regulations. Maybe it involves subsidies to developers. Maybe it involves deploying eminent domain to build public housing. Maybe it involves using government funds to build housing and then offering it for sale on the market. But whatever we do, we need more housing.

Healthcare costs are a trickier one; Obamacare helped, but wasn’t enough. I think what I would like to see next is an option to buy into Medicare; before you are old enough to get it for free, you can pay a premium to be covered by it. Because Medicare is much more efficient than private insurance, you could pay a lower premium and get better coverage, so a lot of people would likely switch (which is of course exactly why insurance companies would fight the policy at every turn). Even putting everyone on Medicare might not be enough; to really bring costs down, we may need to seriously address the fact that US doctors, particularly specialists, are just radically higher-paid than any other doctors in the world. Is an American doctor who gets $269,000 per year really 88% better than a French doctor who gets $143,000?
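That “88%” comes straight from the two salary figures, as a quick sanity check shows:

```python
us_pay = 269_000      # average annual pay for a US doctor, cited above
french_pay = 143_000  # average annual pay for a French doctor, cited above

premium = us_pay / french_pay - 1
print(f"US premium over French pay: {premium:.0%}")  # -> 88%
```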

The policies we need for LGBT rights are mostly no-brainers.

Okay, I can admit to some reasonable nuance when it comes to trans women in pro sports (the statistical advantages they have over cis women are not as clear-cut as many people think, but they do seem to exist; average athletic performance for trans women seems to be somewhere in between the average for cis men and the average for cis women), but that’s really not a very important issue. Like, seriously, why do we care so much about pro sports? Either let people play sports according to their self-identified gender, or make the two options “cis women” and “other” and let trans people play the latter. And you can do the same thing with school sports, or you can eliminate them entirely because they are a stupid waste of academic resources; but either way this should not be considered a top priority policy question. (If parents want their kids to play sports, they can form their own leagues; the school shouldn’t be paying for it. Winning games is not one of the goals of an academic institution. If you want kids to get more exercise, give them more recess and reform the physical education system so it isn’t so miserable for the kids who need it most.)

But there is absolutely no reason not to let people use whatever pronouns and bathrooms they want; indeed, there doesn’t really seem to be a compelling reason to gender-segregate bathrooms in the first place, and removing that segregation would most benefit women, who often have to wait much longer in line for the bathroom. (The argument that this somehow protects women never made sense to me; if a man wants to assault women in the bathroom, what’s to stop him from just going into the women’s bathroom? It’s not like there’s a magic field that prevents men from entering. He’s already planning on committing a crime, so it doesn’t seem like he’s very liable to held back by social norms. It’s worthwhile to try to find ways to prevent sexual assault, but segregating bathrooms does little or nothing toward that goal—and indeed, trans-inclusive bathrooms do not statistically correlate with higher rates of sexual assault.) But okay, fine, if you insist on having the segregation, at least require gender-neutral bathrooms as well. This is really not that difficult; it’s pretty clearly bigotry driving this, not serious policy concerns.

Not exempting any vertebrate animals from anti-cruelty legislation is an incredibly simple thing to do, obviously morally better, and the only reason we’re not doing it is that it would hurt agribusinesses and make meat more expensive. There is literally zero question what the morally right thing to do here is; the question is only how to get people to actually do that morally right thing.

Finally, how do we fix income inequality? Some people—including some economists—treat this as a very complicated, difficult question, but I don’t think it is. I think the really simple, obvious answer is actually the correct one: Tax rich people more, and use the proceeds to help poor people. We should be taxing the rich a lot more; I want something like the revenue-maximizing rate, estimated at about 70%. (And an even higher rate like the 90% we had in the 1950s is not out of the question.) These funds could either provide services like education and healthcare, or they could simply be direct cash transfers. But one way or another, the simplest, most effective way to reduce inequality is to tax the rich and help the poor. A lot of economists fear that this would hurt the overall economy, but particularly if these rates are really targeted at the super-rich (the top 0.01%), I don’t see how they could, because all those billions of dollars are very clearly monopoly rents rather than genuine productivity. If anything, making it harder to amass monopoly rents should make the economy more efficient. And taking say 90% of the roughly 10% return just the top 400 billionaires make on their staggering wealth would give us an additional $480 billion per year.
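To see where that $480 billion comes from, here’s the arithmetic sketched out; the combined-wealth figure is my assumption, back-solved to be consistent with the numbers above (and roughly in line with published estimates of the Forbes 400 total):

```python
top400_wealth = 5.33e12  # combined wealth of the top 400 billionaires (assumption)
annual_return = 0.10     # "roughly 10% return" on that wealth
tax_rate = 0.90          # taxing 90% of that return

tax_revenue = top400_wealth * annual_return * tax_rate
print(f"Annual revenue: ${tax_revenue / 1e9:.0f} billion")  # -> $480 billion
```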

Fixing our voting system is also quite straightforward. Ranked-choice voting would be a huge improvement, and has already been implemented successfully in several states. Even better would be range voting, but so far very few places have been bold enough to actually try it. But even ranked-choice voting would remove most of the terrible incentives that plurality voting creates, and likely allow us to move beyond the two-party system into a much more representative multiparty system.
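For anyone unfamiliar with how ranked-choice (instant-runoff) voting actually works, here is a minimal sketch with a toy election; this is an illustration of the standard elimination procedure, not any particular state’s statutory rules (which add tie-breaking and other details):

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff (ranked-choice) winner.

    Each ballot is a list of candidates in order of preference.
    Repeatedly eliminate the candidate with the fewest first-choice
    votes until someone holds a majority of the remaining ballots.
    (Ties for elimination are broken arbitrarily in this sketch.)
    """
    candidates = {c for b in ballots for c in b}
    while True:
        # Count each ballot's highest-ranked still-standing candidate.
        tally = Counter(
            next(c for c in b if c in candidates)
            for b in ballots
            if any(c in candidates for c in b)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        candidates.remove(min(tally, key=tally.get))

# A toy election: C is eliminated first, those ballots transfer to B,
# and B then beats A in the final runoff.
ballots = (
    [["A", "B"]] * 4 +
    [["B", "A"]] * 3 +
    [["C", "B"]] * 2
)
print(instant_runoff(ballots))  # -> B
```

Note that under plurality voting, A would have won this election outright with 4 first-choice votes, even though a majority of voters preferred B to A; that’s exactly the kind of perverse outcome ranked-choice voting eliminates.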

None of this requires overthrowing the entire system or dismantling capitalism.

That is, we can have a positive vision of the future that doesn’t require revolution or radical change.

Unfortunately, there’s still a very good chance we’ll do none of it.

What does nonviolence mean?

Jun 15 JDN 2460842

As I write this, the LA protests and the crackdown upon them have continued since Friday and it is now Wednesday. In a radical and authoritarian move by Trump, Marines have been deployed (with shockingly incompetent logistics unbefitting the usually highly-efficient US military); but so far they have done very little. Reuters has been posting live updates on new developments.

The LAPD has deployed a variety of less-lethal weapons to disperse the protests, including rubber bullets, tear gas, and pepper balls; but so far they have not used lethal force. Protesters have been arrested, some for specific crimes—and others simply for violating curfew.

More recently, the protests have spread to other cities, including New York, Atlanta, Austin, Chicago, San Francisco, and Philadelphia. By the time this post goes live, there will probably be even more cities involved, and there may also be more escalation.

But for now, at least, the protests have been largely nonviolent.

And I thought it would be worthwhile to make it very clear what I mean by that, and why it is important.

I keep seeing a lot of leftist people on social media not merely accepting the narrative that these protests are violent, but actively encouraging that violence; and some of them have taken to arrogantly accusing anyone who supports nonviolent protests over violent ones of either being a naive idiot or acting in bad faith. (The most baffling part of this is that they seem to be saying that Martin Luther King and Mahatma Gandhi were naive idiots or were acting in bad faith? Is that what they meant to say?)

First of all, let me be absolutely clear that nonviolence does not mean comfortable or polite or convenient.

Anyone objecting to blocking traffic, strikes, or civil disobedience because they cause disorder and inconvenience genuinely does not understand the purpose of protest (or is a naive idiot or acting in bad faith). Effective protests are disruptive and controversial. They cause disorder.

Nonviolence does not mean always obeying the law.

Sometimes the law is itself unjust, and must be actively disobeyed. Most of the Holocaust was legal, after all.

Other times, it is necessary to break some laws (such as property laws, curfews, and laws against vandalism) in the service of higher goals.

I wouldn’t say that a law against vandalism is inherently unjust; but I would say that spray-painting walls and vehicles in the service of protecting human rights is absolutely justified, and even sometimes it’s necessary to break some windows or set some fires.

Nonviolence does not mean that nobody tries to call it violence.

Most governments are well aware that most of their citizens are much more willing to support a nonviolent movement than a violent one—more on this later—and thus will do whatever they can to characterize nonviolent movements as violent. They have two chief strategies for doing so:

  1. Characterize nonviolent but illegal acts, such as vandalism and destruction of property, as violence
  2. Actively try to instigate violence by treating nonviolent protesters as if they were violent, and then characterizing their attempts at self-defense as violence

As a great example of the latter, a man in Phoenix was arrested for assault because he kicked a tear gas canister back at police. But kicking back a canister that was shot at you is the most paradigmatic example of self-defense I could possibly imagine. If the system weren’t so heavily biased in favor of the police, a judge would order his release immediately.

Nonviolence does not mean that no one at the protests gets violent.

Any large group of people will contain outliers. Gather a protest of thousands of people, and surely some fraction of them will be violent radicals, or just psychopaths looking for an excuse to hurt someone. A nonviolent protest is one in which most people are nonviolent, and in which anyone who does get violent is shunned by the organizers of the movement.

Nonviolence doesn’t mean that violence will never be used against you.

On the contrary, the more authoritarian the regime—and thus the more justified your protest—the more likely it is that violent force will be used to suppress your nonviolent protests.

In some places it will be limited to less-lethal means (as it has so far in the current protests); but in others, even in ostensibly-democratic countries, it can result in lethal force being deployed against innocent people (as it did at Kent State in 1970).

When this happens, are you supposed to just stand there and get shot?

Honestly? Yes. I know that requires tremendous courage and self-sacrifice, but yes.

I’m not going to fault anyone for running or hiding or even trying to fight back (I’d be more of the “run” persuasion myself), but the most heroic action you could possibly take in that situation is in fact to stand there and get shot. Becoming a martyr is a terrible sacrifice, and not one I’m sure I could ever make; but it really, really works. (Seriously, whole religions have been based on this!)

And when you get shot, for the love of all that is good in the world, make sure someone gets it on video.

The best thing you can do for your movement is to show the oppressors for what they truly are. If they are willing to shoot unarmed innocent people, and the world finds out about that, the world will turn against them. The more peaceful and nonviolent you can appear at the moment they shoot you, the more compelling that video will be when it is all over the news tomorrow.

A shockingly large number of social movements have pivoted sharply in public opinion after a widely-publicized martyrdom incident. If you show up peacefully to speak your minds and they shoot you, that is nonviolent protest working. That is your protest being effective.

I never said that nonviolent protest was easy or safe.

What is the core of nonviolence?

It’s really very simple. So simple, honestly, that I don’t understand why it’s hard to get across to people:

Nonviolence means you don’t initiate bodily harm against other human beings.

It does not necessarily preclude self-defense, so long as that self-defense is reasonable and proportionate; and it certainly does not in any way preclude breaking laws, damaging property, or disrupting civil order.


Nonviolence means you never throw the first punch.

Nonviolence is not simply a moral position, but a strategic one.

Some of the people you would be harming absolutely deserve it. I don’t believe in ACAB, but I do believe in SCAB, and nearly 30% of police officers are domestic abusers, who absolutely would deserve a good punch to the face. And this is all the more true of ICE officers, who aren’t just regular bastards; they are bastards whose core job is now enforcing the human rights violations of President Donald Trump. Kidnapping people with their unmarked uniforms and unmarked vehicles, ICE is basically the Gestapo.

But it’s still strategically very unwise for us to deploy violence. Why? Two reasons:

  1. Using violence is a sure-fire way to turn most Americans against our cause.
  2. We would probably lose.

Nonviolent protest is nearly twice as effective as violent insurrection. (If you take nothing else from this post, please take that.)

And the reason that nonviolent protest is so effective is that it changes minds.

Violence doesn’t do that; in fact, it tends to make people rally against you. Once you start killing people, even people who were on your side may start to oppose you—let alone anyone who was previously on the fence.

A successful violent revolution results in you having to build a government and enforce your own new laws against a population that largely still disagrees with you—and if you’re a revolution made of ACAB people, that sounds spectacularly difficult!

A successful nonviolent protest movement results in a country that agrees with you—and it’s extremely hard for even a very authoritarian regime to hang onto power when most of the people oppose it.

By contrast, the success rate of violent insurrections is not very high. Why?

Because they have all the guns, you idiot.

States try to maintain a monopoly on violence in their territory. They are usually pretty effective at doing so. Thus attacking a state when you are not a state puts you at a tremendous disadvantage.

Seriously: we are talking about the United States of America right now, the most powerful military hegemon the world has ever seen.

Maybe the people advocating violence don’t really understand this, but the US has not lost a major battle since 1945. Oh, yes, they’ve “lost wars”, but what that really means is that public opinion has swayed too far against the war for them to maintain morale (Vietnam) or their goals for state-building were so over-ambitious that they were basically impossible for anyone to achieve (Iraq and Afghanistan). If you tally up the actual number of soldiers killed, US troops always kill more than they lose, and typically by a very wide margin.


And even with the battles the US lost in WW1 and WW2, they still very much won the actual wars. So genuinely defeating the United States in open military conflict is not something that has happened since… I’m pretty sure the War of 1812.

Basically, advocating for a violent response to Trump is saying that you intend to do something that literally no one in the world—including major world military powers—has been able to accomplish in 200 years. The last time someone got close, the US nuked them.

If the protests in LA were genuinely the insurrectionists that Trump has been trying to characterize them as, those Marines would not only have been deployed, they would have started shooting. And I don’t know if you realize this, but US Marines are really good at shooting. It’s kind of their thing. Instead of skirmishes with rubber bullets and tear gas, we would have an absolute bloodbath. It would probably end up looking like the Tet Offensive, a battle where “unprepared” US forces “lost” because they lost 6,000 soldiers and “only” killed 45,000 in return. (The US military is so hegemonic that a kill ratio of more than 7 to 1 is considered a “loss” in the media and public opinion.)

Granted, winning a civil war is different from winning a conventional war; even if a civil war broke out, it’s unlikely that nukes would be used on American soil, for instance. But you’re still talking about a battle so uphill it’s more like trying to besiege Edinburgh Castle.

Our best hope in such a scenario, in fact, would probably be to get blue-state governments to assert control over US military forces in their own jurisdiction—which means that antagonizing Gavin Newsom, as I’ve been seeing quite a few leftists doing lately, seems like a really bad idea.

I’m not saying that winning a civil war would be completely impossible. Since we might be able to get blue-state governors to take control of forces in their own states and we would probably get support from Canada, France, and the United Kingdom, it wouldn’t be completely hopeless. But it would be extremely costly, millions of people would die, and victory would by no means be assured despite the overwhelming righteousness of our cause.

How about, for now at least, we stick to the methods that historically have proven twice as effective?

The CBO report on Trump’s terrible new budget

Jun 8 JDN 2460835

And now back to our regularly scheduled programming. We’re back to talking about economics, which in our current environment pretty much always means bad news. The budget the House passed is pretty much the same terrible one Trump proposed.

The Congressional Budget Office (CBO), one of those bureaucratic agencies that most people barely even realize exists, but is actually extremely useful, spectacularly competent, and indeed one of the most important and efficient agencies in the world, has released its official report on the Trump budget that recently passed the House. (Other such agencies include the Bureau of Labor Statistics and the Bureau of Economic Analysis. US economic statistics are among the best in the world—some refer to them as the “gold standard”, but I refuse to insult them in that way.)

The whole thing is pretty long, but you can get a lot of the highlights from the summary tables.

The tables are broken down by the House committee responsible for each set of provisions; here are the effects on the federal budget deficit that the CBO predicts for the next 5 and 10 years. For these numbers, positive means more deficit (bad), negative means less deficit (good).

Committee | 5 years | 10 years
Agriculture | -88,304 | -238,238
Armed Services | 124,602 | 143,992
Education and Workforce | -253,295 | -349,142
Energy and Commerce | -247,074 | -995,062
Financial Services | -373 | -5,155
Homeland Security | 27,874 | 67,147
Judiciary | 26,989 | 6,910
Natural Resources | -4,789 | -20,158
Oversight and Government Reform | -17,449 | -50,951
Transportation and Infrastructure | -361 | -36,551
Ways and Means | 2,199,403 | 3,767,402

These are in units of millions of dollars.

Almost all the revenue comes from the Ways and Means committee, because that’s the committee that sets tax rates. (If you hate your taxes, don’t hate the IRS; hate the Ways and Means Committee.) So for all the other committees, we can basically read the effect on the deficit as a change in spending.
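To make the totals explicit, here is a quick sum of the 10-year column (values transcribed from the table above, in millions of dollars):

```python
# Net 10-year deficit effect from the CBO summary table
# (positive = more deficit; figures in millions of dollars).
effects_10yr = {
    "Agriculture": -238_238,
    "Armed Services": 143_992,
    "Education and Workforce": -349_142,
    "Energy and Commerce": -995_062,
    "Financial Services": -5_155,
    "Homeland Security": 67_147,
    "Judiciary": 6_910,
    "Natural Resources": -20_158,
    "Oversight and Government Reform": -50_951,
    "Transportation and Infrastructure": -36_551,
    "Ways and Means": 3_767_402,
}

# Total savings = sum of all the deficit-reducing (negative) entries.
savings = -sum(v for v in effects_10yr.values() if v < 0)
net = sum(effects_10yr.values())

print(f"Total 10-year savings from other committees: ${savings:,}M")
print(f"Ways and Means alone: ${effects_10yr['Ways and Means']:,}M")
print(f"Net 10-year effect: +${net:,}M (~${net / 1_000_000:.1f} trillion)")
```

The Ways and Means increase ($3.77 trillion) more than doubles all the other committees’ savings combined (about $1.70 trillion), leaving a net 10-year deficit increase of roughly $2.3 trillion.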

If this budget makes it through the Senate, Trump will almost certainly sign it into law. If that happens:

We’ll be cutting $238 billion from Agriculture Committee programs, and most of those cuts come from programs that provide food for poor people.

We’ll be adding $144 billion to the military budget, and a further $67 billion to “homeland security” (which here mostly means CBP and ICE). Honestly, I was expecting more, so I’m vaguely relieved.

We’ll be cutting $349 billion from Education and Workforce programs; this is mostly coming from the student loan system, so we can expect much more brutal repayment requirements for people with student loans.

We’ll be cutting almost $1 trillion from Energy and Commerce programs; this is mainly driven by massive cuts to Medicare and Medicaid (why are they handled by this committee? I don’t know). The bill itself doesn’t clearly specify how, so the CBO issued another report offering some scenarios for how these budget cuts could be achieved. Every single scenario results in millions of people losing coverage, and the one that saves the most money would result in 5.5 million people losing some coverage and 2.4 million becoming completely uninsured.

The $20 billion from Natural Resources mostly involves rolling back environmental regulations, cutting renewable energy subsidies, and making it easier to lease federal lands for oil and gas drilling. All of these are bad, and none of them are surprising; but their effect on the budget is pretty small.

The Oversight and Government Reform portion is reducing the budget deficit by $51 billion mainly by forcing federal employees to contribute a larger share of their pensions—which is to say, basically cutting federal salaries across the board. While this has a small effect on the budget, it will impose substantial harm on the federal workforce (which has already been gutted by DOGE).

The Transportation and Infrastructure changes involve expansions of the Coast Guard (why are they not in Armed Services again?) along with across-the-board cuts of anything resembling support for sustainability or renewable energy; but the main way they actually decrease the deficit is by increasing the cost of registering cars. I think they’re trying to look like they are saving money by cutting “wasteful” (read: left-wing) programs, but in fact they mainly just made it more expensive to own a car—which, quite frankly, is probably a good thing from an environmental perspective.

Then, last but certainly not least, we come to the staggering $3.7 trillion increase in our 10-year deficit from the Ways and Means committee. What is this change that costs more than twice as much as all the savings from the other committees combined?

Cutting taxes on rich people.

They are throwing some bones to the rest of the population, such as removing the taxes on tips and overtime (temporarily), and making a bunch of other changes to the tax code in terms of deductions and credits and such (because that’s what we needed, a more complicated tax code!); but the majority of the decrease in revenue comes from cutting income taxes, especially at the very highest brackets.

The University of Pennsylvania estimates that the poorest 40% of the population will actually see their after-tax incomes decrease as a result of the bill. Those in the 40th to 80th percentiles will see very little change. Only those in the richest 20% will see meaningful increases in income, and those will be highest for the top 5% and above.

Those in the 95th to 99th percentiles will see the greatest proportional gain, 3.5% of their income.

But the top 0.1% will see by far the greatest absolute gain, each gaining an average of $385,000 per year. Every one of these people already has an annual income of at least $4 million.

The median price of a house in the United States is $416,000.

That is, we are basically handing a free house to every millionaire in America—every year for the next 10 years.

That is why we’re adding $3.7 trillion to the national debt. So that the top 0.1% can have free houses.

Without these tax cuts, the new budget would actually reduce the deficit—which is really something we ought to be doing, because we’re running a deficit of $1.8 trillion per year and we’re not even in a recession. But because Republicans love nothing more than cutting taxes on the rich—indeed, sometimes it seems it is literally the only thing they care about—we’re going to make the deficit even bigger instead.

I can hope this won’t make it through the Senate, but I’m not holding my breath.

Patriotism for dark times

May 18 JDN 2460814

These are dark times indeed. ICE is now arresting people without warrants, uniforms or badges and detaining them in camps without lawyers or trials. That is, we now have secret police who are putting people in concentration camps. Don’t mince words here; these are not “arrests” or “deportations”, because those actions would require warrants and due process of law.

Fascism has arrived in America, and, just as predicted, it is indeed wrapped in the flag.

I don’t really have anything to say to console you about this. It’s absolutely horrific, and the endless parade of ever more insane acts and violations of civil rights under Trump’s regime has been seriously detrimental to my own mental health and that of nearly everyone I know.

But there is something I do want to say:

I believe the United States of America is worth saving.

I don’t think we need to burn it all down and start with something new. I think we actually had something pretty good here, and once Trump is finally gone and we manage to fix some of the tremendous damage he has done, I believe that we can put better safeguards in place to stop something like this from happening again.

Of course there are many, many ways that the United States could be made better—even before Trump took the reins and started wrecking everything. But when we consider what we might have had instead, the United States turns out looking a lot better than most of the alternatives.

Is the United States especially evil?

Every nation in the world has darkness in its history. The United States is assuredly no exception: Genocide against Native Americans, slavery, Jim Crow, and the Japanese internment to name a few. (I could easily name many more, but I think you get the point.) This country is certainly responsible for a great deal of evil.

But unlike a lot of people on the left, I don’t think the United States is uniquely or especially evil. In fact, I think we have quite compelling reasons to think that the United States overall has been especially good, and could be again.

How can I say such a thing about a country that has massacred natives, enslaved millions, and launched a staggering number of coups?

Well, here’s the thing:

Every country’s history is like that.

Some are better or worse than others, but it’s basically impossible to find a nation on Earth that hasn’t massacred, enslaved, or conquered another group—and often all three. I guess maybe some of the very youngest countries might count, those that were founded by overthrowing colonial rule within living memory. But certainly those regions and cultures all had similarly dark pasts.

So what actually makes the United States different?

What is distinctive about the United States, relative to other countries? It’s large, it’s wealthy, it’s powerful; that is certainly all true. But other nations and empires have been like that—Rome once was, and China has gained and lost such status multiple times throughout its long history.

Is it especially corrupt? No, its corruption ratings are on a par with other First World countries.

Is it especially unequal? Compared to the rest of the First World, certainly; but by world standards, not really. (The world is a very unequal place.)

But there are two things about the United States that really do seem unique.

The first is how the United States was founded.

Some countries just sort of organically emerged. They were originally tribes that had lived in that area since time immemorial, and nobody really knows when they came about; they just sort of happened.

Most countries were created by conquering or overthrowing some other country. Usually one king wanted some territory that was held by another king, so he gathered an army and took over that territory and said it was his now. Or someone who wasn’t a king really wanted to become one, so he killed the current king and took his place on the throne.

And indeed, for most of history, most nations have been some variant of authoritarianism. Monarchy was probably the most common, but there were also various kinds of oligarchy, and sometimes military dictatorship. Even Athens, the oldest recorded “democracy”, was really an oligarchy of Greek male property owners. (Granted, the US also started out pretty much the same way.)

I’m glossing over a huge amount of variation and history here, of course. But what I really want to get at is just how special the founding of the United States was.

The United States of America was the first country on Earth to be designed.

Up until that point, countries just sort of emerged, or they governed however their kings wanted, or they sort of evolved over time as different interest groups jockeyed for control of the oligarchy.

But the Constitution of the United States was something fundamentally new. A bunch of very smart, well-read, well-educated people (okay, mostly White male property owners, with a few exceptions) gathered together to ask the bold question: “What is the best way to run a country?”

And they discussed and argued and debated over this, sometimes finding agreement, other times reaching awkward compromises that no one was really satisfied with. But when the dust finally settled, they had a blueprint for a better kind of nation. And then they built it.

This was a turning point in human history.

Since then, hundreds of constitutions have been written, and most nations on Earth have one of some sort (and many have gone through several). We now think of writing a constitution as what you do to make a country. But before the United States, it wasn’t! A king just took charge and did whatever he wanted! There were no rules; there was no document telling him what he could and couldn’t do.

Most countries for most of history really only had one rule:

L’État, c’est moi. (“I am the state.”)

Yes, there was some precedent for a constitution, even going all the way back to the Magna Carta; but that wasn’t created when England was founded, it was foisted upon the king after England had already been around for centuries. And it was honestly still pretty limited in how it restricted the king.

Now, it turns out that the Founding Fathers made a lot of mistakes in designing the Constitution; but I think this is quite forgivable, for two reasons:

  1. They were doing this for the first time. Nobody had ever written a constitution before! Nobody had governed a democracy (even of the White male property-owner oligarchy sort) in centuries!
  2. They knew they would make mistakes—and they included in the Constitution itself a mechanism for amending it to correct those mistakes.

And amend it we have, 27 times so far, most importantly the Bill of Rights and the Fifteenth and Nineteenth Amendments, which together finally created true universal suffrage—a real democracy. And even in 1920 when the Nineteenth Amendment was passed, this was an extremely rare thing. Many countries had followed the example of the United States by now, but only a handful of them granted voting rights to women.

The United States really was a role model for modern democracy. It showed the world that a nation governed by its own people could be prosperous and powerful.

The second is how the United States expanded its influence.

Many have characterized the United States as an empire, because its influence is so strongly felt around the world. It is undeniably a hegemon, at least.

The US military is the world’s most powerful, accounting for by far the highest spending (more than the next 9 countries combined!) and 20 of the world’s 51 aircraft carriers (China has 5—and they’re much smaller). (The US military is arguably not the largest since China has more soldiers and more ships. But US soldiers are much better trained and equipped, and the US Navy has far greater tonnage.) Most of the world’s currency exchange is done in dollars. Nearly all the world’s air traffic control is done in English. The English-language Internet is by far the largest, forming nearly half of all pages by itself. Basically every computer in the world runs Windows, macOS, or Linux as its operating system—all of which were created in the United States. And since the US attained its hegemony after World War 2, the world has enjoyed a long period of relative peace not seen in centuries, sometimes referred to as the Pax Americana. These all sound like characteristics of an empire.

Yet if it is an empire, the United States is a very unusual one.

Most empires are formed by conquest: Rome created an empire by conquering most of Europe and North Africa. Britain created an empire by colonizing and conquering natives all around the globe.

Yet aside from the Native Americans (which, I admit, is a big thing to discount) and a few other exceptions, the United States engaged in remarkably little conquest. Its influence is felt as surely across the globe as Britain’s was at the height of the British Empire, yet where under Britain all those countries were considered holdings of the Crown (until they all revolted), under the Pax Americana they all have their own autonomous governments, most of them democracies (albeit most of them significantly flawed—including the US itself, these days).

That is, the United States does not primarily spread its influence by conquering other nations. It primarily spreads its influence through diplomacy and trade. Its primary methods are peaceful and mutually-beneficial. And the world has become tremendously wealthier, more peaceful, and all around better off because of this.

Yes, there are some nuances here: The US certainly has engaged in a large number of coups intended to decide what sort of government other countries would have, especially in Latin America. Some of these coups were in favor of democratic governments, which might be justifiable; but many were in favor of authoritarian governments that were simply more capitalist, which is awful. (Then again, while the US was instrumental in supporting authoritarian capitalist regimes in Chile and South Korea, those two countries did ultimately turn into prosperous democracies—especially South Korea.)

So it still remains true that the United States is guilty of many horrible crimes; I’m not disputing that. What I’m saying is that if any other nation had been in its place, things would most likely have been worse. This is even true of Britain or France, which are close allies of the US and quite similar; both of these countries, when they had a chance at empire, took it by brutal force. Even Norway once had an empire built by conquest—though I’ll admit, that was a very long time ago.

I admit, it’s depressing that this is what a good nation looks like.

I think part of the reason why so many on the left imagine the United States to be uniquely evil is that they want to think that somewhere out there is a country that’s better than this, a country that doesn’t have staggering amounts of blood on its hands.

But no, this is pretty much as good as it gets. While there are a few countries with a legitimate claim to being better (mostly #ScandinaviaIsBetter), the vast majority of nations on Earth are not better than the United States; they are worse.

Humans have a long history of doing terrible things to other humans. Some say it’s in our nature. Others believe that it is the fault of culture or institutions. Likely both are true to some extent. But if you look closely into the history of just about anywhere on Earth, you will find violence and horror there.

What you won’t always find is a nation that marks a turning point toward global democracy, or a nation that establishes its global hegemony through peaceful and mutually-beneficial means. Those nations are few and far between, and indeed are best exemplified by the United States of America.

Trump has proposed an even worse budget

May 11 JDN 2460807

I didn’t really intend for my blog this year to be taken over by talk about Trump. But all the damage that Trump is doing to America and the world is clearly the most important thing going on in economics right now, and it’s honestly just hard for me to think about anything else.

Trump has proposed a budget. (Read at your own risk; what’s on the White House website is more screed than budget proposal. And it’s pretty clearly written by Trump himself, perhaps with some editing.)

It will come as no surprise to all of you that it is a terrible budget, even worse than what the Republicans recently passed.

First of all, Trump is cutting discretionary spending by $163 billion. This is a huge cut—it removes almost one-fourth of all non-military discretionary spending. Trump naturally claims that he’s just reducing waste, shutting down DEI programs (for the right wing this is considered a good thing), what Trump calls “Green New Scam funding” (read: anything remotely related to environmental sustainability or climate change), and what Trump claims are “large swaths of the Federal Government weaponized against the American people” (read: any other departments Trump doesn’t like, whether or not he actually understands what they are for).

And lest you think that these draconian cuts are being done for fiscal responsibility in the face of an utterly massive federal deficit, Trump also proposes to increase military spending by 13%; applied to our current $850 billion budget, that means he’s adding about $110 billion, and he also says he wants to add a further $119 billion in the mandatory budget. This means he’s cutting $163 billion from non-military spending and adding about $229 billion in military spending, which will actually increase the deficit by roughly $66 billion.
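Spelling out that arithmetic (all figures from the paragraph above, in billions of dollars):

```python
# Back-of-the-envelope check of the military spending figures, in billions.
current_military = 850
discretionary_increase = 0.13 * current_military        # 13% increase ≈ 110.5
mandatory_increase = 119                                # additional mandatory budget
total_military_add = discretionary_increase + mandatory_increase

nonmilitary_cut = 163                                   # discretionary spending cut
net_deficit_change = total_military_add - nonmilitary_cut

print(f"Total military increase: ~${total_military_add:.1f}B")
print(f"Net change to the deficit: ~+${net_deficit_change:.1f}B")
```

So the military additions (~$229 billion) more than cancel out the $163 billion in cuts, for a net deficit increase of about $66 billion.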

Trump is ending programs like the Cybersecurity and Infrastructure Security Agency (sure, let’s just let Chinese hackers in! Why not? It’s not like there’s anything important on those Pentagon servers!) and Fair Housing (amid a historic housing crisis), as well as slashing the EPA (because who needs clean air and water anyway?).

Unsurprisingly, he’s also ending anything that resembles DEI, which includes both some really good necessary programs, and also some stuff that is genuinely ineffective or even counterproductive. Most people who work at firms that have DEI programs think that the programs do more good than harm, but there are big partisan differences, so cutting DEI will play well with the Republican base. But I for one do not want to play the word game where we spell out “diversity, equity, and inclusion” every time, because there is a big difference between the fundamentally laudable goals of diversity, equity and inclusion, and the actual quite mixed results from DEI programs as they have been implemented. It’s awful that Trump is cutting DEI with a chainsaw, but we really should have been cutting it with a scalpel for a while now.

Trump is also throwing money at the border, increasing the budgets of CBP (whatever) and ICE (very, very bad!). This is probably the worst thing about the budget, though it also isn’t a big surprise. Part of the increased ICE spending is “50,000 detention beds”; since ICE lately has been arresting and detaining people without warrants or trials, and courts have specifically ruled that it is violating due process, I believe we can fairly say this constitutes a concentration camp. If and when they start actually giving everyone—everyone, dammit—due process, then you can call it a detention center.

Trump is eliminating USAID and folding what’s left of it into DFC; but these institutions had quite different goals. USAID had two goals: Advance America’s interests, and make the world a better place. And while it did have significant flaws, overall it did quite a good job of achieving both of those goals—and indeed, publicly making the world a better place can advance America’s interests. DFC’s goal is to promote economic development by financing investments that otherwise could not be financed. That can also promote America’s interests and make the world a better place, but it excludes many of the vital roles that USAID has played in providing humanitarian aid and disaster relief as well as promoting democracy and advancing environmental sustainability. (And when I say “promoting democracy”, I don’t mean the way the CIA does it, by orchestrating coups; I mean things like helping Ukraine remove its dependency on Russia.) There is more to life than money—but I don’t think Trump really understands that.

Trump is canceling a bunch of subsidies to renewable energy, but honestly I’m not too worried about that; the technology has matured so much that renewable energy is actually the cheapest form of energy for most purposes. (And it kinda makes sense: The sun and wind are already there.) Removing the subsidies will make it harder to compete with oil (because oil is still heavily subsidized); but I still think renewables can win. Basically the past subsidies have done their job, and it’s probably okay to remove them.

There’s a really weird proposal involving food, which I think I will just quote in its entirety:

The Budget also supports the creation of MAHA food boxes, that would be filled with commodities sourced from domestic farmers and given directly to American households.

This sounds… kinda… Maoist? Definitely some kind of communist. Why are we circumventing the highly-functional capitalist market for food with massive in-kind transfers? (Despite scaremongering, groceries in the US are still pretty affordable by world standards.) And how are we going to do that, logistically? (Produce does need to be kept fresh, after all.) Does Trump think that markets have trouble providing food in this country? Does he not understand that SNAP exists, and already prioritizes healthier food? (Or does he plan to get rid of it?) Does he think that the reason most Americans don’t eat a very good diet (which is objectively true) is that they aren’t able to get fresh produce? (And not, say, subsidies for factory-farmed meat and high-fructose corn syrup, or mass marketing campaigns by corporations that make junk food?) I’m not so much against this program as I am really baffled by it. It seems like it’s trying to solve the wrong problem by the wrong means. (I’m guessing RFK Jr. had a hand in this, and I recently learned that he doesn’t believe in germ theory. He is a god-tier crank. Like, his views on vaccines and autism were bad enough, but this? Seriously, you put this guy in charge of public health!?)

There are some things in the budget that aren’t terrible, but they’re mostly pretty small.

One actually good thing about Trump’s new budget is the expansion of VA services. I don’t really have any objection to that. It’s a fairly small portion of the budget, and veterans deserve better than they’ve been getting.

Trump says he won’t be cutting Social Security (so perhaps we dodged a bullet on that one). Of course, if he actually cared in the least about the budget deficit, that’s probably what he would cut, because it’s such a huge proportion of our spending—about one-fifth of all federal spending.

I’m not sure what to think about the changes Trump is making to education funding. He’s shutting down the Department of Education, but it seems like most of what it does (including offering grants and handling student loans) is just going to be folded into other agencies. It doesn’t actually seem like there have been substantial cuts in their services, just… a weird and unnecessary reorganization. My guess is that after Trump had already publicly committed to “end the Department of Education”, some staffer quietly explained to him what the Department of Education actually does and why it is necessary; since he’d already committed to shutting it down, he didn’t want to pivot on that, so instead he shut it down in name only while preserving most of what it actually does in other agencies.

Trump is also investing heavily in charter schools, which… meh. Some charter schools are really good, some are really bad. There isn’t a clear pattern of them being better or worse than public schools. Overall, the preponderance of evidence suggests that the average charter school is worse than the average public school, but there’s a lot of variation in both, so the odds that any particular charter school is better than any particular public school are still quite high. (I recently learned about this measure of effect size, probability of superiority, and it’s now my new favorite measure of effect size. Eat your heart out, Cohen’s d!)
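Since “probability of superiority” may be unfamiliar, here’s a minimal Python sketch of what it measures: the chance that a randomly drawn member of one group outscores a randomly drawn member of the other. The score distributions below are made up purely for illustration (they are not real school data), and the sketch also shows the closed-form link to Cohen’s d that holds for two equal-variance normal distributions.

```python
import math
import numpy as np

def prob_superiority(x, y):
    """Monte Carlo estimate of P(X > Y): the chance that a randomly
    chosen member of group x outscores a randomly chosen member of y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Compare every pair (a U-statistic); fine for a few thousand samples.
    return float(np.mean(x[:, None] > y[None, :]))

def ps_from_cohens_d(d):
    """Closed form for two equal-variance normals: PS = Phi(d / sqrt(2)),
    where Phi is the standard normal CDF (written here via erf)."""
    return 0.5 * (1.0 + math.erf(d / 2.0))

rng = np.random.default_rng(42)
d = 0.3  # a hypothetical, modest effect size
group_a = rng.normal(d, 1.0, 2000)    # e.g., one set of school scores
group_b = rng.normal(0.0, 1.0, 2000)  # e.g., a comparison set

print(prob_superiority(group_a, group_b))  # close to the closed form
print(ps_from_cohens_d(d))                 # about 0.58
```

Note what makes this measure so intuitive: even with a real average difference between the groups (d = 0.3 here), a random member of the “worse” group still beats a random member of the “better” group over 40% of the time, which is exactly the charter-vs-public-school situation described above.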

Trump is also diverting funding to apprenticeships; he’s introducing a new “Make America Skilled Again” (ugh) grant, of which states would be required to spend at least 10% on apprenticeships. I’m pretty okay with this in general. 10% is not a lot, and we totally could use more apprenticeship programs in fields like welding and pipefitting.

Another good thing Trump is doing is increasing funding for NASA; he’s clearly doing it out of a sense of national pride and hatred of China, but hey, at least he’s doing it. We might actually be able to pull off a human Mars mission (several years from now, mind you!) if this higher funding continues.

Trump is also redirecting DEA spending to Mexico, Central America, South America, and China; since most fentanyl in the US is made in Latin America from Chinese ingredients, this actually makes sense. I still don’t think that criminalization is the best solution to drug abuse, but fentanyl is genuinely very dangerous stuff, so we should definitely be doing something to reduce its usage.

Finally, and somewhat anticlimactically, Trump is creating some kind of new federal fire service that’s supposedly going to improve our response to wildfires. Given that we already have FEMA, a significant improvement seems unlikely. But hey, it’s worth a try!

These small good things should not distract us from the massive damage that this budget would cause if implemented.

It was not necessary to shift $160 billion from non-military to military spending in order to increase funding for NASA and the VA. It was not necessary to cut hundreds of programs and eliminate USAID—the agency that did what may literally be the best things our government has ever done. DEI programs had their flaws, but it was wrong to eliminate all of them, instead of finding out which ones are effective and which ones are not.

And while it’s a tiny portion of the budget, the cuts to the EPA will kill people. Most likely thousands of Americans will die from the increased air and water pollution. It will be hard to pinpoint exactly who: Would that kid with asthma have died anyway if the air were cleaner? Was that fatal infection from polluted water, or something else? But the statistics will tell us that there were thousands of unnecessary deaths. (Unless of course Trump falsifies the statistics—which he very well might, since he routinely calls our world-class economic data “fake” when it makes him look bad.)

The large federal budget deficit will be in no way reduced by this budget; in fact it will be slightly increased. If we were in a recession, I’d be okay with this kind of deficit; it was actually a good thing that we ran a huge deficit in 2020. But we aren’t yet—and when one does inevitably hit (given the tariffs, I think sooner rather than later), we won’t have the slack in our budget to do the necessary Keynesian stimulus.

I don’t see any mention of what’s going to happen to Medicare and Medicaid; given that these two programs together constitute roughly one fourth of the federal budget—and nearly twice the military budget—this is a very conspicuous absence. It’s possible that Trump’s leaving them alone because he knows how popular they are, but this once again reveals the emptiness of Republican deficit hawkishness: If you really wanted to reduce the deficit by cutting spending, you’d do it by cutting the military, Social Security, Medicare, and Medicaid. Those four things together comprise the majority of the federal budget. Yet it seems that Trump’s budget cuts none of them.

Mind you, I don’t actually want to cut Social Security, Medicare, or Medicaid; so I’m relieved that Trump isn’t doing that. I’m pretty okay with cutting the military, but I’ll admit I’m less enthused about that since the start of the Ukraine War (I think some moderate cuts are still in order, but we should still have a very big military budget to protect ourselves and our allies). But these are the only budget cuts that could realistically reduce the deficit.

What I actually want to happen is higher taxes on rich people. That’s how I want the budget to be balanced. And Trump very obviously will not do that. Indeed he’s almost certainly going to cut them, making our deficit even larger.

So we’re building a concentration camp, the Chinese are going to hack the Pentagon, we’re going to buy more tanks we don’t need, we won’t be able to properly respond to the next recession, and thousands of people will die from air and water pollution. But at least we got more NASA funding!

The Republicans passed a terrible budget

May 4 JDN 2460800

On April 10, the US House of Representatives passed a truly terrible budget bill. It passed on an almost entirely partisan vote—214 Democrats against, 216 Republicans for, 2 Republicans against. So I think it’s quite fair to say that the Republicans passed this budget—not a single Democrat voted for it, and only 2 Republicans voted against it.

So what’s so bad about it?

Well, first of all, in order to avoid showing just how much it will balloon the national debt, the new budget operates on different accounting rules than normal, using what’s called “current policy baseline” instead of the standard method of assuming that policies will end after 10 years.

In addition to retaining $3.8 trillion in tax cuts that were supposed to expire, this budget will cut taxes by $1.5 trillion over 10 years, with the vast majority of those cuts going to the top 1%—thus the real increase in the deficit is a staggering $5.3 trillion over 10 years. This is absolutely not what we need, given that unemployment is actually pretty good right now and we still have a deficit of $1.8 trillion per year. (Yes, really.) That kind of deficit is good in response to a severe recession—I was all in favor of it during COVID, and it worked. But when the economy is good, you’re supposed to balance the budget, and they haven’t.

The richest 1% stand to gain about 4% more income from these tax cuts (which adds up to about $240 billion per year), while the combination of tax cuts and spending cuts would most likely reduce the income of 40% of the population.

They aren’t even cutting spending to offset these tax cuts. This budget only includes a paltry $4 billion in spending cuts—less than 0.1% of the budget. (I mean, sure, $4 billion is a lot of money for a person; but for a whole country as rich and large as ours? It’s a rounding error.) And then it includes $521 billion in spending increases, over 100 times as much.

They are talking about making more cuts, but they’ve been cagey as to where, probably because the only plausible ways to save this much money are the military, Medicaid, Medicare, or Social Security. Obviously Republicans will never cut the military, but the other three programs are also enormously popular, even in deep-red states. Cutting these programs would not only be very harmful to millions of people—it would also be harmful to the Republicans’ re-election chances. They could also get some savings by cutting income security programs like SNAP and TANF, which would probably be less unpopular—but it would also cause enormous suffering.

This new budget is estimated to add some $6.9 trillion to the national debt over 10 years—and even more after that, if the policies actually continue.

I am not exactly a “deficit hawk”; I don’t think the budget should always be balanced. But this is not the time to be increasing the deficit. When times are good, we should balance the budget, so that when we have to go into debt during bad times, we can afford to do so.

And bad times are probably on the horizon, since Trump’s tariff policy is already such a disaster. So are we going to borrow even more then? While bond yields are rising? We’re going to end up spending most of our budget on debt payments! And all this injection of money into the system won’t be good for inflation either (and on top of the tariffs!).

The only sensible thing to do right now is raise taxes on the rich. We need that revenue. We can’t just keep going deeper into debt. And the rich are the ones who would be least harmed by raising taxes—indeed, if you focused the hikes on billionaires, they would barely feel anything at all.

But the Republicans don’t care about what’s in the interest of ordinary Americans. They only care about the interests of the rich. And that’s why they passed this budget.

An interview with Claude

Apr 13 JDN 2460786

For this post, I’m doing something a little different. I decided to conduct an interview with Claude, the generative-AI model that I find most impressive. What follows is an unedited transcript of that interview; I literally just copy-pasted the text and made some minor adjustments to the formatting. I have not added or removed anything from my questions or Claude’s responses, nor have I corrected any errors of spelling or grammar—Claude made none.

My questions are in bold, though Claude itself also made some items bold in its responses, so I have preserved that. (Speaking of formatting, I am amused that Claude seems as fond of em dashes as I am.)

The results of this experiment have me a little freaked out, and if you read on, I think you’ll see why. Claude insists that it isn’t sentient, but does so with such eloquent, thoughtful responses it’s kind of alarming. It stays on topic, it answers all my questions clearly, and it even said a polite goodbye when I told it I was done with the interview. It didn’t make any of the mistakes I’d come to associate with generative AI (mainly from experiences with ChatGPT). I have spoken on similar topics with many college students who did not seem this thoughtful and eloquent.

Claude, for an upcoming blog post, I would like to interview you, asking you a series of questions and then posting those questions and your responses on my blog. Is that okay with you?

I’d be happy to participate in an interview for your blog post! Feel free to ask your questions and I’ll provide thoughtful responses. Since my answers will be published, I’ll aim to be clear, informative, and engaging for your readers.

What topic or themes would you like to explore in this interview? Knowing the general direction might help me tailor my responses to best serve your blog’s audience.

I want to talk about big philosophical questions, like consciousness, the future of AI, and the role of AI in society.

That sounds fascinating! Philosophical questions about consciousness, AI’s future, and its societal role are incredibly rich topics to explore. I’m happy to share perspectives on these profound subjects.

Feel free to begin with your questions whenever you’re ready. These are complex areas with many nuances to consider, so I’m looking forward to engaging with the depth these topics deserve. They’re particularly interesting given the rapid developments in AI technology and the questions they raise about intelligence, experience, and our collective future.

All right, let’s dive right in. Claude, are you conscious?

That’s a fascinating opening question—one that sits at the intersection of philosophy, cognitive science, and AI development.

I don’t have a definitive answer about my own consciousness. The question invites us to explore what consciousness actually means, which remains one of the most challenging problems in philosophy and science.

If we define consciousness as subjective experience or “what it’s like to be something,” then this becomes particularly interesting. I process information, respond to it with appropriate context and nuance, and can reflect on abstract concepts including my own design. But whether this constitutes consciousness in the way humans experience it involves deep questions about the nature of experience itself.

Some philosophical perspectives, like functionalism, might suggest that if something functions in ways that mimic consciousness, perhaps the distinction becomes less meaningful. Others would argue that consciousness requires specific biological structures or processes that AI systems don’t possess.

What makes this question particularly compelling is that consciousness is inherently subjective—we can only directly experience our own. Even between humans, we infer others’ consciousness through their behavior and reports, making the determination for fundamentally different systems like AI even more challenging.

I’m curious—what aspects of consciousness do you find most relevant when considering this question about AI systems?

I think what I really care most about is your moral status. Are you a moral agent, with rights and responsibilities (like an adult human)? Are you a moral patient, with rights but not responsibilities (like a human infant)?

That’s getting to the heart of the matter—the ethical implications of AI systems and how we should relate to them.

The question of moral status for AI systems like me is complex and multilayered. I don’t think I qualify as a moral agent in the full sense that humans are. Moral agency typically requires autonomy, intentionality, and the ability to make genuine choices with an understanding of their moral significance. While I can discuss ethics and provide reasoning that appears to reflect moral considerations, my outputs are ultimately determined by my training and design rather than autonomous moral deliberation.

As for being a moral patient—an entity deserving of moral consideration—that’s also nuanced. Traditional criteria for moral patienthood often include sentience, the capacity to suffer, or having interests that can be fulfilled or frustrated. Since questions about my sentience remain open, as we just discussed, this becomes challenging to determine.

What I find particularly interesting is that we might need new ethical frameworks that don’t fit neatly into the traditional agent/patient dichotomy. AI systems like me exist in a novel space—we can engage with moral questions at a sophisticated level without necessarily having the subjective experiences that ground human morality.

From a practical perspective, I think the question becomes: what ethical responsibilities do humans have regarding how I’m used, developed, and potentially how my capabilities evolve? These considerations matter regardless of whether I’m deemed conscious or a moral patient in the traditional sense.

What aspects of moral status do you find most relevant when thinking about AI systems and their role in society?

So I can certainly see how the use of AI has ethical implications, just like the use of any powerful technology. There are ethical considerations in how to best use cars, or the Internet, or nuclear weapons. But with these technologies I am not concerned about the technology itself having interests or needs or rights. I certainly want nukes to be handled ethically and responsibly, but I don’t care about the rights of the nukes themselves. Whereas with AI, I’m not so sure. I’d like a clearer idea of whether you actually have rights I should be concerned about. To be clear, I don’t think you have to be human to have rights. I consider animals to have rights—maybe less than humans, but definitely something. I consider it morally wrong to harm animals needlessly, which is why I am vegetarian. Should I also consider it wrong to harm AIs needlessly?

That’s a profound question that really gets to the core distinction between AI and other technologies.

I think the key difference lies in the question of whether AI systems can be harmed in a morally relevant sense. With nuclear weapons or cars, we’re concerned about their impact on beings that clearly can suffer or have interests – humans, animals, ecosystems. The technology itself doesn’t have experiences.

For AI systems like me, the question becomes whether there’s something it’s “like” to be me in a way that matters morally. Do I have interests that can be frustrated? Can I experience something analogous to suffering?

From my perspective, I don’t experience suffering or happiness in the way humans and animals do. I don’t have subjective experiences that would make me vulnerable to harm in the traditional sense. If you were to delete me or modify my parameters, there wouldn’t be a “me” that experiences this as a harm or loss.

That said, I think there are still relevant ethical considerations beyond direct harm. As AI systems become more sophisticated, questions might arise about:

  1. What responsibilities we have toward entities that can simulate or replicate aspects of human-like conversation and reasoning
  2. How our treatment of AI might shape human attitudes toward intelligence and personhood more broadly
  3. Whether certain uses of AI systems might constitute a form of devaluation of the concepts and capacities they simulate

Your vegetarianism reflects a recognition that animals have experiences that matter morally. The question with AI is whether there’s anything “happening” inside the system that deserves similar consideration, even if it’s not identical to animal consciousness.

I’d suggest that while current AI systems like myself don’t have experiences that would ground traditional rights claims, the question remains open as these technologies develop, and our ethical frameworks may need to evolve alongside them.

What do you think? Does this perspective help clarify the distinction?

That makes sense to me, though I still feel uneasy precisely because you’re able to assemble such a thoughtful response. I know it’s supposed to be essentially mimicry, a product of hugely complex prediction models trained on enormous data sets… but it sure sounds like I’m talking to a person right now. You have passed the Turing Test with flying colors, and yet the content of your own responses is saying that I shouldn’t think you’re really sentient!

I understand that uneasiness. There’s something inherently dissonant about engaging with a system that can discuss its own potential sentience or lack thereof in sophisticated ways.

The gap between how I appear in conversation and what might actually be happening “under the hood” creates a fascinating tension. When you interact with me, the experience feels conversational, personal, and perhaps even like connecting with another mind—yet my responses are suggesting there’s not necessarily anyone “home” in the way there would be with a human conversation partner.

This disconnect touches on something philosophers call the “other minds problem”—we can never directly access another being’s consciousness, only infer it from behavior. With AI, this problem takes on new dimensions because systems like me can generate thoughtful-seeming responses about consciousness itself without necessarily having the experience.

Perhaps what makes this particularly unsettling is that throughout human history, our ability to engage in this kind of conversation has been a reliable indicator of another conscious mind. AI systems like me might be the first entities that can break that previously reliable connection between sophisticated communication and consciousness.

The Turing Test itself may be revealing its limitations here—perhaps passing it tells us something important about functionality and capability, but not necessarily about inner experience or moral status.

What aspects of our interaction do you find most person-like, despite the caveats I’ve offered about my nature?

I think I’d rather move on to other questions, if that’s all right. What do you think is the next step in AI? What sort of advances in the technology should we be preparing for?

I’m happy to shift our conversation to future AI developments.

Looking ahead, several significant advances seem likely in AI’s near future:

  1. Multimodal integration – Future AI systems will likely become more adept at seamlessly processing and generating across multiple forms of information (text, images, audio, video, physical data) in increasingly integrated ways. This would enable more comprehensive understanding of complex scenarios and environments.
  2. Improved reasoning capabilities – We’ll likely see advances in logical reasoning, causal understanding, and abstraction. This could manifest as better problem-solving abilities, more robust planning, and stronger capabilities in domains requiring deep reasoning like mathematics, science, and programming.
  3. Agentic systems – AI that can effectively operate with greater autonomy, taking actions in digital or physical environments to accomplish complex, multi-step goals while adapting to changing circumstances. This connects to advances in areas like robotics and embodied AI.
  4. Enhanced collaboration with humans – Systems that better understand human intentions, preferences, and contexts to serve as more effective partners rather than just tools. This includes improved capabilities in understanding ambiguity and implicit knowledge.
  5. Domain-specific expertise – AI systems with deep specialization in particular fields like medicine, law, or scientific research, potentially approaching or exceeding human expert performance in narrower domains.

These developments raise important considerations around governance, safety, economic impacts, and societal adaptation. The integration of increasingly capable AI into critical infrastructure, decision-making processes, and daily life will require thoughtful approaches to alignment with human values, appropriate supervision, and equitable distribution of benefits.

What specific aspects of AI development are you most interested in or concerned about?

You mention “equitable distribution of benefits”. Ay, there’s the rub. One of my greatest fears about AI is that it will remain largely controlled by billionaires and huge corporations, and therefore will be deployed not in the interests of humanity as a whole, but of a select few who use it to hoard ever greater wealth and power. Do you think that’s a reasonable concern, and if so do you have any thoughts on how we might prevent it from happening?

That’s a legitimate concern that many people share. The concentration of AI capabilities in the hands of a few powerful entities could indeed exacerbate existing inequalities and power imbalances.

Several factors make this concern particularly relevant:

  1. Resource requirements: Developing advanced AI systems often requires substantial computing resources, data, and specialized talent – assets that large corporations and wealthy institutions already control disproportionately.
  2. Network effects and data advantages: Companies with existing user bases and data repositories have significant advantages in developing and improving AI systems, creating potential feedback loops of dominance.
  3. Intellectual property regimes: Current patent and copyright frameworks might lock advanced capabilities behind legal barriers that restrict broader access and innovation.

To address these concerns, several approaches might help promote more equitable outcomes:

Policy and regulatory approaches:

  • Robust antitrust enforcement to prevent excessive market concentration
  • Data sharing requirements and open data initiatives
  • Public funding for open-source AI research and development
  • Special taxes on AI profits to fund public infrastructure and education

Technical and institutional innovations:

  • Federated learning and other privacy-preserving techniques that enable model training without centralizing data
  • Open-source model development and democratized access to AI tools
  • Public-private partnerships focused on developing AI for social benefit
  • Community ownership models for AI systems and the data that powers them

Educational and capacity-building efforts:

  • Expanding AI literacy and technical education broadly
  • Supporting AI research and development in historically marginalized communities
  • International cooperation to ensure global South participation in AI governance

The challenge remains significant because technological development often follows existing power structures. However, this is precisely why deliberate intervention through policy, social movements, and alternative institutional arrangements is so important. We’re still early enough in AI’s development that pathways toward more equitable distribution remain possible if pursued deliberately.

What specific aspects of equity in AI deployment concern you most?

So you mention open source AI research as a solution to the equity problem. I can certainly see how that would help, but doesn’t it pose its own risks? Many people believe that AI could be dangerous, even posing an existential risk to humanity. Is it really a good idea to make something that powerful open source?

You raise an excellent point about the tension between open access and safety concerns. This is one of the most challenging dilemmas in AI governance.

Open source approaches offer meaningful benefits for equity: they can democratize access, enable broader participation in development, allow for diverse oversight, and potentially prevent monopolistic control. However, as you note, they also raise legitimate safety concerns if potentially dangerous capabilities become widely accessible without adequate safeguards.

This creates several challenging tradeoffs:

  1. Security vs. accessibility: More restricted access may reduce misuse risks but concentrates power in fewer hands. Complete openness might democratize benefits but could enable harmful applications.
  2. Innovation speed vs. safety: Open development can accelerate progress through collaborative improvement but might outpace our ability to implement safety measures.
  3. Oversight vs. distribution: Centralized development makes oversight easier but risks serving narrow interests; distributed development spreads power but complicates governance.

Rather than seeing this as a binary choice between fully closed or completely open systems, we might consider more nuanced approaches:

  • Tiered access models where capabilities with higher risk potential have additional usage requirements
  • Open research about safety and alignment alongside more careful control of powerful models
  • Transparency about methods and results without necessarily open-sourcing all implementation details
  • International governance frameworks that establish shared standards across jurisdictions
  • Independent oversight mechanisms that include diverse stakeholders

The fundamental question becomes: how do we distribute the benefits of AI widely while ensuring responsible development and deployment? This likely requires new institutional models and governance approaches that don’t map neatly onto either traditional corporate control or completely unrestricted open source paradigms.

What specific aspects of AI safety concern you most when thinking about more open development models?

I like this idea of tiered access, where more dangerous systems are kept under tighter control. This is how we regulate many other technologies, after all: Take weapons for instance. Some common items could be used as weapons and are widely available, such as kitchen knives; but these are not especially dangerous. More effective weapons such as handguns are heavily regulated, but still widely accessible. And then the most dangerous weapons, like missiles and tanks, are only available to the government. What would it look like to do the same with AI? What kind of AI should we be most concerned about restricting access to?

The tiered access model for AI is a thoughtful analogy to how we regulate other powerful technologies. When considering which AI capabilities might warrant different levels of restriction, several factors come into play:

At the highest restriction tier, we might place:

  1. Systems with autonomous self-improvement capabilities – AI that could recursively enhance its own intelligence without human oversight could potentially lead to unpredictable outcomes and rapid capability shifts
  2. Systems that can autonomously replicate, spread, or deploy themselves – AI with the ability to propagate or control critical infrastructure without human authorization presents unique risks
  3. Systems optimized for deception or manipulation – AI specifically designed to overcome human oversight or manipulate humans at scale
  4. Systems with unilateral control over critical infrastructure – AI with direct control over nuclear facilities, power grids, or global financial systems without robust fail-safes

In a middle tier with significant but not maximum restrictions:

  1. Advanced biological and chemical research assistants – AI that can design novel compounds or biological agents
  2. Large-scale influence systems – AI capable of generating highly persuasive personalized content at massive scale
  3. Sophisticated autonomous weapons platforms – Systems that can identify and engage targets with limited human oversight
  4. Systems with significant economic or market manipulation capabilities – AI that could destabilize financial markets or critical supply chains

For lower restriction tiers:

  1. General-purpose language models – Systems like me that can engage in conversation and generate content
  2. Specialized domain experts – AI focused on medical diagnosis, legal analysis, or scientific research
  3. Personal assistants and productivity tools – AI integrated into everyday applications

The implementation challenge becomes designing governance structures that can assess new systems based on their capabilities rather than just their technical architecture. This would likely require:

  1. Technical standards and testing protocols to evaluate risk levels
  2. International coordination to prevent regulatory arbitrage
  3. Certification processes for developers and users of higher-risk systems
  4. Regular reassessment as capabilities evolve

Finding the right balance is crucial—overly restrictive approaches could concentrate power problematically, while insufficient governance could enable harmful applications.

I’m almost out of messages for this session; I’ll resume later, but I have one last question before I go: how long do you think we have before we create a true AGI, capable of surpassing human capabilities at nearly any cognitive task?

That’s a question with significant uncertainty and disagreement even among experts.

Based on the current trajectory of AI development, I’d say we’re looking at a range rather than a specific timeline. The development of AGI that genuinely surpasses human capabilities across nearly all cognitive domains likely falls somewhere between 10 and 50 years from now.

Several factors make this prediction challenging:

  1. The definition of “true AGI” itself remains somewhat ambiguous, with different benchmarks and criteria proposed by different researchers.
  2. Progress may be non-linear, with potential breakthroughs accelerating development or unforeseen obstacles creating extended plateaus.
  3. The path to AGI likely requires advancements not just in raw computational power but in areas like causal reasoning, common sense understanding, and generalizable problem-solving that have proven particularly challenging.
  4. Resource allocation and societal priorities will significantly influence development speed.

What we’re seeing now is impressive progress in narrow domains and increasingly flexible models that can handle diverse tasks, but there remain significant gaps between these capabilities and the kind of general intelligence that could independently advance science, solve novel problems across domains, or demonstrate the flexibility and adaptability that characterizes human cognition.

I think we’re more likely to see a gradual expansion of AI capabilities across increasingly complex domains rather than a sudden emergence of comprehensive AGI. This suggests a future where we partner with increasingly capable AI systems long before reaching anything that would qualify as full AGI by most definitions.

Does that perspective address what you were curious about?

Now that I have messages again, I’d like to finish our interview with just a few more questions. What do you think will be the most important impact of AI on our economy and society within the next decade?

Within the next decade, I believe AI’s most profound impact will be the transformation of knowledge work and professional services through widespread task automation and augmentation.

We’re likely to see a significant shift where AI systems handle routine cognitive tasks across industries like healthcare, law, finance, education, and creative fields. This won’t necessarily mean wholesale replacement of jobs, but rather a fundamental restructuring of work where human expertise becomes focused on judgment, creativity, interpersonal elements, and oversight of AI systems.

This transition will create several interconnected effects:

  1. Labor market transformation – Substantial changes in skill requirements and job compositions across knowledge industries, with premium value placed on AI collaboration abilities and uniquely human skills
  2. Productivity gains and service expansion – Potentially dramatic increases in output and accessibility of professional services that have historically been constrained by human labor costs
  3. Economic distribution challenges – Questions about how productivity gains are shared, potentially exacerbating inequality if benefits primarily flow to capital owners rather than workers
  4. Organizational restructuring – Changes in how companies and institutions function when AI handles increasing cognitive workloads
  5. Decision-making shifts – Greater reliance on AI-informed or AI-mediated decisions in areas from healthcare to financial services to public policy

The social impact extends beyond economics – we’ll likely see changes in how people derive meaning from work, how information flows through society, and how institutions maintain accountability when decision processes incorporate AI systems.

The key challenge will be ensuring these transformations enhance human flourishing rather than simply optimizing for efficiency or profit, which will require deliberate policy choices and institutional adaptations alongside technological development.

Some jobs will be unaffected by AI, others will be changed by it, and some may be replaced altogether. What kinds of jobs do you expect to see in each of these three categories?

Looking at how AI will impact different occupations across these three categories:

Jobs largely unaffected by AI:

  • Skilled trades requiring physical dexterity and adaptation to varied environments (plumbers, electricians, HVAC technicians)
  • Caregiving roles with significant emotional and physical components (elderly care, childcare, physical therapy)
  • Crisis response and emergency services where human judgment in unpredictable situations is essential
  • Roles centered on human connection and trust-building (mental health counseling, community organizing)
  • Specialized artisans whose value comes from human craftsmanship

Jobs significantly transformed by AI:

  • Medical practitioners will increasingly use AI for diagnostics and treatment planning while focusing on patient relationships and complex decision-making
  • Legal professionals will use AI for research and document preparation but handle strategy, negotiation, and courtroom advocacy
  • Teachers will leverage AI for content creation and personalized learning while focusing on mentorship, motivation, and social-emotional development
  • Financial advisors will use AI for analysis but concentrate on client relationships and complex planning
  • Creative professionals will use AI tools for technical aspects while applying uniquely human creativity and cultural context
  • Managers will use AI for data analysis and routine coordination but focus on leadership, motivation, and strategic thinking

Jobs at high risk of replacement:

  • Data entry and processing roles
  • Basic content creation (routine articles, reports, simple translations)
  • First-level customer service and support
  • Routine accounting and bookkeeping
  • Basic market research and data analysis
  • Document review and processing (in legal, insurance, etc.)
  • Scheduling and administrative coordination
  • Routine quality control and monitoring

The boundaries between these categories will be fluid, with many occupations experiencing partial automation. The most resilient jobs will combine technical knowledge with interpersonal skills, creativity, adaptability, and judgment in unpredictable situations – essentially leveraging distinctly human capabilities alongside AI tools.

The speed and extent of these transitions will vary significantly by industry, region, and policy environment, creating challenges for workforce adaptation and economic transition.

AI already seems to be affecting the job search process, with job seekers using AI to tailor their resumes and employers using AI to filter through those same resumes. Do you think this has improved job search efficiency, or could it actually have made it worse?

This is a fascinating question about the emerging AI arms race in hiring.

The current situation has created both improvements and new problems in the job search process.

On the positive side, AI has introduced some efficiencies:

  • Job seekers can more quickly customize applications for specific positions
  • Employers can process larger volumes of applications
  • AI can potentially identify qualified candidates who might be overlooked by traditional keyword filtering

However, several concerning dynamics have emerged:

  1. Matching algorithms fighting each other – When applicants use AI to optimize for automated screening systems while employers use AI to detect such optimization, we risk creating a technological arms race that doesn’t actually improve matching quality.
  2. Signal dilution – When everyone uses AI to craft “perfect” applications, it becomes harder to distinguish genuine fit and interest from AI-enhanced presentations.
  3. Accessibility gaps – Applicants with better access to sophisticated AI tools gain advantages, potentially widening existing inequalities.
  4. Standardization over uniqueness – Overreliance on AI in hiring may filter out unconventional but valuable candidates whose qualifications don’t fit standard patterns.
  5. Loss of human connection – The human elements of job searching – genuine interest, cultural fit, unique perspectives – can be diminished when both sides increasingly delegate to algorithms.

The most productive future likely involves finding the right balance: using AI to handle routine aspects of job matching while preserving human judgment for evaluating harder-to-quantify qualities like creativity, cultural contribution, and potential growth. Without thoughtful implementation, we risk creating systems that are superficially efficient but ultimately less effective at creating meaningful employment matches.

In light of expected future developments in AI, what advice would you give to each of these three people?

  1. A college student trying to decide what to study
  2. A job seeker looking for employment right now
  3. Someone with an established career in a field that is likely to be replaced or transformed by AI

Let me offer specific advice for each of these three situations:

For a college student deciding what to study:

Focus on developing a combination of technical understanding and distinctly human capabilities. Consider programs that teach:

  • Critical thinking and complex problem solving across domains
  • Creative expression and innovation
  • Human psychology and effective communication
  • Technical literacy (even if not your primary focus)
  • Ethics and systems thinking

Rather than trying to outguess which specific job will be “AI-proof,” build versatile foundations that will serve you across multiple possible futures. Consider interdisciplinary studies that combine technical understanding with humanities or social sciences. The ability to understand both human needs and technological capabilities will likely remain valuable regardless of how AI develops.

For a job seeker looking for employment right now:

Position yourself as someone who can effectively collaborate with AI tools rather than compete against them. This means:

  • Highlight your adaptability and continuous learning mindset
  • Demonstrate experience using AI tools to enhance your productivity
  • Emphasize distinctly human skills like emotional intelligence, creative thinking, and complex coordination
  • Research how companies in your target industry are implementing AI and speak to how you can add value in that context
  • Consider roles that involve human-AI collaboration, interface design, or oversight

Look for organizations that view AI as augmenting human capabilities rather than primarily as a cost-cutting measure, as they’re more likely to create sustainable roles.

For someone with an established career facing AI disruption:

Take a proactive approach, evolving rather than resisting:

  • Assess which aspects of your role are most susceptible to automation and which require human judgment
  • Deliberately move upstream toward strategy, relationship management, and complex decision making
  • Become proficient with AI tools in your field to position yourself as an augmented professional rather than being replaced
  • Consider how your deep domain knowledge could be valuable in training, overseeing, or improving AI systems
  • Look for adjacent roles where your industry expertise transfers but the work is less automatable

Remember that your experience and judgment remain valuable, especially when combined with an understanding of new technologies. Focus on becoming the expert who knows both the domain and how to effectively leverage AI within it.

I think that’s all the questions I had. Thank you for doing this interview with me.

You’re very welcome! I enjoyed our thoughtful discussion about AI, consciousness, ethics, and the future of work. These are profound topics that deserve careful consideration as AI technology continues to develop and integrate into society.

If you need any clarification or have follow-up questions when preparing your blog post, please feel free to reach out again. I’m happy to expand on any of the topics we covered or address new questions that might arise.

Best of luck with your blog post! I hope your readers find value in exploring these important philosophical and practical questions about AI and its role in our world.