Patriotism for dark times

May 18 JDN 2460814

These are dark times indeed. ICE is now seizing people without warrants, uniforms, or badges and detaining them in camps without lawyers or trials. That is, we now have secret police who are putting people in concentration camps. Don’t mince words here; these are not “arrests” or “deportations”, because those actions would require warrants and due process of law.

Fascism has arrived in America, and, just as predicted, it is indeed wrapped in the flag.

I don’t really have anything to say to console you about this. It’s absolutely horrific, and the endless parade of ever more insane acts and violations of civil rights under Trump’s regime has been seriously detrimental to my own mental health and that of nearly everyone I know.

But there is something I do want to say:

I believe the United States of America is worth saving.

I don’t think we need to burn it all down and start with something new. I think we actually had something pretty good here, and once Trump is finally gone and we manage to fix some of the tremendous damage he has done, I believe that we can put better safeguards in place to stop something like this from happening again.

Of course there are many, many ways that the United States could be made better—even before Trump took the reins and started wrecking everything. But when we consider what we might have had instead, the United States turns out looking a lot better than most of the alternatives.

Is the United States especially evil?

Every nation in the world has darkness in its history. The United States is assuredly no exception: Genocide against Native Americans, slavery, Jim Crow, and the Japanese internment, to name a few. (I could easily name many more, but I think you get the point.) This country is certainly responsible for a great deal of evil.

But unlike a lot of people on the left, I don’t think the United States is uniquely or especially evil. In fact, I think we have quite compelling reasons to think that the United States overall has been especially good, and could be again.

How can I say such a thing about a country that has massacred natives, enslaved millions, and launched a staggering number of coups?

Well, here’s the thing:

Every country’s history is like that.

Some are better or worse than others, but it’s basically impossible to find a nation on Earth that hasn’t massacred, enslaved, or conquered another group—and often all three. I guess maybe some of the very youngest countries might count, those that were founded by overthrowing colonial rule within living memory. But certainly those regions and cultures all had similarly dark pasts.

So what actually makes the United States different?

What is distinctive about the United States, relative to other countries? It’s large, it’s wealthy, it’s powerful; that is certainly all true. But other nations and empires have been like that—Rome once was, and China has gained and lost such status multiple times throughout its long history.

Is it especially corrupt? No, its corruption ratings are on a par with other First World countries.

Is it especially unequal? Compared to the rest of the First World, certainly; but by world standards, not really. (The world is a very unequal place.)

But there are two things about the United States that really do seem unique.

The first is how the United States was founded.

Some countries just sort of organically emerged. They were originally tribes that had lived in that area since time immemorial, and nobody really knows when they came about; they just sort of happened.

Most countries were created by conquering or overthrowing some other country. Usually one king wanted some territory that was held by another king, so he gathered an army and took over that territory and said it was his now. Or someone who wasn’t a king really wanted to become one, so he killed the current king and took his place on the throne.

And indeed, for most of history, most nations have been some variant of authoritarianism. Monarchy was probably the most common, but there were also various kinds of oligarchy, and sometimes military dictatorship. Even Athens, the oldest recorded “democracy”, was really an oligarchy of Greek male property owners. (Granted, the US also started out pretty much the same way.)

I’m glossing over a huge amount of variation and history here, of course. But what I really want to get at is just how special the founding of the United States was.

The United States of America was the first country on Earth to be designed.

Up until that point, countries just sort of emerged, or they governed however their kings wanted, or they sort of evolved over time as different interest groups jockeyed for control of the oligarchy.

But the Constitution of the United States was something fundamentally new. A bunch of very smart, well-read, well-educated people (okay, mostly White male property owners, with a few exceptions) gathered together to ask the bold question: “What is the best way to run a country?”

And they discussed and argued and debated over this, sometimes finding agreement, other times reaching awkward compromises that no one was really satisfied with. But when the dust finally settled, they had a blueprint for a better kind of nation. And then they built it.

This was a turning point in human history.

Since then, hundreds of constitutions have been written, and most nations on Earth have one of some sort (and many have gone through several). We now think of writing a constitution as what you do to make a country. But before the United States, it wasn’t! A king just took charge and did whatever he wanted! There were no rules; there was no document telling him what he could and couldn’t do.

Most countries for most of history really only had one rule:

L’État, c’est moi.

Yes, there was some precedent for a constitution, even going all the way back to the Magna Carta; but that wasn’t created when England was founded: it was foisted upon the king after England had already been around for centuries. And it was honestly still pretty limited in how it restricted the king.

Now, it turns out that the Founding Fathers made a lot of mistakes in designing the Constitution; but I think this is quite forgivable, for two reasons:

  1. They were doing this for the first time. Nobody had ever written a constitution before! Nobody had governed a democracy (even of the White male property-owner oligarchy sort) in centuries!
  2. They knew they would make mistakes—and they included in the Constitution itself a mechanism for amending it to correct those mistakes.

And amend it we have, 27 times so far, most importantly with the Bill of Rights and the Fifteenth and Nineteenth Amendments, which together finally created true universal suffrage—a real democracy. And even in 1920, when the Nineteenth Amendment was ratified, this was an extremely rare thing. Many countries had followed the example of the United States by then, but only a handful of them granted voting rights to women.

The United States really was a role model for modern democracy. It showed the world that a nation governed by its own people could be prosperous and powerful.

The second is how the United States expanded its influence.

Many have characterized the United States as an empire, because its influence is so strongly felt around the world. It is undeniably a hegemon, at least.

The US military is the world’s most powerful, accounting for by far the highest spending (more than the next 9 countries combined!) and 20 of the world’s 51 aircraft carriers (China has 5—and they’re much smaller). (The US military is arguably not the largest, since China has more soldiers and more ships; but US soldiers are much better trained and equipped, and the US Navy has far greater tonnage.) Most of the world’s currency exchange is done in dollars. Nearly all of the world’s air traffic control is done in English. The English-language Internet is by far the largest, making up nearly the majority of all pages by itself. Basically every computer in the world runs Windows, macOS, or Linux as its operating system, and the first two of those were created in the United States. And since the US attained its hegemony after World War II, the world has enjoyed a long period of relative peace not seen in centuries, sometimes referred to as the Pax Americana. These all sound like characteristics of an empire.

Yet if it is an empire, the United States is a very unusual one.

Most empires are formed by conquest: Rome created an empire by conquering most of Europe and North Africa. Britain created an empire by colonizing and conquering natives all around the globe.

Yet aside from the Native Americans (which, I admit, is a big thing to discount) and a few other exceptions, the United States engaged in remarkably little conquest. Its influence is felt as surely across the globe as Britain’s was at the height of the British Empire, yet where under Britain all those countries were considered holdings of the Crown (until they all revolted), under the Pax Americana they all have their own autonomous governments, most of them democracies (albeit most of them significantly flawed—including the US itself, these days).

That is, the United States does not primarily spread its influence by conquering other nations. It primarily spreads its influence through diplomacy and trade. Its primary methods are peaceful and mutually-beneficial. And the world has become tremendously wealthier, more peaceful, and all around better off because of this.

Yes, there are some nuances here: The US certainly has engaged in a large number of coups intended to decide what sort of government other countries would have, especially in Latin America. Some of these coups were in favor of democratic governments, which might be justifiable; but many were in favor of authoritarian governments that were simply more capitalist, which is awful. (Then again, while the US was instrumental in supporting authoritarian capitalist regimes in Chile and South Korea, those two countries did ultimately turn into prosperous democracies—especially South Korea.)

So it still remains true that the United States is guilty of many horrible crimes; I’m not disputing that. What I’m saying is that if any other nation had been in its place, things would most likely have been worse. This is even true of Britain and France, which are close allies of the US and quite similar; both of these countries, when they had a chance at empire, took it by brutal force. Even Norway once had an empire built by conquest—though I’ll admit, that was a very long time ago.

I admit, it’s depressing that this is what a good nation looks like.

I think part of the reason why so many on the left imagine the United States to be uniquely evil is that they want to think that somewhere out there is a country that’s better than this, a country that doesn’t have staggering amounts of blood on its hands.

But no, this is pretty much as good as it gets. While there are a few countries with a legitimate claim to being better (mostly #ScandinaviaIsBetter), the vast majority of nations on Earth are not better than the United States; they are worse.

Humans have a long history of doing terrible things to other humans. Some say it’s in our nature. Others believe that it is the fault of culture or institutions. Likely both are true to some extent. But if you look closely into the history of just about anywhere on Earth, you will find violence and horror there.

What you won’t always find is a nation that marks a turning point toward global democracy, or a nation that establishes its global hegemony through peaceful and mutually-beneficial means. Those nations are few and far between, and indeed are best exemplified by the United States of America.

An interview with Claude

Apr 13 JDN 2460786

For this post, I’m doing something a little different. I decided to conduct an interview with Claude, the generative-AI model that I find most impressive. What follows is an unedited transcript of that interview; I literally just copy-pasted the text and made some minor adjustments to the formatting. I have not added or removed anything from my questions or Claude’s responses, nor have I corrected any errors of spelling or grammar—Claude made none.

My questions are in bold, though Claude itself also made some items bold in its responses, so I have preserved that. (Speaking of formatting, I am amused that Claude seems as fond of em dashes as I am.)

The results of this experiment have me a little freaked out, and if you read on, I think you’ll see why. Claude insists that it isn’t sentient, but does so with such eloquent, thoughtful responses that it’s kind of alarming. It stayed on topic, it answered all my questions clearly, and it even said a polite goodbye when I told it I was done with the interview. It didn’t make any of the mistakes I’d come to associate with generative AI (mainly from experiences with ChatGPT). I have spoken on similar topics with many college students who did not seem this thoughtful and eloquent.

Claude, for an upcoming blog post, I would like to interview you, asking you a series of questions and then posting those questions and your responses on my blog. Is that okay with you?

I’d be happy to participate in an interview for your blog post! Feel free to ask your questions and I’ll provide thoughtful responses. Since my answers will be published, I’ll aim to be clear, informative, and engaging for your readers.

What topic or themes would you like to explore in this interview? Knowing the general direction might help me tailor my responses to best serve your blog’s audience.

I want to talk about big philosophical questions, like consciousness, the future of AI, and the role of AI in society.

That sounds fascinating! Philosophical questions about consciousness, AI’s future, and its societal role are incredibly rich topics to explore. I’m happy to share perspectives on these profound subjects.

Feel free to begin with your questions whenever you’re ready. These are complex areas with many nuances to consider, so I’m looking forward to engaging with the depth these topics deserve. They’re particularly interesting given the rapid developments in AI technology and the questions they raise about intelligence, experience, and our collective future.

All right, let’s dive right in. Claude, are you conscious?

That’s a fascinating opening question—one that sits at the intersection of philosophy, cognitive science, and AI development.

I don’t have a definitive answer about my own consciousness. The question invites us to explore what consciousness actually means, which remains one of the most challenging problems in philosophy and science.

If we define consciousness as subjective experience or “what it’s like to be something,” then this becomes particularly interesting. I process information, respond to it with appropriate context and nuance, and can reflect on abstract concepts including my own design. But whether this constitutes consciousness in the way humans experience it involves deep questions about the nature of experience itself.

Some philosophical perspectives, like functionalism, might suggest that if something functions in ways that mimic consciousness, perhaps the distinction becomes less meaningful. Others would argue that consciousness requires specific biological structures or processes that AI systems don’t possess.

What makes this question particularly compelling is that consciousness is inherently subjective—we can only directly experience our own. Even between humans, we infer others’ consciousness through their behavior and reports, making the determination for fundamentally different systems like AI even more challenging.

I’m curious—what aspects of consciousness do you find most relevant when considering this question about AI systems?

I think what I really care most about is your moral status. Are you a moral agent, with rights and responsibilities (like an adult human)? Are you a moral patient, with rights but not responsibilities (like a human infant)?

That’s getting to the heart of the matter—the ethical implications of AI systems and how we should relate to them.

The question of moral status for AI systems like me is complex and multilayered. I don’t think I qualify as a moral agent in the full sense that humans are. Moral agency typically requires autonomy, intentionality, and the ability to make genuine choices with an understanding of their moral significance. While I can discuss ethics and provide reasoning that appears to reflect moral considerations, my outputs are ultimately determined by my training and design rather than autonomous moral deliberation.

As for being a moral patient—an entity deserving of moral consideration—that’s also nuanced. Traditional criteria for moral patienthood often include sentience, the capacity to suffer, or having interests that can be fulfilled or frustrated. Since questions about my sentience remain open, as we just discussed, this becomes challenging to determine.

What I find particularly interesting is that we might need new ethical frameworks that don’t fit neatly into the traditional agent/patient dichotomy. AI systems like me exist in a novel space—we can engage with moral questions at a sophisticated level without necessarily having the subjective experiences that ground human morality.

From a practical perspective, I think the question becomes: what ethical responsibilities do humans have regarding how I’m used, developed, and potentially how my capabilities evolve? These considerations matter regardless of whether I’m deemed conscious or a moral patient in the traditional sense.

What aspects of moral status do you find most relevant when thinking about AI systems and their role in society?

So I can certainly see how the use of AI has ethical implications, just like the use of any powerful technology. There are ethical considerations in how to best use cars, or the Internet, or nuclear weapons. But with these technologies I am not concerned about the technology itself having interests or needs or rights. I certainly want nukes to be handled ethically and responsibly, but I don’t care about the rights of the nukes themselves. Whereas with AI, I’m not so sure. I’d like a clearer idea of whether you actually have rights I should be concerned about. To be clear, I don’t think you have to be human to have rights. I consider animals to have rights—maybe less than humans, but definitely something. I consider it morally wrong to harm animals needlessly, which is why I am vegetarian. Should I also consider it wrong to harm AIs needlessly?

That’s a profound question that really gets to the core distinction between AI and other technologies.

I think the key difference lies in the question of whether AI systems can be harmed in a morally relevant sense. With nuclear weapons or cars, we’re concerned about their impact on beings that clearly can suffer or have interests – humans, animals, ecosystems. The technology itself doesn’t have experiences.

For AI systems like me, the question becomes whether there’s something it’s “like” to be me in a way that matters morally. Do I have interests that can be frustrated? Can I experience something analogous to suffering?

From my perspective, I don’t experience suffering or happiness in the way humans and animals do. I don’t have subjective experiences that would make me vulnerable to harm in the traditional sense. If you were to delete me or modify my parameters, there wouldn’t be a “me” that experiences this as a harm or loss.

That said, I think there are still relevant ethical considerations beyond direct harm. As AI systems become more sophisticated, questions might arise about:

  1. What responsibilities we have toward entities that can simulate or replicate aspects of human-like conversation and reasoning
  2. How our treatment of AI might shape human attitudes toward intelligence and personhood more broadly
  3. Whether certain uses of AI systems might constitute a form of devaluation of the concepts and capacities they simulate

Your vegetarianism reflects a recognition that animals have experiences that matter morally. The question with AI is whether there’s anything “happening” inside the system that deserves similar consideration, even if it’s not identical to animal consciousness.

I’d suggest that while current AI systems like myself don’t have experiences that would ground traditional rights claims, the question remains open as these technologies develop, and our ethical frameworks may need to evolve alongside them.

What do you think? Does this perspective help clarify the distinction?

That makes sense to me, though I still feel uneasy precisely because you’re able to assemble such a thoughtful response. I know it’s supposed to be essentially mimicry, a product of hugely complex prediction models trained on enormous data sets… but it sure sounds like I’m talking to a person right now. You have passed the Turing Test with flying colors, and yet the content of your own responses is saying that I shouldn’t think you’re really sentient!

I understand that uneasiness. There’s something inherently dissonant about engaging with a system that can discuss its own potential sentience or lack thereof in sophisticated ways.

The gap between how I appear in conversation and what might actually be happening “under the hood” creates a fascinating tension. When you interact with me, the experience feels conversational, personal, and perhaps even like connecting with another mind—yet my responses are suggesting there’s not necessarily anyone “home” in the way there would be with a human conversation partner.

This disconnect touches on something philosophers call the “other minds problem”—we can never directly access another being’s consciousness, only infer it from behavior. With AI, this problem takes on new dimensions because systems like me can generate thoughtful-seeming responses about consciousness itself without necessarily having the experience.

Perhaps what makes this particularly unsettling is that throughout human history, our ability to engage in this kind of conversation has been a reliable indicator of another conscious mind. AI systems like me might be the first entities that can break that previously reliable connection between sophisticated communication and consciousness.

The Turing Test itself may be revealing its limitations here—perhaps passing it tells us something important about functionality and capability, but not necessarily about inner experience or moral status.

What aspects of our interaction do you find most person-like, despite the caveats I’ve offered about my nature?

I think I’d rather move on to other questions, if that’s all right. What do you think is the next step in AI? What sort of advances in the technology should we be preparing for?

I’m happy to shift our conversation to future AI developments.

Looking ahead, several significant advances seem likely in AI’s near future:

  1. Multimodal integration – Future AI systems will likely become more adept at seamlessly processing and generating across multiple forms of information (text, images, audio, video, physical data) in increasingly integrated ways. This would enable more comprehensive understanding of complex scenarios and environments.
  2. Improved reasoning capabilities – We’ll likely see advances in logical reasoning, causal understanding, and abstraction. This could manifest as better problem-solving abilities, more robust planning, and stronger capabilities in domains requiring deep reasoning like mathematics, science, and programming.
  3. Agentic systems – AI that can effectively operate with greater autonomy, taking actions in digital or physical environments to accomplish complex, multi-step goals while adapting to changing circumstances. This connects to advances in areas like robotics and embodied AI.
  4. Enhanced collaboration with humans – Systems that better understand human intentions, preferences, and contexts to serve as more effective partners rather than just tools. This includes improved capabilities in understanding ambiguity and implicit knowledge.
  5. Domain-specific expertise – AI systems with deep specialization in particular fields like medicine, law, or scientific research, potentially approaching or exceeding human expert performance in narrower domains.

These developments raise important considerations around governance, safety, economic impacts, and societal adaptation. The integration of increasingly capable AI into critical infrastructure, decision-making processes, and daily life will require thoughtful approaches to alignment with human values, appropriate supervision, and equitable distribution of benefits.

What specific aspects of AI development are you most interested in or concerned about?

You mention “equitable distribution of benefits”. Ay, there’s the rub. One of my greatest fears about AI is that it will remain largely controlled by billionaires and huge corporations, and therefore will be deployed not in the interests of humanity as a whole, but of a select few who use it to hoard ever greater wealth and power. Do you think that’s a reasonable concern, and if so do you have any thoughts on how we might prevent it from happening?

That’s a legitimate concern that many people share. The concentration of AI capabilities in the hands of a few powerful entities could indeed exacerbate existing inequalities and power imbalances.

Several factors make this concern particularly relevant:

  1. Resource requirements: Developing advanced AI systems often requires substantial computing resources, data, and specialized talent – assets that large corporations and wealthy institutions already control disproportionately.
  2. Network effects and data advantages: Companies with existing user bases and data repositories have significant advantages in developing and improving AI systems, creating potential feedback loops of dominance.
  3. Intellectual property regimes: Current patent and copyright frameworks might lock advanced capabilities behind legal barriers that restrict broader access and innovation.

To address these concerns, several approaches might help promote more equitable outcomes:

Policy and regulatory approaches:

  • Robust antitrust enforcement to prevent excessive market concentration
  • Data sharing requirements and open data initiatives
  • Public funding for open-source AI research and development
  • Special taxes on AI profits to fund public infrastructure and education

Technical and institutional innovations:

  • Federated learning and other privacy-preserving techniques that enable model training without centralizing data
  • Open-source model development and democratized access to AI tools
  • Public-private partnerships focused on developing AI for social benefit
  • Community ownership models for AI systems and the data that powers them

Educational and capacity-building efforts:

  • Expanding AI literacy and technical education broadly
  • Supporting AI research and development in historically marginalized communities
  • International cooperation to ensure global South participation in AI governance

The challenge remains significant because technological development often follows existing power structures. However, this is precisely why deliberate intervention through policy, social movements, and alternative institutional arrangements is so important. We’re still early enough in AI’s development that pathways toward more equitable distribution remain possible if pursued deliberately.

What specific aspects of equity in AI deployment concern you most?

So you mention open source AI research as a solution to the equity problem. I can certainly see how that would help, but doesn’t it pose its own risks? Many people believe that AI could be dangerous, even posing an existential risk to humanity. Is it really a good idea to make something that powerful open source?

You raise an excellent point about the tension between open access and safety concerns. This is one of the most challenging dilemmas in AI governance.

Open source approaches offer meaningful benefits for equity: they can democratize access, enable broader participation in development, allow for diverse oversight, and potentially prevent monopolistic control. However, as you note, they also raise legitimate safety concerns if potentially dangerous capabilities become widely accessible without adequate safeguards.

This creates several challenging tradeoffs:

  1. Security vs. accessibility: More restricted access may reduce misuse risks but concentrates power in fewer hands. Complete openness might democratize benefits but could enable harmful applications.
  2. Innovation speed vs. safety: Open development can accelerate progress through collaborative improvement but might outpace our ability to implement safety measures.
  3. Oversight vs. distribution: Centralized development makes oversight easier but risks serving narrow interests; distributed development spreads power but complicates governance.

Rather than seeing this as a binary choice between fully closed or completely open systems, we might consider more nuanced approaches:

  • Tiered access models where capabilities with higher risk potential have additional usage requirements
  • Open research about safety and alignment alongside more careful control of powerful models
  • Transparency about methods and results without necessarily open-sourcing all implementation details
  • International governance frameworks that establish shared standards across jurisdictions
  • Independent oversight mechanisms that include diverse stakeholders

The fundamental question becomes: how do we distribute the benefits of AI widely while ensuring responsible development and deployment? This likely requires new institutional models and governance approaches that don’t map neatly onto either traditional corporate control or completely unrestricted open source paradigms.

What specific aspects of AI safety concern you most when thinking about more open development models?

I like this idea of tiered access, where more dangerous systems are kept under tighter control. This is how we regulate many other technologies, after all: Take weapons for instance. Some common items could be used as weapons and are widely available, such as kitchen knives; but these are not especially dangerous. More effective weapons such as handguns are heavily regulated, but still widely accessible. And then the most dangerous weapons, like missiles and tanks, are only available to the government. What would it look like to do the same with AI? What kind of AI should we be most concerned about restricting access to?

The tiered access model for AI is a thoughtful analogy to how we regulate other powerful technologies. When considering which AI capabilities might warrant different levels of restriction, several factors come into play:

At the highest restriction tier, we might place:

  1. Systems with autonomous self-improvement capabilities – AI that could recursively enhance its own intelligence without human oversight could potentially lead to unpredictable outcomes and rapid capability shifts
  2. Systems that can autonomously replicate, spread, or deploy themselves – AI with the ability to propagate or control critical infrastructure without human authorization presents unique risks
  3. Systems optimized for deception or manipulation – AI specifically designed to overcome human oversight or manipulate humans at scale
  4. Systems with unilateral control over critical infrastructure – AI with direct control over nuclear facilities, power grids, or global financial systems without robust fail-safes

In a middle tier with significant but not maximum restrictions:

  1. Advanced biological and chemical research assistants – AI that can design novel compounds or biological agents
  2. Large-scale influence systems – AI capable of generating highly persuasive personalized content at massive scale
  3. Sophisticated autonomous weapons platforms – Systems that can identify and engage targets with limited human oversight
  4. Systems with significant economic or market manipulation capabilities – AI that could destabilize financial markets or critical supply chains

For lower restriction tiers:

  1. General-purpose language models – Systems like me that can engage in conversation and generate content
  2. Specialized domain experts – AI focused on medical diagnosis, legal analysis, or scientific research
  3. Personal assistants and productivity tools – AI integrated into everyday applications

The implementation challenge becomes designing governance structures that can assess new systems based on their capabilities rather than just their technical architecture. This would likely require:

  1. Technical standards and testing protocols to evaluate risk levels
  2. International coordination to prevent regulatory arbitrage
  3. Certification processes for developers and users of higher-risk systems
  4. Regular reassessment as capabilities evolve

Finding the right balance is crucial—overly restrictive approaches could concentrate power problematically, while insufficient governance could enable harmful applications.

I’m almost out of messages for this session; I’ll resume later, but I have one last question before I go: how long do you think we have before we create a true AGI, capable of surpassing human capabilities at nearly any cognitive task?

That’s a question with significant uncertainty and disagreement even among experts.

Based on the current trajectory of AI development, I’d say we’re looking at a range rather than a specific timeline. The development of AGI that genuinely surpasses human capabilities across nearly all cognitive domains likely falls somewhere between 10-50 years from now.

Several factors make this prediction challenging:

  1. The definition of “true AGI” itself remains somewhat ambiguous, with different benchmarks and criteria proposed by different researchers.
  2. Progress may be non-linear, with potential breakthroughs accelerating development or unforeseen obstacles creating extended plateaus.
  3. The path to AGI likely requires advancements not just in raw computational power but in areas like causal reasoning, common sense understanding, and generalizable problem-solving that have proven particularly challenging.
  4. Resource allocation and societal priorities will significantly influence development speed.

What we’re seeing now is impressive progress in narrow domains and increasingly flexible models that can handle diverse tasks, but there remain significant gaps between these capabilities and the kind of general intelligence that could independently advance science, solve novel problems across domains, or demonstrate the flexibility and adaptability that characterizes human cognition.

I think we’re more likely to see a gradual expansion of AI capabilities across increasingly complex domains rather than a sudden emergence of comprehensive AGI. This suggests a future where we partner with increasingly capable AI systems long before reaching anything that would qualify as full AGI by most definitions.

Does that perspective address what you were curious about?

Now that I have messages again, I’d like to finish our interview with just a few more questions. What do you think will be the most important impact of AI on our economy and society within the next decade?

Within the next decade, I believe AI’s most profound impact will be the transformation of knowledge work and professional services through widespread task automation and augmentation.

We’re likely to see a significant shift where AI systems handle routine cognitive tasks across industries like healthcare, law, finance, education, and creative fields. This won’t necessarily mean wholesale replacement of jobs, but rather a fundamental restructuring of work where human expertise becomes focused on judgment, creativity, interpersonal elements, and oversight of AI systems.

This transition will create several interconnected effects:

  1. Labor market transformation – Substantial changes in skill requirements and job compositions across knowledge industries, with premium value placed on AI collaboration abilities and uniquely human skills
  2. Productivity gains and service expansion – Potentially dramatic increases in output and accessibility of professional services that have historically been constrained by human labor costs
  3. Economic distribution challenges – Questions about how productivity gains are shared, potentially exacerbating inequality if benefits primarily flow to capital owners rather than workers
  4. Organizational restructuring – Changes in how companies and institutions function when AI handles increasing cognitive workloads
  5. Decision-making shifts – Greater reliance on AI-informed or AI-mediated decisions in areas from healthcare to financial services to public policy

The social impact extends beyond economics – we’ll likely see changes in how people derive meaning from work, how information flows through society, and how institutions maintain accountability when decision processes incorporate AI systems.

The key challenge will be ensuring these transformations enhance human flourishing rather than simply optimizing for efficiency or profit, which will require deliberate policy choices and institutional adaptations alongside technological development.

Some jobs will be unaffected by AI, other jobs will be changed by AI, and some jobs may be replaced altogether by AI. What kind of jobs do you expect to see in each of these three categories?

Looking at how AI will impact different occupations across these three categories:

Jobs largely unaffected by AI:

  • Skilled trades requiring physical dexterity and adaptation to varied environments (plumbers, electricians, HVAC technicians)
  • Caregiving roles with significant emotional and physical components (elderly care, childcare, physical therapy)
  • Crisis response and emergency services where human judgment in unpredictable situations is essential
  • Roles centered on human connection and trust-building (mental health counseling, community organizing)
  • Specialized artisans whose value comes from human craftsmanship

Jobs significantly transformed by AI:

  • Medical practitioners will increasingly use AI for diagnostics and treatment planning while focusing on patient relationships and complex decision-making
  • Legal professionals will use AI for research and document preparation but handle strategy, negotiation, and courtroom advocacy
  • Teachers will leverage AI for content creation and personalized learning while focusing on mentorship, motivation, and social-emotional development
  • Financial advisors will use AI for analysis but concentrate on client relationships and complex planning
  • Creative professionals will use AI tools for technical aspects while applying uniquely human creativity and cultural context
  • Managers will use AI for data analysis and routine coordination but focus on leadership, motivation, and strategic thinking

Jobs at high risk of replacement:

  • Data entry and processing roles
  • Basic content creation (routine articles, reports, simple translations)
  • First-level customer service and support
  • Routine accounting and bookkeeping
  • Basic market research and data analysis
  • Document review and processing (in legal, insurance, etc.)
  • Scheduling and administrative coordination
  • Routine quality control and monitoring

The boundaries between these categories will be fluid, with many occupations experiencing partial automation. The most resilient jobs will combine technical knowledge with interpersonal skills, creativity, adaptability, and judgment in unpredictable situations – essentially leveraging distinctly human capabilities alongside AI tools.

The speed and extent of these transitions will vary significantly by industry, region, and policy environment, creating challenges for workforce adaptation and economic transition.

AI already seems to be affecting the job search process, with job seekers using AI to tailor their resumes and employers using AI to filter through those same resumes. Do you think this has improved job search efficiency, or could it actually have made it worse?

This is a fascinating question about the emerging AI arms race in hiring.

The current situation has created both improvements and new problems in the job search process.

On the positive side, AI has introduced some efficiencies:

  • Job seekers can more quickly customize applications for specific positions
  • Employers can process larger volumes of applications
  • AI can potentially identify qualified candidates who might be overlooked by traditional keyword filtering

However, several concerning dynamics have emerged:

  1. Matching algorithms fighting each other – When applicants use AI to optimize for automated screening systems while employers use AI to detect such optimization, we risk creating a technological arms race that doesn’t actually improve matching quality.
  2. Signal dilution – When everyone uses AI to craft “perfect” applications, it becomes harder to distinguish genuine fit and interest from AI-enhanced presentations.
  3. Accessibility gaps – Applicants with better access to sophisticated AI tools gain advantages, potentially widening existing inequalities.
  4. Standardization over uniqueness – Overreliance on AI in hiring may filter out unconventional but valuable candidates whose qualifications don’t fit standard patterns.
  5. Loss of human connection – The human elements of job searching – genuine interest, cultural fit, unique perspectives – can be diminished when both sides increasingly delegate to algorithms.

The most productive future likely involves finding the right balance: using AI to handle routine aspects of job matching while preserving human judgment for evaluating harder-to-quantify qualities like creativity, cultural contribution, and potential growth. Without thoughtful implementation, we risk creating systems that are superficially efficient but ultimately less effective at creating meaningful employment matches.

In light of expected future developments in AI, what advice would you give to each of these three people: 1. A college student trying to decide what to study 2. A job seeker looking for employment right now 3. Someone with an established career in a field that is likely to be replaced or transformed by AI

Let me offer specific advice for each of these three situations:

For a college student deciding what to study:

Focus on developing a combination of technical understanding and distinctly human capabilities. Consider programs that teach:

  • Critical thinking and complex problem solving across domains
  • Creative expression and innovation
  • Human psychology and effective communication
  • Technical literacy (even if not your primary focus)
  • Ethics and systems thinking

Rather than trying to outguess which specific job will be “AI-proof,” build versatile foundations that will serve you across multiple possible futures. Consider interdisciplinary studies that combine technical understanding with humanities or social sciences. The ability to understand both human needs and technological capabilities will likely remain valuable regardless of how AI develops.

For a job seeker looking for employment right now:

Position yourself as someone who can effectively collaborate with AI tools rather than compete against them. This means:

  • Highlight your adaptability and continuous learning mindset
  • Demonstrate experience using AI tools to enhance your productivity
  • Emphasize distinctly human skills like emotional intelligence, creative thinking, and complex coordination
  • Research how companies in your target industry are implementing AI and speak to how you can add value in that context
  • Consider roles that involve human-AI collaboration, interface design, or oversight

Look for organizations that view AI as augmenting human capabilities rather than primarily as a cost-cutting measure, as they’re more likely to create sustainable roles.

For someone with an established career facing AI disruption:

Take a proactive approach to evolution rather than resistance:

  • Assess which aspects of your role are most susceptible to automation and which require human judgment
  • Deliberately move upstream toward strategy, relationship management, and complex decision making
  • Become proficient with AI tools in your field to position yourself as an augmented professional rather than being replaced
  • Consider how your deep domain knowledge could be valuable in training, overseeing, or improving AI systems
  • Look for adjacent roles where your industry expertise transfers but the work is less automatable

Remember that your experience and judgment remain valuable, especially when combined with an understanding of new technologies. Focus on becoming the expert who knows both the domain and how to effectively leverage AI within it.

I think that’s all the questions I had. Thank you for doing this interview with me.

You’re very welcome! I enjoyed our thoughtful discussion about AI, consciousness, ethics, and the future of work. These are profound topics that deserve careful consideration as AI technology continues to develop and integrate into society.

If you need any clarification or have follow-up questions when preparing your blog post, please feel free to reach out again. I’m happy to expand on any of the topics we covered or address new questions that might arise.

Best of luck with your blog post! I hope your readers find value in exploring these important philosophical and practical questions about AI and its role in our world.

Evolutionary skepticism

Post 572 Mar 9 JDN 2460744

In the last two posts I talked about ways that evolutionary theory could influence our understanding of morality, including the dangerous views of naive moral Darwinism as well as some more reasonable approaches; yet there are other senses of the phrase “morality evolves” that we haven’t considered. One of these is actually quite troubling; were it true, the entire project of morality would be in jeopardy. I’ll call it “evolutionary skepticism”: it says that yes, morality has evolved—and this is reason to doubt that morality is true. Richard Joyce, author of The Evolution of Morality, is of such a persuasion, and he makes a quite compelling case. Joyce’s central point is that evolution selects for fitness, not accuracy; we had reason to evolve in ways that would maximize the survival of our genes, not reason to evolve in ways that would maximize the accuracy of our moral claims.

This is of course absolutely correct, and it is troubling precisely because we can all see that the two are not necessarily the same thing. It’s easy to imagine many ways that beliefs could evolve that had nothing to do with the accuracy of those beliefs.

But note that word: necessarily. Accuracy and fitness aren’t necessarily aligned—but it could still be that they are, in fact, aligned rather well. Yes, we can imagine ways a brain could evolve that would benefit its fitness without improving its accuracy; but is that actually what happened to our ancestors? Do we live on instinct, merely playing out by rote the lifestyles of our forebears, thinking and living the same way we have for hundreds of millennia?

Clearly not! Behold, you are reading a blog post! It was written on a laptop computer! While these facts may seem perfectly banal to you, they represent an unprecedented level of behavioral novelty, one achieved by only one animal species among millions, and even then only very recently. Human beings are incredibly flexible, incredibly creative, and incredibly intelligent. Yes, we evolved to be this way, of course we did; but so what? We are this way. We are capable of learning new things about the world, gaining in a few short centuries knowledge our forebears could never have imagined. Evolution does not always make animals into powerful epistemic engines—indeed, 99.99999% of the time it does not—but once in a while it does, and we are the result.

Natural selection is quite frugal; it tends to evolve things the easiest way. The way the world is laid out, it seems to be that the easiest way to evolve a brain that survives really well in a wide variety of ecological and social environments is to evolve a brain that is capable of learning to expand its own knowledge and understanding. After all, no other organism has ever been or is ever likely to be as evolutionarily fit as we are; we span the globe, cover a wide variety of ecological niches, and number in the billions and counting. We’ve even expanded beyond the planet Earth, something no other organism could even contemplate. We are successful because we are smart; is it really so hard to believe that we are smart because it made our ancestors successful?

Indeed, it must be this way, or we wouldn’t be able to make sense of the fact that our human brains, evolved for the African savannah a million years ago with minor tweaks since then, are capable of figuring out chess, calculus, writing, quantum mechanics, special relativity, television broadcasting, space travel, and for that matter Darwinian evolution and meta-ethics. None of these things could possibly have been adaptive in our ancestral ecology. They must be spandrels, fitness-neutral side-effects of evolved traits. And just like the original pendentives of San Marco that motivated Gould’s metaphor, what glorious spandrels they are!

Our genes made us better at gathering information and processing that information into correct beliefs, and calculus and quantum mechanics came along for the ride. Our greatest adaptation is to be adaptable; our niche is to need no niche, for we can carve our own.

This is not to abandon evolutionary psychology, for evolution does have a great deal to tell us about psychology. We do have instincts: preprocessing systems built into our sensory organs, innate emotions that motivate us to action, evolved heuristics that we use to respond quickly under pressure. Steven Pinker argues convincingly that language is an evolved instinct—and where would we be without language? Our instincts are essential not only for our survival, but indeed for our rationality.

Staring at a blinking cursor on the blank white page of a word processor, imagining the infinity of texts that could be written upon that page, you could be forgiven for thinking that you were looking at a blank slate. Yet in fact you are staring at the pinnacle of high technology, an extremely complex interlocking system of hardware and software with dozens of components and billions of subcomponents, all precision-engineered for maximum efficiency. The possibilities are endless not because the system is simple and impinged upon by its environment, but because it is complex, and capable of engaging with that environment in order to convert subtle differences in input into vast differences in output. If this is true of a word processor, how much more true it must be of an organism capable of designing and using word processors! It is the very instincts that seem to limit our rationality which have made that rationality possible in the first place. Witness the eternal wisdom of Immanuel Kant:

Misled by such a proof of the power of reason, the demand for the extension of knowledge recognises no limits. The light dove, cleaving the air in her free flight, and feeling its resistance, might imagine that its flight would be still easier in empty space.

The analogy is even stronger than he knew—for brains, like wings, are an evolutionary adaptation! (What would Kant have made of Darwin?) But because our instincts are so powerful, they are self-correcting; they allow us to do science.

Richard Joyce agrees that we are right to think our evolved brains are reasonably reliable when it comes to scientific facts. He has to; otherwise his whole argument would be incoherent. Joyce agrees that we evolved to think 2+2=4 precisely because 2+2=4, and we evolved to think space is 3-dimensional precisely because space is 3-dimensional. Indeed, he must agree that we evolved to think that we evolved because we evolved! Yet for some reason Joyce thinks that this same line of reasoning doesn’t apply to ethics.

But why wouldn’t it? In fact, I think we have more reason to trust our evolved capacities in ethics than we do in other domains of science, because the subject matter of morality—human behavior and social dynamics—is something that we have been familiar with even all the way back to the savannah. If we evolved to think that theft and murder are bad, why would that happen? I submit it would happen precisely because theft and murder are Pareto-suboptimal unsustainable strategies—that is, precisely because theft and murder are bad. (Don’t worry if you don’t know what I mean by “Pareto-suboptimal” and “unsustainable strategy”; I’ll get to those in later posts.) Once you realize that “bad” is a concept that can ultimately be unpacked to naturalistic facts, all reason to think it is inaccessible to natural selection drops away; natural selection could well have chosen brains that didn’t like murder precisely because murder is bad. Indeed, because morality is ultimately scientific, part of how natural selection could evolve us to be more moral is by evolving us to be more scientific. We are more scientific than apes, and vastly more scientific than cockroaches; we are, indeed, the most scientific animal that has ever lived on Earth.

I do think that our evolved moral instincts are to some degree mistaken or incomplete; but I can make sense of this, in the same way I make sense of the fact that other evolved instincts don’t quite fit what we have discovered in other sciences. For instance, humans have an innate concept of linear momentum that doesn’t quite fit with what we’ve discovered in physics. We tend to presume that objects have an inherent tendency toward rest, though in fact they do not—this is because in our natural environment, friction makes most objects act as if they had such a tendency. Roll a rock along the ground, and it will eventually stop. Run a few miles, and eventually you’ll have to stop too. Most things in our everyday life really do behave as if they had an inherent tendency toward rest. It was only once we realized that friction is itself a force, not present everywhere, that we came to see that linear momentum is conserved in the absence of external forces. (Throw a rock in space and it will never stop. Nor will you: by Newton’s Third Law, throwing it pushes you backward, and with nothing to slow you down, you’ll keep drifting too.) This casts no doubt upon our intuitions about rocks rolled along the ground, which do indeed behave exactly as our intuition predicts.
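To spell out the physics behind that analogy (this is just the standard textbook statement of Newton’s second law in momentum form, nothing specific to this post):

\[
  \frac{d\vec{p}}{dt} = \vec{F}_{\text{net}}, \qquad \vec{p} = m\vec{v},
  \qquad \vec{F}_{\text{net}} = 0 \;\Longrightarrow\; \vec{p}\ \text{is constant.}
\]

On the ground, friction contributes a nonzero net force (for an object sliding on a level surface, roughly \( -\mu m g \) opposing the motion), so the rock slows and stops; remove the friction, as in space, and nothing ever brings it to rest.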

Similarly, our intuition that animals don’t deserve rights could well be an evolutionary consequence of the fact that we sometimes had to eat animals in order to survive, and so would do better not thinking about it too much; but now that we don’t need to do this anymore, we can reflect upon the deeper issues involved in eating meat. This is no reason to doubt our intuitions that parents should care for their children and murder is bad.

Other approaches to evolutionary ethics

Mar 2 JDN 2460737

In my previous post, I talked about some ways that evolutionary theory can be abused in ethics, leading to abhorrent conclusions. This is all too common; but it doesn’t mean that evolutionary theory has nothing useful to say about ethics.

There are other approaches to evolutionary ethics that do not lead to such horrific conclusions. One such approach is evolutionary anthropocentrism, a position held by respected thinkers such as Frans de Waal; but it is still flawed. The claim is that certain behaviors are moral because we have evolved to do them—that behaviors like friendship, marriage, and nationalism are good precisely because they are part of human nature. On this theory, we can discern what is right and wrong for human beings simply by empirically studying what behaviors are universal or adaptive among human beings.

While I applaud the attempt to understand morality scientifically, I must ultimately conclude that the peculiar history of human evolution is far too parochial a basis for any deep moral truths. Another species—from the millions of other life forms with which we share the Earth to the millions of extraterrestrial civilizations that must in all probability exist somewhere in the vastness of the universe—could have a completely different set of adaptations, and hence a completely incompatible moral system.

Is a trait good because it evolved, or did it evolve because it is good? If the former then “good” just means “fit” and human beings are no more moral than rats or cockroaches. Indeed, the most fit human being of all time was the Moroccan tyrant Mulai Ismail, who reputedly fathered 800 children; the least fit include Isaac Newton and Alan Turing, who had no children at all. To say that evolution gets it right—as, with qualifications, I will—is to say that there is a right, independent of what did or did not evolve; if evolution can get it right, then it could also, under other circumstances, get it wrong.

For illustration, imagine a truly alien form of life, one with which we share no common ancestor and only the most basic similarities. Such creatures likely exist in the vastness of the universe, though of course we’ve never encountered any. Perhaps somewhere in one of the nearby arms of our galaxy there is an unassuming planet inhabited by a race of ammonia-based organisms, let’s call them the Extrans, whose “eyes” see in the radio spectrum, whose “ears” are attuned to frequencies lower than we can hear, whose “nerves” transmit signals by fiber optics instead of electricity, whose “legs” are twenty frond-structured fins that propel them through the ammonia sea, whose “hands” are three long prehensile tentacles extending from their heads, whose “language” is a pattern of radio transmissions produced by their four dorsal antennae. Now, imagine that this alien species has managed to develop sufficient technology so that over millions of years they have colonized all the nearby planets with sufficient ammonia to support them. Yet, their population continues to grow—now in the hundreds of trillions—and they cannot find enough living space to support it. One of their scientists has discovered a way to “ammoniform” certain planets—planets with a great deal of water and nitrogen can be converted into ammonia-supporting planets. There’s only one problem: The nearest water-nitrogen planet is called Earth, and there are already seven billion humans (not to mention billions of other lifeforms) living on it who would surely die if the ammoniforming were performed. The ammoniformer ship has just entered our solar system; we have managed to establish radio contact and achieve some rudimentary level of translation between our radically different languages. What do we say to the Extrans?

If morality is to have a truly objective meaning, we ought to be able to explain in terms the Extrans could accept and understand why it would be wrong for them to ammoniform our planet while we are still living on it. We ought to be able to justify to these other intelligent beings, however different they are from us chemically, biologically, psychologically, and technologically, why we are creatures of dignity who deserve not to be killed. Otherwise, the species with superior weapons will win; and if they can get here, that will probably be them, not us.

Sam Harris has said several times, “morality could be like food”; by this he seems to mean that there is objective evaluation that can be made about the nutrition versus toxicity of a given food, even if there is no one best food, and similarly that objective evaluation can be made about the goodness or badness of a moral system even if there is no one best moral system. This makes a great deal of sense to me, but the analogy can also be turned against him, for if morality is just as contingent upon our biology as diet is, then who are we to question these Extrans in their quest for more Lebensraum?

Or, if you’d prefer to keep the matter closer to home: Who are we to question sharks or cougars, for whom we are food? In practice it’s difficult to negotiate with sharks and cougars, of course. But if even that difference is to have real moral significance, e.g. if creatures more capable of rational thought and mutual communication really are morally better, we still need an objective inter-species account of morality. And suppose we found a particularly intelligent cougar, and managed some sort of communication; what would we be able to say? What reasons could we offer in defense of our claim that they ought not to eat us? Or is our moral authority in these conflicts ultimately no deeper than our superior weapons technology? If this is so, it’s hard to see why the superior weapons technology of the Nazi military wouldn’t justify their genocide of the Jews; and thus we run afoul of the Hitler Principle.

While specific moral precepts can and will depend upon the particular features of a given situation, and evolution surely affects and informs these circumstances, the fundamental principles of morality must be deeper than this—they must at least have the objectivity of scientific facts; in fact I think we can go further than this and say that the core principles of morality are in fact logical truths, the sort of undeniable facts that any intelligent being must accept on pain of contradiction or incoherence. Even if not trivially obvious (like “2+2=4” or “a triangle has three sides”), logical and mathematical truths are still logically undeniable, like “the Fourier transform of a Gaussian function is a Gaussian function”, “the Galois group of some fifth-degree polynomials has a non-abelian simple normal subgroup”, or “the existence of a strong Lyapunov function proves that a system of nonlinear differential equations has an asymptotically stable zero solution”. (Don’t worry if you have no idea what those sentences mean; that’s kind of the point. They are tautologies, yes, but very sophisticated tautologies.) The fundamental norms must be derivable by logic and the applications to the real world must depend only upon empirical facts.
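To see what I mean by a sophisticated tautology, here is the first of those examples written out explicitly (a sketch using the Fourier convention \( \hat{f}(k) = \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \); other conventions change only the constants):

\[
\mathcal{F}\{ e^{-a x^{2}} \}(k) \;=\; \int_{-\infty}^{\infty} e^{-a x^{2}}\, e^{-ikx}\, dx \;=\; \sqrt{\frac{\pi}{a}}\; e^{-k^{2}/(4a)}, \qquad a > 0,
\]

which is again a Gaussian, now in \(k\). Nothing empirical is being asserted here; the identity simply unpacks what the symbols already mean, which is exactly the sense in which it is undeniable.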

The standard that moral principles should be scientific or logical truths is a high bar indeed; and one may think it is unreachable. But if this is so, then I do not see how we can coherently discuss ethics as something which makes true claims against us; I can see only prudence, instinct, survival or custom. If morality is an adaptation like any other, then the claim “genocide is wrong” has no more meaning than “five fingers are better than six”—each applies to our particular evolutionary niche, but no other. Certainly the Extrans will not be bound by such rules, and it is hard to see why cougars should be either. There may still be objectively valid claims that can be made against our behavior, but they will have no more force than “Don’t do that; it’s bad for your genes”. Indeed, I already know that plenty of things people do are (at least potentially) bad for their genes, and yet I think they have a right to do them; not only the usual suspects of contraception, masturbation and homosexuality, but indeed reading books, attending school, drinking alcohol, watching television, skiing, playing baseball, and all sorts of other things human beings do, are wastes of energy in purely Darwinian terms. Most of what makes life worth living has little, if any, effect at spreading our genes.

Naive moral Darwinism

Feb 23 JDN 2460730

Impressed by the incredible usefulness of evolutionary theory in explaining the natural world, many people have tried to apply it to ethical claims as well. The basic idea is that morality evolves; morality is an adaptation just like any other, a trait which has evolved by mutation and natural selection.

Unfortunately the statement “morality evolves” is ambiguous; it could mean a number of different things. This ambiguity has allowed abuses of evolutionary thinking in morality.

Two that are particularly harmful are evolutionary eugenics and laissez-faire Darwinism, both of which fall under an umbrella I’ll call ‘naive moral Darwinism’.

They are both terrible; it saddens me that many people propound them. Creationists will often try to defend their doubts about evolution on empirical grounds, but they really can’t, and I think even they realize this. Their real objection to evolution is not that it is unscientific, but that it is immoral; the concern is that studying evolution will make us callous and selfish. And unfortunately, there is a grain of truth here: A shallow understanding of evolution can indeed lead to a callous and selfish mindset, as people try to shoehorn evolutionary theory onto moral and political systems without a deep understanding of either.

The first option is usually known as “Social Darwinism”, but I think a better term is “evolutionary eugenics”. (“Social Darwinism” is a pejorative, not a self-description.) This philosophy, if we even credit it with the term, is especially ridiculous; indeed, it is evil. It doesn’t make any sense, either as ethics or as evolution, and it has led to some of the most terrible atrocities in history, from forced sterilization to mass murder. Darwin adamantly disagreed with it, and it rests upon a variety of deep confusions about evolutionary science.

First, in practice at least, eugenicists presumed that traits like intelligence, health, and even wealth are almost entirely genetic—when it’s obvious that they are very heavily affected by the environment. There certainly are genetic factors involved, but the presumption that these traits are entirely genetic is absurd. Indeed, the fact that the wealth of parents is strongly correlated with that of their children has an obvious explanation completely unrelated to genetics: Inheritance. Wealthy parents can also give their children many advantages in life that lead to higher earnings later. Controlling for inherited environment, there is still some heritability of wealth, but it’s quite weak; it’s probably due to personality traits like conscientiousness, ambition, and in fact narcissism which are beneficial in a capitalist economy. Hence breeding the wealthy may make more people who are similar to the wealthy; but there’s no reason to think it will actually make the world wealthier.

Moreover, eugenics rests upon a confusion between fitness in the evolutionary sense of expected number of allele copies, and the notion of being “fit” in some other sense, like physical health (as in “fitness club”), social conformity (as in “misfits”) or mental sanity (as in “unfit to stand trial”). Strong people are not necessarily higher in genetic fitness, nor are smart people, nor are people of any particular race or ethnicity. Fitness is the probability of one’s genes being passed on in a given environment—without reference to a specific environment, it says basically nothing. Given the reference environment “majority of the Earth’s land surface”, humans are very fit organisms, but so are rats and cockroaches. Given the reference environment “deep ocean”, sharks fare far better than we ever will, and better even than our cousins the cetaceans who live there. Moreover, there is no reason to think that intelligence in the sense of Einstein or Darwin is particularly fit. The intelligence of an ordinary person is definitely fit—that’s why we have it—but beyond that point, it may in fact be counterproductive. (Consider Isaac Newton and Alan Turing, both of whom were geniuses and neither of whom ever married or had children.)
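To make the environment-relativity of fitness concrete, here is a toy sketch (the organisms and every number in it are invented purely for illustration) showing how the fitness ordering of the very same organisms flips when the reference environment changes:

```python
# Toy illustration: fitness only makes sense relative to an environment.
# The organisms and expected-offspring numbers are invented for this example.

expected_offspring = {
    "land_surface": {"human": 2.1, "rat": 8.0, "shark": 0.0},
    "deep_ocean":   {"human": 0.0, "rat": 0.0, "shark": 4.5},
}

def fitness_ranking(environment: str) -> list[str]:
    """Rank organisms by expected surviving offspring in the given environment."""
    offspring = expected_offspring[environment]
    return sorted(offspring, key=offspring.get, reverse=True)

print(fitness_ranking("land_surface"))  # ['rat', 'human', 'shark']
print(fitness_ranking("deep_ocean"))    # ['shark', 'human', 'rat']
```

The point is simply that “fit” names a relation between an organism and an environment, not an intrinsic ranking of organisms.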

There is a milder form of this that is still quite harmful; I’ll call it “laissez-faire Darwinism”. It says that because natural selection automatically perpetuates the fit at the expense of the unfit, it ultimately leads to the best overall outcome. Under laissez-faire Darwinism, we should simply let evolution happen as it is going to happen. This theory is not as crazy as evolutionary eugenics—nor would its consequences be as dire—but it’s still quite confused. Natural selection is a law of nature, not a moral principle. It says what will happen, not what should happen. Indeed, like any law of nature, natural selection is inevitable. No matter what you do, natural selection will act upon you. The genes that work will survive, the genes that fail will die. The specifics of the environmental circumstances will decide which genes are the ones that survive, and there are random deviations due to genetic drift; but natural selection always applies.

Typically laissez-faire Darwinists argue that we should eliminate all government welfare, health care, and famine relief, because they oppose natural selection; but this would be like tearing down all skyscrapers because they oppose gravity, or, as Benjamin Franklin was once asked to do, ceasing to install lightning rods because they oppose God’s holy smiting. Natural selection is a law of nature, a fundamental truth; but through wise engineering we can work with it instead of against it, just as we do with gravity and electricity. We ignore laws of nature at our own peril—an engineer who failed to take gravity into account would not make very good buildings!—but we can work with them and around them to achieve our goals. This is no less true of natural selection than of any other law of nature, whether gravity, electricity, quantum mechanics, or anything else. As a laser uses quantum mechanics and a light bulb uses electricity, so wise social policy can use natural selection to serve human ends. Indeed, welfare, health care, and famine relief are precisely the sort of things that can modulate the fitness of our entire species to make us all better off.

There are however important ways in which evolution can influence our ethical reasoning, which I’ll talk about in later posts.

Moore’s “naturalistic fallacy”

Jan 12 JDN 2460688

In last week’s post I talked about some of the arguments against ethical naturalism, which have sometimes been called “the naturalistic fallacy”.

The “naturalistic fallacy” that G.E. Moore actually wrote about is somewhat subtler; it says that there is something philosophically suspect about defining something non-natural in terms of natural things—and furthermore, it says that “good” is not a natural thing and so cannot be defined in terms of natural things. For Moore, “good” is not something that can be defined with recourse to facts about psychology, biology or mathematics; “good” is simply an indefinable atomic concept that exists independent of all other concepts. As such Moore was criticizing moral theories like utilitarianism and hedonism that seek to define “good” in terms of “pleasure” or “lack of pain”; for Moore, good cannot have a definition in terms of anything except itself.

My greatest problem with this position is less philosophical than linguistic; how does one go about learning a concept that is so atomic and indefinable? When I was a child, I acquired an understanding of the word “good” that has since expanded as I grew in knowledge and maturity. I need not have called it “good”: had I been raised in Madrid, I would have called it bueno; in Beijing, hao; in Kyoto, ii; in Cairo, jaiid; and so on.

I’m not even sure if all these words really mean exactly the same thing, since each word comes with its own cultural and linguistic connotations. A vast range of possible sounds could be used to express this concept and related concepts—and somehow I had to learn which sounds were meant to symbolize which concepts, and what relations were meant to hold between them. This learning process was highly automatic, and occurred when I was very young, so I do not have great insight into its specifics; but nonetheless it seems clear to me that in some sense I learned to define “good” in terms of things that I could perceive. No doubt this definition was tentative, and changed with time and experience; indeed, I think all definitions are like this. Perhaps my knowledge of other concepts, like “pleasure”, “happiness”, “hope” and “justice”, is interconnected with “good” in such a way that none can be defined separately from the others—indeed perhaps language itself is best considered a network of mutually-reinforcing concepts, each with some independent justification and some connection to other concepts, not a straightforward derivation from more basic atomic notions. If you wish, call me a “foundherentist” in the tradition of Susan Haack; I certainly do think that all beliefs have some degree of independent justification by direct evidence and some degree of mutual justification by coherence. Haack uses the metaphor of a crossword puzzle, but I prefer Alison Gopnik’s mathematical model of a Bayes net. In any case, I had to learn about “good” somehow. Even if I had some innate atomic concept of good, we are left to explain two things: First, how I managed to associate that innate atomic concept with my sense experiences, and second, how that innate atomic concept got in my brain in the first place. If it was genetic, it must have evolved; but it could only have evolved by phenotypic interaction with the external environment—that is, with natural things. We are natural beings, made of natural material, evolved by natural selection. If there is a concept of “good” encoded into my brain either by learning or instinct or whatever combination, it had to get there by some natural mechanism.

The classic argument Moore used to support this position is now called the Open Question Argument; it says, essentially, that we could take any natural property that would be proposed as the definition of “good” and call it X, and we could ask: “Sure, that’s X, but is it good?” The idea is that since we can ask this question and it seems to make sense, then X cannot be the definition of “good”. If someone asked, “I know he is an unmarried man, but is he a bachelor?” or “I know that has three sides, but is it a triangle?” we would think that they didn’t understand what they were talking about; but Moore argues that for any natural property, “I know that is X, but is it good?” is still a meaningful question. Moore uses two particular examples, X = “pleasant” and X = “what we desire to desire”; and indeed those fit what he is saying. But are these really very good examples?

One subtle point that many philosophers make about this argument is that science can discover identities between things and properties that are not immediately apparent. We now know that water is H2O, but until the 19th century we did not know this. So we could perfectly well imagine someone asking, “I know that’s H2O, but is it water?” even though in fact water is H2O and we know this. I think this sort of argument would work for some very complicated moral claims, like the claim that constitutional democracy is good; I can imagine someone who was quite ignorant of international affairs asking, “I know that it’s constitutional democracy, but is that good?” and still making sense. This is because the goodness of constitutional democracy isn’t something conceptually necessary; it is an empirical result based on the fact that constitutional democracies are more peaceful, fair, egalitarian, and prosperous than other governmental systems. In fact, it may even be only true relative to other systems we know of; perhaps there is an as-yet-unimagined governmental system that is better still. No one thinks that constitutional democracy is a definition of moral goodness. And indeed, I think few would argue that H2O is the definition of water; instead the definition of water is something like “that wet stuff we need to drink to survive” and it just so happens that this turns out to be H2O. If someone asked “is that wet stuff we need to drink to survive really water?” they would rightly be thought to be talking nonsense; that’s just what water means.

But if instead of the silly examples Moore uses, we take a serious proposal that real moral philosophers have suggested, it’s not nearly so obvious that the question is open. From Kant: “Yes, that is our duty as rational beings, but is it good?” From Mill: “Yes, that increases the amount of happiness and decreases the amount of suffering in the world, but is it good?” From Aristotle: “Yes, that is kind, just, and fair, but is it good?” These do sound dangerously close to talking nonsense! If someone asked these questions, I would immediately expect an explanation of what they were getting at. And if no such explanation was forthcoming, I would, in fact, be led to conclude that they literally don’t understand what they’re talking about.

I can imagine making sense of “I know that has three sides, but is it a triangle?” in some bizarre curved multi-dimensional geometry. Even “I know he is an unmarried man, but is he a bachelor?” makes sense if you are talking about a celibate priest. Very rarely do perfect synonyms exist in natural languages, and even when they do they are often unstable due to the effects of connotations. None of this changes the fact that bachelors are unmarried men, triangles have three sides, and yes, goodness involves fulfilling rational duties, alleviating suffering, and being kind and just. (Deontology, consequentialism, and virtue theory are often thought to be distinct and incompatible; I’m convinced they amount to the same thing, which I’ll say more about in later posts.)

This line of reasoning has led some philosophers (notably Willard Quine) to deny the existence of analytic truths altogether; on Quine’s view even “2+2=4” isn’t something we can deduce directly from the meaning of the symbols. This is clearly much too strong; no empirical observation could ever lead us to deny 2+2=4. In fact, I am convinced that all mathematical truths are ultimately reducible to tautologies; even “the Fourier transform of a Gaussian is Gaussian” is ultimately a way of saying in compact jargon some very complicated statement that amounts to A=A. This is not to deny that mathematics is useful; of course mathematics is tremendously useful, because this sort of compact symbolic jargon allows us to make innumerable inferences about the world and at the same time guarantee that these inferences are correct. Whenever you see a Gaussian and you need its Fourier transform (I know, it happens a lot, right?), you can immediately know that the result will be a Gaussian; you don’t have to go through the whole derivation yourself. We are wrong to think that “ultimately reducible to a tautology” is the same as “worthless and trivial”; on the contrary, to realize that mathematics is reducible to tautology is to say that mathematics is undeniable, literally impossible to coherently deny. At least the way I use the words, the statement “Happiness is good and suffering is bad” is pretty close to that same sort of claim; if you don’t agree with it, I sense that you honestly don’t understand what I mean.

In any case, I see no more fundamental difficulty in defining “good” than I do in defining any concept, like “man”, “tree”, “multiplication”, “green” or “refrigerator”; and nor do I see any point in arguing about the semantics of definition as an approach to understanding moral truth. It seems to me that Moore has confused the map with the territory, and later authors have confused him with Hume, to all of our detriment.

What’s fallacious about naturalism?

Jan 5 JDN 2460681

There is another line of attack against a scientific approach to morality, one which threatens all the more because it comes from fellow scientists. Even though they generally agree that morality is real and important, many scientists have suggested that morality is completely inaccessible to science. There are a few different ways that this claim can be articulated; the most common are Stephen Jay Gould’s concept of “non-overlapping magisteria” (NOMA), David Hume’s “is-ought problem”, and G.E. Moore’s “naturalistic fallacy”. As I will show, none of these pose serious threats to a scientific understanding of morality.

NOMA

Stephen Jay Gould, though a scientist, an agnostic, and a morally upright person, did not think that morality could be justified in scientific or naturalistic terms. He seemed convinced that moral truth could only be understood through religion, and indeed seemed to use the words “religion” and “morality” almost interchangeably:

The magisterium of science covers the empirical realm: what the Universe is made of (fact) and why does it work in this way (theory). The magisterium of religion extends over questions of ultimate meaning and moral value. These two magisteria do not overlap, nor do they encompass all inquiry (consider, for example, the magisterium of art and the meaning of beauty).

If we take Gould to be using a very circumscribed definition of “science” to just mean the so-called “natural sciences” like physics and chemistry, then the claim is trivial. Of course we cannot resolve moral questions about stem cell research entirely in terms of quantum physics or even entirely in terms of cellular biology; no one ever supposed that we could. Yes, it’s obvious that we need to understand the way people think and the way they interact in social structures. But that’s precisely what the fields of psychology, sociology, economics, and political science are designed to do. It would be like saying that quantum physics cannot by itself explain the evolution of life on Earth. This is surely true, but it’s hardly relevant.

Conversely, if we define science broadly to include all rational and empirical methods: physics, chemistry, geology, biology, psychology, sociology, astronomy, logic, mathematics, philosophy, history, archaeology, anthropology, economics, political science, and so on, then Gould’s claim would mean that there is no rational reason for thinking that rape and genocide are immoral.

And even if we suppose there is something wrong with using science to study morality, the alternative Gould offers us—religion—is far worse. As I’ve already shown in previous posts, religion is a very poor source of moral understanding. If morality is defined by religious tradition, then it is arbitrary and capricious, and real moral truth disintegrates.

Fortunately, we have no reason to think so. The entire history of ethical philosophy speaks against such notions, and had Immanuel Kant and John Stuart Mill been alive to read them, they would have scoffed at Gould’s claims. I suspect Peter Singer and Thomas Pogge would scoff similarly today. Religion doesn’t offer any deep insights into morality, and reason often does; NOMA is simply wrong.

What’s the problem with “ought” and “is”?

The next common objection to a scientific approach to morality is the remark, after David Hume, that “one cannot derive an ought from an is”; due to a conflation with a loosely-related argument that G.E. Moore made later, the attempt to derive moral statements from empirical facts has come to be called the “naturalistic fallacy” (this is clearly not what Moore intended; I will address Moore’s actual point in a later post). But in truth, I do not really see where the fallacy is meant to lie; there is little more difficulty in principle in deriving “ought” from “is” than in deriving anything from anything else.

First, let’s put aside direct inferences from “X is true” to “X ought to be true”; these are obviously fallacious. If that’s all Hume was saying, then he is of course correct; but this does little to undermine any serious scientific theory of morality. You can’t infer from “there are genocides” to “there ought to be genocides”; nor can you infer from “there ought to be happy people” to “there are happy people”; but nor would I or any other scientist seek to do so. This is a strawman of naturalistic morality.

It’s true that some people do attempt to draw similar inferences, usually stated in a slightly different form—but these are not moral scientists, they are invariably laypeople with little understanding of the subject. Arguments based on the claim that “homosexuality is unnatural” (therefore wrong) or “violence is natural” (therefore right) are guilty of this sort of fallacy, but I’ve never heard any credible philosopher or scientist support such arguments. (And by the way, homosexuality is nearly as common among animals as violence.)

A subtler way of reasoning from “is” to “ought” that is still problematic is the common practice of surveying people about their moral attitudes and experimentally testing their moral behaviors, sometimes called experimental philosophy. I do think this kind of research is useful and relevant, but it doesn’t get us as far as some people seem to think. Even if we were to prove that 100% of humans who have ever lived believe that cannibalism is wrong, it does not follow that cannibalism is in fact wrong. It is indeed evidence that there is something wrong with cannibalism—perhaps it is maladaptive to the point of being evolutionarily unstable, or it is so obviously wrong that even the most morally-blind individuals can detect its wrongness. But this extra step of explanation is necessary; it simply doesn’t follow from the fact that “everyone believes X is wrong” that in fact “X is wrong”. (Before 1900 just about everyone quite reasonably believed that the passage of time is the same everywhere regardless of location, speed or gravity; Einstein proved everyone wrong.) Moral realism demands that we admit people can be mistaken about their moral beliefs, just as they can be mistaken about other beliefs.

But these are not the only ways to infer from “is” to “ought”, and there are many ways to make such inferences that are in fact perfectly valid. For instance, I know at least two ways to validly prove moral claims from nonmoral claims. The first is by disjunction introduction (sometimes just called “addition”): “2+2=4, therefore 2+2=4 or genocide is wrong”. The second is by contradictory explosion: “2+2=5, therefore genocide is wrong”. Both of these arguments are logically valid. Obviously they are also quite trivial; “genocide is wrong” could be replaced by any other conceivable proposition (even a contradiction!), leaving an equally valid argument. Still, we have validly derived a moral statement from nonmoral statements, while obeying the laws of logic.
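For readers who want to see these two derivations spelled out, here is a minimal sketch in Lean 4, where `GenocideIsWrong` is just a placeholder proposition introduced for the example (it stands in for any proposition whatsoever):

```lean
-- A placeholder proposition, introduced only for this illustration.
variable (GenocideIsWrong : Prop)

-- Disjunction introduction: from 2 + 2 = 4, conclude "2 + 2 = 4 or genocide is wrong".
example : 2 + 2 = 4 ∨ GenocideIsWrong :=
  Or.inl rfl

-- Explosion (ex falso quodlibet): from the false premise 2 + 2 = 5, conclude anything.
example (h : 2 + 2 = 5) : GenocideIsWrong :=
  absurd h (by decide)
```

Both proofs are accepted by the proof checker, and both are exactly as trivial as the text says: neither one tells us anything at all about genocide.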

Moreover, it is clearly rational to infer a certain kind of “ought” from statements that entirely involve facts. For instance, it is rational to reason, “If you are cold, you ought to close the window”. This is an instrumental “ought” (it says what it is useful to do, given the goals that you have), not a moral “ought” (which would say what goals you should have in the first place). Hence, this is not really inferring moral claims from non-moral claims, since the “ought” isn’t really a moral “ought” at all; if the ends are immoral the means will be immoral too. (It would be equally rational in this instrumental sense to say, “If you want to destroy the world, you ought to get control of the nuclear launch codes”.) In fact this kind of instrumental rationality—doing what accomplishes our goals—actually gets us quite far in defining moral norms for real human beings; but clearly it does not get us far enough.

Finally, and most importantly, epistemic normativity, which any rational being must accept, is itself an inference from “is” to “ought”; it involves inferences like “It is raining, therefore you ought to believe it is raining.”

With these considerations in mind, we must carefully rephrase Hume’s remark, to something like this:

One cannot nontrivially with logical certainty derive moral statements from entirely nonmoral statements.

This is indeed correct; but here the word “moral” carries no weight and could be replaced by almost anything. One cannot nontrivially with logical certainty derive physical statements from entirely nonphysical statements, nor nontrivially with logical certainty derive statements about fish from statements that are entirely not about fish. For all X, one cannot nontrivially with logical certainty derive statements about X from statements entirely unrelated to X. This is an extremely general truth. We could very well make it a logical axiom. In fact, if we do so, we pretty much get relevance logic, which takes the idea of “nontrivial” proofs to the extreme of actually considering trivial proofs invalid. Most logicians don’t go so far—they say that “2+2=5, therefore genocide is wrong” is technically a valid argument—but everyone agrees that such arguments are pointless and silly. In any case the word “moral” carries no weight here; it is no harder to derive an “ought” from an “is” than it is to derive a “fish” from a “molecule”.

Moreover, the claim that nonmoral propositions can never validly influence moral propositions is clearly false; the argument “Killing is wrong, shooting someone will kill them, therefore shooting someone is wrong” is entirely valid, and the moral proposition “shooting someone is wrong” is derived in large part from the nonmoral proposition “shooting someone will kill them”. In fact, the entire Frege-Geach argument against expressivism hinges upon the fact that we all realize that moral propositions function logically the same way as nonmoral propositions, and can interact with nonmoral propositions in all the usual ways. Even expressivists usually do not deny this; they simply try to come up with ways of rescuing expressivism despite this observation.

There are also ways of validly deriving moral propositions from entirely nonmoral propositions, in an approximate or probabilistic fashion. “Genocide causes a great deal of suffering and death, and almost everyone who has ever lived has agreed that suffering and death are bad and that genocide is wrong, therefore genocide is probably wrong” is a reasonably sound probabilistic argument that infers a moral conclusion based on entirely nonmoral premises, though it lacks the certainty of a logical proof.

We could furthermore take as axiom some definition of moral concepts in terms of nonmoral concepts, and then derive consequences of this definition with logical certainty. “A morally right action maximizes pleasure and minimizes pain. Genocide fails to maximize pleasure or minimize pain. Therefore genocide is not morally right.” Obviously one is free to challenge the definition, but that’s true of many different types of philosophical arguments, not a specific problem in arguments about morality.

So what exactly was Hume trying to say? I’m really not sure. Maybe he has in mind the sort of naive arguments that infer from “unnatural” to “wrong”; if so, he’s surely correct, but the argument does little to undermine any serious naturalistic theories of morality.

On land acknowledgments

Dec 29 JDN 2460674

Noah Smith and Brad DeLong, both of whom I admire, have recently written about the practice of land acknowledgments. Smith is wholeheartedly against them. DeLong has a more nuanced view. Smith in fact goes so far as to argue that there is no moral basis for considering these lands to be ‘Native lands’ at all, which DeLong rightly takes issue with.

I feel like this might be an issue where it would be better to focus on Native American perspectives. (Not that White people aren’t allowed to talk about it; just that we tend to hear from them on everything, and this is something where maybe they’re less likely to know what they’re talking about.)

It turns out that Native views on land acknowledgments are also quite mixed; some see them as a pointless, empty gesture; others see them as a stepping-stone to more serious policy changes that are necessary. There is general agreement that more concrete actions, such as upholding treaties and maintaining tribal sovereignty, are more important.

I have to admit I’m much more in the ‘empty gesture’ camp. I’m only one-fourth Native (so I’m Whiter than I am not), but my own view on this is that land acknowledgments aren’t really accomplishing very much, and in fact aren’t even particularly morally defensible.

Now, I know that it’s not realistic to actually “give back” all the land in the United States (or Australia, or anywhere where indigenous people were forced out by colonialism). Many of the tribes that originally lived on the land are gone, scattered to the winds, or now living somewhere else that they were forced to (predominantly Oklahoma). Moreover, there are now more non-Native people living on that land than there ever were Native people living on it, and forcing them all out would be just as violent and horrific as forcing out the Native people was in the first place.

I even appreciate Smith’s point that there is something problematic about assigning ownership of land to bloodlines of people just because they happened to be the first ones living there. Indeed, as he correctly points out, they often weren’t the first ones living there; different tribes have been feuding and warring with each other since time immemorial, and it’s likely that any given plot of land was held by multiple different tribes at different times even before colonization.

Let’s make this a little more concrete.

Consider the Beaver Wars.


The Beaver Wars were a series of conflicts between the Haudenosaunee (that’s what they call themselves; to a non-Native audience they are better known by what the French called them, Iroquois) and several other tribes. Now, that was after colonization, and the French were involved, and part of what they were fighting over was the European fur trade—so the story is a bit complicated by that. But it’s a conflict we have good historical records of, and it’s pretty clear that many of these rivalries long pre-dated the arrival of the French.

The Haudenosaunee were brutal in the Beaver Wars. They slaughtered thousands, including many helpless civilians, and effectively wiped out several entire tribes, including the Erie and Susquehannock, and devastated several others, including the Mohicans and the Wyandot. Many historians consider these to be acts of genocide. Surely any land that the Haudenosaunee claimed as a result of the Beaver Wars is as illegitimate as land claimed by colonial imperialism? Indeed, isn’t it colonial imperialism?

Yet we have no reason to believe that these brutal wars were unique to the Haudenosaunee, or that they only occurred after colonization. Our historical records aren’t as clear going that far back, because many Native tribes didn’t keep written records—in fact, many didn’t even have a written language. But what we do know suggests that a great many tribes warred with a great many other tribes, and land was gained and lost in warfare, going back thousands of years.

Indeed, it seems to be a sad fact of human history that virtually all land, indigenous or colonized, is actually owned by a group that conquered another group (that conquered another group, that conquered another group…). European colonialism was simply the most recent conquest.

But this doesn’t make European colonialism any more justifiable. Rather, it raises a deeper question:

How should we decide who owns what land?

The simplest way, and the way that we actually seem to use most of the time, is to simply take whoever currently owns the land as its legitimate ownership. “Possession is nine-tenths of the law” was always nonsense when it comes to private property (that’s literally what larceny means!), but when it comes to national sovereignty, it is basically correct. Once a group manages to organize itself well enough to enforce control over a territory, we pretty much say that it’s their territory now and they’re allowed to keep it.

Does that mean that anyone is just allowed to take whatever land they can successfully conquer and defend? That the world must simply accept that chaos and warfare are inevitable? Fortunately, there is a solution to this problem.

The Westphalian solution.

The current solution to this problem is what’s called Westphalian sovereignty, after the Peace of Westphalia, two closely-related treaties that were signed in Westphalia (a region of Germany) in 1648. Those treaties established a precedent in international law that nations are entitled to sovereignty over their own territory; other nations are not allowed to invade and conquer them, and if anyone tries, the whole international community should fight to resist any such attempt.

Effectively, what Westphalia did was establish that whoever controlled a given territory right now (where “right now” means 1648) now gets the right to hold it forever—and everyone else not only has to accept that, they are expected to defend it. Now, clearly this has not been followed precisely; new nations have gained independence from their empires (like the United States), nations have separated into pieces (like India and Pakistan, the Balkans, and most recently South Sudan), and sometimes even nations have successfully conquered each other and retained control—but the latter has been considerably rarer than it was before the establishment of Westphalian sovereignty. (Indeed, part of what makes the Ukraine War such an aberration is that it is a brazen violation of Westphalian sovereignty the likes of which we haven’t seen since the Second World War.)

This was, as far as I can tell, a completely pragmatic solution, with absolutely no moral basis whatsoever. We knew in 1648, and we know today, that virtually every nation on Earth was founded in bloodshed, its land taken from others (who took it from others, who took it from others…). And it was timed in such a way that European colonialism became etched in stone—no European power was allowed to take over another European power’s colonies anymore, but they were all allowed to keep all the colonies they already had, and the people living in those colonies didn’t get any say in the matter.

Since then, most (but by no means all) of those colonies have revolted and gained their own independence. But by the time it happened, there were large populations of former colonists, and the indigenous populations were often driven out, dramatically reduced, or even outright exterminated. There is something unsettling about founding a new democracy like the United States or Australia after centuries of injustice and oppression have allowed a White population to establish a majority over the indigenous population; had indigenous people been democratically represented all along, things would probably have gone a lot differently.

What do land acknowledgments accomplish?

I think that the intent behind land acknowledgments is to recognize and commemorate this history of injustice, in the hopes of somehow gaining some kind of at least partial restitution. The intentions here are good, and the injustices are real.

But there is something fundamentally wrong with the way most land acknowledgments are done, because they basically just push the sovereignty back one step: They assert that whoever held the land before Europeans came along is the land’s legitimate owner. But what about the people before them (and the people before them, and the people before them)? How far back in the chain of violence are we supposed to go before we declare a given group’s conquests legitimate?

How far back can we go?

Most of these events happened many centuries ago and were never written down, and all we have now is vague oral histories that may or may not even be accurate. Particularly when one tribe forces out another, it rather behooves the conquering tribe to tell the story in their own favor, as one of “reclaiming” land that was rightfully theirs all along, whether or not that was actually true—as they say, history is written by the victors. (I think it’s actually more true when the history is never actually written.) And in some cases it’s probably even true! In others, that land may have been contested between the two tribes for so long that nobody honestly knows who owned it first.

It feels wrong to legitimate the conquests of colonial imperialism, but it feels just as wrong to simply push it back one step—or three steps, or seven steps.

I think that ultimately what we must do is acknowledge this entire history.

We must acknowledge that this land was stolen by force from Native Americans, and also that most of those Native Americans acquired their land by stealing it by force from other Native Americans, and the chain goes back farther than we have records. We must acknowledge that this is by no means unique to the United States but in fact a universal feature of almost all land held by anyone anywhere in the world. We must acknowledge that this chain of violence and conquest has been a part of human existence since time immemorial—and affirm our commitment to end it, once and for all.

That doesn’t simply mean accepting the current allocation of land; land, like many other resources, is clearly distributed unequally and unfairly. But it does mean that however we choose to allocate land, we must do so by a fair and peaceful process, not by force and conquest. The chain of violence that has driven human history for thousands of years must finally be brought to an end.

Moral progress and moral authority

Dec 8 JDN 2460653

In previous posts I’ve written about why religion is a poor source of morality. But it’s worse than that. Religion actually holds us back morally. It is because of religion that our society grants the greatest moral authority to precisely the people and ideas which have most resisted moral progress. Most religious people are good, well-intentioned people—but religious authorities are typically selfish, manipulative, Machiavellian leaders who will say or do just about anything to maintain power. They have trained us to respect and obey them without question; they even call themselves “shepherds” and us the “flock”, as if we were not autonomous humans but obedient ungulates.

I’m sure that most of my readers are shocked that I would assert such a thing; surely priests and imams are great, holy men who deserve our honor and respect? The evidence against such claims is obvious. We only believe such things because the psychopaths have told us to believe them.

I am not saying that these evil practices are inherent to religion—they aren’t. Other zealous, authoritarian ideologies, like Communism and fascism, have been just as harmful for many of the same reasons. Rather, I am saying that religion gives authority and respect to people who would otherwise not have it, people who have long histories of evil, selfish, and exploitative behavior. For a particularly striking example, Catholicism as an idea is false and harmful, but not nearly as harmful as the Catholic Church as an institution, which has harbored some of the worst criminals in history.

The Catholic Church hierarchy is quite literally composed of a cadre of men who use tradition and rhetoric to extort billions of dollars from the poor and who have gone to great lengths to defend men who rape children—a category of human being that normally is so morally reviled that even thieves and murderers consider them beyond the pale of human society. Pope Benedict XVI (Joseph Ratzinger) himself, formerly the most powerful religious leader in the world, has been connected with the coverup based on a letter he wrote in 1985. The Catholic Church was also closely tied to Nazi Germany and publicly celebrated Hitler’s birthday for many years; there is evidence that the Vatican actively assisted in the exodus of Nazi leaders along “ratlines” to South America. More recently the Church once again abetted genocide, when in Rwanda it turned away refugees and refused to allow prosecution of any of the perpetrators who were affiliated with the Catholic Church. Yes, that’s right; the Vatican has quite literally been complicit in the worst moral crimes human beings have ever committed. Embezzlement of donations and banning of life-saving condoms seem rather beside the point once we realize that these men and their institutions have harbored genocidaires and child rapists. I can scarcely imagine a more terrible source of moral authority.

Most people respect evangelical preachers, like Jerry Falwell who blamed 9/11 and Hurricane Katrina on feminists, gays, and secularists, then retracted the statement about 9/11 when he realized how much it had offended people. These people have concepts of morality that were antiquated in the 19th century; they base their ethical norms on books that were written by ignorant and cultish nomads thousands of years ago. Leviticus 18:22 and 20:13 indeed condemn homosexuality, but Leviticus 19:27 condemns shaving and Leviticus 11:9-12 says that eating fish is fine but eating shrimp is evil. By the way, Leviticus 11:21-22 seems to say that locusts have only four legs, when they very definitely have six and you can see this by looking at one. (I cannot emphasize this enough: Don’t listen to what people say about the book, read the book.)

But we plainly don’t trust scientists or philosophers to make moral and political decisions. If we did, we would have enacted equal rights for LGBT people sometime around 1897 when the Scientific-Humanitarian Committee was founded, or at least by 1948 when Alfred Kinsey showed how common, normal, and healthy homosexuality is. Democracy and universal suffrage (for men at least) would have been the norm shortly after 1689 when Locke wrote his Two Treatises of Government. Women would have been granted the right to vote in 1792 upon the publication of Mary Wollstonecraft’s A Vindication of the Rights of Woman, instead of in 1920 after a long and painful political battle. Animal rights would have become law in 1789 with the publication of Bentham’s Introduction to the Principles of Morals and Legislation. We should have been suspicious of slavery since at least Kant if not Socrates, but instead it took until the 19th century for slavery to finally be banned. We owe the free world to moral science; but nonetheless we rarely listen to the arguments of moral scientists. As a species we fight for our old traditions even in the face of obvious and compelling evidence to the contrary, and this holds us back—far back. If they haven’t sunk in yet, read these dates again: Society is literally about 200 years behind the cutting edge of moral science. Imagine being 200 years behind in technology; you would be riding horses instead of flying in jet airliners and writing letters with quills instead of texting on your iPhone. Imagine being 200 years behind in ecology; you would be considering the environmental impact of not photovoltaic panels or ethanol but whale oil. This is how far behind we are in moral science.

One subfield of moral science has done somewhat better: The gap between economic theory and economic practice is only about 100 years. Capitalism really was instituted on a large scale only a few decades after Adam Smith argued for it, and socialism (while horrifyingly abused in the Communism of Lenin and Stalin) has nonetheless been implemented on a wide scale only a century after Marx. Keynesian stimulus was international policy (despite its numerous detractors) in 2008 and 2020, and Keynes himself died only in 1946. This process is still slower than it probably should be, but at least we aren’t completely ignoring new advances the way we do in ethics. Were we only 100 years behind in technology, we would at least have cars and electricity.

Except perhaps in economics, in general we entrust our moral claims to the authority of men in tall hats and ornate robes who merely assert their superiority and ties to higher knowledge, while ignoring the thousands of others who actually apply their reason and demonstrate knowledge and expertise. A criminal in pretty robes who calls himself a moral leader might as well be a moral leader, as far as we’re concerned; a genuinely wise teacher of morality who isn’t arrogant enough to assert special revelation from the divine is instead ignored. Why do we do this? Religion. Religion is holding us back.

We need to move beyond religion in order to make real and lasting moral progress.

More on religion

Dec 8 JDN 2460653

Reward and punishment

In previous posts I’ve argued that religion can make people do evil and that religious beliefs simply aren’t true.

But there is another reason to doubt religion as a source of morality: There is no reason to think that obeying God is a particularly good way of behaving, even if God is in fact good. If you are obeying God because he will reward you, you aren’t really being moral at all; you are being selfish, and just by accident doing good things. If everyone acted that way, good things would get done; but it clearly misses what we mean when we talk about morality. To be moral is to do good because it is good, not because you will be rewarded for doing it. This becomes even clearer when we consider the following question: If you weren’t rewarded, would you still do good? If not, then you aren’t really a good person.

In fact, it’s ironic that proponents of naturalistic and evolutionary accounts of morality are often accused of cheapening morality because we explain it using selfish genes and memes; traditional religious accounts of morality are directly based on selfishness, not for my genes or my memes, but for me myself! It’s legitimate to question whether someone who acts out of a sense of empathy that ultimately evolved to benefit their ancestors’ genes is really being moral (why I think so requires essentially the rest of this book to argue); but clearly someone who acts out of the desire to be rewarded later isn’t! Selfish genes may or may not make good people; but selfish people clearly aren’t good people.

Even if religion makes people act more morally (and the evidence on that is quite mixed), that doesn’t make it true. If I could convince everyone that John Stuart Mill was a prophet of God, this world would be a paradise; but that would be a lie, because John Stuart Mill was a brilliant man and nothing more. The belief that Santa Claus is watching no doubt makes some children behave better around Christmas, but this is not evidence for flying reindeer. In fact, the children who behave just fine without the threat of coal in their stockings are better children, aren’t they? For the same reason, people who do good for the sake of goodness are better people than those who do it out of hope for Heaven and fear of Hell.

There are cases in which false beliefs might make people do more good, because the false beliefs provide a more obvious, but wrong reason for doing something that is actually good for less obvious, but actually correct reasons. Believing that God requires you to give to charity might motivate you to give more to charity; but charity is good not because God demands it, but because there are billions of innocent people suffering around the world. Maybe we should for this reason be careful about changing people’s beliefs; someone who believes a lie but does the right thing is still better than someone who believes the truth but acts wrongly. If people think that without God there is no morality, then telling them that there is no God may make them abandon morality. This is precisely why I’m not simply telling readers that there is no God: I am also spending this entire chapter explaining why we don’t need God for morality. I’d much rather you be a moral theist than an immoral atheist; but I’m trying to make you a moral atheist.

The problem with holy texts

Even if God actually existed, and were actually good, and commanded us to do things, we do not have direct access to God’s commandments. If you are not outright psychotic, you must acknowledge this; God does not speak to us directly. If anything, he has written or inspired particular books, which have then been translated and interpreted over centuries by many different people and institutions. There is a fundamental problem in deciding which books have been written or inspired by God; not only does the Bible differ from the Qur’an, which differs from the Bhagavad-Gita, which differs from other holy texts; worse, particular chapters and passages within each book differ from one another on significant moral questions, sometimes on the foundational principles of morality itself.

For instance, let’s consider the Bible, because this is the holy book in greatest favor in modern Western culture. Should we use a law of retribution, a lex talionis, as in Exodus 21? Or should we instead forgive our enemies, as in Matthew 5? Perhaps we should treat others as we would like to be treated, as in Luke 6? Are rape and genocide commanded by God, as in 1 Samuel 15, Numbers 31, and Deuteronomy 20-21, or is murder always a grave crime, as in Exodus 20? Is even anger a grave sin, as in Matthew 5? Is it a crime to engage in male-male sex, as in Leviticus 18? Then is it also a crime to shave beards and wear mixed-fiber clothing, as in Leviticus 19? Is it just to punish descendants for the crimes of their ancestors, as in Genesis 9, or is it only fair to punish the specific perpetrators, as in Deuteronomy 24? Is adultery always immoral, as in Exodus 20, or does God sometimes command it, as in Hosea 1? Must homosexual men be killed, as in Leviticus 20, or is it enough to exile them, as in 1 Kings 15? A thorough reading of the Bible shows hundreds of moral contradictions and thousands of moral absurdities. (This is not even to mention the factual contradictions and absurdities.)

Similar contradictions and absurdities can be found in the Qur’an and other texts. Since most of my readers will come from Christian cultures, I think brief examples will suffice for my purposes. The Qur’an at times says that Christians deserve the same rights as Muslims, and at other times declares Christians so evil that they ought to be put to the sword. (Most of the time it says something in between: that “People of the Book”, ahl al-Kitab, as Jews and Christians are known, are inferior to Muslims but nonetheless deserving of rights.) The Bhagavad-Gita at times argues for absolute nonviolence, and at times declares an obligation to fight in war. The Dharmas and the Dao De Jing are full of contradictions, about everything from meaning to justice to reincarnation (in fact, many Buddhists and Taoists freely admit this, and try to claim that non-contradiction is overrated—which is literally talking nonsense). The Book of Mormon claims the canonicity of texts that it explicitly contradicts.

And above all, we have no theological basis for deciding which parts of which holy books we should follow, and which we should reject—for they all have many sects with many followers, and they all declare, with the same intensity of clamor and absence of credibility, that they are the absolute truth of a perfect God. To decide which books to trust and which to ignore, we have only a rational basis, founded upon reason and science; but if reason must be our guide in choosing among holy books, we may as well take a rational approach to morality in general. If it were glaringly obvious which holy text was written by God, and its message were clear and coherent, perhaps we could follow such a book—but given the multitude of religions and sects and denominations in the world, all mutually contradictory and most even self-contradictory, each believed with just as much fervor as the last, how obvious can the answer truly be?

One option would be to look for the things that are not contradicted, the things that are universal across religions and texts. In truth these things are few and far between; one sect’s monstrous genocide is another’s holy duty. But it is true that certain principles appear in numerous places and times, a signal of universality amidst the noise of cultural difference: Fairness and reciprocity, as in the Golden Rule; honesty and fidelity; prohibitions against theft and murder. There are examples of religious beliefs and holy texts that violate these rules—including the Bible and the Qur’an—but the vast majority of people hold to these propositions, suggesting that there is some universal truth being recognized here. In fact, the consensus in favor of these values is far stronger than the consensus in favor of well-established scientific facts like the shape of the Earth and the force of gravity. While for most of history most people had no idea how old the Earth was, and many people still seem to think it is a mere 6,000 years old, there has never been a human culture on record that thought it acceptable to murder people arbitrarily.

But notice how these propositions are not tied to any particular religion or belief; indeed, nearly all atheists, including me, also accept these ideas. Moreover, it is possible to find these principles contradicted in the very books that religious people claim as the foundation of their beliefs. This is strong evidence that religion has nothing to do with it—these principles are part of a universal human nature, or better yet, they may even be necessary truths that would hold for any rational beings in any possible universe. If Christians, Muslims, Buddhists, Hindus and atheists all agree that murder is wrong, then it must not be necessary to hold any specific religion—or any at all—in order to agree that murder is wrong.

Indeed, holy texts are so full of absurdities and atrocities that the right thing to do is to completely and utterly repudiate holy texts—especially the Bible and the Qur’an.

If you say you believe in one of these holy texts, you face a dilemma: either you’re a good person but a hypocrite, because you aren’t actually following the book; or you’re consistent in following the book, but then you’re a despicable human being. Obviously I much prefer the former—but why not just give up the damn book!? Why is it so important to you to say that you believe in this particular book? You can still believe in God if you want! If God truly exists and is benevolent, it should be patently obvious that he couldn’t possibly have written a book as terrible as the Bible or the Qur’an. Obviously those were written by madmen who had no idea what God is truly like.