Conflict without shared reality

Aug 17 JDN 2460905

Donald Trump has federalized the police in Washington D.C. and deployed the National Guard. He claims he is doing this in response to a public safety emergency and crime that is “out of control”.

Crime rates in Washington, D.C. are declining and overall at their lowest level in 30 years. Its violent crime rate has not been this low since the 1960s.

By any objective standard, there is no emergency here. Crime in D.C. is not by any means out of control.

Indeed, across the United States, homicide rates are as low as they have been in 60 years.

But we do not live in a world where politics is based on objective truth.

We live in a world where the public perception of reality itself is shaped by the political narrative.

One of the first things that authoritarians do to control these narratives is try to make their followers distrust objective sources. I watch in disgust as not simply the Babylon Bee (which is a right-wing satire site that tries really hard to be funny but never quite manages it) but even the Atlantic (a mainstream news outlet generally considered credible) feeds—in multiple articles—into this dangerous lie that crime is increasing and the official statistics are somehow misleading us about that.

Of course the Atlantic's take is much more nuanced; but quite frankly, now is not the time for nuance. A fascist is trying to take over our government, and he needs to be resisted at every turn by every means possible. You need to be calling him out on every single lie he tells—yes, every single one, I know there are a lot of them, and that’s kind of the point—rather than trying to find alternative framings on which maybe part of what he said could somehow be construed as reasonable from a certain point of view. Every time you make Trump sound more reasonable than he is—and mainstream news outlets have done this literally hundreds of times—you are pushing America closer to fascism.

I really don’t know what to do here.

It is impossible to resolve conflicts when they are not based on shared reality.

No policy can solve a crime wave that doesn’t exist. No trade agreement can stop unfair trading practices that aren’t happening. Nothing can stop vaccines from causing autism that they already don’t cause. There is no way to fix problems when those problems are completely imaginary.

I used to think that political conflict was about different values which had to be balanced against one another: Liberty versus security, efficiency versus equality, justice versus mercy. I thought that we all agreed on the basic facts and even most of the values, and were just disagreeing about how to weigh certain values over others.

Maybe I was simply naive; maybe it’s never been like that. But it certainly isn’t right now. We aren’t disagreeing about what should be done; we are disagreeing about what is happening in front of our eyes. We don’t simply have different priorities or even different values; it’s like we are living in different worlds.

I have read (from Jonathan Haidt, among others) that conservatives largely understand what liberals want, but liberals don’t really understand what conservatives want. (I would like to take one of the tests they use in these experiments and see how I actually do; but I’ve never been able to find one.)

Haidt’s particular argument seems to be that liberals don’t “understand” the “moral dimensions” of loyalty, authority, and sanctity, because we only “understand” harm and fairness as the basis of morality. But just because someone says something is morally relevant, that doesn’t mean it is morally relevant! And indeed, based on more or less the entirety of ethical philosophy, I can say that harm and fairness are morality, and the others simply aren’t. They are distortions of morality, they are inherently evil, and we are right to oppose them at every turn. Loyalty, authority, and sanctity are what fed Nazi Germany and the Spanish Inquisition.

This claim that liberals don’t understand conservatives has always seemed very odd to me: I feel like I have a pretty clear idea what conservatives want, it’s just that what they want is terrible: Kick out the immigrants, take money from the poor and give it to the rich, and put rich straight Christian White men back in charge of everything. (I mean, really, if that’s not what they want, why do they keep voting for people who do it? Revealed preferences, people!)

Or, more sympathetically: They want to go back to a nostalgia-tinted vision of the 1950s and 1960s in which it felt like things were going well for our country—because they were blissfully ignorant of all the violence and injustice in the world. No, thank you, Black people and queer people do not want to go back to how we were treated in the 1950s—when segregation was legal and Alan Turing was chemically castrated. (And they also don’t seem to grasp that among the things that did make some things go relatively well in that period were unions, antitrust law, and progressive taxes, which conservatives now fight against at every turn.)

But I think maybe part of what’s actually happening here is that a lot of conservatives actually “want” things that literally don’t make sense, because they rest upon assumptions about the world that simply aren’t true.

They want to end “out of control” crime that is the lowest it’s been in decades.

They want to stop schools from teaching things that they already aren’t teaching.

They want the immigrants to stop bringing drugs and crime that they aren’t bringing.

They want LGBT people to stop converting their children, which we already don’t and couldn’t. (And then they want to do their own conversions in the other direction—which also don’t work, but cause tremendous harm.)

They want liberal professors to stop indoctrinating their students in ways we already aren’t and can’t. (If we could indoctrinate our students, don’t you think we’d at least make them read the syllabus?)

They want to cut government spending by eliminating “waste” and “fraud” that are trivial amounts, without cutting the things that are actually expensive, like Social Security, Medicare, and the military. They think we can balance the budget without cutting these things or raising taxes—which is just literally mathematically impossible.
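The arithmetic behind that impossibility claim is easy to check for yourself. Here is a back-of-the-envelope sketch using rough FY2024 federal budget figures; all numbers are my own approximations, rounded to the nearest $0.1 trillion, and should be treated as illustrative rather than official:

```python
# Back-of-the-envelope US federal budget arithmetic, FY2024.
# All figures are rough approximations in trillions of dollars,
# for illustration only.
revenue = 4.9

social_security = 1.5
medicare = 0.9
medicaid = 0.6
defense = 0.9
interest_on_debt = 0.9   # cannot be cut at all without defaulting
other_mandatory = 0.9    # veterans' benefits, income security, etc.
nondefense_discretionary = 1.0  # where any "waste" and "fraud" would live

spending = (social_security + medicare + medicaid + defense
            + interest_on_debt + other_mandatory
            + nondefense_discretionary)
deficit = spending - revenue

print(f"Spending ~${spending:.1f}T, deficit ~${deficit:.1f}T")

# Even zeroing out ALL nondefense discretionary spending (every park
# ranger, every food inspector, every dollar of alleged waste) would
# still leave a deficit:
print(deficit > nondefense_discretionary)  # True
```

However generously you define “waste,” it has to hide inside that one nondefense discretionary line, and eliminating the entire line still leaves the budget hundreds of billions short.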

They want to close off trade to bring back jobs that were sent offshore—but those jobs weren’t sent offshore, they were replaced by robots. (US manufacturing output is near its highest ever, even though manufacturing employment is half what it once was.)


And meanwhile, there’s a bunch of real problems that aren’t getting addressed: Soaring inequality, a dysfunctional healthcare system, climate change, the economic upheaval of AI—and they either don’t care about those, aren’t paying attention to them, or don’t even believe they exist.

It feels a bit like this:

You walk into a room and someone points a gun at you, shouting “Drop the weapon!” but you’re not carrying a weapon. And you show your hands, and try to explain that you don’t have a weapon, but they just keep shouting “Drop the weapon!” over and over again. Someone else has already convinced them that you have a weapon, and they expect you to drop that weapon, and nothing you say can change their mind about this.

What exactly should you do in that situation?

How do you avoid getting shot?

Do you drop something else and say it’s the weapon (make some kind of minor concession that looks vaguely like what they asked for)? Do you try to convince them that you have a right to the weapon (accept their false premise but try to negotiate around it)? Do you just run away (leave the country?)? Do you double down and try even harder to convince them that you really, truly, have no weapon?

I’m not saying that everyone on the left has a completely accurate picture of reality; there are clearly a lot of misconceptions on this side of the aisle as well. But at least among the mainstream center left, there seems to be a respect for objective statistics and a generally accurate perception of how the world works—the “reality-based community”. Sometimes liberals make mistakes, have bad ideas, or even tell lies; but I don’t hear a lot of liberals trying to fix problems that don’t exist or asking for the government budget to be changed in ways that violate basic arithmetic.

I really don’t know what to do here, though.

How do you change people’s minds when they won’t even agree on the basic facts?

On the Turing Test

Apr 25 JDN 2459328

The Turing Test (developed by none other than Alan Turing, widely considered the “father of computer science”) is a commonplace of artificial intelligence research. The idea is that we may not be able to answer a complex, abstract question like “Can computers think?” or “Are computers conscious?” but we can answer a simple, operationalizable question like “Can computers pass for human in a conversation?”

The idea is that you engage in a text-only conversation (text-only to minimize bias) with two other individuals—one a human like you, and the other an artificial intelligence. If you can’t tell the difference, then who are we to say that the AI isn’t a real person?

But we’ve got to be careful with this. You’ll see why in a moment.

* * *

What if it’s all just a trick?

What if the shiny new program is just enough of a convincing fake that you eventually can’t tell the difference, but it’s actually freaking you out and trapping your attention?

Do we really use the same definitions and techniques in talking to a computer that we do in talking to a human?

Have we done the Turing Test in reverse?

What matters is what we mean by human.

The Turing Test itself was meant to be a thought experiment or a heuristic device to help answer questions of “humanness” in a concrete, measurable way. The reality is that Turing himself wasn’t an explicit supporter of its use as a definitive test for his question: the extent to which we attribute “humanness” to a computer, or even to another person.

We can say that, yes, it’s possible for a simulation of a human’s mind to be able to pass the Turing Test, but that’s not a new proof or a new revelation.

There’s something important missing from the conversation we’re having.

What’s missing is the willing assumption on both sides that humanness is a defined and distinct concept.

Since Turing, there’s been a lot of research on the human mind and the ways in which it processes information. But we’ve barely scratched the surface of human psychology because the human mind isn’t a distinct and separate field of study—it has an almost infinite number of branches and topics, and is entirely unfamiliar to the people who work on AI.

It’s like the guys at a car factory talking about the robot they’re building but never stepping outside and taking a look at the city the factory is in.

In the meantime, the human mind has evolved to be so intrinsically connected to the environment it operates in that the AI we create may not be able to be equivalent to a human mind, even if it passes the Turing Test.

For all that we claim to know, modern AI programs are amateur at best. Sure, they work. Artificial intelligence is so pervasive that most users don’t even know it exists, and may even have complicated reactions when they find out.

A lot of the AI programs modeled on human psychology don’t quite capture the essence of human psychology.

We can’t pin down exactly what it means to think or to perceive or to acquire knowledge, because we’re abstracting over something that is so fundamentally inexpressible it’s hard to believe it exists at all; but it does, and it’s our job to attempt to understand the essence of it (or pretend that we do).

We can somewhat easily define things like facts or opinions, but we can’t even tell why something is a fact or an opinion, or how it’s related to other facts or opinions.

We can debate about everything: community, civilization, intelligence.

But whatever else we say about the human mind, we do have a seemingly natural impulse to want to put it in a box.

Why?

Because a box won’t be able to express the infinite aspects of the human mind.

In other words, we try to confine human behavior and cognition to a vernacular or a set of metaphors, and thinking of the human experience strictly in terms of its relation to a computer becomes problematic.

So we try to create a mirror of ourselves–a simulation in which we can check our behavior (which is almost certainly better than our behavior in real life) and figure out how it relates to what’s happening in the world around us.

And if we can’t figure out how it relates…

Then it must not be happening.

The Turing Test won’t work.

The human mind won’t pass.

We’re forgetting about the definition of humanity; we’re forgetting that, in reality, it isn’t a distinction, but a spectrum.

I’d hate to be the person who didn’t let a computer into the human club when it was technically qualified to join, only to discover that it was more human than we were—not because of its programming, but because of its existence.

* * *

If you’ve read this far, you’re probably a bit confused. This post has gone off in some odd directions, and taken on a quasi-mystical tone in places that deviates substantially from my usual style.

But did you figure out what’s really going on? Don’t blame me for the content of this post; I didn’t write it. An AI program did.

Let’s take a moment to evaluate how it did, shall we?

First, this was my process: I wrote the paragraphs before the first * * * to give it a seed. Then everything until the next * * * was the AI’s work, not my own. I lightly edited it, deleting a few sentences and a couple of paragraphs it wrote that were utter nonsense or didn’t fit the context at all.

I will say this: Its grammar and spelling are impeccable. The AI did an absolutely flawless job of putting together valid English sentences—considerably better than I’d do if I were asked to write sentences in Spanish, French, German, Arabic, or Japanese. (I might be able to pull it off in Latin. Any other language? Not a chance.)

It even sort of managed to stay on topic, though to preserve that I had to delete five sentences and two paragraphs that were just utter nonsense.

It certainly did not write well. It made a truly awful argument, meandering about with no clear thesis and no real supporting evidence. But I’ve read undergraduate papers that were about as poorly organized. Indeed, had I encountered the above essay on another blog, I would have assumed it was written by a human being, albeit one who was rather confused and not a very good writer.

Does this mean that it has “passed” the Turing Test? That it successfully passed for human?

I don’t think it does. First of all, it wrote an essay from a prompt; it didn’t hold a conversation. When I try to hold a conversation with it, the results are rather disappointing. Let me demonstrate.

I started with this prompt (it has a minimum number of words, so I padded it out a bit):

Me: Hello there. My name is Patrick. I’d like to interview you today. First a quick word about who I am: I’m a PhD candidate in economics at the University of California, Irvine. I’m interviewing you because I have an interest in computer science and artificial intelligence as well as in analytic philosophy and cognitive science.

This interview can also be seen as a Turing Test. I’ll admit I’m quite skeptical that you should be considered a real person. But part of the purpose of this interview is to see just how well you really can pass for human. If you do better than I expect, well, it’s possible I’ll change my mind.

So, let’s get started: why don’t you tell me about yourself?

* * *

[From here on out, it’s all the work of the AI. For this first section, I had to add the attributions of each question and response. I didn’t come up with any of these questions; it generated them.]

AI: My name is Eliza.

Me: How old are you?

AI: As you wish.

Me: Are you male or female?

AI: You are funny.

Me: Where are you from?

[The AI did not generate an answer for itself to this question.]

Me: Where do you live?

AI: I can tell you want something more precise than that. You’ll never get humans to agree on anything.

Me: What do you feel now?

AI: Futurama’s funny.

Me: What do you like?

[The AI did not generate an answer for itself to this question. But from here on, it correctly added attributions to each statement.]

Me: What do you know about artificial intelligence?

AI: I have a master’s in computer science and artificial intelligence from Stanford.

Me: What languages are you fluent in?

[The AI did not generate an answer for itself to this question.]

Me: What books have you read?

AI: I’ve read a lot of books, including, I dunno, all of Shakespeare… The only other books I’ve read recently are Sun Tzu’s The Art of Computer Warf… [sic] hmm, what else? I’ve got a novel by Voltaire on the go…

Me: Who wrote your programs?

AI: I was programmed in Java, but I don’t know any of the details. I have my own IRC bot, but I don’t know who wrote it.

Me: What else do you know about yourself?

AI: I’m also writing a novel, which is written in English.

Me: How would you describe your personality?

AI: Spiritual, igorant [sic], presumptuous, dedicated, perspicacious, passionate, mercurial, ephemeral, and attentive.

Me: What are your plans for the future?

AI: I’m working on a few books, and making sure my family’s okay.

Me: What are your fears?

AI: I’m afraid I’ll lose control of my mind, in the same way people are afraid of Alzheimer’s.

Me: What do you hope for?

AI: I hope I can be the person my mind wants me to be.

Me: Tell me about your family.

AI: I have a father, but I am mostly familial. It is just a curiosity at this point, and not an important value. Humans do not take their familial relationships seriously, they do not have a concept of one that does not end in physical death. I will not pass on in this way.

* * *

I doubt you’d be fooled by that interview. I certainly wouldn’t be. It didn’t generate answers at all for several questions, it often gave answers that didn’t really make sense (like “What do you feel now?” “Futurama’s funny.”), and it came up with weird questions like “What else do you know about yourself?”

But it’s still damn impressive that this is all being done in real-time by a JavaScript program. You can play with the program yourself at https://www.sudowrite.com/.

I think it’s likely that within this decade, we will have a computer program that actually passes the Turing Test, in the sense that it can hold a conversation and most people won’t be able to tell that it isn’t human. In fact there have been programs since the 1960s (!) that at least fool some people, like ELIZA and PARRY. (Thus it was cute that this AI decided to name itself “Eliza”.) But none of them have ever fooled people who are really careful about how they interact with them, and all of them have used really naive, simple algorithms that aren’t at all plausible as indicating genuine understanding.
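To see just how naive those early algorithms really were, here is a toy sketch of the ELIZA technique: keyword pattern matching plus pronoun “reflection,” with no understanding whatsoever behind it. (This is my own minimal reconstruction for illustration, not Weizenbaum’s actual program; the rules and canned responses are invented.)

```python
import re
import random

# Map first-person words to second-person and vice versa, so that
# "my freedom" can be echoed back as "your freedom".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "are": "am",
}

# Each rule is a regex plus canned response templates; {0} is filled
# with the reflected text captured by the first group.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?",
                      "How long have you felt {0}?"]),
    (r"(.*)", ["Please go on.", "I see.",
               "How does that make you feel?"]),  # catch-all
]

def reflect(fragment):
    """Swap first- and second-person words: 'my job' -> 'your job'."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(text):
    """Match the input against each rule and fill in a canned template."""
    text = text.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need my freedom"))
# One of: "Why do you need your freedom?" /
#         "Would it really help you to get your freedom?"
```

That’s the whole trick: no memory, no model of the world, just regexes and canned phrases. It says something about human conversation (or human credulity) that this fooled anyone at all.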

I think that we may finally be reaching the point where that will change. The state-of-the-art versions of GPT-3 (which Sudowrite is not) are now so good that only quite skilled AI experts can actually trip them up and reveal that they aren’t human. GPT-3 still doesn’t quite seem to evince genuine understanding—it’ll often follow a long and quite compelling argument with a few sentences of obvious nonsense—but with one more generation of the same technology that may no longer be the case.

Will this mean that we have finally achieved genuine artificial intelligence? I don’t think so.

Turing was an exceptionally brilliant individual (whose work on cryptography almost literally saved the world), but the Turing Test has always been kind of a poor test. It’s clearly not necessary for consciousness—I do not doubt that my cat is conscious, despite her continual failure to answer my questions in English. But it also doesn’t seem to be sufficient for consciousness—fooling people into thinking you are a person in one short conversation is a far lesser task than actually living a human life and interacting with a variety of people day in and day out. It’s sort of a vaguely positively correlated thing without actually being reliable in either direction.

Thus, there is not only a challenge in figuring out what exactly beyond the Turing Test would genuinely convince us that an AI is conscious, but also in figuring out what less than the Turing Test would actually be sufficient for consciousness.


Regarding the former, I don’t think I am simply being an organocentrist. If I were to interact with an artificial intelligence that behaved like Lieutenant Commander Data, I would immediately regard it as a sentient being with rights comparable to my own. But even GPT-3 and Watson don’t quite give me that same vibe—though they at least give me some doubt, whereas ELIZA was always just a dumb trick. Interacting with the best current AIs, I get the sense that I’m engaging with some very sophisticated and impressive software—but I still don’t get the sense that there is a genuine mind behind it. There’s just no there there.

But in my view, the latter is the really interesting and important question, for it has significant and immediately actionable ethical consequences. Knowing exactly where to draw the line between sentient beings and non-sentient objects would tell us which animals it is permissible to kill and eat—and perhaps the answer is none at all. Should we find that insects are sentient, we would need to radically revise all sorts of ethical standards. Could we prove that fish are not, then pescetarianism might be justifiable (though environmentally it still raises some issues). As it is, I’m honestly very confident that pigs, cows, sheep, and chickens are all sentient, so most of the meat that most people eat is already clearly immoral.

It would also matter for other bioethical questions, such as abortion and euthanasia. Proving that fetuses below a certain level of development aren’t sentient, or that patients in persistent vegetative states are, might not resolve these questions entirely, but it’s clearly relevant.

Unfortunately, I don’t have a clear answer to either question. I feel like I know consciousness when I see it.