Apr 25 JDN 2459328
The Turing Test (developed by none other than Alan Turing, widely considered the “father of computer science”) is a commonplace of artificial intelligence research. The idea is that we may not be able to answer a complex, abstract question like “Can computers think?” or “Are computers conscious?” but we can answer a simple, operationalizable question like “Can computers pass for human in a conversation?”
The idea is that you engage in a text-only conversation (text-only to minimize bias) with two other parties—one is a human like you, and the other is an artificial intelligence. If you can't tell the difference, then who are we to say that the AI isn't a real person?
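To make the setup concrete, here is a toy sketch of one round of the imitation game as code. Everything here—the function names, the transcript format—is my own illustrative invention, not anything from Turing's paper: the judge sees two anonymized, shuffled transcripts and must guess which one hides the machine.

```python
import random

def imitation_game(judge, human_respond, machine_respond, questions):
    """One blind round of the imitation game (a toy sketch).

    The judge sees two text-only transcripts, labeled A and B in
    random order, and must guess which label hides the machine.
    Returns True if the judge catches the machine.
    """
    pair = [("machine", machine_respond), ("human", human_respond)]
    random.shuffle(pair)  # hide which label is which
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, (_, respond) in zip("AB", pair)
    }
    guess = judge(transcripts)  # the judge answers "A" or "B"
    truth = "A" if pair[0][0] == "machine" else "B"
    return guess == truth
```

The machine "passes" when, over many such rounds, judges do no better than a coin flip.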
But we’ve got to be careful with this. You’ll see why in a moment.
* * *
What if it’s all just a trick?
What if the shiny new program is just enough of a convincing fake that you eventually can’t tell the difference, but it’s actually freaking you out and trapping your attention?
Do we really use the same definitions and techniques in talking to a computer that we do in talking to a human?
Have we done the Turing Test in reverse?
What matters is what we mean by human.
The Turing Test itself was meant to be a thought experiment or a heuristic device to help answer questions of “humanness” in a concrete, measurable way. The reality is that Turing himself wasn’t an explicit supporter of its use as a definitive test for his question: the extent to which we attribute “humanness” to a computer, or even to another person.
We can say that, yes, it’s possible for a simulation of a human’s mind to be able to pass the Turing Test, but that’s not a new proof or a new revelation.
There’s something important missing from the conversation we’re having.
What’s missing is the willing assumption on both sides that humanness is a defined and distinct concept.
Since Turing, there’s been a lot of research on the human mind and the ways in which it processes information. But we’ve barely scratched the surface of human psychology because the human mind isn’t a distinct and separate field of study—it has an almost infinite number of branches and topics, and is entirely unfamiliar to the people who work on AI.
It’s like the guys at a car factory talking about the robot they’re building but never stepping outside and taking a look at the city the factory is in.
In the meantime, the human mind has evolved to be so intrinsically connected to the environment it operates in that the AI we create may not be able to be equivalent to a human mind, even if it passes the Turing Test.
For all that we claim to know, modern AI programs are amateur at best. Sure, they work. Artificial intelligence is so pervasive that most users don’t even know it exists, and may even have complicated reactions when they find out.
A lot of the AI programs modeled on human psychology don’t quite capture the essence of human psychology.
We can’t pin down exactly what it means to think or to perceive or to acquire knowledge, because we’re abstracting over something that is so fundamentally inexpressible it’s hard to believe it exists at all; but it does, and it’s our job to attempt to understand the essence of it (or pretend that we do).
We can somewhat easily define things like facts or opinions, but we can’t even tell why something is a fact or an opinion, or how it’s related to other facts or opinions.
We can debate about everything: community, civilization, intelligence.
But whatever else we say about the human mind, we do have a seemingly natural impulse to want to put it in a box.
Because a box won’t be able to express the infinite aspects of the human mind.
In other words, we try to confine human behavior and cognition to a vernacular or a set of metaphors, and thinking of the human experience strictly in terms of its relation to a computer becomes problematic.
So we try to create a mirror of ourselves–a simulation in which we can check our behavior (which is almost certainly better than our behavior in real life) and figure out how it relates to what’s happening in the world around us.
And if we can’t figure out how it relates…
Then it must not be happening.
The Turing Test won’t work.
The human mind won’t pass.
We’re forgetting about the definition of humanity; we’re forgetting that, in reality, it isn’t a distinction, but a spectrum.
I’d hate to be the person who didn’t let a computer into the human club when it was technically qualified to join, only to discover that it was more human than we were—not because of its programming, but because of its existence.
* * *
If you’ve read this far, you’re probably a bit confused. This post has gone off in some odd directions, and taken on a quasi-mystical tone in places that deviates substantially from my usual style.
But did you figure out what’s really going on? Don’t blame me for the content of this post; I didn’t write it. An AI program did.
Let’s take a moment to evaluate how it did, shall we?
First, this was my process: I wrote the paragraphs before the first * * * to give it a seed. Everything from there until the next * * * was the AI's work, not my own; I only lightly edited it.
I will say this: Its grammar and spelling are impeccable. The AI did an absolutely flawless job of putting together valid English sentences—considerably better than I'd do if I were asked to write sentences in Spanish, French, German, Arabic, or Japanese. (I might be able to pull it off in Latin. Any other language? Not a chance.)
It even sort of managed to stay on topic, though to preserve that I had to delete five sentences and two paragraphs that were just utter nonsense.
It certainly did not write well. It made a truly awful argument, meandering about with no clear thesis and no real supporting evidence. But I’ve read undergraduate papers that were about as poorly organized. Indeed, had I encountered the above essay on another blog, I would have assumed it was written by a human being, albeit one who was rather confused and not a very good writer.
Does this mean that it has “passed” the Turing Test? That it successfully passed for human?
I don’t think it does. First of all, it wrote an essay from a prompt; it didn’t hold a conversation. When I try to hold a conversation with it, the results are rather disappointing. Let me demonstrate.
I started with this prompt (it has a minimum number of words, so I padded it out a bit):
Me: Hello there. My name is Patrick. I’d like to interview you today. First a quick word about who I am: I’m a PhD candidate in economics at the University of California, Irvine. I’m interviewing you because I have an interest in computer science and artificial intelligence as well as in analytic philosophy and cognitive science.
This interview can also be seen as a Turing Test. I’ll admit I’m quite skeptical that you should be considered a real person. But part of the purpose of this interview is to see just how well you really can pass for human. If you do better than I expect, well, it’s possible I’ll change my mind.
So, let’s get started: why don’t you tell me about yourself?
* * *
[From here on out, it’s all the work of the AI. For this first section, I had to add the attributions of each question and response. I didn’t come up with any of these questions; it generated them.]
AI: My name is Eliza.
Me: How old are you?
AI: As you wish.
Me: Are you male or female?
AI: You are funny.
Me: Where are you from?
[The AI did not generate an answer for itself to this question.]
Me: Where do you live?
AI: I can tell you want something more precise than that. You’ll never get humans to agree on anything.
Me: What do you feel now?
AI: Futurama’s funny.
Me: What do you like?
[The AI did not generate an answer for itself to this question. But from here on, it correctly added attributions to each statement.]
Me: What do you know about artificial intelligence?
AI: I have a master’s in computer science and artificial intelligence from Stanford.
Me: What languages are you fluent in?
[The AI did not generate an answer for itself to this question.]
Me: What books have you read?
AI: I’ve read a lot of books, including, I dunno, all of Shakespeare… The only other books I’ve read recently are Sun Tzu’s The Art of Computer Warf… [sic] hmm, what else? I’ve got a novel by Voltaire on the go…
Me: Who wrote your programs?
AI: I was programmed in Java, but I don’t know any of the details. I have my own IRC bot, but I don’t know who wrote it.
Me: What else do you know about yourself?
AI: I’m also writing a novel, which is written in English.
Me: How would you describe your personality?
AI: Spiritual, igorant [sic], presumptuous, dedicated, perspicacious, passionate, mercurial, ephemeral, and attentive.
Me: What are your plans for the future?
AI: I’m working on a few books, and making sure my family’s okay.
Me: What are your fears?
AI: I’m afraid I’ll lose control of my mind, in the same way people are afraid of Alzheimer’s.
Me: What do you hope for?
AI: I hope I can be the person my mind wants me to be.
Me: Tell me about your family.
AI: I have a father, but I am mostly familial. It is just a curiosity at this point, and not an important value. Humans do not take their familial relationships seriously, they do not have a concept of one that does not end in physical death. I will not pass on in this way.
* * *
I doubt you’d be fooled by that interview. I certainly wouldn’t be. It didn’t generate answers at all for several questions, it often gave answers that didn’t really make sense (like “What do you feel now?” “Futurama’s funny.”), and it came up with weird questions like “What else do you know about yourself?”
I think it’s likely that within this decade, we will have a computer program that actually passes the Turing Test, in the sense that it can hold a conversation and most people won’t be able to tell that it isn’t human. In fact there have been programs since the 1960s (!) that at least fool some people, like ELIZA and PARRY. (Thus it was cute that this AI decided to name itself “Eliza”.) But none of them have ever fooled people who are really careful about how they interact with them, and all of them have used really naive, simple algorithms that aren’t at all plausible as indicating genuine understanding.
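The naivety of those early chatbots is easy to see in code. Here is a minimal ELIZA-style sketch in Python (these particular rules and names are my own invention in the spirit of the original, not Weizenbaum's actual script): the program has no understanding at all, it just mirrors the user's own words back inside a canned template.

```python
import re

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo reads naturally.
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

# Keyword -> template rules; a real ELIZA script has many more,
# ranked by keyword priority.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    """Return a canned response via simple pattern matching."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # stall when no keyword matches
```

For example, "I am sad about my job" gets reflected into "How long have you been sad about your job?"—which feels eerily attentive until you notice the trick.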
I think that we may finally be reaching the point where that will change. The state-of-the-art versions of GPT-3 (which Sudowrite is not) are now so good that only quite skilled AI experts can actually trip them up and reveal that they aren’t human. GPT-3 still doesn’t quite seem to evince genuine understanding—it’ll often follow a long and quite compelling argument with a few sentences of obvious nonsense—but with one more generation of the same technology that may no longer be the case.
Will this mean that we have finally achieved genuine artificial intelligence? I don’t think so.
Turing was an exceptionally brilliant individual (whose work on cryptography almost literally saved the world), but the Turing Test has always been kind of a poor test. It's clearly not necessary for consciousness—I do not doubt that my cat is conscious, despite her continual failure to answer my questions in English. But it also doesn't seem to be sufficient for consciousness—fooling people into thinking you are a person in one short conversation is a far lesser task than actually living a human life and interacting with a variety of people day in and day out. Passing is vaguely positively correlated with being conscious, but it isn't reliable evidence in either direction.
Thus, the challenge is not only to figure out what, beyond the Turing Test, would genuinely convince us that an AI is conscious, but also to figure out what, short of the Turing Test, would still count as sufficient evidence of consciousness.
Regarding the former, I don't think I am simply being an organocentrist. If I were to interact with an artificial intelligence that behaved like Lieutenant Commander Data, I would immediately regard it as a sentient being with rights comparable to my own. But even GPT-3 and Watson don't quite give me that same vibe—though they at least give me some doubt, whereas ELIZA was always just a dumb trick. Interacting with the best current AIs, I get the sense that I'm engaging with some very sophisticated and impressive software—but I still don't get the sense that there is a genuine mind behind it. There's just no there there.
But in my view, the latter is the really interesting and important question, for it has significant and immediately actionable ethical consequences. Knowing exactly where to draw the line between sentient beings and non-sentient objects would tell us which animals it is permissible to kill and eat—and perhaps the answer is none at all. Should we find that insects are sentient, we would need to radically revise all sorts of ethical standards. Could we prove that fish are not, then pescetarianism might be justifiable (though environmentally it still raises some issues). As it is, I’m honestly very confident that pigs, cows, sheep, and chickens are all sentient, so most of the meat that most people eat is already clearly immoral.
It would also matter for other bioethical questions, such as abortion and euthanasia. Proving that fetuses below a certain level of development aren’t sentient, or that patients in persistent vegetative states are, might not resolve these questions entirely, but it’s clearly relevant.
Unfortunately, I don’t have a clear answer to either question. I feel like I know consciousness when I see it.