Jul 12 JDN 2459044
Perhaps the most famous thought experiment in the philosophy of mind, John Searle’s Chinese Room is the sort of argument that basically every expert knows is wrong, yet can’t quite explain what is wrong with it. Here’s a brief summary of the argument; for more detail you can consult Wikipedia or the Stanford Encyclopedia of Philosophy.
I am locked in a room. The only way to communicate with me is via a slot in the door, through which papers can be passed.
Someone on the other side of the door is passing me papers with Chinese writing on them. I do not speak any Chinese. Fortunately, there is a series of file cabinets in the room, containing instruction manuals which explain (in English) what an appropriate response in Chinese would be to any given input of Chinese characters. These instructions are simply conditionals like “After receiving input A B C, output X.”
I can follow these instructions and thereby ‘hold a conversation’ in Chinese with the person outside, despite never understanding Chinese.
This room is analogous to a computer taking a Turing Test. A computer is fed symbols and has instructions telling it to output symbols; it may ‘hold a conversation’, but it will never really understand language.
First, let me note that if this argument were right, it would pretty much doom the entire project of cognitive science. Searle seems to think that calling consciousness a “biological function” as opposed to a “computation” can somehow solve this problem; but this is not how functions work. We don’t say that a crane ‘isn’t really lifting’ because it’s not made of flesh and bone. We don’t say that an airplane ‘isn’t really flying’ because it doesn’t flap its wings like a bird. He often compares consciousness to digestion, which is unambiguously a biological function; but if you make a machine that processes food chemically in the same way as digestion, that is basically a digestion machine. (In fact there is a machine called a digester that basically does that.) If Searle is right that no amount of computation could ever get us to consciousness, then we basically have no idea how anything would ever get us to consciousness.
Second, I’m guessing that the argument sounds fairly compelling, especially if you’re not very familiar with the literature. Searle chose his examples very carefully to create a powerfully seductive analogy that tilts our intuitions in a particular direction.
There are various replies that have been made to the Chinese Room. Some have pointed out that the fact that I don’t understand Chinese doesn’t mean that the system doesn’t understand Chinese (the “Systems Reply”). Others have pointed out that in the real world, conscious beings interact with their environment; they don’t just passively respond to inputs (the “Robot Reply”).
Searle has his own counter-reply to these arguments: He insists that if instead of having all those instruction manuals, I memorized all the rules, and then went out in the world and interacted with Chinese speakers, it would still be the case that I didn’t actually understand Chinese. This seems quite dubious to me: For one thing, how is that different from what we would actually observe in someone who does understand Chinese? For another, once you’re interacting with people in the real world, they can do things like point to an object and say the word for it; in such interactions, wouldn’t you eventually learn to genuinely understand the language?
But I’d like to take a somewhat different approach, and instead attack the analogy directly. The argument I’m making here is very much in the spirit of Churchland’s Luminous Room reply, but a little more concrete.
I want you to stop and think about just how big those file cabinets would have to be.
For a proper Turing Test, you can’t have a pre-defined list of allowed topics and canned responses. You’re allowed to talk about anything and everything. There are thousands of symbols in Chinese. There’s no specified limit to how long the test needs to go, or how long each sentence can be.
After each 10-character sequence, the person in the room has to somehow sort through all those file cabinets and find the right set of instructions—not simply to find the correct response to that particular 10-character sequence, but to that sequence in the context of every other sequence that has occurred so far. “What do you think about that?” is a question that one answers very differently depending on what was discussed previously.
The key issue here is combinatoric explosion. Suppose we’re dealing with 100 statements, each 10 characters long, from a vocabulary of 10,000 characters. This means that there are ((10,000)^10)^100 = 10^4000 possible conversations. That’s a ludicrously huge number; a googol, by comparison, is a mere 10^100. Even if each atom could store one instruction, there wouldn’t be nearly enough atoms in the known universe (there are only about 10^80). After a few dozen sentences, simply finding the correct file cabinet would be worse than finding a needle in a haystack; it would be like finding one particular hydrogen atom somewhere in the galaxy.
Even if you assume a shorter memory (which I don’t think is fair; human beings can absolutely remember 100 statements back), say only 10 statements, things aren’t much better: ((10,000)^10)^10 is 10^400, which is still far more than the number of atoms in the known universe.
In fact, even if I assume no memory at all, just a simple Markov chain that responds only to your previous statement (which can be easily tripped up by asking the same question in a few different contexts), that would still be 10,000^10 = 10^40 sequences, which is at least a quintillion times the total data storage of every computer currently on Earth.
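If you want to check these figures yourself, here is a minimal Python sketch, using the same round numbers assumed above (a 10,000-character vocabulary, 10-character statements, and varying amounts of conversational memory); the 10^80 figure for atoms in the observable universe is the standard rough estimate:

```python
VOCAB = 10_000            # assumed number of distinct characters
CHARS_PER_STATEMENT = 10  # assumed length of each statement

statements = VOCAB ** CHARS_PER_STATEMENT   # 10^40 possible single statements

for context in (100, 10, 1):                # how many statements of memory
    conversations = statements ** context
    exponent = len(str(conversations)) - 1  # conversations is a power of ten
    print(f"{context:>3}-statement memory: about 10^{exponent} possible conversations")

# Output:
# 100-statement memory: about 10^4000 possible conversations
#  10-statement memory: about 10^400 possible conversations
#   1-statement memory: about 10^40 possible conversations
# For comparison, the observable universe contains roughly 10^80 atoms.
```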
And I’m supposed to imagine that this can be done by hand, in real time, in order to carry out a conversation?
Note that I am not simply saying that a person in a room is too slow for the Chinese Room to work. You can use an exaflop quantum supercomputer if you like; it’s still utterly impossible to store and sort through all possible conversations.
This means that, whatever is actually going on inside the head of a real human being, it is nothing like a series of instructions that say “After receiving input A B C, output X.” A human mind cannot even fathom the total set of possible conversations, much less have a cached response to every possible sequence. Rules that simple cannot possibly mimic consciousness. This doesn’t mean consciousness isn’t computational; it means you’re doing the wrong kind of computations.
I’m sure Searle’s response would be to say that this is a difference only of degree, not of kind. But is it, really? Sometimes a sufficiently large difference of degree might as well be a difference of kind. (Indeed, perhaps all differences of kind are really very large differences of degree. Remember, there is a continuous series of common ancestors that links you and me to bananas.)
Moreover, Searle has claimed that his point was about semantics rather than consciousness: In an exchange with Daniel Dennett he wrote “Rather he [Dennett] misstates my position as being about consciousness rather than about semantics.” Yet semantics is exactly how we would solve this problem of combinatoric explosion.
Suppose that instead of simply having a list of symbol sequences, the file cabinets contained detailed English-to-Chinese dictionaries and grammars. After reading and memorizing those, then conversing for a while with the Chinese speaker outside the room, who would deny that the person in the room understands Chinese? Indeed, what other way is there to understand Chinese, if not reading dictionaries and talking to Chinese speakers?
Now imagine somehow converting those dictionaries and grammars into a form that a computer could directly apply. I don’t simply mean digitizing the dictionary; of course that’s easy, and it’s been done. I don’t even mean writing a program that translates automatically between English and Chinese; people are currently working on this sort of thing, and while the results are still pretty poor, they’re getting better all the time.
No, I mean somehow coding the software so that the computer can respond to sentences in Chinese with appropriate responses in Chinese. I mean having some kind of mapping within the software of how different concepts relate to one another, with categorizations and associations built in.
I mean something like a searchable cross-referenced database, so that when asked the question, “What’s your favorite farm animal?”, despite never having encountered this sentence before, the computer can go through a list of farm animals and choose one to designate as its ‘favorite’, and then store that somewhere so that later on, when it is asked again, it will give the same answer. And then when asked “Why do you like goats?” the computer can go through the properties of goats, choose some to be the ‘reason’ why it ‘likes’ them, and then adjust its future responses accordingly. If it decides that the reason is “horns are cute”, then when you mention some other horned animal, it updates to increase its probability of considering that animal “cute”.
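To make that a little more concrete, here is a toy Python sketch of the kind of cross-referenced store I have in mind. The animals, properties, and phrasing are all invented for illustration; the point is only that the answer gets derived from a small knowledge base and then remembered, rather than looked up in a table of every possible conversation:

```python
import random

# Invented toy knowledge base: each animal maps to a few properties.
KNOWLEDGE = {
    "goat":    {"horns", "climbs well", "eats anything"},
    "cow":     {"horns", "gives milk"},
    "chicken": {"lays eggs", "small"},
    "pig":     {"smart", "likes mud"},
}

memory = {}  # commitments made during this conversation

def favorite_farm_animal():
    # Decide once, then cache the decision so later answers are consistent.
    if "favorite" not in memory:
        memory["favorite"] = random.choice(sorted(KNOWLEDGE))
    return memory["favorite"]

def why_do_you_like(animal):
    # Pick one property as the 'reason', and record it so that anything
    # sharing that property can be treated as more likable later on.
    reason = random.choice(sorted(KNOWLEDGE[animal]))
    memory.setdefault("liked_properties", set()).add(reason)
    return f"Because of this: {reason}."

print("My favorite farm animal is the", favorite_farm_animal())
print(why_do_you_like(favorite_farm_animal()))
print("Properties I now find appealing:", memory["liked_properties"])
```

Ask it its favorite twice and you get the same answer, because the choice was cached rather than scripted; the ‘reason’ it picks goes into a running set of liked properties that later responses can consult.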
I mean something like a program that is built to follow conversational conventions, so that when you ask it its name, it will not only tell you; it will ask you your name in return, and store that information for later. And then it will map the sound of your name to known patterns of ethnic naming conventions, and so when you say your name is “Ling-Ling Xu” it asks “Is your family Chinese?” And then when you say “yes” it asks “What part of China are they from?” and then when you say “Shanghai” it asks “Did you grow up there?” and so on. It’s not that it has some kind of rule that says “Respond to ‘Shanghai’ with ‘Did you grow up there?’”; on the contrary, later in the conversation you may say “Shanghai” and get a different response, because it appeared in a different context. In fact, if you were to keep spamming “Shanghai” over and over again, it would sound confused: “Why do you keep saying ‘Shanghai’? I don’t understand.”
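Here, again purely as a toy sketch with the conversational flow hard-coded far more crudely than any real system would be, is what that kind of context-dependence looks like: there is no rule pairing “Shanghai” with a fixed response; the reply depends on what the program currently expects, and repeating the same input just produces confusion:

```python
class ContextualAgent:
    """Toy illustration of context-dependent replies (not a real chatbot)."""

    def __init__(self):
        self.expecting = "name"   # what kind of input we currently expect
        self.last_input = None
        self.repeats = 0

    def reply(self, text):
        # Notice repetition regardless of context.
        if text == self.last_input:
            self.repeats += 1
            if self.repeats >= 2:
                return f"Why do you keep saying '{text}'? I don't understand."
        else:
            self.repeats = 0
        self.last_input = text

        # The same input gets different treatment depending on what we expect.
        if self.expecting == "name":
            self.expecting = "family"
            return f"Nice to meet you, {text}. Is your family Chinese?"
        if self.expecting == "family":
            self.expecting = "hometown"
            return "What part of China are they from?"
        if self.expecting == "hometown":
            self.expecting = "chat"
            return f"Did you grow up in {text}?"
        return "Interesting! Tell me more."

agent = ContextualAgent()
print(agent.reply("Ling-Ling Xu"))  # -> asks about family
print(agent.reply("yes"))           # -> asks what part of China
print(agent.reply("Shanghai"))      # -> "Did you grow up in Shanghai?"
print(agent.reply("Shanghai"))      # -> same word, different response
print(agent.reply("Shanghai"))      # -> "Why do you keep saying 'Shanghai'? ..."
```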
In other words, I mean semantics. I mean something approaching how human beings actually seem to organize the meanings of words in their brains. Words map to other words and contexts, and some very fundamental words (like “pain” or “red”) map directly to sensory experiences. If you are asked to define what a word means, you generally either use a lot of other words, or you point to a thing and say “It means that.” Why can’t a robot do the same thing?
I really cannot emphasize enough how radically different that process would be from simply having rules like “After receiving input A B C, output X.” I think part of why Searle’s argument is so seductive is that most people don’t have a keen grasp of computer science, and so a task that is O(N^2), like what I just outlined above, doesn’t sound that different to them from a task that is O(10^(10^N)), like the simple input-output rules Searle describes. With a fast enough computer it wouldn’t matter, right? Well, if by “fast enough” you mean “faster than could possibly be built in our known universe”, I guess so. But O(N^2) tasks with N in the thousands are done by your computer all the time; no O(10^(10^N)) task will ever be accomplished for such an N within the Milky Way in the next ten billion years.
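To put rough numbers on that comparison (assuming, purely for illustration, a machine doing a billion operations per second):

```python
import math

OPS_PER_SECOND = 1e9      # assumed, just for a sense of scale
SECONDS_PER_YEAR = 3.15e7

# Quadratic work over a vocabulary of ten thousand concepts: trivial.
N = 10_000
print(f"O(N^2): {N**2 / OPS_PER_SECOND:.2f} seconds")          # 0.10 seconds

# Even the memoryless lookup table from the Markov-chain case (10^40 entries)
# could never be enumerated, let alone the 10^4000-entry version.
entries = 1e40
years = entries / OPS_PER_SECOND / SECONDS_PER_YEAR
print(f"Reading each canned response once: about 10^{int(math.log10(years))} years")
# The universe is only about 1.4 * 10^10 years old.
```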
I suppose you could still insist that this robot, despite having the same conceptual mappings between words as we do, and acquiring new knowledge in the same way we do, and interacting in the world in the same way we do, and carrying on conversations of arbitrary length on arbitrary topics in ways indistinguishable from the way we do, still nevertheless “is not really conscious”. I don’t know how I would conclusively prove you wrong.
But I have two things to say about that: One, how do I know you aren’t such a machine? This is the problem of zombies. Two, is that really how you would react, if you met such a machine? When you see Lieutenant Commander Data on Star Trek: The Next Generation, is your thought “Oh, he’s just a calculating engine that makes a very convincing simulation of human behavior”? I don’t think it is. I think the natural, intuitive response is actually to assume that anything behaving that much like us is in fact a conscious being.
And that’s all the Chinese Room was anyway: Intuition. Searle never actually proved that the person in the room, or the person-room system, or the person-room-environment system, doesn’t actually understand Chinese. He just feels that way, and expects us to feel that way as well. But I contend that if you ever did actually meet a machine that really, truly passed the strictest form of a Turing Test, your intuition would say something quite different: You would assume that machine was as conscious as you and I.