
John Searle

John Searle, Professor of Philosophy at Berkeley, is best known for his famous "Chinese Room" analogy. The analogy goes like this: Dr. Searle is in a large room with two slots marked I (Input) and O (Output). Through the 'I' slot he is handed questions written in Chinese characters. Also in the room is a huge book of English instructions telling him how to look up each question and write the corresponding answer, in Chinese, on a piece of paper to pass back out through the 'O' slot - so, practicalities aside, he could take any question and give the right answer without understanding a word of Chinese. Searle says this is analogous to computers running NLP programs: just because they output the correct answer for a given input, no matter how complicated the algorithm, it does not constitute understanding.
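
As a purely illustrative sketch (the class name and the question/answer strings below are invented for this example, not taken from the interview or any real system), the room can be thought of as nothing more than a lookup table in Java: the program produces its answers by matching symbols against a rule book, so even when every output is correct, nothing in it attaches any meaning to the symbols it shuffles.

import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration only: answers come from pure symbol lookup,
// with no grasp of what any of the symbols mean.
public class ChineseRoom {

    // The "rule book": maps incoming symbol strings to outgoing symbol strings.
    private final Map<String, String> ruleBook = new HashMap<>();

    public ChineseRoom() {
        // Invented entries; any table of uninterpreted strings would do.
        ruleBook.put("你好吗？", "我很好，谢谢。");
        ruleBook.put("今天天气如何？", "今天天气很好。");
    }

    // The person in the room: take the symbols from the 'I' slot,
    // find the matching rule, and push the prescribed symbols out of the 'O' slot.
    public String answer(String question) {
        return ruleBook.getOrDefault(question, "对不起，我不明白。");
    }

    public static void main(String[] args) {
        ChineseRoom room = new ChineseRoom();
        // The output may look like understanding to an outside observer,
        // but the program has only matched and copied uninterpreted strings.
        System.out.println(room.answer("你好吗？"));
    }
}

However elaborate such a rule book is made, Searle's point is that adding more rules of the same kind never adds understanding.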

The analogy has been a huge area of debate for the twenty years that have passed since Dr. Searle first published his paper on it. Generation5 is very proud to have had the chance to interview him.


G5: Your 'Chinese Room' analogy is probably the single most talked-about subject in the philosophical side of Artificial Intelligence. Did you ever think it would have such an impact?

I knew when I originally formulated the Chinese Room Argument that it was decisive against what I call "Strong Artificial Intelligence", the theory that says that the right computer program, in any implementation whatever, would necessarily have mental contents in exactly the same sense that you and I have mental contents. The Chinese Room Argument refutes the view that the implemented computer program, regardless of the physics of the implementing medium, is sufficient, by itself, to guarantee mental contents. I did not think it would receive the amount of attention it did. What I expected, in so far as I had any expectation at all, is that the people who could appreciate its force would simply accept it, and the people who for one reason or another did not want to face the issue would simply avoid it. What I did not anticipate is that there would be twenty years of continuing debate.

G5: How would you define understanding? Do you differentiate between the idea of intelligence and that of understanding? (Could a computer be intelligent, but not *understand* what you were saying?)

The literal meaning of "understanding" is not important for my argument. My argument primarily concerns intentionality, that is, mental content. I have written a book about intentionality, so I would not attempt to define it here, but briefly the basic idea is that the mind has mental contents which enable mental states to refer to or be about objects and states of affairs in the world. Where language is concerned this is semantic content.

"Understanding" is normally contrasted with "misunderstanding", but for the purposes of my argument this distinction is not important. I don't "understand" Chinese, but then I don't "misunderstand" it either, because I do not have any Chinese intentional content at all. The point of the argument is simply that the syntax of the implemented program is not sufficient to guarantee the presence of the semantics, or mental content, of intrinsic intentionality. Where English is concerned I have meaning attached to the words, that is, I have intentional content attaching to the words. Where Chinese is concerned in the Chinese room, I am just manipulating formal symbols.


When it comes to the notion of "intelligence" there is a massive confusion in the AI literature. "Intelligence" has two different senses (at least). In one sense there is an intelligence that is psychologically relevant, that is the intelligence that, for example, humans and animals have. My dog is literally more intelligent than a mouse, because he has a bigger brain and has greater psychological capacity. But there is also a metaphorical, or observer-relative, or derivative sense of intelligence, where we simply apply the concept of intelligence to things that have no mental life at all. Thus I can say that my pocket calculators today are more "intelligent" than my calculators of twenty years ago, and my "smart modem" of today is much smarter than the modem I had twenty years ago. This is a harmless use of the term intelligence. The mistake is to suppose that somehow or other the existence of the behavioral or observer-relative phenomenon of intelligence is psychologically relevant, that "intelligent" behavior guarantees actual psychological content, and of course it doesn't.

By the way, the very expression "artificial intelligence" also trades on these ambiguities. "Artificial" also has different senses. An artificial x can be a real x produced artificially, or it can be something that is not a real x at all. Thus artificial dyes, now commonly used in oriental rugs, are real dyes, they just happen to be produced artificially. But artificial leather is not real leather produced artificially, rather it is not leather at all, but merely a plastic imitation. The expression "artificial intelligence" trades on the ambiguity of both "artificial" and "intelligence". The confusion is to suppose that because we can artificially produce something that behaves as if it is intelligent, then somehow or other we have artificially produced real intelligence in the psychologically relevant sense. One thing the Chinese Room shows is that this is a fallacy.

G5: If a computer manipulating symbols via a specified algorithm is not understanding, how do you believe humans process information? Do you believe this to be non-computable?

This question can be answered briefly. The brain processes information by biologically very specific, though still largely ill-understood, neuronal processes. The brain is, after all, a machine that produces consciousness, intelligence, intentionality, etc., by machine processes such as neuron firings at synapses. I regard the notion of what is "computable" as, for the most part, irrelevant to the operations of the brain. The brain is a physical system like the stomach. You can describe many, perhaps most, brain processes in a precise enough way that you can simulate them on a computer, just as you can describe the stomach in such a way that you can give a computational simulation of the processes in the stomach. But the computability of these processes at the formal level is not what is essential to the corresponding physical processes at the biological level. At the biological level there are actual causal mechanisms that produce consciousness, intentionality, and all the rest of it.

Basically I think the brain is important. We might be able to do what the brain does using some other medium, but we would have to duplicate the specific causal powers of the brain, and not just do a simulation or a modeling of brain processes; you actually have to duplicate the causal powers. An analogy will make this point clear: you do not have to use feathers in order to build a machine that can fly, but you do have to duplicate the bird's causal power to overcome the force of gravity in the earth's atmosphere, and a computer simulation of flight is not a flight. The notion of what is computable and what is not computable is a mathematical notion having to do with what sorts of problems you can solve with algorithms, and I do not see any difficulty posed for artificial intelligence by the limits of computability. The problem is not computability, the problem is psychological reality. I realize that some people have made a great deal out of the fact that there are non-computable problems that humans can solve, but this argument seems to me irrelevant to AI because humans may be using algorithms to solve those problems even though the algorithms are not, for example, theorem-proving algorithms.

G5: I very much agree that classical AI programs have no concept of 'understanding' -- yet, with the advent of massively parallel computers, complex neural networks and advances in neurobiology, I feel that one day we will indeed have a fully cognitive, understanding computer. Do you hold this same view? Why or why not?

A lot of people think that the existence of massively parallel computers somehow avoids the problem, but we need to be exactly clear about what the problem is and what the solution is supposed to be. If we are told that the massively parallel computer will avoid the problems of the Chinese Room, then we have to ask ourselves what is it exactly about the PDP systems which is different from the standard Von Neumann architecture? If we are told that there are computations that the PDPs can carry out, but that the traditional architecture cannot carry out, we know that this is false. We know that it is false from the Church-Turing thesis that any computable function is Turing computable. And we know furthermore from Turing's theorem that a Universal Turing Machine can compute any function that a PDP system can compute, because we know that anything at all that is computable can be computed on a Universal Turing Machine. So if the claim is that the PDP systems have computational powers that Von Neumann systems do not have, we know that is false. The claim, then, must be not that there is some special computation involved in PDP, but rather that there is some specific feature of the architecture which gives it powers in addition to its computational powers. But now we are no longer talking computation, we are talking speculative neurobiology. We are speculating that a certain pattern of the physical architecture, again regardless of the implementing medium, will duplicate and not merely simulate the causal powers of the brain, simply because of its network structure. I think this is preposterous as a piece of neurobiological speculation, and nobody who knows any neurobiology would take it seriously, but it is not something that my arguments are designed to answer, because my arguments are purely logical arguments that have nothing to do with answering neurobiological speculations.

G5: Do you think that Artificial Intelligence is fighting a lost cause, or is it just the definitions of understanding that you disagree with? (That is, do you believe, for example, that we will have fully humanoid robots capable of speaking and listening to humans, but you would disagree with the statement that they understand what is said to them?)

I think that what I call weak AI, or cautious AI, is immensely useful. I think we use the computer to simulate cognition the same way we use the computer to simulate any other natural process, so computers are immensely useful in studying molecular biology, or digestion for that matter. I do not have any objection to the use of computers in studying cognition and in simulating cognition. The objection that I have made and continue to insist on is that simulation is not duplication. The question of whether a "humanoid robot" would actually understand something, or merely simulate understanding, is a question of how the system works. If the system works simply by processing meaningless symbols, then that by itself would not be sufficient to guarantee understanding. If the system works by duplicating and not merely simulating the causal powers of neurons (remember of course that this might be done in some medium other than neurons), then the system would actually have cognition, and not merely a simulation of cognition. The point to remember is that the behavior by itself does not settle the issue. I can get an electrical engine to give the same power output as a gasoline engine, but all the same it works on different principles. We might have a robot that could give the same input/output functions as a human, but work on different internal principles. It might, in short, have no mental life at all. The important point to remember is that where the existence of mental life is concerned, behavior is irrelevant. It is useful epistemologically in finding out about mental life, but the presence of behavior does not guarantee the presence of any mental life.

It is important to keep emphasizing that of course, in a sense, we are robots. We are all physical systems capable of behaving in certain ways. The point, however, is that unlike the standard robots of science fiction, we are actually conscious. We actually have conscious and unconscious forms of mental life. The robots we are imagining, I take it, have no consciousness whatever. They are simply computer simulations of the behavior patterns of human beings.

G5: It is not just 'understanding' that is under great debate. There are computer viruses that fulfill nearly all the criteria of 'life'. Or take 'emotions': since they are merely chemical reactions, a computer could arguably feel them by altering the necessary variables (Grey Walter's tortoises, for example). What are your feelings on these?

The computer viruses are not alive, and of course my PC has no emotions. To have an emotion is among other things to be capable of consciousness, and we are assuming that the computers in question are not conscious. Of course, if they were conscious computers, then they could have emotions, but to put the point very simply, a system that is incapable of consciousness is incapable of emotion regardless of what its behavior is.

G5: What amendments to the Chinese Room would you make, if any, to the original analogy?

Looking back on the original Chinese Room article twenty years later, I think I would not change anything.

Submitted: 08/03/2001
