tmoody@sjuvax.UUCP (T. Moody) (10/18/85)
[]  Here is the promised summary of John Searle's much-discussed
"Minds, Brains, and Programs".  In this posting, I shall *just*
summarize, and withhold any comments that I might be tempted to make.

Searle identifies what he calls a "strong AI" thesis: "The
appropriately programmed computer really *is* a mind, in the sense
that computers given the right programs can be literally said to
*understand* and have other cognitive states."  Strong AI, then, is a
thesis that belongs to the more general philosophy of mind called
"Turing machine functionalism" (so much for refraining from comment).
Searle believes the strong AI thesis to be false.  His key argument
is the "Chinese Room" counterexample...

Imagine yourself in a little room.  From a slot in one wall, you
receive sheets of paper with various inscriptions on them.  On the
table before you is a *large* manual.  Depending on the precise
configuration of the inscription you have received, you refer to
various rules and tables in the manual.  Following the complex
instructions therein, you draw some marks on a blank page and, when
finished, pass it through a slot in the other wall.

The manual is written in English, which you understand, but the marks
on the papers mean nothing to you.  They are just marks.  What you
don't know is that the papers you are receiving are Chinese texts,
and the papers you are passing through the outbound slot are also
Chinese texts.  Furthermore, your output papers are perfectly
*appropriate* Chinese texts; they are natural responses to the input
texts (even though you don't know this).  In short, you are passing
the Turing Test in Chinese.  But you still don't *understand*
Chinese.  All you understand is the fiendishly complicated manual in
front of you.

Conclusion: instantiating a Turing Machine algorithm -- which is what
you've been doing -- is not a sufficient condition for understanding
a natural language, so the strong AI thesis is false.  In Searle's
words, "whatever purely formal principles you put into the computer,
they will not be sufficient for understanding, since a human will be
able to follow the formal principles without understanding anything."

Searle considers and responds to various objections.  I won't
rehearse that part of the paper here.  I will, however, point out
that Searle is not defending dualism.  He says, "My view is that
*only* a machine could think, and indeed only very special kinds of
machines, namely brains and machines that had the same causal powers
as brains.  And that is the main reason strong AI has had little to
tell us about thinking, since it has nothing to tell us about
machines.  By its own definition, it is about programs, and programs
are not machines."

I hope that this clarifies things for those who were wondering about
the references to Searle that have been appearing in this newsgroup.
(Hey, Searle is at Berkeley.  Why doesn't somebody at ucbvax get him
onto the net?)

Todd Moody              | {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department   |
St. Joseph's U.         | "I couldn't fail to
Philadelphia, PA 19131  |  disagree with you less."
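To pin down what the man in the room is doing computationally, here is
a minimal sketch in Python of purely formal symbol manipulation.  It
is an illustration only, not anything from Searle's paper; the table
entries are hypothetical stand-ins for his fiendishly complicated
manual.

    # A minimal sketch of the room's formal procedure.  The entries
    # are hypothetical stand-ins for whole Chinese texts; Searle's
    # manual is vastly larger, but the point survives in miniature:
    # the operator matches shapes and copies marks, and nothing in
    # the procedure refers to what the marks *mean*.

    MANUAL = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def operator(inscription):
        # Follow the manual: match the shape of the incoming marks
        # and write out the prescribed response, interpreting nothing.
        return MANUAL.get(inscription, "请再说一遍。")

    # Paper in through one slot, paper out through the other:
    print(operator("你好吗？"))

The appropriateness of the output is entirely a fact about the table,
not about the operator, which is exactly the wedge Searle drives
between passing the test and understanding.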
dmcanzi@watdcsu.UUCP (David Canzi) (10/22/85)
Suppose you had the means to selectively and temporarily disable
parts of a human brain.  You might eventually discover, by
experiment, two portions of my brain with the following property:
when the rest of my brain is shut off, I can still understand; yet
when either of these two parts is shut off, I become unable to
understand.  Neither of these two portions of my brain understands by
itself, yet the system consisting of both of them working together
*can* understand.

I suggest that, even though neither the man in the Chinese room, nor
the manual he reads from can be said to understand Chinese, the
system consisting of both man and manual understands Chinese.
--
David Canzi

There are too many thick books about thin subjects.
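Canzi's point can be put roughly in code.  In this sketch (my own,
with a hypothetical one-entry rule book, not anything from his post),
the rule-follower and the rule book are each inert alone; only the
composite maps Chinese input to appropriate Chinese output.

    # The systems reply, sketched: the rule book is passive data and
    # the man is a general rule-follower with no Chinese of his own.
    # Each component is inert alone; the composite produces the
    # Chinese-competent behavior.

    MANUAL = {"你好吗？": "我很好。"}    # passive table, no agency

    def man(rules, inscription):
        # Without *some* rule book the man can produce nothing
        # appropriate; with one, he produces Chinese he cannot read.
        return rules.get(inscription, "???")

    # Only the pair (man + MANUAL) passes the test:
    print(man(MANUAL, "你好吗？"))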
tmoody@sjuvax.UUCP (T. Moody) (10/27/85)
In article <1779@watdcsu.UUCP> dmcanzi@watdcsu.UUCP (David Canzi) writes:
>
>I suggest that, even though neither the man in the Chinese room, nor
>the manual he reads from can be said to understand Chinese, the system
>consisting of both man and manual understands Chinese.
>--
>David Canzi

Searle anticipates this move, which he calls the "systems reply."  I
shall briefly summarize his response to it, and throw in my own $.02.
In fact, the easiest thing is to quote Searle directly:

"Let the individual internalize all of these elements of the system.
He memorizes the rules in the ledger and the data banks of Chinese
symbols, and he does all the calculations in his head.  The
individual then incorporates the entire system.  There isn't anything
at all to the system that he does not encompass.  We can even get rid
of the room and suppose he works outdoors.  All the same, he
understands nothing of the Chinese, and a fortiori neither does the
system, because there isn't anything in the system that isn't in
him."

It seems to me that the response is quite clear.  But let's throw in
a few reminders about what Searle is up to.  First, he is NOT trying
to prove that the mind is not a Turing Machine.  Second, he is NOT
trying to prove that "machines will never think".  He IS interested
in the roots of intentionality, and he IS claiming that what makes a
system an "intentional system" is NOT the fact that it passes the
Turing Test, nor is it the fact that its brain is instantiating a
Turing Machine algorithm.

He is NOT claiming that intentional systems must be made of neurons,
although he does point out that biological systems are just the right
sorts of things to possess intentionality.  Why?  Because of the
"causal powers" of biological systems.  If you want to know more
about what Searle thinks about this, his recent book _Intentionality_
is where he puts it together.

The Chinese Room argument is only supposed to be a critique of the
Turing Test as providing a sufficient condition of intentionality.
Searle believes that you have to have a richer repertoire of
interactions with the environment and other beings to have sufficient
conditions of intentionality.  That means richer than what the Turing
Test measures.  Remember, the Turing Test is based on blind exchange
of typed texts.

People like Hofstadter claim that all interpersonal relations are
informal Turing Tests, but this is ridiculous.  The whole point of
the Turing Test, as set up by A. M. Turing himself, is to establish a
carefully *restricted* mode of interaction, in which only
text-exchange counts.  Throw away the restrictions, and you're not
talking about the Turing Test anymore.  Turing believed that anything
more than text-exchange would be extraneous to determining
intentionality.  THIS is what Searle is trying to refute.  Of course,
Turing didn't talk about intentionality; he talked about *thinking*
and mental states.  But maybe there are important distinctions to be
made between intentionality, mental states, and consciousness.

Todd Moody              | {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department   |
St. Joseph's U.         | "I couldn't fail to
Philadelphia, PA 19131  |  disagree with you less."
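Searle's internalization move has a simple computational analogue,
sketched below with my own hypothetical toy table, not anything from
his paper: inline the rule book into the rule-follower.  The
input-output behavior is unchanged, and nothing remains in the system
that is not in him.

    # Internalization, sketched: the table now lives inside the
    # rule-follower (he has "memorized" it), so there is no separate
    # system left over to credit with understanding.  Extensionally
    # this computes the same function as the man-plus-manual pair.

    def man_who_memorized_the_manual(inscription):
        internalized_rules = {"你好吗？": "我很好。"}
        return internalized_rules.get(inscription, "???")

    # Same inputs, same outputs, no room and no external manual:
    print(man_who_memorized_the_manual("你好吗？"))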
dmcanzi@watdcsu.UUCP (David Canzi) (10/31/85)
In article <2461@sjuvax.UUCP> tmoody@sjuvax.UUCP (T. Moody) writes:
>>I suggest that, even though neither the man in the Chinese room, nor
>>the manual he reads from can be said to understand Chinese, the
>>system consisting of both man and manual understands Chinese.
>
>Searle anticipates this move, which he calls the "systems reply." ...
>
>"Let the individual internalize all of these elements of the system.
>He memorizes the rules in the ledger and the data banks of Chinese
>symbols, and he does all the calculations in his head.  The
>individual then incorporates the entire system.  There isn't anything
>at all to the system that he does not encompass.  We can even get rid
>of the room and suppose he works outdoors.  All the same, he
>understands nothing of the Chinese, and a fortiori neither does the
>system, because there isn't anything in the system that isn't in
>him."

That's a tough one.  I won't attempt to argue against it until after
I've read Searle's paper, and maybe not even then.  But I do have a
couple of comments.

First, even though you say Searle was not trying to prove that
machines will never think, I can't see how one can escape that
conclusion if we accept the Chinese Room argument.  Let's carry the
above a step further, and have the man memorize a manual describing
phonetic Chinese instead of written Chinese, and have him follow the
rules to generate spoken responses to a *real* Chinese man who is
talking to him.  Suppose, in the middle of the conversation, the
phone rings.  The Chinese man answers the phone, frowns, hangs up,
then walks over to the rule-following man and says, in Chinese,
"There's been a bomb threat.  We have to leave the building."  The
rule-following man responds in Chinese, saying "Let's go."  Then he
sits and waits for the Chinese man to say something else.

One step further: the manual not only describes the Chinese language,
but uses some notation to represent sensory observations and
movements of the body.  The man memorizes the manual and can carry
out the rules at the normal speed of somebody who really understands
Chinese.  (Clearly he must be *very* talented.)  Repeat the bomb
threat scenario, and he gets up from his chair and heads for the
exit, but doesn't know why he's leaving.  There is no observable
difference between understanding and the lack thereof.  AI people
have a good reason to be annoyed by Searle's argument.

One final step, and I think this'll amuse you: add back the specs for
written Chinese in the manual used in the previous paragraph, have
the man memorize it, then put him to work in a Swahili Room, with a
Swahili manual written in Chinese...
--
David Canzi

Lazlo's Chinese Relativity Axiom: No matter how great your triumphs
or how tragic your defeats, approximately one billion Chinese
couldn't care less.
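Canzi's extension amounts to widening the manual's output alphabet to
include action tokens as well as Chinese replies.  Here is a sketch
of that idea; the rules, tokens, and sentences are all made up for
illustration.

    # A sketch of the extended manual: outputs may now be action
    # tokens as well as Chinese replies.  The tokens are invented for
    # illustration; the rule-follower executes them without knowing
    # why, just as he utters replies without knowing what they say.

    EXTENDED_MANUAL = {
        "你好吗？": ("SAY", "我很好。"),
        "有炸弹威胁，我们得离开大楼。":
            ("SAY_AND_DO", "走吧。", "LEAVE_BUILDING"),
    }

    def rule_follower(utterance):
        rule = EXTENDED_MANUAL.get(utterance, ("SAY", "请再说一遍。"))
        if rule[0] == "SAY":
            print("says:", rule[1])
        elif rule[0] == "SAY_AND_DO":
            print("says:", rule[1])
            print("does:", rule[2])   # heads for the exit, knows not why

    rule_follower("有炸弹威胁，我们得离开大楼。")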
dmcanzi@watdcsu.UUCP (David Canzi) (10/31/85)
Interesting, isn't it, that the man who knows the rules for Chinese
explicitly doesn't understand Chinese, while the people who *do*
understand *don't* know the rules...
--
David Canzi

"Permission is not freedom."
baba@spar.UUCP (Baba ROM DOS) (11/02/85)
> Let's carry the above a step further, and have the man memorize a
> manual describing phonetic Chinese instead of written Chinese, and
> have him follow the rules to generate spoken responses to a *real*
> Chinese man who is talking to him.  Suppose, in the middle of the
> conversation, the phone rings.  The Chinese man answers the phone,
> frowns, hangs up, then walks over to the rule-following man and
> says, in Chinese, "There's been a bomb threat.  We have to leave the
> building."  The rule-following man responds in Chinese, saying
> "Let's go."  Then he sits and waits for the Chinese man to say
> something else.
>
> One step further: the manual not only describes the Chinese
> language, but uses some notation to represent sensory observations
> and movements of the body.  The man memorizes the manual and can
> carry out the rules at the normal speed of somebody who really
> understands Chinese.  (Clearly he must be *very* talented.)  Repeat
> the bomb threat scenario, and he gets up from his chair and heads
> for the exit, but doesn't know why he's leaving.  There is no
> observable difference between understanding and the lack thereof.
> --
> David Canzi

Indeed, following Wittgenstein, one can argue that the man in
question most evidently *did* understand at least the sentence "we
have to leave the building", even though he might not be able to
identify the individual words within the sentence.  Of course, a real
human could eventually learn to isolate and recombine the individual
words once he had seen them in a variety of contexts and associated
them with appropriately related "sensory observations and movements".

Having studied Wittgenstein under Searle years ago, I think that
Searle would maintain that it is precisely the isolation of language
behavior from other behavior in the Chinese room that implies a lack
of understanding.

					Baba
rlr@pyuxd.UUCP (Rich Rosen) (11/04/85)
> Interesting, isn't it, that the man who knows the rules for Chinese
> explicitly doesn't understand Chinese, while the people who *do*
> understand *don't* know the rules...  [CANZI]

Might this have something to do with the very notion of
consciousness: that our minds understand and exercise rules, but "we"
(our consciousnesses?) are not (consciously?) aware of them?  (Sort
of like believing we make decisions "freely", through independent
will free of dependencies, though unaware of the root causes...)
--
"I was walking down the street.  A man came up to me and asked me
 what was the capital of Bolivia.  I hesitated.  Three sailors jumped
 me.  The next thing I knew I was making chicken salad."
"I don't believe that for a minute.  Everyone knows the capital of
 Bolivia is La Paz."
			Rich Rosen    pyuxd!rlr
rlr@pyuxd.UUCP (Rich Rosen) (11/04/85)
> Indeed, following Wittgenstein, one can argue that the man in
> question most evidently *did* understand at least the sentence "we
> have to leave the building", even though he might not be able to
> identify the individual words within the sentence.  Of course, a
> real human could eventually learn to isolate and recombine the
> individual words once he had seen them in a variety of contexts and
> associated them with appropriately related "sensory observations and
> movements".
>
> Having studied Wittgenstein under Searle years ago, I think that
> Searle would maintain that it is precisely the isolation of language
> behavior from other behavior in the Chinese room that implies a lack
> of understanding.
> 					Baba

Could one ever "learn" language in a vacuum, without context based on
experiencing and sensing the things that words represent?
--
"Mrs. Peel, we're needed..."
			Rich Rosen    ihnp4!pyuxd!rlr