[comp.ai.digest] AIList V6 #51 ..implications of the Chinese Room

hayes.pa@XEROX.COM (03/15/88)

Adrian G C Redgers writes:
>a) I thought Searle's point was that humans might not "understand" Chinese
>(or English) and are simply manipulating symbols which model the world. The
>'Chinese room' is then a brain. ... Or was Searle pointing out that the room
>is unsatisfactory for those very reasons?

Why not try reading Searle?  He couldn't be clearer or more entertaining (or
wronger, but that's another story).  He isn't claiming that brains aren't
machines, or that humans don't understand Chinese.  His point is that a
programmed computer can't understand anything, even if it behaves impeccably,
passing the Turing test all over the place.  Reason: well, the program can't,
because it's just a pile of symbols, and the unprogrammed hardware (= the man
in the room) certainly doesn't know anything, being just dumb electronics, and
that's all there is in a programmed computer, QED.  A brain, now, is different,
because of course brains understand things: and the conclusion obviously is
that whatever sort of machine the brain is, it isn't a programmed computer.  So
`strong AI' is wrong, Turing test and all.  Weak AI, on the other hand, just
claims that it is simulating intelligence on electronics, which is fine (says
Searle) - probably a scientific mistake, he guesses, but not a philosophical
mistake, and immune from the Chinese room argument.

Pat Hayes