[comp.ai] Searle's Chinese Room

tj@mks.UUCP (T. J. Thompson) (03/13/89)

It is evident from the length and heat of the debate over Searle's
Chinese Room thought experiment that the two major positions on it
(pro: the experiment ``proves'' that a program cannot ``understand'';
anti: the system Searle+rules does ``understand'') are based on
very different premises. I think this is because there are two major
flaws in the design of the experiment, which deflect attention from
the interesting implications. The two flaws are that Searle is in the
room, and that the discourse with the room is in Chinese.

I intend to show that these features of the thought experiment appeal
to dangerously misleading intuitions about the situation; and that an
experiment in which these features are corrected reveals that any
system intending to show human-like ``understanding'' must closely
approximate human real-world experience.

I call the presence of Searle in the room a flaw in experimental
design because, as has been noted by a number of other commentators, a
real human being fails to approach the speed and memory capacity
required to ``run'' the Chinese program, by a factor of at least a
million (very conservative estimate). This is vitally important, not a
mere detail of implementation. Certainly no-one would suggest that the
Chinese Room contained a native speaker if it took weeks to begin an
utterance, and months to complete it. Conversely, if Searle did have
the speed and capacity to memorize and run the program, we would
hardly consider him human; and he would certainly have great
difficulty conversing with us ordinary mortals, who take months (of
his time) to say ``Do you understand Chinese?'' Rather than banish
Searle completely, I will replace him with a daemon of adequate
performance, and show that his understanding (of human language) is
irrelevant to the performance of the program.
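
To make the scale of this mismatch concrete, here is a back-of-envelope
calculation. The factor of a million is the conservative estimate above;
the five-second reply and three-second question are assumptions of mine,
chosen only for illustration.

    #include <stdio.h>

    /* Back-of-envelope check of the speed argument above.  The 1e6
       slowdown is the article's conservative estimate; the utterance
       lengths are illustrative assumptions only. */
    int main(void)
    {
        const double slowdown   = 1e6;  /* human speed vs. speed the program needs  */
        const double reply_s    = 5.0;  /* a short spoken reply, in seconds         */
        const double question_s = 3.0;  /* "Do you understand Chinese?", in seconds */
        const double day_s      = 86400.0;

        /* A human running the program: a 5-second reply stretches to months. */
        printf("Reply produced at 1/%.0fx speed: %.0f days\n",
               slowdown, reply_s * slowdown / day_s);

        /* A daemon running a million times faster than us: our 3-second
           question occupies about a month of its subjective time. */
        printf("Our question, as the daemon experiences it: %.0f days\n",
               question_s * slowdown / day_s);
        return 0;
    }

Run, this reports roughly two months for the reply and about a month of
the daemon's subjective time for our question, which is what motivates
replacing Searle with a faster stand-in.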

The detail that the discourse with the room is in Chinese is a subtler
flaw: it leads experimenters not to inspect too deeply the content of the
messages issuing from the room. It is enough (almost) that the
mysterious squiggles turn out to be intelligible at all! But what if
the program the Searle-daemon is running ``understands'' English?
Would one have to conclude that the ``understanding'' resided in the
daemon? Let us imagine a fragment of conversation...

Experimenter:	Tell me, what do you think of Shakespeare?
Searle-daemon:	Well, he is certainly the most widely read and
(to himself)	quoted writer in the English language...
(runs the program at daemon speed)
Room:           Oh, boring, boring, boring!
		We had to study a bunch of that stuff in school,
		y'know, and it's like, totally weird, y'know.
Searle-daemon:	Hey, wait a minute! I would never say anything even
		remotely like that!

Remember, this program is to convince a sceptical experimenter that it
is a native human speaker. It will certainly have to demonstrate a
consistent style and idiom, and espouse individual ideas and opinions.
It would hardly be surprising if this ``individual'' were very
different from Searle. (Indeed if the ``individual'' were in any way
recognizable as Searle we would suspect that the daemon was taking
short-cuts, and substituting his own responses instead of running the
program.)

Here we approach the heart of the matter, from which perspective we
can see the premises on which the opposing sides are camped. If the
program has no connection with the rest of the world except for the
slips of paper passing under the door (which is certainly implicit in
Searle's formulation of the experiment), then the intuition that it
cannot embody ``understanding'' is certainly well founded. No
sceptical enquirer is likely to be fooled for long by a correspondent
lacking a rich experiential background, a sense of humour, particular
tastes in art and music, opinions on the weather, politics, Searle,
and so on.

Searle grants in his premises that the program appears to be a native
speaker (of Chinese, now modified to English). If you take that
premise seriously (the ``anti'' position) then you have to provide
that the program normally has far richer connections to the rest of
the world than slips of paper under the door. After all, it has to
engage in convincing dialogues following from questions like ``What do
you think of Shakespeare?'', and ``Why are you in that room?'', and
``What do you do when you are not conversing by slips of paper?''. If,
on the other hand, you assume the absence of these connections (the
``pro'' position), then the program is revealed as a straw man, for
it certainly could not demonstrate the richness of experience
characteristic of a normal adult human, and would quickly fail the
Turing test.

(One might suppose that a complete human background is somehow built
in to the program. Disregarding how this background might be acquired,
other than by actual experience, it must somehow be kept up to date.
For this updating to be effective, it must provide the rich
connections posited above. On the other hand the program could avoid
the updating problem by claiming to be recently afflicted with total
deafness, blindness, and paralysis. It is an interesting but different
question how likely anyone would be to accept the ``humanity'' of such
a program.)

To interpret the strong AI position as claiming that a program can
provide convincing human performance (of discourse or conversation)
while being connected to the world only via the discourse channel, is
to set up an easily demolished straw man. This is the program
characterized as performing ``mere symbol-crunching''. No-one (that I
have read) is claiming that such a program could pass the (linguistic)
Turing test. In fact, as my remarks above intend to illustrate, there
is no real difference between the ``linguistic'' and ``total'' Turing
tests: given a careful and sceptical enquirer, a program must
demonstrate an embedding in the experiential world as rich as that of
any normal adult human in order to pass the linguistic test.

It should be apparent by now that I accept that an artificial
intelligence can in principle exist, embodying a genuine understanding
gained through interacting with the world. However, I confidently
claim that nothing in existence today comes anywhere close to that
performance, or even suggests how it might be achieved. To claim
``understanding'' on behalf of any current AI program is to grossly
abuse the term.
-- 
     ||  // // ,'/~~\'   T. J. Thompson              uunet!watmath!mks!tj
    /||/// //|' `\\\     Mortice Kern Systems Inc.         (519) 884-2251
   / | //_// ||\___/     35 King St. N., Waterloo, Ont., Can. N2J 2W9
O_/                                long time(); /* know C */

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/14/89)

From article <696@mks.UUCP>, by tj@mks.UUCP (T. J. Thompson):

I like the distinction between the person Searle and the demon
that the experiment actually requires.  But here:

" ... , then the intuition that it
" cannot embody ``understanding'' is certainly well founded. No
" sceptical enquirer is likely to be fooled for long by a correspondent
" lacking a rich experiential background, a sense of humour, particular
" tastes in art and music, opinions on the weather, politics, Searle,
" and so on. ...

I think you're just refusing the premise of the experiment.  If
there was a "rich experiential background" involved it was that
of the programmers who composed the rules.  Now, when the curtain
is raised, that part of it is out of sight, except that it's
embodied in the rules.  Perhaps you think such embodiment is
not possible -- others have expressed doubts.  But that's the
premise.

		Greg, lee@uhccux.uhcc.hawaii.edu

jones@amarna.gsfc.nasa.gov (JONES, THOMAS) (07/14/90)

Searle's Chinese Room concept has triggered considerable head-scratching among
both philosophers and AIniks.  I propose to use Searle's analysis to
reexamine the Turing test and reconsider just what is needed to make the
Chinese Room work.

By hypothesis, the Room matches the ability of a human speaker of Chinese to
answer questions and carry on a conversation in Chinese.  The first thing to
note is that the questions are not restricted to simple ones like, "What is
your favorite color?"  Instead, the Room would have to make *some* reasonable
answer to complicated questions involving its (i.e., the Room's) mental state,
its ethics and values, its family, friends, and childhood; that is, in
order to satisfy Searle's criterion, the Room
would have to be a *true artificial person,* endowed with every mental ability
which a normal human has.  (Searle would have to really hustle to be a working
CPU for this.) We will call this the Room Person, or just Mr. Li.  A typical
conversation might look like this:

Me: Good morning, Mr. Li. I trust you are well.

Mr. Li: Quite well, thank you.

Me: Why don't you start by telling me a little about yourself,
    your family, etc.

Mr. Li: I live in a little village in Quedong province. I work as my ancestors
        have for thousands of years, tilling the land.  I have a wife and two
        grown sons.

Me: What do you think of the events in Tiananmen Square?

(Conversation comes to a halt for a few seconds while Searle adjusts his
glasses.)

Mr. Li: Well, democracy is a fragile flower in China.  We have a history of
        many years of authoritarian government.

Me: Now if you will just lie back on the couch and talk about whatever comes
    to your mind.

Mr. Li: I am thinking about something that happened when I was about five
        years old. Etc., etc.


The point is that the Room is a person, not too different from the rest
of us.  It must have a mind, emotions, values, a culture, an id, ego, and
superego (assuming that Freud was right about this). 

Now we are prepared to answer Searle's claim that no computer can, in
principle, understand a natural language.  The solution to the puzzle
is that Searle, the CPU, understands no Chinese, while Mr. Li, the
Room Person, is fluent in it.

******************************************************************************


Tom Jones--jones@dftnic.gsfc.nasa.gov