[comp.ai] No more Chinese rooms, please

kenp@ntpdvp1.UUCP (Ken Presting) (07/11/90)

> In article <593@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> >       Searle is trying to prove the following:
> >
> >       For any program P whatsoever, and for any machine M whatsoever,
> >       the following inference is always invalid:
> >
> >       Machine M runs Program P, therefore Machine M understands.
>
> (Daryl McCullough) writes: 
> . . .  
> . . .  the validity of the above inference is not claimed by Strong
> AI (or if it is, then they are just speaking loosely). The more
> precise claim would be that, for the right program P, one can infer
> 
> 	     Machine M runs Program P, therefore the system (Machine M running
>	     Program P) understands.
> 

Daryl, I take it that your only objection to my reading of Searle is 
that "System" should replace "Machine" after the "therefore".

> . . .                        For the Chinese room to count as an
> argument against this claim, it would be necessary to establish that
> the system (man + rules + room) does not understand Chinese. And
> Searle cannot establish this without offering *some* definition of
> what it means for a system to understand. (Comment: Searle's variant
> of having the man memorize the rules does not change anything; there
> would still be two systems: the man "acting himself" and the man
> following the rules. Establishing that one system does not understand
> does not automatically establish that the other doesn't.)
> 
> Daryl McCullough
> 

As I see it, the Systems Reply suffers from a far worse problem of 
vagueness than does Searle's original argument.  What is a "System"?
All of us programmers have a very robust notion of how a computer
runs a program.  We are not confused by such complications as virtual
memory, multitasking, distributed processing, and asynchronous execution.
(Well, we do get confused sometimes, but whoever it is that wrote the
operating system must have figured it out at least once).  

There is a big difference between using the concept of "System" in the
engineering context which originated it and using the same concept in
an unrelated philosophical context.

Here is a simple way to see the problem.  You said that when Searle 
memorizes the rules, there are two systems, one of them being Searle 
acting as himself.  You have also said that a "System" is a machine 
plus a program.  But the question being debated is whether intelligence
is constituted by programs.  

Your version of the systems reply does not *directly* beg the question,
but you are making a very big assumption, just by describing Searle-as-
himself and Searle-following-rules in the same terms.  If you could 
justify the application of the term "System" to human beings, you would
be well on your way to refuting Searle. 

The problem is identifying just what "program" a natural object like a 
human brain is "running".  You can't take a core dump and disassemble the
object code.  In _Representation and Reality_, Hilary Putnam presents
a "theorem" in which he argues that every physical object instantiates
every finite automaton.  If he is right, then the Systems Reply has some 
big troubles.
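
To make the worry concrete, here is a toy illustration of my own (it is
*not* Putnam's construction, and the "physical readings" are invented):
whether the object below "runs" a two-state flip-flop automaton depends
entirely on an interpretation table chosen after the fact, not on
anything in the object itself.

  /* Toy sketch only -- not Putnam's construction.  The "readings"
   * are invented; the point is that the mapping from readings to
   * automaton states is stipulated after the fact.                */
  #include <stdio.h>

  /* a trivial two-state automaton: the state flips on every tick */
  static int step(int state) { return !state; }

  int main(void)
  {
      /* pretend these are successive measurements of some object */
      double reading[] = { 3.1, 0.2, 7.7, 5.0, 1.4 };
      int n = sizeof reading / sizeof reading[0];
      int state = 0, i;

      /* the "interpretation": reading number i counts as whatever
       * state the automaton is supposed to be in at time i        */
      for (i = 0; i < n; i++) {
          printf("reading %.1f  ==>  automaton state %d\n",
                 reading[i], state);
          state = step(state);
      }
      return 0;
  }

Since the readings are all distinct, nothing stops us from pairing them
with the state sequence of *any* automaton we like -- which is Putnam's
worry in miniature.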


Ken Presting  ("Speaking in fork()'ed tongues")

llama@eleazar.dartmouth.edu (Joseph A. Francis) (07/11/90)

In article <598@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>As I see it, the Systems Reply suffers from a far worse problem of 
>vagueness than does Searle's original argument.  What is a "System"?
>All of us programmers have a very robust notion of how a computer
>runs a program.  We are not confused by such complications as virtual
>memory, multitasking, distributed processing, and asynchronous execution.
>(Well, we do get confused sometimes, but whoever it is that wrote the
>operating system must have figured it out at least once).  

This vagueness only occurs because of our limited understanding.  In the CR,
our understanding is limited because the description of the system is so
sketchy (we don't really know how the rule book is interacting with the
symbols).  In human machines (so to speak), it gets even more complex.
What portion of you, for instance, understands?

It would actually be a more telling blow to the systems response if it
did NOT leave us with questions about what it is that constitutes the
system.

"Read My Lips: No Nude Texans" - George Bush clearing up a misunderstanding
                                                                       

daryl@oravax.UUCP (Steven Daryl McCullough) (07/12/90)

I am a little embarrassed about participating in the
interminable Chinese Room argument, but I (foolishly) feel that a
teeny, tiny bit of progress is being made in the direction of
clarifying the issues.

In article <598@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
(> > is me, Daryl McCullough)
> > . . .                        For the Chinese room to count as an
> > argument against this claim [a version of Strong AI], it would be
> > necessary to establish that
> > the system (man + rules + room) does not understand Chinese. And
> > Searle cannot establish this without offering *some* definition of
> > what it means for a system to understand.
> 
> As I see it, the Systems Reply suffers from a far worse problem of 
> vagueness than does Searle's original argument.  What is a "System"?

Vague or not, the fact that the Systems Reply is possible at all means
that Searle has failed to prove his point. Searle answers the question
"Does the Chinese Room understand?" with the seeming non sequitur "The
man in the Chinese Room doesn't understand." It seems to me to be up
to Searle to show that his answer is relevant; he must show that the
understanding of the man is relevant to the understanding of the room,
which he cannot do without giving a more complete definition of what
it means to understand. He seems to be relying on a terribly naive
(since it leads to infinite regress) principle:

     For an entity to understand, some smaller part of that entity
     must understand.

There is nothing vague or question-begging about denying that the
above principle is self-evident, which is all that the Systems Reply
amounts to. To defeat Searle, one need only show that his "proof" is
not a valid proof. It is not necessary (as you yourself have pointed
out) to show that its conclusion is incorrect, only that the reasoning
is inconclusive. Therefore it is not necessary to define "system" in
order to defeat Searle.

> There is a big difference between using the concept of "System" in the
> engineering context which originated it and using the same concept in
> an unrelated philosophical context.

The claim of Strong AI, it seems to me (I am just a kibitzer, not a
researcher or philosopher), is that the same notion of system can and
should be used in engineering and in philosophical contexts. A
defender of Strong AI (which, at least for the moment, I am) would
claim that the context is *not* unrelated---that intelligence and
understanding *are* essentially software engineering issues.

> Your version of the systems reply does not *directly* beg the question,
> but you are making a very big assumption, just by describing Searle-as-
> himself and Searle-following-rules in the same terms.  If you could 
> justify the application of the term "System" to human beings, you would
> be well on your way to refuting Searle.

Circular reasoning, or "begging the question," is the fallacy of
proving a point by (usually indirectly) assuming the conclusion.  I
don't think that Strong AI is, in anyone's mind, something that can be
proved; it should have the status of a thesis, such as Church's
thesis, or a hypothesis, such as Newton's law of universal
gravitation. Such statements can't be proved, but one can come up with
evidence for or against them.

I feel the Systems Reply has the same status. You are right that I
can't justify the application of the term "System" to a human being;
it is an assumption, or working hypothesis, that we can try out to see
if it fits, to see if it has interesting consequences, or to see if it
has consequences which are clearly false.

> The problem is identifying just what "program" a natural object like a 
> human brain is "running".  You can't take a core dump and disassemble the
> object code.  In _Representation and Reality_, Hilary Putnam presents
> a "theorem" in which he argues that every physical object instantiates
> every finite automaton.  If he is right, then the Systems Reply has some 
> big troubles.

If you temporarily (for the sake of argument) assume that intelligence
(and, more generally, all mental properties) is a function of software,
then an immediate consequence of this assumption is that it doesn't
make sense to talk about a physical object (such as a computer or a
human brain) as having one unique intelligence, any more than it makes
sense to say that a computer runs one unique program: through
multitasking, a computer can run more than one program at once.
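
In case that sounds abstract, here is a throwaway Unix sketch of my own
(nothing deep is claimed for it): one machine, one fork() call, and two
concurrently running "systems".

  /* one physical machine, two concurrently running programs */
  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      pid_t pid = fork();       /* one machine becomes two processes */

      if (pid == 0) {                        /* the child process  */
          printf("child:  I am system number two\n");
          return 0;
      } else if (pid > 0) {                  /* the parent process */
          printf("parent: I am system number one\n");
          wait(NULL);        /* wait for the other system to finish */
          return 0;
      }
      perror("fork");                        /* fork failed        */
      return 1;
  }

Neither process has any better claim than the other to *be* the
machine; each is the machine running a program.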

To my mind, Putnam's theorem (which I find completely plausible)
doesn't defeat the Systems Reply at all; it simply implies that there
must be more than one system (infinitely many, actually) associated
with any given physical object. This seems to be exactly what is going
on in the thought experiment in which Searle memorizes the Chinese
Room rules: Searle's body would then implement simultaneously (1) a
staunch opponent of Strong AI, and (2) an artificially intelligent
Chinese-speaking mind. To interact with the first system, speak in
English, and to interact with the second, write in Chinese.

What this comes down to is that a System is not specified by pointing
to a physical object; an AI researcher cannot simply hand you a
computer and a floppy disk and claim that he has handed you an
artificially intelligent being. A specification of a system must also
involve a specification of its interface to the world: how it is to be
"plugged in", how one talks to it, etc. 

Daryl McCullough