[comp.ai] Is the Chinese Room Experiment Consistent?

markh@csd4.csd.uwm.edu (Mark William Hopkins) (01/06/90)

     I have been left behind on this issue without having my substantive
question answered.  What makes anyone think that it is even possible to
formulate a complete set of language rules that do not also take into account
our mobility and musculature, our sensory systems (since a large part of our
vocabulary relates directly to them) -- that is: the human being as a control
system?
     
     If you conduct the Chinese Room Experiment -- incorporating a TRULY
complete set of rules for Chinese -- you're going to end up proving the
Chinese Room Argument wrong.  The understanding process, whereby our actual
life-processes are linked to our internal symbols, is an integral part of a
language's semantics and (especially) pragmatics -- because we are first and
foremost intelligent control systems that process sensors and actuators.
Somewhere in your semantic description the elementary symbols underlying
language have to be linked to the control routines we use in our everyday
living.  How are you going to teach a system a language's semantics if it
can't at least simulate these processes?

     The machine will probably even participate in a future Tiananmen Square
conflict and stop a tank dead in its tracks after having learned Chinese. :)

     A simple example to make this more concrete: you can't teach a
congenitally blind person the meaning of the word "green", because our
understanding of the word derives at least in part from the very algorithm
we use to actually perceive the color (a good part of which is implemented in
the hardware that makes up our retina).  Or, more simply ... the meaning IS
the algorithm, abstracted as a data item.
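
     To make this concrete in code, here is a minimal sketch (Python, with
made-up names like perceive_green -- an illustration, not anyone's actual
proposal): the "meaning" of the color word is the classifying procedure
itself, stored as a first-class data item.

         # The "meaning" of a color word as the perceptual procedure,
         # packaged as a data item.  perceive_green is a hypothetical
         # stand-in for the algorithm implemented in the retina.
         def perceive_green(rgb):
             r, g, b = rgb
             return g > r and g > b   # crude test: green dominates

         # The lexicon binds the symbol to the procedure -- the
         # algorithm, abstracted as a data item (a first-class function).
         lexicon = {"green": perceive_green}

         print(lexicon["green"]((30, 200, 40)))   # True
         print(lexicon["green"]((200, 30, 40)))   # False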

     There's Syntax and Semantics.  Have people forgotten about Pragmatics,
after all?

     The meaning of locatives, such as "at", relates directly to the actual
set of rules we use in guiding our motion and our manipulation of objects.
You can't understand those words as we do without already having an
implemented plan-generating system to control a mobile unit's actions in its
environment (or
at least a simulation of this).
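
     A toy sketch of that claim (Python; at, go_to, and TOLERANCE are all
hypothetical names): the locative "at" is understood as the very test the
plan-generating system uses to decide when a motion plan has succeeded.

         # "at" as a control predicate: true when the mobile unit is
         # within tolerance of the target location.
         TOLERANCE = 0.5

         def at(position, target):
             return all(abs(p - t) <= TOLERANCE
                        for p, t in zip(position, target))

         def go_to(position, target, gain=0.4):
             # A toy plan generator: step toward the target until the
             # same predicate that gives "at" its meaning reports success.
             while not at(position, target):
                 position = tuple(p + gain * (t - p)
                                  for p, t in zip(position, target))
             return position

         print(at(go_to((0.0, 0.0), (3.0, 4.0)), (3.0, 4.0)))   # True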

     That's gonna be an awfully huge Chinese Room.

     So the question is, why do we even accept the premise of the Chinese Room
Experiment when it is, in my mind, obviously contradictory? (that a language
can be "described" independent of the way it is "understood" and "used".)

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/06/90)

From article <1798@uwm.edu>, by markh@csd4.csd.uwm.edu (Mark William Hopkins):

>...  What makes anyone think that it is even possible to
>formulate a complete set of language rules that do not also take into account
>our mobility and musculature, our sensory systems ...

Nobody does think that, so far as I can gather.  So the rules must
take those things into account.

>If you conduct the Chinese Room Experiment -- incorporating a TRULY
>complete set of rules for Chinese -- you're going to end up proving the
>Chinese Room Argument wrong. ...

How so?  Where does the proof of wrongness come in?

>How are you going to teach a system a language's semantics if it
>can't at least simulate these processes? ...

You aren't, so you incorporate the means to do the simulations
in the rules.

>That's gonna be an awfully huge Chinese Room.

Yes, it is.  Is this your proof?  The size of the Room is great,
therefore the argument is wrong?

>So the question is, why do we even accept the premise of the Chinese Room
>Experiment when it is, in my mind, obviously contradictory? (that a language
>can be "described" independent of the way it is "understood" and "used".)

That there is no understanding is the conclusion of the argument,
not the premise.  But you mean, I guess, that the rules are
supposed to be "formal", apparently meaning that their symbols
are uninterpreted.  But you've shown that some of those symbols
must have interpretations.  Right?  But so far as the system of
rules goes, if no reference is made by a rule to any interpretations 
that might be assignable to the symbols, the system of rules is still
syntactic and not semantic.  The fact that in observing the way
the rules work you can arrive at interpretations for some of
the symbols or that the programmers made use of interpretations
in formulating the rules does not make the system semantic.
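
To illustrate (with a made-up example, not Searle's): a rule system can
be written so that it only ever pattern-matches the shapes of symbols.
Whatever interpretations the programmers had in mind never appear in the
rules themselves.

         # Purely formal rewrite rules over opaque tokens.  The engine
         # is driven entirely by symbol shape; no rule refers to a
         # meaning, even if the programmer had one in mind.
         rules = {
             ("Q1", "ni"):  ("Q2", "hao"),
             ("Q2", "hao"): ("Q1", "ma"),
         }

         def step(state, token):
             return rules[(state, token)]   # lookup by shape alone

         state = "Q1"
         for token in ["ni", "hao"]:
             state, output = step(state, token)
             print(output)                  # prints "hao", then "ma"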

But if you *could* conclude that the premise was contradictory,
this would be to *agree* with the argument, not to disagree
with it.  So when I disagree with you about the premise being
contradictory, I am attacking the argument, not defending it.

				Greg, lee@uhccux.uhcc.hawaii.edu

muttiah@cs.purdue.EDU (Ranjan Samuel Muttiah) (01/07/90)

In article <6048@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <1798@uwm.edu>, by markh@csd4.csd.uwm.edu (Mark William Hopkins):
>
>>...  What makes anyone think that it is even possible to
>>formulate a complete set of language rules that do not also take into account
>>our mobility and musculature, our sensory systems ...
>
>Nobody does think that, so far as I can gather.  So the rules must
>take those things into account.
>

Interestingly, the letters of the Korean alphabet consist mainly of
symbols signifying the shape of the mouth.  Actually, Korea has a very
interesting linguistic history.  But anyway, I don't want to start
a Korean room problem, or even a Tamil room problem, which I know
no one will pass :-).

jeff@aiai.ed.ac.uk (Jeff Dalton) (01/11/90)

In article <1798@uwm.edu> markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:
>     So the question is, why do we even accept the premise of the Chinese Room
>Experiment when it is, in my mind, obviously contradictory? (that a language
>can be "described" independent of the way it is "understood" and "used".)

I suspect Searle did it that way because he was arguing against a
position that made such assumptions.  Maybe we can start talking about
Dreyfus (?sp) again.  He at least used to argue that understanding
can't be captured by rules.

markh@csd4.csd.uwm.edu (Mark William Hopkins) (01/14/90)

In article <1529@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>In article <1798@uwm.edu> markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:
>>     So the question is, why do we even accept the premise of the Chinese Room
>>Experiment when it is, in my mind, obviously contradictory? (that a language
>>can be "described" independent of the way it is "understood" and "used".)
>
>I suspect Searle did it that way because he was arguing against a
>position that made such assumptions.  Maybe we can start talking about
>Dreyfus (?sp) again.  He at least used to argue that understanding
>can't be captured by rules.

I never implied in my question that understanding could not be captured
by rules.  I implied only that the rules would crucially involve the
signal-processing capabilities linking the mind, the body, and the
environment, and the body's capabilities for manipulating objects in that
environment.

The rules can still be formal rules involving "meaningless" symbols.
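
A sketch of what I have in mind (Python, hypothetical names throughout):
the rule engine stays purely formal -- it just maps symbols to symbols --
but some of those symbols are bound to sensor readings and actuator
commands, and that binding is where the crucial link to the body comes in.

         # Formal rules: pure symbol-to-symbol mappings, "meaningless"
         # to the engine that applies them.
         rules = {"OBSTACLE": "STOP", "CLEAR": "FORWARD"}

         def read_sensor():
             return "OBSTACLE"              # stand-in for a sensor channel

         def actuate(command):
             print("actuator <-", command)  # stand-in for a motor command

         # The system as a whole is coupled to perception and action,
         # even though the rule lookup never consults an interpretation.
         actuate(rules[read_sensor()])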

In light of this clarification, I ask the question again.  Searle posed
a straw man argument, and I question the premise.