[comp.ai] Fallacy in Chinese Room experiment.

ins_cscs@jhunix (Surag Surendrakumar) (03/09/89)

Hello everybody. I find the Chinese Room discussion very interesting, so I
wanted to add a few opinions and comments of my own. All comments on this
posting are appreciated.

According to Searle there is a difference between thinking and understanding.
Understanding has an additional biological aspect which causes the actual
understanding. This may be true or false; no one knows for certain, and a lot
of people, including myself, do not believe Searle. Searle puts forth the
Chinese Room argument to support his theory, but the Chinese Room argument
is baseless and does not really prove anything. The following is my
criticism of the Chinese Room argument.

Searle says that by learning just the rules for Chinese he will be able
to pass the Turing Test without really knowing Chinese. He also claims
that he can learn all the rules and get good enough to convince a native
speaker that he understands Chinese when he really does not. I think it
is going to be impossible for Searle to learn all those rules and still
not understand Chinese.

Take for example a class where we are taught elementary Chinese. You are
taught Chinese by learning the syntax and symbols, which is very much the
same way that Searle learns Chinese. Yet you understand Chinese and Searle
does not? It is not going to be possible for Searle to learn all the
rules and not understand any Chinese. Searle also understands the rules,
which is essential for him to manipulate them; he is able to reply to
questions in Chinese as well as a native speaker and knows all the rules,
yet he claims that he still does not understand any Chinese. If any person
can do this then **he has to understand** Chinese.
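What "learning just the rules" means here can be made concrete. A minimal sketch (the "symbols" below are invented placeholders, not real Chinese, and the rulebook is trivially small only for illustration) is a pure lookup table:

```python
# Toy sketch of purely syntactic rule-following, as in the Chinese Room:
# the rulebook maps input shapes to output shapes with no reference to
# meaning.  The strings below are invented placeholders, not Chinese.
RULEBOOK = {
    "squiggle squoggle": "splotch",
    "squoggle splotch": "squiggle",
}

def room_reply(symbols: str) -> str:
    """Produce a reply by pure shape-matching on the input string."""
    # The operator never interprets the symbols; a dictionary lookup
    # is all the "understanding" the room contains.
    return RULEBOOK.get(symbols, "squiggle")  # rulebook's default reply

print(room_reply("squiggle squoggle"))  # -> splotch
```

Searle's claim is that everything the room does reduces to lookups of this kind; the objection above is that any rulebook rich enough to actually pass the test could not be mastered without understanding.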

Saying that there is some biological demon within you which does the
understanding, while having no idea what it is, is not a scientific
argument. Neither Searle nor anyone else has been able to come up with a
valid explanation of what the biological demon is. Asserting that this
demon exists (which is highly doubtful in the first place) while having no
idea how it works or what it is actually made of is neither scientific nor
logical. It is like saying the answer to all problems is X, where X
represents the answer to the problem, while refuting any logical
explanation of the problem and accepting no answer but X.


So you can clearly see that the argument made by Searle is not correct. His
assumptions are wrong and his experiment is baseless. He does not have any
valid explanation to support his results either. Searle starts wrong and
ends wrong; his experiment is not scientific.

ALL COMMENTS TO THIS POSTING APPRECIATED.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

		|| SURAJ C. SURENDRAKUMAR. ||

The Johns Hopkins University.
BITNET: ins_cscs@jhunix.bitnet	ins_cscs@jhuvms.bitnet
Other networks I have not yet figured out.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/10/89)

ins_cscs@jhunix (Surag Surendrakumar)
of The Johns Hopkins University - HCF wrote:

" According to Searle... Understanding has this additional biological
" aspect... Saying that there is some biological demon within you which does
" the understanding, and not having any idea what it is [is] not a
" scientific argument.

Searle said nothing about biological "demons." He just said that there
must be properties that the brain has that symbol-crunchers lack. He
was perfectly right to say that, and his Chinese Room Argument was
quite valid, as far as it went.

But you wanted a specific candidate for what the missing function might
be? I've proposed one in my JETAI paper: nonsymbolic function (e.g.,
transduction, analog processing, A/D, D/A, motor effectors). And before
you give the reflexive response that all you have to do is "hook up"
those processes to your symbol-cruncher and you're back where you
started: In my paper I give reasons why grounding a symbol system is
not just a simple matter of hooking on peripherals to a
symbol-cruncher. The nonsymbolic function may be INTRINSIC to
TTT-passing power (and hence mental function) and may not be isolable
as independent symbolic and nonsymbolic "modules."

" Searle says that by learning just the rules for Chinese he will be able
" to pass the Turing Test but... not really know Chinese... I think that
" it is going to be impossible for Searle to learn all those rules and
" still not understand Chinese... You are [taught] Chinese by learning
" the syntax and symbols. This is very much the same way that Searle
" learns Chinese...

One version of what I called in my JETAI paper "the symbol grounding problem"
is the "Chinese-Dictionary-Go-Round": Suppose we had to learn Chinese 
AS A FIRST LANGUAGE and our only source of information were a
Chinese-Chinese dictionary. In looking up any (for-us-so-far-meaningless)
symbol or symbol-string, all we could find would be still more
meaningless symbol-strings ("definitions"). The trip through the
dictionary would be UNGROUNDED: It could never come to a halt on
something other than meaningless symbols. How do we break out of this
meaningless syntactic circle to meaning, reference, understanding? It's
obvious that some, at least, of the symbols (the elementary ones) must
be grounded in something other than still more symbols. (This is true
even for second-language learning, except there we have a grounded
first language as a leaping-off point [to which the computer in
the LTT is of course not entitled]. This is the only reason that
cryptological feats like the deciphering of the Rosetta stone are
possible at all.)
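The dictionary-go-round can be sketched as a closed lookup that never bottoms out in anything nonsymbolic (the symbols and "definitions" below are arbitrary placeholders standing in for a Chinese-Chinese dictionary):

```python
# Toy sketch of the "Chinese-Dictionary-Go-Round": every definition is
# itself made of undefined symbols, so the search for a grounding can
# only go around in circles.  Symbols here are arbitrary placeholders.
DICTIONARY = {
    "A": ["B", "C"],
    "B": ["C", "A"],
    "C": ["A", "B"],
}

def ground(symbol, visited=None):
    """Search for something non-symbolic to halt on; with a closed
    symbol-to-symbol dictionary we only ever revisit symbols."""
    if visited is None:
        visited = set()
    if symbol in visited:
        return None  # been here before: the circle has closed
    visited.add(symbol)
    for part in DICTIONARY[symbol]:  # a "definition" is more symbols
        result = ground(part, visited)
        if result is not None:
            return result
    return None  # no meaning was ever reached

print(ground("A"))  # -> None
```

The point of the sketch is that no traversal order helps: as long as every entry points only at other entries, the trip stays ungrounded.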


In the real world there are real objects, which produce
proximal (so far NONSYMBOLIC) projections on our sensors.
We somehow learn (or in some cases have evolved) the ability to
reliably pick out CATEGORIES of objects on the basis of our sensory
input and to assign to them a unique, arbitrary symbol. My book
describes how nonsymbolic representations may play an essential role in
our ability to do that, thereby grounding our elementary symbols in the
objects they refer to. No demons are involved; but there turns out to
be no natural "joint" at which one can carve cognitive function that would
leave nonsymbolic processes on one side and an "understanding" symbol
cruncher on the other.
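As an illustration only (the features, threshold, and symbol names are invented for this sketch, not Harnad's actual model), grounding an elementary symbol in sensory invariants might look like:

```python
# Hedged sketch of symbol grounding via categorization: a nonsymbolic
# sensory projection (here just a feature vector) reliably triggers an
# arbitrary elementary symbol.  Features and cutoffs are invented.
def categorize(projection):
    """Map a nonsymbolic sensory projection onto an arbitrary
    elementary symbol via a (learned or evolved) category detector."""
    roundness, elongation = projection
    if roundness > 0.5 and elongation < 0.5:
        return "SYMBOL-1"  # arbitrary name attached to this category
    return "SYMBOL-2"

# The symbol is grounded in the sensory invariants that trigger it,
# not in still more symbols.
print(categorize((0.9, 0.1)))  # -> SYMBOL-1
print(categorize((0.2, 0.8)))  # -> SYMBOL-2
```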

Refs:   Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
        Harnad, S. (1987) (ed.) Categorical Perception: The Groundwork of
                          Cognition. Cambridge University Press.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

fransvo@htsa.uucp (Frans van Otten) (03/14/89)

Stevan Harnad writes:

> ... a specific candidate for the missing function might be: nonsymbolic
> function (e.g., transduction, analog processing, A/D, D/A, motor
> effectors).  ... In my paper I give reasons why grounding a symbol system
> is not just a simple matter of hooking on peripherals to a symbol-cruncher.
> The nonsymbolic function may be INTRINSIC to TTT-passing power (and hence
> mental function) and may not be isolable as independent symbolic and
> nonsymbolic "modules."

[...]

> ... Suppose we had to learn Chinese ... our only source of information
> were a Chinese-Chinese dictionary ... looking up (meaningless) symbols ...
> all we could find ... still more meaningless symbols. ... It's obvious
> that some of the symbols (the elementary ones) must be grounded in
> something other than still more symbols.

Finally I understand what you mean by "symbolic" vs. "non-symbolic".  With
"symbols" you mean the (representations of) (English) words, and so with
"symbolic" you mean something (some concept or whatever) represented using
those symbols (words).  With "non-symbolic" you mean something _not_
represented by words, but some other way.  I fully agree with the idea.
I wrote: understanding requires representation in _internal_ symbols.  What
I mean by "internal symbols" is what you mean by "something non-symbolic".

Now this makes me think of a model from the psychotherapeutic world.
This theory is about "primary wordforms" and "secondary wordforms".
Basically, a primary wordform is something you experienced, something you
physically felt, with your body.  You don't have a word for it; you only
know the experience.  Later, someone can tell you: "this, what you are
feeling right now, is called ...".  Now you know the secondary wordform.
(We also "invented" a tertiary wordform: something you know the word for,
but not the experience.)

So this seemingly supports your choice of candidate for the missing
function.  Seemingly, because this is what is necessary for _human_
understanding, but not for understanding which can be done by computers.
When we talk about physics, e.g. speed, you may want to think of your
bicycle.  Then you map the abstract concept "speed" onto _your_ "real"
world.  But why would this be necessary for understanding physics?  Why do
you think physics is so hard nowadays?  At the sub-sub-atomic level, you
can't map anything onto something of _your_ real world!  Does that make
physics un-understandable?  I doubt it.  But it will get more difficult.
Abstract thinking has always been difficult for many people.

Now I can explain where you went wrong.  You say "non-symbolic", then you
think: "not represented by something that can be a symbol... necessary for
understanding... computers work with bytes, very symbolic... computers
cannot understand!  Eureka!"

The point you see is very correct: representation by something other than
the word-symbols.  The point you don't see: what you call "non-symbolic"
can actually be represented by neuron states or whatever physical entity
you want.  This _has_ to be true, because human understanding happens in
your head and there is nothing un-physical there!
-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/15/89)

fransvo@htsa.uucp (Frans van Otten) of
AHA-TMF (Technical Institute), Amsterdam Netherlands writes:

" Finally I understand what you mean by "symbolic" vs. "non-symbolic".  With
" "symbols" you mean the (representations of) (English) words, and so with
" "symbolic" you mean something (some concept or whatever) representated using
" those symbols (words).  With "non-symbolic" you mean something _not_
" representated by words, but some other way.  I fully agree with the idea.
" I wrote: understanding requires representation in _internal_ symbols.  What
" I mean by "internal symbols" is what you mean by "something non-symbolic".

No, I'm afraid it's somewhat more complicated than that. Since a lot of
the discussion of Searle's Argument depends on a clear grasp of what
Searle (and most of his opponents) mean by "symbolic," "symbol manipulation,"
etc., I will in a separate posting venture a definition of a symbol system
based on what I've gleaned from what the computationalists and symbolic
functionalists (Turing, Newell, Pylyshyn, Fodor) seem to mean by symbolic.

We have to make some commitment about what is and is not symbolic, otherwise
there is no basis for agreeing or disagreeing about Searle's Argument or
any other statement about what symbol crunching can and cannot do. One
trivial gambit several contributors periodically resort to in this
discussion is to call every physical process "symbolic." This collapses
the substantive issues under discussion here concerning strategies of
mind-modelling into an empty generality about materialism (which
neither Searle nor I would bother to disagree with).

(In other words, what needs defining is not "understanding," as
many of my interlocutors have kept insisting, but "symbolic.")
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771