[comp.ai] Another reply to the Chinese Room Argument

fransvo@htsa.uucp (Frans van Otten) (03/02/89)

1. Confusion about the Chinese Room Argument.

No one has ever told me, or written down, what the Chinese Room Argument is
exactly about.  So I had to piece something together from the various
postings.  At first I understood this:

-->	Searle, a human being knowing the English language, does not know
	the Chinese language or its graphical representation (the characters).
	This man is in a room in which he can read Chinese characters coming
	in from the "external world" and write Chinese characters back out to
	it.  Now, if there
	are (English) books on the Chinese language and characters, Searle
	would be able to translate the incoming characters to English.  Then
	he could understand the message, and think of a reply.  He formulates
	his reply in English, and translates it to Chinese.  Finally he
	outputs this, represented by Chinese characters, to his pen-pal
	outside the room.

However, after reading some more articles in this newsgroup, I started getting
confused.  From these postings I understood the following (essentially
different) meaning of the Chinese Room Argument:

-->	The same setting as above, except for the books in the room.  These
	don't contain rules for translating Chinese characters to English,
	but merely "when you get these-and-these characters as input, you
	should produce these-and-these characters as output".

This seems a highly implausible reading.  If all the output depends only on
the input and a set of written rules, (almost) any computer could take the
place of Searle, leaving the entire Argument without any meaning whatsoever.
So I must assume that my original understanding was correct.
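
To make concrete why that second reading trivializes things: if the rule
book really is nothing more than a lookup table from input strings to
output strings, then the whole Room reduces to something like the sketch
below (Python, with purely invented entries; the names mean nothing):

    # Hypothetical "rule book only" Room: a bare lookup table.
    # Every entry is invented for illustration.
    rule_book = {
        "ni hao":   "ni hao, wo hen hao",   # fixed reply to a fixed input
        "zai jian": "zai jian",
    }

    def room(characters_in):
        # Look the input up and emit the prescribed output; nothing else happens.
        return rule_book.get(characters_in, "")

Any machine that can index a table can play this part, which is exactly
what makes this reading uninteresting to me.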


2. About "understanding" in general and "understanding a language".

The arguments brought up in this discussion made me wonder.  E.g.:  What is
the essential difference between a simulation and an implementation ?  Can
an implementation not be viewed as a simulation by the physical world ?
And:  What is the essential difference between the "Teletype Turing Test" and
the "Total Turing Test" ?  What makes sensor input and motor output
essentially different from character (ASCII) I/O ?

Also, from the outside (the result side) of a human being, you cannot tell
whether they "understand" (whatever that would be) or not.  It is enough if
they can pass the Turing Test.  Then the question emerges:  Why should we,
human beings, have developed "understanding" in the course of our evolution
if it cannot be a selection criterion ?

Viewing the human brain as a collection of simple elements (neurons,
molecules, atoms, quarks, you name it), each of which obeys the rules of
physics and chemistry, every state of it and every process in it must be
understandable and (exactly) explainable (definable).  So it must also be
possible to formulate a definition of what "understanding" is about.  After
some thinking I arrived at this theory:

-->	The process or state which we call "understanding" is simply the
	representation of the understood concept in some "internal symbols"
	(some internal state of neurons/groups of neurons/transistors/logical
	gates/...).
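
As a toy illustration of this theory (nothing more than a sketch; the
symbol names and the dictionary are my own invention), "understanding" a
concept would amount to holding some representation of it in the internal
state, whatever the substrate happens to be:

    # Toy sketch: understanding = having an internal representation.
    # In a brain the "symbols" would be neural states; here they are
    # ordinary Python objects, chosen arbitrarily.
    internal_state = {}

    def understand(concept, representation):
        # To "understand" the concept is simply to hold a representation of it.
        internal_state[concept] = representation

    understand("cat-on-mat", {"object": "cat", "relation": "on", "place": "mat"})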

Now let us see what a (natural) language is.  It is a (complex) code,
designed for transmitting messages (concepts, ...).  A language consists of
a vocabulary (a set of "words").  A word is a symbol with several aspects:

  - It represents a (class of) meanings;
  - It belongs to one or more syntactical classes (verbs, nouns, ...);
  - It has a "vocal" (and mostly also a graphical) representation (i.e.,
    how to pronounce (write) the word).

A language also needs a "grammar".  This is a set of rules, defining things
like:

  - The ordering of words in a sentence;
  - The modifications to the meanings and representations of words in
    different syntactical classes and subclasses (or rather, a specification
    of the contextual meaning of the word insofar as the meaning of the
    word depends on its syntactical state).

Then, "understanding" a language means:  "Having internalized (most of) the
grammar rules and (a substantial part of) the vocabulary that constitute
that language".
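
Written down as a data structure (again only a sketch; the field names
follow the three aspects and the grammar items above, and every concrete
entry is invented), a vocabulary entry and a grammar might look like:

    # One word of the vocabulary, with the three aspects listed above.
    word_cat = {
        "meanings": ["small domesticated feline"],   # (class of) meanings
        "classes":  ["noun"],                        # syntactical classes
        "spoken":   "/kat/",                         # vocal representation
        "written":  "cat",                           # graphical representation
    }

    # A (tiny) grammar: word order plus a rule that modifies a word's
    # representation according to its syntactical state.
    grammar = {
        "order":  ["subject", "verb", "object"],
        "plural": lambda written: written + "s",     # crude example rule
    }

"Understanding" the language would then be having (most of) these structures
internalized, in whatever internal symbols the system uses.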


3. What is understood in the Chinese Room.

Inside the Chinese Room, Searle, the person, understands the English
language.  The Chinese language is also understood; this understanding
is due to the fact that the Room has internalized the grammar rules and
the words of the Chinese language.  The "internal symbols" of the Room
are English words.  So it is not Searle who is doing the understanding
of the Chinese language: it is the Room as a system.

As the internal symbols of the Room are English words, a code which Searle
does understand, he can understand the message that was encoded in Chinese.
He can think of a reply and formulate it in English; this can then be
translated into Chinese and output as Chinese characters.
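
The flow through the Room, as I read it, is then roughly the following (a
sketch with dummy translation tables and a dummy "thinking" step; only the
structure is the point):

    # The books map between Chinese and the Room's internal symbols (English);
    # Searle only ever handles the English.  All entries are dummies.
    chinese_to_english = {"ni hao": "hello"}
    english_to_chinese = {"hello to you too": "ni hao"}

    def searle_thinks(english_message):
        # Searle understands English, so the actual understanding of the
        # message (and the composing of a reply) happens here.
        return "hello to you too"

    def chinese_room(chinese_in):
        english_in  = chinese_to_english[chinese_in]   # books: Chinese -> English
        english_out = searle_thinks(english_in)        # Searle: understanding
        return english_to_chinese[english_out]         # books: English -> Chinese

The understanding of Chinese lives in the two tables (the Room as a system);
the understanding of the message lives in Searle.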


4. My point.

Don't confuse "understanding a language" with "understanding a message
(coded in that language)" !  The Turing Test (both types) tests the content
of the message, NOT its representation !  I believe many posters don't have
this distinction clear;  this misunderstanding is responsible for quite a
few megabytes of articles.

The confusion is very understandable, though.  Most of us think of
"understanding" as "the human way of understanding".  By this I mean: the
internal symbols/states are required to be "human brain symbols/states"
(i.e. neural states).  This is what gives Searle his feeling that the
systems theory is implausible.

The meaning of "understanding" as I have defined it is necessary for AI,
if you want a computer to understand.  It also means the problem is solved,
for every computer program has its own internal state and symbols, which
means that it understands the issue assigned to it.


5. What next.

This gives rise to some more interesting issues:  What should be done with
the concept that is understood ?  You would need some function which
generates conclusions.  How would this function be defined, and what would
be the design rules for it ?  Does this function change during the lifetime
of the system ?  Would this be what learning is all about ?  What would be
the rules for changing the "conclusion-function" ?  Would system goals be
incorporated into the above ?  (For example, "surviving" is a system goal
of human beings, in the sense of the individual and in the sense of the
species; these two may very well generate contradictory conclusion-functions.
How should this be dealt with ?)
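
In the same hypothetical style as before, the skeleton I have in mind looks
something like this (every name is my own invention; the open questions
above are precisely the parts left unimplemented):

    class System:
        def __init__(self, goals):
            self.goals = goals    # e.g. ["survive (individual)", "survive (species)"]
            self.state = {}       # the internal symbols

        def conclude(self, concept):
            # The "conclusion-function": what to do with an understood concept.
            # How to define it, and how the goals enter, is the open question.
            raise NotImplementedError

        def learn(self, experience):
            # Changing the conclusion-function during the system's lifetime;
            # perhaps this is what learning is all about.
            raise NotImplementedError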

Other interesting (though probably not important) related issues are:
(self-)awareness, consciousness.

By the way, the ideas I explained above fit remarkably well into certain
models of the "psyche" which emerged from psychological and
psychotherapeutical research; specifically, models using keywords like
"basic self", "functional self" and "subpersonalities".  If there is any
interest, I might see if I can explain these models and how my ideas fit
into them.

-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp