[comp.ai.philosophy] Searle's Room and intelligence - a comparison!

tskelly@ccvax.ucd.ie (11/29/90)

Hi y'all,
I was glad to see some pertinent and interesting replies to my previous
posting "Chinese Room and Stuff".

First let me outline my use of "Turing Test" :-
As far as I can see, for an interrogator to satisfy him/herself as to the
comparative intelligence of a human and another *entity*, the questions he/she
asks would have to test all of those activities that are seen as necessary
aspects of human intelligence and which can be exhibited in a
question/answer situation. (This of course is not easy, as there is still
no standard definition of 'intelligence' - but there are activities that we
all seem to agree are necessary aspects of intelligence!)


In <1990Nov24.210756.22801@cs.umn.edu> DHT wrote :

>I do not follow your argument that the English Room is merely symbol
>manipulation while an intelligent entity understands things.  On
>what do you base that?  To put it more simply, I can imagine a few
>possible meanings:
>
>1.  Humans understand, programs only manipulate symbols.  Therefore,
>we can always tell a human from a program by testing for understanding,
>and therefore no program can ever pass the Turing Test.
>
>2.  It is possible for a program to imitate humanity by manipulating
>symbols, but we know that it is only imitating intelligence and
>understanding, while we know that humans understand.  (Here I must
>ask, "How do we know that humans understand?  And how do we know
>that a specific human understands a specific thing, other than
>by asking the human questions on the topic?  How do we then know
>that the human understands, and has not simply assimilated some
>symbol manipulation rules and is producing correct answers to
>our questions without actual understanding?")
>
>3.  We know that programs do not understand anything because they
>only manipulate symbols.  We consider it possible that humans are
>intelligent, since the principles of their behavior are not known.
>However, should it become known that humans do no more than
>symbol manipulation, we would then conclude that they do not
>in fact understand things.
>
>It is perfectly possible that I have erred, and that your position
>is not one of these three.  Please tell me which of these three you
>are arguing from, or give me a better idea of what you actually
>think.  If you are arguing from position 2 above, I would also
>appreciate it if you would answer the questions I have provided.

To continue in the vein I started and (hopefully) to start a new topic,
let me present you with a rather ambitious definition :

	UNDERSTANDING is knowledge of all aspects of all objects
	and all of their relationships to each other.
	It is necessary to admit of various degrees of understanding
	and present a starting point. So, partial understanding
	is knowledge of some objects and some relationships.
	And one starts by knowing (through experience) one object and
	its relationships to all other known objects (i.e. itself).

It is important to realise that "knowledge of all aspects of an object"
implies "experiencing the object through all available inputs/senses".
I also allow for the development of new knowledge of objects through the
combination of previously known objects (I don't think anyone would dare
say that it is possible to imagine something entirely new, i.e. with no
similarity to already known concepts).
I would also like to qualify my use of the word "objects" by saying that I do
mean it to include abstract objects as well as physical objects.

Aside: An interesting discussion may be whether or not ALL abstract notions
can be seen as having a basis in physical reality. Or, is there such a thing
as the "purely abstract"? An example is 'numbers', which clearly have a
physical basis.

Also, realise the importance of "relationships" in the above definition.
The broadest possible sense of the word should be taken (without going into
what we call "human relationships") which includes notions such as "on",
"beside" etc. as well as "causes" and "results from".

Now, given this definition, the Chinese Room would never UNDERSTAND, as
knowledge of symbols has nothing to do with knowledge of the real world
until the symbols become associated with real-world objects. And remember,
symbols are entirely arbitrary!!
I don't think anyone would claim that the Chinese room would understand given
any real definition of the word BUT
***** The symbol manipulation which we call language is derived from, based
***** on, and has as its foundation, understanding. Without that
***** understanding, language would never have come about; or if, by some
***** perverse accident, the symbols of human language lost their connections
***** with real objects, then the language would disappear even if the rules
***** for manipulating it remained.
In other words, no rules for manipulating symbols could ever be enough to
produce what we call 'intelligent' answers to questions!
	The Chinese room could not pass the TT.

I think this argument more or less concurs with the first interpretation above.
There are some questions in the second possibility which warrant some thought
and would probably make interesting discussion in this group!


I wrote a passage concerning a Chinese room built in 1800 which, I claim,
could not possibly discuss computers or space shuttles today!
DHT replied :-

>In this case it is not a true Chinese Room.  See my other posting on this
>- in summary, Searle assumes that the C.R. is an implementation of a
>program that will pass the Turing Test; the T.T. is, essentially,
>indistinguishability from a human.  Since a human mature 190 years
>ago who has not aged since, and who has kept up conversations with
>others since, could discuss computers and shuttles, the Chinese Room
>could also.  If you have problems with this, you have some
>misunderstanding of the Chinese Room or the Turing Test, or you
>simply do not believe that any program could pass the Turing Test.

I'm afraid I'm rather a stickler for detail (when it suits me :-) ) and
I have to say that I have not come across any mention of any sort of
'learning' mechanism in connection with the "Chinese Room". And it is
'learning' that I'm getting at here!
I suppose I should make my point before backing it up :- the Chinese room
as described would not, could not, should not pass the Turing Test, as
this test would have to test the learning ability of both participants,
and the Chinese room would be found lacking!
Now, to argue that the Chinese room would be lacking.
There would be only two possible ways for the Chinese room to learn :
	1) Somebody gets in and changes the rule book!
	2) There are rules which allow for the addition of new symbols to
	   the rule book.

(I re-iterate that the rule book consists entirely of rules for the
manipulation of symbols - a new symbol would need new rules for its
manipulation.)
Let's dismiss possibility 1, as this activity is not plausible under the
stringent conditions of the TT.
So, we are left with possibility 2. This possibility is one of the largest
areas of research in AI, i.e. machine learning. I would not presume to make
any statements against the experts in this field but, to my knowledge, only
limited success has been forthcoming. The most successful appear to be the
connectionists, who really don't deal in symbol manipulation. And, while
it's not final, I think we can only say that, as far as we know, learning
cannot be simulated/mimicked (whatever) in full by symbol manipulation.
So, it would appear that a proper TT would catch out the Chinese room on
learning.
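
To make the rule-book picture concrete, here is a deliberately crude sketch
(my own caricature for this posting - nobody claims a real TT-passing program
would be a bare lookup table) of what 'rules for manipulating symbols' with
no learning mechanism amounts to, and of where a brand-new symbol leaves the
room stuck:

    # A caricature of the Chinese Room's rule book: purely syntactic rules
    # mapping incoming symbol strings to outgoing symbol strings. Nothing
    # in it refers to anything outside the book itself.
    RULE_BOOK = {
        "SQUIGGLE SQUOGGLE": "SQUOGGLE SQUIGGLE",
        "SQUOGGLE SQUOGGLE": "SQUIGGLE",
        # ... thousands more rules, all fixed when the book was written
        #     (say, in 1800)
    }

    def chinese_room(incoming):
        # Look the input up and hand back whatever the book dictates.
        if incoming in RULE_BOOK:
            return RULE_BOOK[incoming]
        # Possibility 2 would need rules-for-writing-new-rules right here,
        # but a symbol the book has never seen ("SPACE SHUTTLE") matches
        # nothing, and the book itself gives no way of extending itself.
        return "???"

A real symbol-manipulation system is of course far richer than a lookup
table, but the point survives the caricature: whatever the rules are, a
symbol that no rule attaches to gets the room nowhere until somebody
(possibility 1!) writes new rules for it.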



Of course you did say that Searle *assumed* the CR was an implementation of
a program that *would* pass the TT.
In another article <Searle's Room pass the Turing Test? - piff!!! (Related to
Chinese Room and Stuff)> I have argued that this assumption, along with the
definition of the CR as a symbol manipulator, is contradictory, i.e. NO
symbol manipulator could pass a good TT (depending on your definition of a
GOOD TT).

Also, in that article, I present the same argument using different aspects
of intelligence. I guess the thrust of it is that there are aspects of human
intelligence which go beyond, and could not be realised in, pure symbol
manipulation. Some of the examples are 'understanding' and 'learning'.


I would really like to see how people react to my definition of UNDERSTANDING
and, in fact, a full discussion of this aspect of human intelligence may
prove to be very interesting. So, by all means, tear my definition apart but
do so with intelligent and novel arguments!! :-)

There is also a lot more to be said about symbol manipulation completely apart
from and despite the Searle arguments. And further (novel) discussion on this
topic would be great!


Ta v. much

Stephen Kelly
TSKELLY@CCVAX.UCD.IE