[comp.ai] A Test for "Understanding"

sn13+@andrew.cmu.edu (S. Narasimhan) (12/31/89)

     As many of you know, the recent issue of Scientific American carries an
article by John Searle, where he argues against strong A.I. He uses two
examples: the Chinese room and the horse (see the figure in the issue).

     The Chinese room example is not convincing enough, because he hasn't
explained exactly what he means by "understanding". When do we say a person
has "understood" something? When he can respond to "it" properly? That is the
question we must address. Besides, at this point we have to distinguish
between "understanding" and "learning". A system has "learnt" if it has
"understood" and can respond to NEW situations not present when it
"understood". Thus learning should implicitly mean "understanding". However,
what is "understanding"?
    In the Chinese room example the person definitely doesn't "learn",
because if given a question to which none of the rules in the book apply, he
can't respond. That is to say,

              learning => understanding, but the converse needn't be true.

This also implies that if we can build a system which "learns", we will have
built a system which "understands".

    But can we build a system which only "understands" but doesn't learn?
And how are we to test whether the system really "understands"? For that
matter, when do we say a person has learnt something? I'd say when,

          (1) he can respond to that "something" as does a person who has
"understood" it, and

          (2) he is REMINDED of another event or episode which is SIMILAR to
one recalled by a person who has understood it.

Of course, we have to have a reference person whom all of us accept as having
"understood" it.
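
     (To make the two conditions concrete, here is a minimal sketch in
Python. Everything in it, the subject objects, respond(), remindings() and
the crude overlap measure, is a hypothetical placeholder of my own, purely
for illustration:)

    def similar(a, b, threshold=0.5):
        # Crude stand-in for "SIMILAR": fraction of shared items
        # between two sets of remindings.
        a, b = set(a), set(b)
        if not (a | b):
            return True
        return len(a & b) / len(a | b) >= threshold

    def has_learnt(subject, reference, situation):
        # (1) "intelligence": the subject responds as the reference
        #     person who has "understood" would respond.
        intelligent = (subject.respond(situation) ==
                       reference.respond(situation))
        # (2) "understanding": the subject is reminded of episodes
        #     similar to those the reference person is reminded of.
        understands = similar(subject.remindings(situation),
                              reference.remindings(situation))
        # Learning demands both, hence learning => understanding,
        # but not the converse.
        return intelligent and understands

Note that condition (1) compares outward behaviour only, while condition (2)
looks at what is retrieved internally, which is exactly the part the man in
the Chinese room fails.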
     Thus, the person in the Chinese room doesn't understand Chinese,
because he is not reminded of anything that a Chinese speaker would be
reminded of, even though he responds "intelligently", thus satisfying (1)
but not (2). This also raises another possibility: what if the person IS
reminded of something similar to what the Chinese speaker recalls, but is
unable to respond properly? We'd say that such a person lacks
"intelligence".

    In other words, (1) is a test for "intelligence" and (2) is a test for
"understanding ability". By now you might have noticed that (1) is nothing
but the well-known Turing test. I'd call (2) the "Case-Retrieval" test,
where by a case I mean a previous episode or event.

    By the above definitions, the person in the Chinese room is
"intelligent" but lacks the ability of "understanding". This applies to
computers as well. A system which satisfies both tests would inevitably
possess "learning ability". In essence, learning is a sufficient condition
for both "intelligence" and "understanding".
     Let us design a test for "understanding" based on the above.

The Case-Retrieval test
-----------------------
     This is similar to the Turing test. Say we have to test whether a
system "understands" Chinese.

     (1) We select a person whom we all accept as "understanding" Chinese.

     (2) We present both the system and this person a passage in Chinese
which describes an event.

     (3) We ask the person to write down all the things or events he is
reminded of.

     (4) We run the program and note down all the cases it retrieves from
its case-base.

     (5) We select an equal number of "remindings" from each and mix them
up.

If we can't distinguish which of the remindings are the person's and which
the system's, then the system clearly "understands" Chinese.
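
     (The whole procedure can be written out as a short Python sketch.
Again, this is purely illustrative; ask_person, run_program and judge_guess
are hypothetical stand-ins for steps a human experimenter would carry out:)

    import random

    def case_retrieval_test(passage, ask_person, run_program, judge_guess):
        # ask_person and run_program each return a list of "remindings"
        # for the passage; judge_guess labels one reminding as "person"
        # or "system".
        person_cases = ask_person(passage)       # step (3)
        system_cases = run_program(passage)      # step (4)

        # step (5): an equal number from each, mixed together
        n = min(len(person_cases), len(system_cases))
        pool = ([(c, "person") for c in random.sample(person_cases, n)] +
                [(c, "system") for c in random.sample(system_cases, n)])
        random.shuffle(pool)

        correct = sum(judge_guess(case) == source for case, source in pool)
        accuracy = correct / len(pool)
        # Chance level is 0.5: an accuracy near 0.5 means the judge
        # cannot tell the two apart, i.e. the system "understands".
        return accuracy

A fussier version would run the judge over many passages and check
statistically that his accuracy does not exceed chance, exactly as one would
for the Turing test itself.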


S.Narasimhan
sn13+@andrew.cmu.edu