[comp.ai] Fwd: A Test for "Understanding"

sn13+@andrew.cmu.edu (S. Narasimhan) (01/10/90)

------------------------------
Forwarded message begins here:
------------------------------
From: sn13+@andrew.cmu.edu (S. Narasimhan)
Newsgroups: comp.ai
Subject: A Test for "Understanding"
Message-ID: <IZbI6R_00WB7E3fV1Z@andrew.cmu.edu>
Date: 30 Dec 89 23:15:41 GMT
Organization: Carnegie Mellon, Pittsburgh, PA
Lines: 46


     As many of you know, the recent issue of Scientific American carries an
article by John Searle in which he argues against strong AI. He uses two
examples: the Chinese room and the horse (see the figure in the issue).

     The Chinese room example is not convincing, because he hasn't explained
exactly what he means by "understanding". When do we say a person has
"understood" something? When he can respond to it properly? That is the
question we must address. Besides, at this point we have to distinguish
between "understanding" and "learning". A system has "learnt" if it has
"understood" and can respond to NEW situations not present when it
"understood". Thus learning should implicitly mean "understanding". However,
what is "understanding"?
    In the Chinese room example the person definitely doesn't "learn",
because if given a question to which none of the rules in the book apply, he
can't respond. That is to say,

              learning => understanding, but the converse needn't be true.
This also implies that if we can build a system which "learns", we will have
built a system which "understands".
    But can we build a system which only "understands" and doesn't learn?
How are we to test whether the system really "understands"? For that matter,
when do we say a person has learnt something? I'd say when,

          (1) he can respond to that "something" as does a person who has
"understood" it, and

          (2) he is REMINDED of another event or episode which is SIMILAR to
that recalled by a person who has understood it.

Of course, we have to have a person whom all of us accept as having
"understood" that.
     Thus, the person in the Chinese room doesn't understand Chinese, because
he's not reminded of anything that a Chinese speaker would be reminded of,
even though he responds "intelligently", thus satisfying (1) but not (2).
This also raises another possibility: what if the person IS reminded of
something similar to what the Chinese speaker recalls, but is unable to
respond properly? We'd say that the person lacks "intelligence".

    In other words, (1) is a test for "intelligence" and (2) is a test for
"understanding ability". By now you might have noticed that (1) is nothing
but the well-known Turing test. I'd call (2) the "Case-Retrieval" test, where
by "case" I mean a previous episode or event.
    By the above definitions, the person in the Chinese room is "intelligent"
but lacks the ability to "understand". This applies to computers as well. A
system which satisfies both tests would inevitably possess "learning
ability". In essence, learning is a sufficient condition for both
"intelligence" and "understanding".
     Let us design a test for "understanding" based on the above.

The Case-Retrieval test
-----------------------
     This is similar to the Turing test.
Say we have to test whether a system "understands" Chinese.
First we select a person whom we accept as "understanding" Chinese.
Next we present both the system and this person with a passage in Chinese
which describes an event.
We then ask the person to write down all the things or events he is reminded
of.
We also run the program and note down all the cases it retrieves from its
case-base.
We select an equal number of "remindings" from each of these and mix them up.
If we can't distinguish which of the remindings are the person's and which
the system's, then the system clearly "understands" Chinese.
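
Here is a minimal sketch of how such a blinded comparison might be run. It is
only an illustration of the procedure above; the names (case_retrieval_test,
judge, etc.) and the pass criterion of near-chance accuracy are my own
assumptions, not part of the original proposal.

import random

def case_retrieval_test(passage, human_remindings, system_remindings,
                        judge, n=5):
    """Blinded comparison sketch of the Case-Retrieval test.

    human_remindings / system_remindings: lists of free-text "remindings"
    produced, after reading `passage`, by the person accepted as
    understanding the language and by the system's case retriever.
    judge(passage, reminding): returns "human" or "system", the judge's
    guess at the source of a single reminding.
    """
    # Take an equal number of remindings from each source and pool them.
    pool = ([(r, "human") for r in random.sample(human_remindings, n)] +
            [(r, "system") for r in random.sample(system_remindings, n)])
    random.shuffle(pool)

    # The system "passes" to the extent the judge cannot beat chance.
    correct = sum(judge(passage, r) == src for r, src in pool)
    return correct / len(pool)   # near 0.5 => remindings indistinguishable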


S.Narasimhan
sn13+@andrew.cmu.edu

crowston@athena.mit.edu (Kevin G Crowston) (01/10/90)

One strategy for making Searle's argument clearer is to separate the
different uses of "understand" that are floating around.

Searle-understanding is understanding as defined by Searle;
Turing-understanding is understanding as defined by the Turing test.
The Chinese room does not Searle-understand anything, but it does
Turing-understand Chinese.  

Searle obviously prefers Searle-understanding to Turing-understanding
(or more accurately, doesn't believe Turing-understanding is "true"
understanding), but I prefer Turing-understanding to Searle-
understanding (at least I know how to tell when something
Turing-understands).  

In this case, AI can be viewed as an attempt to create programs that
Turing-understand.  As Searle says, we're not even in the game; but
then, it's not clear we ever claimed (or want) to be.  

Kevin Crowston

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (01/10/90)

I haven't seen any exposition of the "neural" way of viewing
"understanding". In this paradigm, the internal universe of the machine
consists purely of associations ("resonances", if you like). The "richness
of understanding" then corresponds to the sum total of category activations,
weak and strong, which are stimulated by the subject of the "understanding".
One new activated category clearly is a poor candidate for "understanding",
whereas a whole slew of resultant partial and full activations corresponds
to a "fuller understanding". In this case, many memory traces are, to a
greater or lesser extent, simultaneously present and provide a "feel" for
the subject at hand. This view of understanding can therefore be seen as
a graded model, whereby "degree of understanding" lies on a continuous
scale. When a novel concept is presented and gives rise to new associations
with previously learned nets of association, this can be viewed as a "deep
level of understanding", since it produces the "aha!" effect.
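
A toy numeric illustration of this graded view (my own, not Andrew's): score
"understanding" as the summed activation, weak and strong, across the
categories stimulated by an input. One lone activation scores low; a whole
slew of partial and full activations scores high. The threshold and the
plain sum are illustrative choices only.

def understanding_score(activations, threshold=0.1):
    """Sum all category activations above a small noise threshold.

    activations: dict mapping category name -> activation level in [0, 1].
    """
    return sum(a for a in activations.values() if a >= threshold)

# A single activated category: "poor" understanding.
print(understanding_score({"horse": 1.0}))                        # 1.0
# Many partial and full activations: "fuller" understanding.
print(understanding_score({"horse": 1.0, "rider": 0.7, "farm": 0.4,
                           "race": 0.3, "saddle": 0.2}))          # ~2.6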

My $0.02.
-- 
...........................................................................
Andrew Palfreyman	andrew@dtg.nsc.com	Albania before April!

jgk@osc.COM (Joe Keane) (01/12/90)

In article <MZeW3XO00WB4MBpksI@andrew.cmu.edu> sn13+@andrew.cmu.edu (S.
Narasimhan) writes:
>    But can we build a system which only "understands" and doesn't learn?
>How are we to test whether the system really "understands"? For that matter,
>when do we say a person has learnt something? I'd say when,
>
>          (1) he can respond to that "something" as does a person who has
>"understood" it, and
>
>          (2) he is REMINDED of another event or episode which is SIMILAR to
>that recalled by a person who has understood it.

I'm not sure why you think being reminded of something is a good test for
understanding.  In fact, I'd say that more nearly the opposite is true.  If
you don't understand something, you tend to use logical reasoning and
analogy, using the things you are reminded of to reason about the thing you
want to reason about.  However, if you truly `understand' something, you
don't need to do that at all; you just _know_ what properties it has or how
it will behave.

>    In other words, (1) is a test for "intelligence" and (2) is a test for
>"understanding ability". By now you might have noticed that (1) is nothing
>but the well-known Turing test. I'd call (2) the "Case-Retrieval" test,
>where by "case" I mean a previous episode or event.

In fact, your Case-Retrieval Test is included in the Turing Test.  It is
perfectly reasonable as part of the Turing Test to ask the candidates for
word-association, their childhood history, their feelings about Manuel
Noriega, or whatever else you want.  That's what makes it so difficult.