[comp.ai] Understanding involves Learning?

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (03/31/89)

There has been a lot of discussion lately about the Chinese Room, and
whether purely syntactic processes could understand, or even appear to
understand. In the course of this Stevan Harnad has argued that the kind
of linguistic competence needed to pass the Turing Test couldn't be
possessed by anything short of a creature potentially capable of passing
the Total Turing Test, i.e., a creature "living" in the real world, with
sensors, effectors, and no doubt, a personal history. If I have grasped
his argument properly, this is because convincing linguistic competence
will require the kind of complex internal mechanisms inevitably involved
in handling rich sensors in a capable way -- the mechanisms of "symbol
grounding," as it is often called, although it is the whole syntactic
machinery that needs grounding, not just the symbols.

In other words (and not as simply as these few words suggest),
convincing linguistic competence requires semantics as well as syntax.

My question is this. Does convincing linguistic competence involve
learning? For it seems to me that one of the things that happens in
human conversations is that, in many trivial little ways, in hints,
metaphors, negotiations, etc., both parties are offering one another
opportunities to learn, even trying to teach. Sooner or later a
conversational robot which couldn't learn new ideas would be suspected
of being a metal-head.

To approach it from another direction: does understanding involve
learning?

-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/02/89)

From article <319@edai.ed.ac.uk>, by cam@edai.ed.ac.uk (Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550):
" There has been a lot of discussion lately about the Chinese Room, and
" whether purely syntactic processes could understand, or even appear to
" understand.

You've made too much sense of the discussion.  It's conceded for the
sake of the CR argument that the processes do appear to understand.
The CR does pass the Turing Test and so does possess convincing
linguistic competence.

" My question is this. Does convincing linguistic competence involve
" learning? ...

Sure.  You make a very good point here.

" Sooner or later a
" conversational robot which couldn't learn new ideas would be suspected
" of being a metal-head.

Yes.  This suspicion has been raised about certain participants
in this discussion.

		Greg, lee@uhccux.uhcc.hawaii.edu

harnad@elbereth.rutgers.edu (Stevan Harnad) (04/02/89)

cam@edai.ed.ac.uk (Chris Malcolm) of Dept. of AI, Univ. of Edinburgh, UK
asks:

" Does convincing linguistic competence involve learning?... does
" understanding involve learning?

There is good reason to believe that a candidate that will be able to
pass the Linguistic Turing Test (LTT) will have to have and draw
indirectly upon the robotic capacities that would be needed in
order to pass the Total Turing Test (TTT), and that these will include
the ability to learn. Suitably defined, "learning" is even involved in
the normal course of coherent discourse, since information is
exchanged, and that change must be reflected in the ensuing discourse.
It is another question, however, whether the candidate would
necessarily have had to arrive at its LTT-passing ability through the
exercise of its learning capacity, in real time. In principle, the
learning capacity, like the robotic capacity, could be latent  --
present as a functional capability, but not yet used directly. In other
words, there's no reason why a device with the functional wherewithal
to pass the LTT couldn't have sprung, like Athena, fully developed from
the head of Zeus (or some other artificer). In that sense there's
nothing magic about learning or development (or even about real -- as
opposed to apparent -- experiential history).
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

bwk@mbunix.mitre.org (Barry W. Kort) (04/04/89)

In article <319@edai.ed.ac.uk> cam@edai.ed.ac.uk (Chris Malcolm) writes:

 > To approach it from another direction: does understanding involve
 > learning?

I have been advocating this idea for several years now.

It seems to me that understanding means more than just housing
a static knowledge base.  To my mind, understanding includes
the process of gaining knowledge over time.

Incidentally, I maintain that a sentient being who is in the
process of acquiring knowledge over time will exhibit such
emotions as puzzlement, confusion, and curiosity.

--Barry Kort