[comp.ai.digest] Dual encoding, propositional memory and...

gilbert@cs.glasgow.ac.UK (Gilbert Cockton) (08/25/88)

From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Date: Tue, 23 Aug 88 05:54 EDT
To: ailist@ai.ai.mit.edu
Subject: Re: Dual encoding, propositional memory and...

In reply to Pat Hayes's last posting:

>Yes, but much of this debate has been between psychologists, and so has little
>relevance to the issues we are discussing here.
[psychologist's definition of `different' elided]
>That's not what the AI modeller means by `different', though.
>it isn't at all obvious that different behavior means different 
>representations (though it certainly suggests different implementations).

How can we talk about representation and implementation being different
in the human mind?  Are the two different in Physics, Physiology,
Neurobiology ...?  And why should AI and psychology differ here?
Aren't they addressing the same nature?

I'm sorry, but I for one cannot see how these categories from software
design apply to human information processing.  Somewhere or other, some
neurotransmitters change, but I can't see how we can talk convincingly
about this physiological implementation having any corresponding
representation except itself.

Representation and implementation concern the design of artefacts, not
the structure of nature.  AI systems, as artefacts, must make these
distinctions.  But in the debate over forms of human memory, we are
debating nature, not artefact. Category mistake.
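
To be concrete about the software-design distinction I am rejecting
here, a toy sketch in Lisp (the names are my own, purely illustrative):
one abstract 'representation', a set of symbols, realised by two
interchangeable implementations.

    ;; A set "represented" abstractly, implemented as a plain list:
    (defun list-set-add (x s) (adjoin x s))
    (defun list-set-member-p (x s) (member x s))

    ;; ... and the same set implemented as a hash table:
    (defun table-set-add (x s) (setf (gethash x s) t) s)
    (defun table-set-member-p (x s) (gethash x s))

    ;; (list-set-member-p 'a (list-set-add 'a '()))                 => (A)
    ;; (table-set-member-p 'a (table-set-add 'a (make-hash-table))) => T

Observably the same behaviour from physically different mechanisms;
that is all the two-level talk amounts to in an artefact, and nothing
licenses projecting those two levels onto neurons.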

>It seems reasonable to conclude that these facts that they
>know are somehow encoded in their heads, ie a change of knowledge-state is a
>change of physical state.  That's all the trickery involved in talking about
>`representation', or being concerned with how knowledge is encoded.

I would call this implementation again (my use of the word 'encoding'
was deliberately tongue in cheek :-)).  I do not accept the need for
talk of representation.  Surely what we are interested in are good
models of physical neurophysiological processes?  Computation may be
such a model, but that must await the data.  Again, I am talking about
encoding.  Mental representations or models are a cognitive
engineering tool that gives us a handle on problems of learning and
understanding.  They are a conative convenience, relevant to action in
the world.  They are not a scientific tool, relevant to a convincing
modelling of the mental world.

>what alternative account would you suggest for describing, for example,
>whatever it is that we are doing sending these messages to one another?

I wouldn't attempt anything beyond the literary accounts of
psychologists.  There is a reasonable body of experimental evidence,
but none of it allows us to postulate anything definite about
computational structures.  I can't see how anyone could throw up a
computational structure, given our present knowledge, and hope to be
convincing.  Anderson's work is interesting, but he is forced to ignore
arguments for episodic or iconic memory because they suggest nothing
sensible in computational terms that would be consistent with the
evidence for long-term memory of a non-semantic, non-propositional form.

Computer modelling is far more totalitarian than literary accounts.
Unreasonable restrictions on intellectual freedom result.  Worse still,
far too many cognitive scientists confuse the inner-loop detail of
computation with increased accuracy.  Detailed inaccuracy is actually
worse than vague inaccuracy.

Sure, computation forces you to answer questions that would otherwise
be left to the future.  However, having the barrel of a LISP
interpreter pointing at your head is no greater guarantee of accuracy
than having the barrel of a revolver pointing at your head.  Whilst
computationalists boast about their bravado in facing the compiler, I
for one think it a waste of time to be forced to answer unanswerable
questions by an inanimate LISP interpreter.  At least human colleagues
have the decency to change the subject :-)

>If people who attack AI or the Computational Paradigm, simultaneously tell me
>that PDP networks are the answer

I don't.  I don't believe in either the symbolic or the PDP approach.
I have seen successes for both, but am not well enough read on PDP to
know its failings.  All the talk of PDP was a little tease, recalling
the symbolic camp's criticism that a PDP network is not a
representation.  We certainly cannot imagine what is going on in a
massively parallel network, well not with any accuracy.  Despite our
inability to say EXACTLY what is going on inside, we can see that
systems such as WISARD have 'worked' according to their design
criteria.  PDP does not accurately model human action, but it gets
some low-level learning done quite well, even on tasks requiring what
AI people call intelligence (e.g. spotting the apple under teddy's bottom).
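
For concreteness, here is the smallest network fragment I can write
down, in Lisp: a single threshold unit trained by the delta rule to
compute logical AND.  This is emphatically not WISARD's algorithm,
just an illustrative toy, and all the names are my own.

    (defparameter *weights* (list 0.0 0.0))
    (defparameter *bias* 0.0)
    (defparameter *rate* 0.1)

    ;; Threshold unit: fire (1) if the weighted sum plus bias is positive.
    (defun unit-output (inputs)
      (if (plusp (+ *bias* (reduce #'+ (mapcar #'* *weights* inputs))))
          1 0))

    ;; Delta rule: nudge each weight in proportion to the error.
    (defun train-step (inputs target)
      (let ((err (- target (unit-output inputs))))
        (setf *weights* (mapcar (lambda (w x) (+ w (* *rate* err x)))
                                *weights* inputs))
        (incf *bias* (* *rate* err))))

    (defun train (examples epochs)
      (dotimes (pass epochs)
        (dolist (ex examples)
          (train-step (first ex) (second ex)))))

    ;; (train '(((0 0) 0) ((0 1) 0) ((1 0) 0) ((1 1) 1)) 25)

Afterwards the unit classifies AND correctly, yet its 'knowledge' is
just two weights and a bias; even in this trivial case there is
nothing inside that one would naturally call a representation of the
rule.  That is the opacity point writ small.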

>Go back and (re)read that old 1969 paper CAREFULLY,

Ah, so that's the secret of hermeneutics ;-]