[comp.ai.neural-nets] new theory of brain and learning, June'90 Psychobiology

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (07/03/90)

In article <1990Jun28.214635.974@sci.ccny.cuny.edu> richard@sci.ccny.cuny.edu (Richard Hogen) writes:
>I. Vectors vs Packets
>   Rather than seeing memory as discrete packets, ALL stored from early
>fetal stages on, I see memory as a layering of vectors or pointers.

From a connectionist viewpoint, we often deal with information as
fixed-width vectors in our models.  One problem with fixed-width
representations is how to fit recursive structures, such as grammars,
into them.  Some solutions have been found, as in this quote from
Jordan Pollack's "Recursive Distributed Representations" paper,
available via anonymous ftp from cheops.cis.ohio-state.edu:

"A long-standing difficulty for connectionist modeling has been how to
represent variable-sized recursive data structures, such as trees and
lists, in fixed-width patterns.  This paper presents a connectionist
architecture which automatically develops compact 
distributed representations for such compositional structures,
as well as efficient accessing mechanisms for them."

The connectionist inclination, however, is that memory and knowledge
are stored as _weights_ on the connections between neurons, and that
neuron activations are the means by which that knowledge is put to
use during processing.  In Pollack's paper, for instance, the
network's weights are what allow it to compress recursive data
structures into fixed-width activation vectors and to unpack them
again.
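
To make that idea concrete, here is a rough Python/NumPy sketch of a
RAAM-style encoder/decoder.  It is not Pollack's exact architecture or
training procedure; the vector width, the toy tree ((A B) C), and the
terminal patterns are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
D = 8        # width of every node vector (an assumed toy size)
lr = 0.5     # learning rate (assumed)

# Encoder squeezes two D-wide children into one D-wide parent;
# decoder expands a parent back into two D-wide children.
W_enc = rng.normal(0.0, 0.1, (D, 2 * D))
W_dec = rng.normal(0.0, 0.1, (2 * D, D))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(left, right):
    return sigmoid(W_enc @ np.concatenate([left, right]))

def decode(parent):
    out = sigmoid(W_dec @ parent)
    return out[:D], out[D:]

# Terminal symbols as fixed random patterns (an assumed encoding).
A, B, C = (rng.uniform(0.1, 0.9, D) for _ in range(3))

# Train as an autoencoder on the node pairs of the tree ((A B) C),
# so that decode(encode(l, r)) reproduces (l, r).
for _ in range(5000):
    AB = encode(A, B)                       # current code for node (A B)
    for left, right in [(A, B), (AB, C)]:
        x = np.concatenate([left, right])   # input == target
        h = sigmoid(W_enc @ x)              # compressed parent code
        out = sigmoid(W_dec @ h)            # reconstruction of children
        d_out = (out - x) * out * (1 - out)       # output delta
        d_h = (W_dec.T @ d_out) * h * (1 - h)     # hidden delta
        W_dec -= lr * np.outer(d_out, h)
        W_enc -= lr * np.outer(d_h, x)

# The whole tree ((A B) C) now fits in one fixed-width vector ...
root = encode(encode(A, B), C)
# ... and the decoder unpacks it one level at a time.
ab_hat, c_hat = decode(root)
a_hat, b_hat = decode(ab_hat)
print("max error reconstructing C:", float(np.abs(c_hat - C).max()))
print("max error reconstructing A:", float(np.abs(a_hat - A).max()))

The weights hold the knowledge of how to pack and unpack trees; the
fixed-width vectors are only the transient carriers of that structure.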

So although connectionists do believe that information can be
represented by vectors in cognitive systems, we hold that a person's
true long-term memory lies in the weighted connections between
neurons that make such processing possible.
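
As a toy illustration of that distinction (not any particular model
discussed here), consider a simple Hebbian linear associator in
Python/NumPy: the memory lives entirely in the weight matrix, while
the activation vectors exist only for the duration of a recall.  The
sizes and patterns below are made up.

import numpy as np

rng = np.random.default_rng(1)
D = 16

# Two cue patterns and the patterns we want them to recall.
cue1, cue2 = rng.standard_normal(D), rng.standard_normal(D)
cue1 /= np.linalg.norm(cue1)
cue2 /= np.linalg.norm(cue2)
mem1, mem2 = rng.standard_normal(D), rng.standard_normal(D)

# The long-term store is a single weight matrix built by Hebbian
# outer products; no activation vector is kept around.
W = np.outer(mem1, cue1) + np.outer(mem2, cue2)

# "Processing": presenting a cue as an activation pattern lets the
# weights reconstruct the associated memory (with some cross-talk,
# since the random cues are not exactly orthogonal).
recall = W @ cue1
print("correlation with stored pattern:",
      float(np.corrcoef(recall, mem1)[0, 1]))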

-Thomas Edwards