[comp.ai.neural-nets] Baby bootstrap

gaudiano@retina.bu.edu (Paolo Gaudiano) (03/05/90)

>Note that babies bootstrap, so that "knowing what to look for" becomes
>increasingly sophisticated.

>Note that networks are especially good at feature extraction.

>So, combining my two short comments, why not build a network that bootstraps
>on an increasingly complex environment? - in other words, build two networks.

>There are a couple of ways you might do this; one way could be hierarchical;
>one on top of another. The top network is the "knowing what to look for" unit,
>which would provide a parallel "vigilance" input to the lower unit which
>is actually performing the task in hand.

I have been working on an adaptive model for autonomous control of arm
trajectories. The non-adaptive model for arm trajectory formation is
the VITE model of Bullock & Grossberg [1]. Grossberg and I have extended
this to include adaptability, along with an additional circuit that
autonomously generates vectors for the Adaptive VITE (AVITE) [2,3]. This
works as a bootstrapping procedure that relies EXCLUSIVELY on an
internal measure of error to drive correct trajectory learning. We
call the bootstrapping a "circular reaction" after Piaget [e.g.,4], who
basically observed eye-hand coordination in children and suggested
that during a circular reaction children "spontaneously" move their
hand to some position, and as the eyes automatically follow the hand,
a transformation is learned between motor and sensory domains. This is
an oversimplified summary, but we have suggested that the Endogenous
Generator (EG) circuit can be used universally. The babbling phase of
speech acquisition is another example of this circular reaction.
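
For anyone who wants a concrete (if grossly simplified) picture of what
such a circular reaction buys you, here is a little Python sketch. It is
NOT the AVITE model of [2,3]: the two-joint planar arm, the coarse grid
"visual map", and the incremental-averaging learner are my own throwaway
choices, made only to show that random babbling plus the observed visual
consequence of each movement is enough to learn a usable motor map, with
no external teacher and no externally supplied targets.

import numpy as np

rng = np.random.default_rng(0)

L1, L2 = 1.0, 1.0   # link lengths of the (assumed) two-joint planar arm

def forward_kinematics(angles):
    """Hand (x, y) position of the arm ('the eyes follow the hand')."""
    a1, a2 = angles
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a1 + a2),
                     L1 * np.sin(a1) + L2 * np.sin(a1 + a2)])

# Coarse "visual map": each cell holds an adaptive estimate of the motor
# command whose outcome landed in that cell, learned purely by babbling.
GRID = 25
motor_map = np.zeros((GRID, GRID, 2))
counts = np.zeros((GRID, GRID))

def cell(pos):
    # Hand positions for this arm lie inside [-2, 2] x [-2, 2].
    ij = np.clip(((pos + 2.0) / 4.0 * GRID).astype(int), 0, GRID - 1)
    return ij[0], ij[1]

for _ in range(50000):
    # 1. Endogenously generate ("babble") a spontaneous motor command.
    angles = rng.uniform([-np.pi / 2, 0.2], [np.pi, np.pi])
    # 2. The eyes automatically follow the hand: observe the outcome.
    seen = forward_kinematics(angles)
    # 3. Update the map using only the command and its seen consequence.
    i, j = cell(seen)
    counts[i, j] += 1
    motor_map[i, j] += (angles - motor_map[i, j]) / counts[i, j]

# After babbling, a visual target is turned directly into a motor command.
target = np.array([1.2, 0.8])
command = motor_map[cell(target)]
print("target:", target, "reached:", forward_kinematics(command))

After enough babbling the hand lands near the queried target, to within
the resolution of the grid, even though nothing ever told the system
which motor command goes with which visual position.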

Also, funny that you should use the term "vigilance". Carpenter and
Grossberg's Adaptive Resonance Theory (ART) [5,6,7] makes use of what they
call a "vigilance parameter" to control the level of discrimination
that must be met if two objects are to be categorized together. In his
reply, Lawrence says:

>The ultimate neural network would probably allow direct modification of
>its weights (like in artificial models) and also "automatic" mode (like
>in the brain).  You'd get the best of both worlds.  This is a convincing
>argument (among others) that if an artificial brain is ever devised with
>the capabilities of the human one, then there also exists one of the
>former that is superior to any of the latter.

Well, ART is part of such a network. The ART model constantly compares
incoming inputs with the model's internal representation of what that
input should be (if such an internal representation already exists).
ART has been embedded within (among other things) circuits that have
explained large amounts of data on classical and instrumental
conditioning [8,9,10]. Here ART acts at different levels: (1) it
performs "object recognition" at the sensory input level, and (2) it
is also embodied in the interactions between internal representations
of the sensory inputs, and the drives that "motivate" behavior in the
organism. Note that "adaptive resonance" was first introduced as a
general concept in 1976 [11], and has since been applied in a number of
different contexts (vision, speech, conditioning).
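
To make the role of vigilance concrete, here is a toy Python sketch of
an ART-1-style matching cycle. It is a bare-bones illustration, NOT the
full circuit of [5,6,7]: the choice function, the fast-learning update,
and the parameter values are my own simplifications. The point is only
that raising the vigilance parameter rho demands a closer match between
an input and a stored expectation, so the same inputs get split into
more categories.

import numpy as np

def art1_categorize(inputs, rho=0.7, beta=0.5):
    """Cluster binary vectors; higher rho means finer discrimination."""
    templates = []   # top-down expectation (template) for each category
    labels = []
    for I in inputs:
        I = np.asarray(I, dtype=float)
        # Try candidate categories in order of their bottom-up choice value.
        order = sorted(range(len(templates)),
                       key=lambda j: -np.sum(np.minimum(I, templates[j]))
                                     / (beta + np.sum(templates[j])))
        for j in order:
            # Vigilance test: how much of the input does the template match?
            match = np.sum(np.minimum(I, templates[j])) / np.sum(I)
            if match >= rho:
                templates[j] = np.minimum(I, templates[j])  # fast learning
                labels.append(j)
                break
        else:
            # Every stored expectation failed the vigilance test (or none
            # exists yet): recruit a new category for this input.
            templates.append(I.copy())
            labels.append(len(templates) - 1)
    return labels, templates

patterns = [[1, 1, 1, 0, 0, 0],
            [1, 1, 0, 0, 0, 0],
            [0, 0, 0, 1, 1, 1],
            [0, 0, 1, 1, 1, 0]]
print(art1_categorize(patterns, rho=0.5)[0])   # low vigilance: [0, 0, 1, 1]
print(art1_categorize(patterns, rho=0.9)[0])   # high vigilance: [0, 0, 1, 2]

With rho=0.5 the fourth pattern resonates with the category founded by
the third pattern; with rho=0.9 it fails the vigilance test against
every stored template and founds a category of its own.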

It is only unfortunate that so much of Grossberg's work is so
difficult to read. While *sometimes* his writing style is not the
clearest, the difficulty usually comes from the sheer scope of the
work. It takes a true
interdisciplinary scientist to be able to follow through even the
simplest of his papers. Without guidance, it is almost impossible to
know which papers to read in which order, what the important points
are, and so on. Oh well. Maybe in a few years this stuff will be taught at
the undergraduate level!

			Paolo

REFERENCES:

[1] D. Bullock and S. Grossberg (1988), ``Neural Dynamics of
Planned Arm Movements: Emergent Invariants and Speed-Accuracy
Properties During Trajectory Formation.'' {\em Psychological Review},
{\bf 95} (1), 49-90.

[2] P. Gaudiano and S. Grossberg (1990) ``A Self-Regulating Endogenous
Generator of Sample-and-Hold Random Training Vectors.'' In M. Caudill
(Ed.) {\em International Joint Conference on Neural Networks.
Washington, DC, January 1990.}\/ Hillsdale, NJ: Erlbaum.

[3] P. Gaudiano and S. Grossberg (1990), ``A Self-Organizing Neural
Circuit for Control of Planned Movement Trajectories.'' In
preparation.

[4] Piaget, J. (1963). {\em The origins of intelligence
in children}. New York: Norton.

[5] G. Carpenter and S. Grossberg (1986) ``A Massively Parallel
Architecture for a Self-Organizing Neural Pattern Recognition
Machine.'' {\em Computer Vision, Graphics, and Image Processing.}\/
{\bf 37}, 54-115.

[6] G. Carpenter and S. Grossberg (1987) ``ART 2: self-organization of
stable category recognition codes for analog input patterns.'' {\em
Applied Optics},\/ {\bf 26}, (23), 4919-4930.

[7] G. Carpenter and S. Grossberg (1990) ``ART 3: Hierarchical Search
Using Chemical Transmitter in Self-Organizing Pattern Recognition
Architectures.'' {\em Neural Networks},\/ in press.

[8] S. Grossberg (1986) {\bf The Adaptive Brain 1: Cognition,
Learning, Reinforcement, and Rhythm.} Amsterdam: Elsevier/North-Holland.

[9] S. Grossberg and D. Levine (1987) ``Neural dynamics of
attentionally modulated Pavlovian conditioning: blocking,
interstimulus interval, and secondary reinforcement.'' {\em Applied
Optics},\/ {\bf 26}, (23), 5015-5030.

[10] S. Grossberg and N. Schmajuk (1987) ``Neural dynamics of
attentionally modulated Pavlovian conditioning: conditioned
reinforcement, inhibition, and opponent processing.'' {\em
Psychobiology,}\/ {\bf 15} (3), 195-240.

[11] S. Grossberg (1976) ``Adaptive pattern classification and
universal recoding, II: Feedback, expectation, olfaction, and
illusion.'' {\em Biological Cybernetics},\/ {\bf 23}, 187-202.