[net.ai] neural networks

js2j@mhuxt.UUCP (sonntag) (04/30/86)

A recent issue of 'Science' had an article on 'neural networks', which
apparently consist of a highly interconnected repetition of some sort of
simple 'nodes' with an overall positive feedback and some sort of
randomness thrown in for good measure.  When these networks are 'powered up',
the positive feedback quickly forces the system into a stable state, with
each node either 'on' or 'off'.  The article claimed that some
simulations of moderately sized (10K nodes?) networks had been done, and
reported some rather amazing results.  For one thing, it was discovered
that if just 50 out of 10K nodes are preset to particular values, the
network has just ~100 very similar stable states, out of 10**1000 possibilities.
They also claimed that one such system was able to arrive at a 'very good'
solution to arbitrary 'traveling salesman' problems!  And that another
network (hooked to a piece of equipment which could produce phonemes, and
presumably some kind of feedback) had been 'trained' to read English text
reasonably well.  They said incredibly little about the actual details of
how each node operates, unfortunately.
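For what it's worth, the sort of network described above can be sketched in a few lines of code.  This is only a guess at the details the article omitted - a Hopfield-style binary net with Hebbian weights and asynchronous threshold updates, none of which the article confirmed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +1/-1 patterns in the weights (Hebbian outer products).
n = 100
patterns = rng.choice([-1, 1], size=(3, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

def settle(state, max_sweeps=10):
    """Asynchronous threshold updates until the state stops changing."""
    state = state.copy()
    for _ in range(max_sweeps):
        prev = state.copy()
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, prev):  # a stable state: every node consistent
            break
    return state

# "Preset" half the nodes to a stored pattern, randomize the rest,
# and let the feedback pull the net into a nearby stable state.
probe = patterns[0].copy()
probe[50:] = rng.choice([-1, 1], size=50)
recovered = settle(probe)
print((recovered == patterns[0]).mean())  # close to 1.0: the memory is restored
```

The point of the toy run is the one the article made: out of the astronomical number of possible states, the feedback funnels the net into one of a handful of stable ones, so presetting part of a pattern recalls the whole thing.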
    So how about it?  Has anybody else heard of these things?  Is this 
really a way of going about AI in a way which *may* be similar to what
brains do?  Just exactly what algorithms are the nodes implementing, and
how do you provide input and get output from them?  Does anyone know 
where I could get more information about them?
Jeff Sonntag
ihnp4!mhuxt!js2j
-- 
Jeff Sonntag
ihnp4!mhuxt!js2j

crs%lanl@lanl.UUCP (05/02/86)

> A recent issue of 'Science' had an article on 'neural networks', which
> 	.
> 	.
> 	.
In a related vein, the 7 April, 1986 issue of Electronic Engineering Times
(an electronics engineering newspaper) featured the following articles in
the Computer Engineering section:

		Hopfield's Nerve Nets Realize Biocomputing

		Neural Chips Emulate Brain Functions

		Brain-Emulating Circuits Need `Sleep' and `Dreams'

Several other issues of this weekly paper have, over the past month or so,
carried one or more related articles.

-- 
The opinions expressed are not necessarily those of my employer,
the government or your favorite deity.

Charlie Sorsby
...!{cmcl2,ihnp4,...}!lanl!crs
crs@lanl.arpa

sr%pyuxv@pyuxv.UUCP (05/02/86)

In article <837@mhuxt.UUCP> js2j@mhuxt.UUCP (sonntag) writes:
>A recent issue of 'Science' had an article on 'neural networks', which
>apparently consist of ...
etc.

To set the facts straight: the magazine is Science 86, which is published
by the AAAS and is not to be confused with the journal Science, also
published by the AAAS.

>They said incredibly little about the actual details of
>how each node operates, unfortunately.
Probably because its intended audience is rather broad - intelligent
people with no particular expertise or training assumed.
Kind of a Reader's Digest for Yuppies with high-tech inclinations.

>    So how about it?  Has anybody else heard of these things?  Is this 
>really a way of going about AI in a way which *may* be similar to what
>brains do?  Just exactly what algorithms are the nodes implementing, and
>how do you provide input and get output from them?  Does anyone know 
>where I could get more information about them?
You might try turning to the back of the magazine, to a section listing
articles for further, deeper reading.
Or you can look in today's paper (if you happen to read the NY Times) and
check the article on page D2 which announces the commercial availability
of the Connection Machine from a start-up concern in Cambridge.


Probably next week there will be ads on CBS during the evening news.

Steve Radtke
bellcore!u1100a!sr
Bell Communications Research
Piscataway, NJ

cottrell%sdics@sdics.UUCP (05/02/86)

Hopfield is the one who did the traveling salesman problem.  I'm not sure where
he is, tho.  For the NETtalk (reading aloud) report, write Terry Sejnowski at
Dept. of Biophysics
Johns Hopkins University
Baltimore,  Maryland 21218

(This work was done with Charles Rosenberg of Princeton)

For lots of stuff on "connectionist" or Parallel Distributed Processing
(PDP) models, see the last few years of Cognitive Science Society,
AAAI, and IJCAI proceedings, see the January 85 issue of Cognitive Science,
and buy the "Parallel Distributed Processing: Explorations in the 
Microstructure of Cognition" book when it comes out this month from Bradford/
MIT books.

gary cottrell				
Institute for Cognitive Science, UCSD
cottrell@nprdc (ARPA)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!sdics!cottrell (USENET)

goddard@rochester.UUCP (05/02/86)

Departments working in this area include, amongst others:

C.S. University of Rochester
     Carnegie Mellon
Cog Sci University of California, San Diego
??   University of Massachusetts, Amherst

There is a technical report, "Rochester Connectionist Papers," available
here which probably references a lot of other work as well.

Nigel Goddard

regier@dali.berkeley.edu (Terrance P. Regier) (05/03/86)

In article <175@sdics.UUCP> cottrell@sdics.UUCP (Gary Cottrell) writes:
>
>Hopfield is the one who did the traveling salesman problem. I'm not sure 
>where he is, tho. 
>

J.J. Hopfield is at the:	Division of Chemistry and Biology
			California Institute of Technology
			Pasadena, CA   91125


				-- Terry

alfke@cit-vax.UUCP (05/04/86)

In article <175@sdics.UUCP> cottrell@sdics.UUCP (Gary Cottrell) writes:
>
>Hopfield is the one who did the traveling salesman problem. I'm not sure where
>he is, tho.

Hopfield is the "Roscoe G. Dickinson Professor of Chemistry and Biology" here
at Caltech.
						--Peter Alfke
						  alfke@csvax.caltech.edu
-- 
"Man, Woman, Child:
 All Is Up Against the Wall of
 SCIENCE"		--Firesign Theatre

jam%bu-cs@bu-cs.UUCP (05/04/86)

Stephen Grossberg has been publishing on neural networks for 20 years.
He pays special attention to designing adaptive neural networks that
are self-organizing and mathematically stable.  Some good recent
references are:

(Category Learning):----------
   G.A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for
     a Self-Organizing Neural Pattern Recognition Machine."  Computer
     Vision, Graphics, and Image Processing.  In Press.
   G.A. Carpenter and S. Grossberg, "Neural Dynamics of Category Learning
     and Recognition: Structural Invariants, Reinforcement, and Evoked
     Potentials."  In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds),
     Pattern Recognition in Animals, People, and Machines.  Hillsdale, NJ:
     Erlbaum, 1986.
(Learning):-------------------
   S. Grossberg, "How Does a Brain Build a Cognitive Code?"  Psychological
     Review, 1980 (87), p.1-51.
   S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning,
     Perception, Development, Cognition, and Motor Control.  Boston:
     Reidel Press, 1982.
   S. Grossberg, "Adaptive Pattern Classification and Universal Recoding:
     I. Parallel Development and Coding of Neural Feature Detectors."
     Biological Cybernetics, 1976 (23), p.121-134.
   S. Grossberg, The Adaptive Brain: I. Learning, Reinforcement, Motivation,
     and Rhythm.  Amsterdam: North Holland, 1986.
(Vision):---------------------
   S. Grossberg, The Adaptive Brain: II. Vision, Speech, Language, and Motor
     Control.  Amsterdam: North Holland, 1986.
   S. Grossberg and E. Mingolla, "Neural Dynamics of Perceptual Grouping:
     Textures, Boundaries, and Emergent Segmentations."  Perception &
     Psychophysics, 1985 (38), p.141-171.
   S. Grossberg and E. Mingolla, "Neural Dynamics of Form Perception:
     Boundary Completion, Illusory Figures, and Neon Color Spreading."
     Psychological Review, 1985 (92), p.173-211.
(Motor Control):---------------
   S. Grossberg and M. Kuperstein, Neural Dynamics of Adaptive Sensory-
     Motor Control: Ballistic Eye Movements.  Amsterdam: North-Holland, 1985.


If anyone's interested, I can supply more references.

orsay@mtuxo.UUCP (j.ratsaby) (05/11/86)

> Stephen Grossberg has been publishing on neural networks for 20 years.
> He pays special attention to designing adaptive neural networks that
> are self-organizing and mathematically stable.  Some good recent
> references are:
> 	.
> 	.
> 	.
> If anyone's interested, I can supply more references.

I would like to ask you the following:
From all the books that you have read, was there any machine built, or
simulation run, that actually learned by adapting its inner structure?

If so, what type of information was learned by the machine, and in what
quantities?  What action was taken to ask the machine to "remember" and
retrieve information?  And finally, where do we stand today - that is, to
your knowledge, which machine behaves the closest to the
biological brain?
I would very much appreciate reading some of your thoughts about the above.

      Thanks in advance.
      Joel Ratsaby
      !mtuxo!orsay

jam@bu-cs.UUCP (Jonathan A. Marshall) (05/14/86)

In article <1583@mtuxo.UUCP> orsay@mtuxo.UUCP (j.ratsaby) writes:
> In article <538@bu-cs.UUCP> jam@bu-cs.UUCP (Jonathan Marshall) writes:
>> 
>> Stephen Grossberg has been publishing on neural networks for 20 years.
>> He pays special attention to designing adaptive neural networks that
>> are self-organizing and mathematically stable.  Some good recent
>> references are:
>>  .
>>  .
>>  .
>> If anyone's interested, I can supply more references.

> I would like to ask you the following:
> From all the books that you have read, was there any machine built, or
> simulation run, that actually learned by adapting its inner structure?

TRW is building a chip called the MARK-IV which implements some of
Grossberg's earlier adaptive neural networks.  The chip basically acts
as an adaptive pattern recognizer.

Also, Grossberg's group, the Center for Adaptive Systems, has
simulated some of his parallel learning algorithms in software.  In
particular, "masking fields" have been applied to speech-recognition,
the "boundary contour system" has been applied to visual pattern
segmentation, and other networks have been applied to symbolic
pattern-recognition.

> If so, what type of information was learned by the machine, and in what
> quantities?  What action was taken to ask the machine to "remember" and
> retrieve information?  And finally, where do we stand today - that is, to
> your knowledge, which machine behaves the closest to the
> biological brain?
> I would very much appreciate reading some of your thoughts about the above.
>      Thanks in advance.      Joel Ratsaby      !mtuxo!orsay

The network simulations learned to discriminate patterns based on
arbitrary similarity measures.  They also performed associative
learning tasks that explain psychological data such as "inverted U,"
"overshadowing," "attentional priming," "speed-accuracy trade-off,"
and more.  The networks learned and remembered spatial patterns of
neural activity.  The networks later retrieved the patterns, using
them as "expectation templates" to match with newer patterns.  The
degree of match or mismatch determined whether (1) the newer patterns
were represented as instances of the "expected" pattern, or (2) a fast
parallel search was initiated for another matching template, or (3)
the new pattern was allocated its own separate representation as an
unfamiliar pattern.
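As an illustration only - these are not Grossberg's actual equations - the match/search/allocate logic just described might be sketched like this, with the vigilance threshold treated as an assumed free parameter:

```python
import numpy as np

def art_like_classify(pattern, templates, vigilance=0.8):
    """Illustrative adaptive-resonance-style matching: compare a binary
    pattern against stored 'expectation templates'; search for one that
    matches well enough, else allocate a new category."""
    # Try templates in order of how strongly the pattern activates them.
    order = sorted(range(len(templates)),
                   key=lambda j: -np.sum(pattern & templates[j]))
    for j in order:
        match = np.sum(pattern & templates[j]) / max(np.sum(pattern), 1)
        if match >= vigilance:                     # (1) good match: resonance
            templates[j] = pattern & templates[j]  # refine the template
            return j
        # (2) mismatch: reset and search the next candidate template
    templates.append(pattern.copy())               # (3) unfamiliar: new category
    return len(templates) - 1

templates = []
a = np.array([1, 1, 1, 0, 0, 0], dtype=int)
b = np.array([0, 0, 0, 1, 1, 1], dtype=int)
print(art_like_classify(a, templates))  # 0 (new category allocated)
print(art_like_classify(b, templates))  # 1 (mismatch -> another new category)
print(art_like_classify(a, templates))  # 0 (matches the stored template)
```

Raising the vigilance parameter forces finer discriminations (more categories); lowering it lumps patterns together - a crude analogue of the coarser-or-finer learning mentioned below.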

One of Grossberg's main contributions to learning theory has been the
design of self-organizing associative learning networks.  His networks
function more robustly than most other designs because they are
self-scaling (big patterns get processed just as effectively as small
patterns), self-tuning (the networks dynamically adjust their own
capacities to simultaneously prevent saturation and suppress noise),
and self-organizing (learning occurs within the networks to produce
finer or coarser pattern discriminations, as required by experience).
Grossberg's mathematical analyses of "mass-action" systems enabled him
to design networks with these properties.
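For concreteness, here is one of the simplest shunting ("mass-action") equations from Grossberg's work and its steady state, which shows the self-scaling property in miniature; the parameter names A and B follow the usual presentation, and the numbers are my own:

```python
import numpy as np

# Shunting on-center/off-surround equation for node i with inputs I_j:
#     dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{j != i} I_j
# Setting dx_i/dt = 0 gives the steady state:
#     x_i = B * I_i / (A + sum_j I_j)
# Activities code *relative* input intensities, and total activity stays
# bounded by B - one sense in which such networks are self-scaling.
def shunting_steady_state(I, A=1.0, B=1.0):
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

small = shunting_steady_state([1, 2, 1])
large = shunting_steady_state([100, 200, 100])  # same pattern, 100x intensity
print(small / small.sum())  # relative pattern across the nodes
print(large / large.sum())  # same relative pattern despite 100x the input
```

The 100x brighter input produces the same normalized activity pattern, so big patterns get processed as effectively as small ones, and no node can saturate past B.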

In addition, his networks are physiologically realistic and unify a
great deal of otherwise fragmented psychological data.  Read one or
two of his latest papers to see his claims.

The question of which _machine_ behaves closest to the biological
brain is not yet appropriate.  The candidates I know of are all
software simulations, with the possible exception of the TRW Mark-IV,
which is quite limited in capacity.  Other schemes, such as Hopfield
nets, are not mass-action (in the technical sense) simulations, and
hence fail to observe certain kinds of local-global tradeoffs that
characterize biological systems.

However, the situation is hopeful today.  More AI researchers have
been recognizing the importance of studying biological systems in
detail, to gain intuition and insight for designing adaptive neural
networks.

gordon@warwick.UUCP (Gordon Joly) (05/17/86)

This may be a bit of a tangent, but I feel it might have some impact on
the current discussion.
The mathematical theory of chaotic systems is currently an active area of
research.  The main observation is that models of even very simple systems
become chaotic in a very short time.
The human brain is far from being a simple system, yet the transition to
chaos rarely occurs.  There must be a self-correcting element within the
system itself, as it is often perturbed by myriad external stimuli.
Is the positive feedback mentioned in article <837@mhuxt.UUCP> thought to 
be similar to the self-correcting mechanisms in the brain?

Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon

kempf@hplabsc.UUCP (05/23/86)

> This may be a bit of a tangent, but I feel it might have some impact on
> the current discussion.
> The mathematical theory of chaotic systems is currently an active area of
> research.  The main observation is that models of even very simple systems
> become chaotic in a very short time.
> The human brain is far from being a simple system, yet the transition to
> chaos rarely occurs.  There must be a self-correcting element within the
> system itself, as it is often perturbed by myriad external stimuli.
> Is the positive feedback mentioned in article <837@mhuxt.UUCP> thought to 
> be similar to the self-correcting mechanisms in the brain?
> 
> Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon

Not having seen <837@mhuxt.UUCP>, I can't comment on the question. 
However, I do have some thoughts on the relation between chaos
in dynamical systems and the brain. The "chaotic" dynamical behavior
seen in many simple dynamical systems models is often restricted
to a small region of the state space. By a kind of renormalization
procedure, this small region might be topologically shrunk, so that,
from a more macroscopic view, the chaotic region actually looks
more like a point attractor. Another possibility is that complex
systems like the brain are able to perform a kind of ensemble
averaging to filter out chaos. Sorry if this sounds like speculation.
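A toy numerical illustration of both points - sensitive dependence in a simple chaotic system (the logistic map at r = 4, which is of course not a brain model) and the way an ensemble average can stay steady even though every individual trajectory is chaotic:

```python
import numpy as np

r = 4.0  # logistic map x -> r*x*(1-x); fully chaotic at r = 4

# Two nearby trajectories: a 1e-9 difference is amplified to macroscopic size.
a, b, sep = 0.2, 0.2 + 1e-9, 0.0
for _ in range(60):
    a, b = r * a * (1 - a), r * b * (1 - b)
    sep = max(sep, abs(a - b))
print(sep > 0.1)  # True: individual trajectories are unpredictable

# An ensemble of 10,000 trajectories, though, has a steady mean near 0.5
# (the mean of the map's invariant density) -- the chaos "averages out".
x = np.linspace(0.1, 0.9, 10_000)
means = []
for _ in range(100):
    x = r * x * (1.0 - x)
    means.append(x.mean())
print(abs(np.mean(means[10:]) - 0.5) < 0.05)  # True
```

Whether anything like this averaging is what keeps the brain out of chaotic regimes is, as the post says, speculation; the sketch only shows that the mechanism is mathematically possible.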
		Jim Kempf	kempf@hplabs