[net.ai] simulating a neural network

iarocci@eneevax.UUCP (Bill Dorsey) (10/17/86)

   Having recently read several interesting articles on the functioning of
neurons within the brain, I thought it might be educational to write a program
to simulate their functioning.  Being somewhat of a newcomer to the field of
artificial intelligence, my approach may be all wrong, but if it is, I'd
certainly like to know how and why.
   The program simulates a network of 1000 neurons.  Any more than 1000 slows
the machine down excessively.  Each neuron is connected to about 10 other
neurons.  This choice was rather arbitrary, but I figured the number of
connections would be proportional to the cube root of the number of neurons
since the brain is a three-dimensional object.
   For those not familiar with the basic functioning of a neuron, here is how
I understand it:  Each neuron has many inputs coming from other neurons, and
its output is connected to many other neurons.  Pulses coming from other
neurons add to or subtract from its potential.  When the potential exceeds
some threshold, the neuron fires and produces a pulse of its own.  To further
complicate matters, any existing potential on the neuron drains away according
to some time constant.
   In order to simplify things, I took several short-cuts in the current
version of the program.  I assumed that all the neurons had the same threshold,
and that they all had the same time constant.  Setting these values randomly
didn't seem like a good idea, so I just picked values that seemed reasonable,
and played around with them a little.
   One further note should be made about the network.  For lack of a good
idea on how to organize all the connections between neurons, I simply
connected them to each other randomly.  Furthermore, whether a given neuron
produces a positive or negative pulse is also determined randomly at this
point.
   In order to test out the functioning of this network, I created a simple
environment and several inputs/outputs for the network.  The environment is
simply some type of maze bounded on all sides by walls.  The outputs are
(1) move north, (2) move south, (3) move west, (4) move east.  The inputs are
(1) you bumped into something, (2) there's a wall to the north, (3) wall to
the south, (4) wall to the west, (5) wall to the east.  When the neuron 
corresponding to a particular output fires, that action is taken.  When a
specific input condition is met, a pulse is added to the neuron corresponding
to the particular input.
   The initial results have been interesting, but indicate that more work
needs to be done.  The neuron network indeed shows continuous activity, with
neurons changing state regularly (but not periodically).  The robot (!) moves
around the screen, generally winding up in a corner somewhere, from which it
occasionally wanders a short distance away before returning.
   I'm curious whether anyone can think of a way for me to produce positive
and negative feedback, instead of undifferentiated feedback.  An analogy would
be pleasure versus pain in humans.  What I'd like to do is provide negative
feedback when the robot hits a wall, and positive feedback when it doesn't.
I'm hoping that the robot will eventually 'learn' to roam around the maze
without hitting any of the walls (i.e. learn to use its senses).
   I'm sure there are more conventional AI programs which can accomplish this
same task, but my purpose here is to try to successfully simulate a network
of neurons and see if it can be applied to solve simple problems involving
learning/intelligence.  If anyone has any other ideas against which I may test
it, I'd be happy to hear from you.  Furthermore, if anyone is interested in
seeing the source code, I'd be happy to send it to you.  It's written in C
and runs on an Atari ST computer, though it could easily be modified to
run on almost any machine with a C compiler (the faster the machine, the more
neurons you can simulate reasonably).

-- 
-------------------------------------------------------------------------------
| Bill Dorsey                                                                 |
|                      'Imagination is more important than knowledge.'        |
|                                            - Albert Einstein                |
| ARPA : iarocci@eneevax.umd.edu                                              |
| UUCP : [seismo,allegra,rlgvax]!umcp-cs!eneevax!iarocci                      |
-------------------------------------------------------------------------------

jam@bu-cs.BU.EDU (Jonathan Marshall) (10/20/86)

In article <223@eneevax.UUCP> iarocci@eneevax.UUCP (Bill Dorsey) writes:
>
>   Having recently read several interesting articles on the functioning of
>neurons within the brain, I thought it might be educational to write a program
>to simulate their functioning.  Being somewhat of a newcomer to the field of
>artificial intelligence, my approach may be all wrong, but if it is, I'd
>certainly like to know how and why.
>   The program simulates a network of 1000 neurons.  Any more than 1000 slows
>the machine down excessively.  Each neuron is connected to about 10 other
>neurons.
>  .
>  .
>  .
>   The initial results have been interesting, but indicate that more work
>needs to be done.  The neuron network indeed shows continuous activity, with
>neurons changing state regularly (but not periodically).  The robot (!) moves
>around the screen generally winding up in a corner somewhere where it occas-
>ionally wanders a short distance away before returning.
>   I'm curious if anyone can think of a way for me to produce positive and
>negative feedback instead of just feedback.  An analogy would be pleasure
>versus pain in humans.  What I'd like to do is provide negative feedback
>when the robot hits a wall, and positive feedback when it doesn't.  I'm 
>hoping that the robot will eventually 'learn' to roam around the maze with-
>out hitting any of the walls (i.e. learn to use its senses).
>   I'm sure there are more conventional ai programs which can accomplish this
>same task, but my purpose here is to try to successfully simulate a network
>of neurons and see if it can be applied to solve simple problems involving
>learning/intelligence.  If anyone has any other ideas for which I may test
>it, I'd be happy to hear from you.


Here is a reposting of some references from several months ago.
* For beginners, I especially recommend the articles marked with an asterisk.

Stephen Grossberg has been publishing on neural networks for 20 years.
He pays special attention to designing adaptive neural networks that
are self-organizing and mathematically stable.  Some good recent
references are:

(Category Learning):----------
*  G.A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for
     a Self-Organizing Neural Pattern Recognition Machine."  Computer
     Vision, Graphics, and Image Processing.  In Press.
   G.A. Carpenter and S. Grossberg, "Neural Dynamics of Category Learning
     and Recognition: Structural Invariants, Reinforcement, and Evoked
     Potentials."  In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds),
     Pattern Recognition in Animals, People, and Machines.  Hillsdale, NJ:
     Erlbaum, 1986.
(Learning):-------------------
*  S. Grossberg, "How Does a Brain Build a Cognitive Code?"  Psychological
     Review, 1980 (87), p.1-51.
*  S. Grossberg, "Processing of Expected and Unexpected Events During
     Conditioning and Attention."  Psychological Review, 1982 (89), p.529-572.
   S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning,
     Perception, Development, Cognition, and Motor Control.  Boston:
     Reidel Press, 1982.
   S. Grossberg, "Adaptive Pattern Classification and Universal Recoding:
     I. Parallel Development and Coding of Neural Feature Detectors."
     Biological Cybernetics, 1976 (23), p.121-134.
   S. Grossberg, The Adaptive Brain: I. Learning, Reinforcement, Motivation,
     and Rhythm.  Amsterdam: North Holland, 1986.
*  M.A. Cohen and S. Grossberg, "Masking Fields: A Massively Parallel Neural
     Architecture for Learning, Recognizing, and Predicting Multiple
     Groupings of Patterned Data."  Applied Optics, In press, 1986.
(Vision):---------------------
   S. Grossberg, The Adaptive Brain: II. Vision, Speech, Language, and Motor
     Control.  Amsterdam: North Holland, 1986.
   S. Grossberg and E. Mingolla, "Neural Dynamics of Perceptual Grouping:
     Textures, Boundaries, and Emergent Segmentations."  Perception &
     Psychophysics, 1985 (38), p.141-171.
   S. Grossberg and E. Mingolla, "Neural Dynamics of Form Perception:
     Boundary Completion, Illusory Figures, and Neon Color Spreading."
     Psychological Review, 1985 (92), p.173-211.
(Motor Control):---------------
   S. Grossberg and M. Kuperstein, Neural Dynamics of Adaptive Sensory-
     Motor Control: Ballistic Eye Movements.  Amsterdam: North-Holland, 1985.


If anyone's interested, I can supply more references.

--Jonathan Marshall

harvard!bu-cs!jam

wendt@megaron.UUCP (10/21/86)

Anyone interested in neural modelling should know about the Parallel
Distributed Processing pair of books from MIT Press.  They're
expensive (around $60 for the pair) but very good and quite recent.

A quote:

Relaxation is the dominant mode of computation.  Although there
is no specific piece of neuroscience which compels the view that
brain-style computation involves relaxation, all of the features
we have just discussed have led us to believe that the primary
mode of computation in the brain is best understood as a kind of 
relaxation system in which the computation proceeds by iteratively
seeking to satisfy a large number of weak constraints.  Thus,
rather than playing the role of wires in an electric circuit, we
see the connections as representing constraints on the co-occurrence
of pairs of units.  The system should be thought of more as "settling
into a solution" than "calculating a solution".  Again, this is an
important perspective change which comes out of an interaction of
our understanding of how the brain must work and what kinds of processes
seem to be required to account for desired behavior.

(Rumelhart & McClelland, Chapter 4)

Alan Wendt
U of Arizona

btb@ncoast.UUCP (Brad Banko) (10/23/86)

Bill,
	Your program sounds very interesting.  I have heard of related work
being done using matrices and transforms upon them to produce the "learning,"
but your approach does something very interesting: it points out just what
the "missing" link in the learning mode is, namely getting the feedback in.
	I suppose you have heard of the hardware devices that have been used
recently (at Bell Labs, I think), based on a neural network model, to find
good solutions to hard problems such as the travelling salesman problem
quickly: not the best solutions, but fast, good ones.
	I would like to have a look at your source, if you would post it or
send it to me.

					Brad Banko

				...!decvax!cwruecmp!ncoast!btb
		
-- 
Bradley T. Banko

lishka@uwslh.UUCP (a) (10/29/86)

	I just read an interesting short blurb in the most recent BYTE issue 
(the one with the graphics board on the cover)...it was in Bytelines or 
something.  Now, since I skimmed it, my info is probably a little sketchy,
but here's about what it said:

	Apparently Bell Labs (I think) has been experimenting with neural
network-like chips, with resistors replacing bytes (I guess).  They started
out with about 22 'neurons' and have gotten up to 256 or 512 (can't
remember which) 'neurons' on one chip now.  Apparently these 'neurons' are
supposed to run much faster than human neurons...it'll be interesting to see
how all this works out in the end. 

	I figured that anyone interested in the neural network program might
be interested in the article...check Byte for actual info.  Also, if anyone
knows more about this experiment, I would be interested, so please mail me
any information at the below address.                 

-- 
Chris Lishka                   /l  lishka@uwslh.uucp
Wisconsin State Lab of Hygiene -lishka%uwslh.uucp@rsch.wisc.edu
                               \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

cdaf@iuvax.UUCP (Charles Daffinger) (10/30/86)

In article <151@uwslh.UUCP> lishka@uwslh.UUCP 	[Chris Lishka] writes:
>
>...
>	Apparently Bell Labs (I think) has been experimenting with neural
>network-like chips, with resistors replacing bytes (I guess).  They started
>out with about 22 'neurons' and have gotten up to 256 or 512 (can't
>remember which) 'neurons' on one chip now.  Apparently these 'neurons' are
>supposed to run much faster than human neurons...
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

What bothers me is that the performance is rated upon speed.  Unlike the
typical synchronous digital computer, neuronal networks are asynchronous,
communicating via a temporal discharge of 'spikes' through axons which vary
considerably in length as well as speed, and they exploit SLOW signals just
as they do FAST signals.  (Look at the neural mechanism for a reflex, or the
one for focusing the eye, as examples.)

I am curious how much of the essence of their namesakes was really captured
in these 'neurons'.


-charles

-- 
... You raise the blade, you make the change, you re-arrange me til I'm sane...
    Pink Floyd