[comp.ai.neural-nets] Neuron Digest V5 #17

neuron-request@HPLABS.HP.COM ("Neuron-Digest Moderator Peter Marvit") (04/12/89)

Neuron Digest	Tuesday, 11 Apr 1989
		Volume 5 : Issue 17

Today's Topics:
				NN Question
	  Re: NN Question (how can a few neurons mimic the brain?)
			      Re: NN Question
	Re: Re: NN Question (how can a few neurons mimic the brain?)
				   Thanks
			      Re: NN Question
		       Flexibility of nervous systems
	Re: Re: NN Question (how can a few neurons mimic the brain?)
		  Re: Re: bottom-up (was Re: NN Question)
			      Re: NN Question
		     Re: Flexibility of nervous systems
		     Re: Flexibility of nervous systems
		  Re: Re: bottom-up (was Re: NN Question)
		  Re: Re: bottom-up (was Re: NN Question)
		  Re: Re: bottom-up (was Re: NN Question)
		  Re: Re: bottom-up (was Re: NN Question)
		  Re: Re: bottom-up (was Re: NN Question)
		  Re: Re: bottom-up (was Re: NN Question)


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).
This issue is an edited amalgam of the discussion on the USENET group
comp.ai.neural-nets.  The Moderator takes sole responsibility for editing.

------------------------------------------------------------

Subject: NN Question
From:    kirlik@hms3 (Alex Kirlik)
Organization: Center for Human-Machine Systems Research - Ga Tech
Date:    Thu, 02 Mar 89 00:53:26 +0000 

For those interested in the potential psychological/ physiological
significance of neural-net models:

Has anyone else been puzzled by the following phenomenon?  (I haven't found
it discussed in the literature).

Why should a net with only a few dozen neural units be successful at
mimicking human behavior that is presumably the result of the activation of
a tremendous number of neurons?  That is, why should a small number of units
be successful at simulating the behavior of a large number of neurons?

I know that the validity of this question depends upon the "level" at which
we interpret our models, but, after all, these units are modeled to mimic
the behavior of individual neurons, aren't they?  I am aware of the drastic
simplifications that are made but this doesn't change the intended referents
of our theoretical objects.

One answer would seem to be that there is a tremendous amount of additional
processing in the brain that is extraneous to the processing critical to the
task being modeled, yet we are only modeling this "critical" segment. For
many reasons (that could be discussed if necessary) I do not find this
answer particularly compelling.

A second answer might be that neural processing has self-similar
properties.  That is, the behavior of neural collectives share properties
with the behavior of individual neurons. I find this answer to be
interesting and attractive, yet I know of no evidence for it.

A third answer might be to suggest that this is all unreasoned drivel,
since we don't want to interpret these models realistically, anyway.

It seems OK to go this way, but for those who don't, I suggest that the
question merits consideration. Or does it?

Thanks for reading,

Alex Kirlik

UUCP:	kirlik@chmsr.UUCP
        {backbones}!gatech!chmsr!kirlik
INTERNET:	kirlik@chmsr.gatech.edu

------------------------------

Subject: Re: NN Question (how can a few neurons mimic the brain?)
From:    sbrunnoc@hawk.ulowell.edu (Sean Brunnock)
Date:    Thu, 02 Mar 89 20:01:54 +0000 

  I don't see why not: programs such as Doctor, Racter, and Eliza are also
successful at mimicking human behavior without the need for nets at all. The
point that I am trying to make is that these programs simply mimic, they do
not emulate the human brain.

  I find that there are some people who are under the impression that by
linking together many specialized programs (a vision processor, a language
processor,...), they will be able to create something akin to the human
mind. I do not subscribe to this theory because the human brain is pretty
much uniform. This fact becomes dramatically obvious in the cases of people
who have had accidents resulting in the damage of sections of the brain.  If
the damaged section performed a specialized function, then for a while, the
person will not be able to perform that action.  After some time, the rest
of the brain is able to assimilate the functions performed by the damaged
section and the person is able to function normally again.

  I look at the market and current research and I see a lot of neural
network expert systems, handwriting recognizers, and image processors. The
term neural network here is very misleading.  I believe that a neural
network should be able to learn to do anything and still remain flexible
enough to deal with abrupt changes, as the human brain is capable of doing.

				Sean Brunnock

------------------------------

Subject: Re: NN Question
From:    brp@sim.uucp (bruce raoul parnas)
Organization: University of California, Berkeley
Date:    Fri, 03 Mar 89 02:02:08 +0000 

I beg to differ substantially on this claim.  No man-made neural networks
have yet come close to modelling/mimicking human behavior, no matter what
the level of abstraction we assume.  They do not reflect the temporal
properties, and are totally incapable of *MANY* of the things humans can do.
Neural nets take inputs and associate them with outputs, nothing more.  They
do not reflect even the simplest levels of cognition!

Natural selection would eliminate a great deal of "extraneous" processing

I think that a great many people view neural networks as good models for
what goes on inside our heads.  Since these models are, mainly, discrete
time automata they do not reflect the fact that real neural systems are,
essentially, nonlinear continuous-time multi-dimensional vector spaces in
which the neurons evolve in time.  So while they are real neat computational
tools, they are far from representing real neural processes.

bruce (brp@sim)

------------------------------

Subject: Re: Re: NN Question (how can a few neurons mimic the brain?)
From:    demers@beowulf.ucsd.edu (David E Demers)
Organization: EE/CS Dept. U.C. San Diego
Date:    Fri, 03 Mar 89 05:27:09 +0000 


I don't believe that much is known about how human behavior results from the
action of neurons or collections of neurons.  The fact that connectionist
systems can do pattern recognition does not mean that they are doing it in
the way humans do.  Thus it shouldn't necessarily be surprising that
"similar" tasks can be done with nets and brains.  Many pattern
recognition/mapping networks appear to be doing interpolation; is that what
WE do?  Maybe...  But you do ask a question worthy of study.
 
>I am aware of the drastic simplifications
>that are made but this doesn't change the intended referents of
>our theoretical objects.

Many if not most researchers are not attempting to model the brain, but are
trying to see if highly parallel and distributed processing can produce
useful and interesting computational systems.  It is known, for example,
that networks with one hidden layer and feedforward architecture can
approximate any Borel-measurable function from R^n to R^m to any degree of
accuracy (given sufficiently many hidden units). [Hornik, Stinchcombe &
White, 1988] Can brains do that?  Anyone know?
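
[[ Moderator's note: for readers who want to see the approximation
property in miniature, here is a small sketch in Python (a toy, not the
Hornik-Stinchcombe-White construction).  A one-hidden-layer net with
sigmoid hidden units is fitted to y = sin(x) by gradient descent; the
network size, learning rate, and target function are arbitrary choices
of mine. ]]

# One-hidden-layer feedforward net fitted to y = sin(x) by batch
# gradient descent.  Illustrates, but does not prove, the universal
# approximation result: enough sigmoid hidden units can approximate a
# well-behaved function arbitrarily closely on a compact set.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # inputs in R^1
y = np.sin(x)                                        # targets in R^1

H = 20                                  # hidden units
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.2
for step in range(30000):
    h = sigmoid(x @ W1 + b1)            # hidden layer
    out = h @ W2 + b2                   # linear output layer
    err = out - y
    dW2 = h.T @ err / len(x);  db2 = err.mean(0)
    dh  = (err @ W2.T) * h * (1 - h)    # backpropagated error
    dW1 = x.T @ dh / len(x);   db1 = dh.mean(0)
    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1

print("mean squared error:", float((err ** 2).mean()))

Note that the theorem only guarantees good weights exist; whether
gradient descent finds them is a separate question.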

>One answer would seem to be that there is a tremendous amount
>of additional processing in the brain that is extraneous...

Or perhaps the brain just has a lot to do, with a lot of redundancy built in
for safety.  The brain is built from material that is not robust and does
not have high precision, and does not operate faster than maybe 10ms/step.
But there are perhaps 10^10 neurons with about 1000-10000 connections each.
Our models can be built from pretty reliable and fast stuff, operating 1000
or more times faster per step.
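
[[ Moderator's note: the arithmetic above is worth spelling out.  A
back-of-envelope sketch in Python, using only the round figures quoted
in the posting (all of them rough; only the orders of magnitude
matter): ]]

# Back-of-envelope throughput estimate from the figures quoted above.
neurons     = 1e10      # ~10^10 neurons
synapses    = 3e3       # ~1000-10000 connections each; take ~3000
step_time   = 10e-3     # ~10 ms per neural "step", in seconds

connections = neurons * synapses          # total connections
updates_sec = connections / step_time     # connection-updates per second

print(f"total connections    : {connections:.1e}")   # ~3e13
print(f"conn. updates per sec: {updates_sec:.1e}")   # ~3e15

A machine 1000 times faster per step still has many orders of magnitude
fewer units, so the comparison is really parallelism against speed.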

I suppose a "collective" could be considered to be a higher order unit,
processing a more sophisticated function than threshold logic.  This is an
efficiency issue, I believe, not a fundamental issue of computational
complexity.

Jack Cowan recently suggested at a workshop in San Diego that we should all
read (or re-read) David Marr's early work.  I plan to do so soon... even if
I'm not trying to model the brain, nature sure did build some wonderful
mechanisms to learn from.

Dave DeMers			demers@cs.ucsd.edu
Computer Science & Engineering 
UCSD
La Jolla, CA 92093

------------------------------

Subject: Thanks
From:    kirlik@hms3.gatech.edu (Alex Kirlik)
Organization: Center for Human-Machine Systems Research - Ga Tech
Date:    Sat, 04 Mar 89 03:38:09 +0000 

This will be my final posting concerning my previous neural-net
question. (To thunderous applause)

Thanks for the many replies via email and the net; I have learned from all -
I guess that's the purpose of this forum.

I just want to conclude with two points. The two most frequent criticisms of
my comments were: 1. I have drastically overestimated the degree to which
nets have successfully mimicked human behavior; and 2. I have drastically
overestimated the degree to which any such successes have been suggested to
be the result of structural/ processing similarities between neural nets and
the brain.

WRT point 1, I only want to suggest that some behavioral validity
demonstrations have been made, e.g. in _Parallel Distributed Processing_ Vol
II, p. 266, Rumelhart and McClelland write "We have shown that our simple
learning model shows, to a remarkable degree, the characteristics of young
children learning the morphology of the past tense in English."

My original posting was not concerned with defending the view that nets are
extremely successful in mimicking behavior (at whatever level), rather I was
concerned with examining the validity of arguments that suggest that
behavioral validity is due to structural/processing similarities between our
models and the brain (point 2).

WRT this point, the general reaction was that I was naive to think that
people take these models seriously at the level of units and neurons.  I
AGREE that we shouldn't take these things seriously, that is exactly the
point I was trying to make by posing the question.

More specifically, my point is that the brain analogy cannot and should not
be used to explain any successes of these models until appropriate
referential relations that tie the model's constructs to the world can be
identified. I offered the "self-similarity" hypothesis as a possible such
relation, and received some interesting responses to it.

But I have probably overestimated the degree to which explanations in terms
of unit-neuron relationships are still fashionable.

Thanks all,
Alex Kirlik

UUCP:	kirlik@chmsr.UUCP
        {backbones}!gatech!chmsr!kirlik
INTERNET:	kirlik@chmsr.gatech.edu

------------------------------

Subject: Re: NN Question
From:    Fish-Guts <uwslh!lishka@speedy.wisc.edu>
Organization: U of Wisconsin-Madison, State Hygiene Lab
Date:    06 Mar 89 19:22:00 +0000 

> No man made neural networks have
>yet come close to modelling/mimicking human behavior, no matter what the level
>of abstraction we assume.  

     This all depends on what you claim is "human behavior."  Below is a
quote taken from a paper in which the authors describe a neural network that
they use to model the piriform (olfactory) cortex.  The neural network
contained about 300 artificial neurons, whereas the piriform cortex of a rat
contains
about 10^6 neurons.  In the paper, they show that their model does reproduce
certain key characteristics of piriform cortex (which is also found in
humans, but is usually studied in animals).  Presumably, this "behavior" of
piriform cortex also occurs in humans.  They have modeled this on a
relatively coarse level.

     Granted, this may not be what most consider "human behavior" as we all
see it, but it is behavior of the human brain (IMHO).  Although I think
models of this sort are rare at this point in time, I would expect that more
will appear in the future.

>>I know that the validity of this question depends upon the
>>"level" at which we interpret our models, but, after all,
>
>At no level is this valid, i believe.

     As a student of AI, with a couple semesters of neurobiology under my
belt, I disagree.  At certain "lower" levels there have been some
interesting neural nets that model certain low-level behaviors in animals.

     As a practical example, I offer this quote from the abstract of a paper
by Matthew A. Wilson and James M. Bower titled "A Computer Simulation of
Olfactory Cortex with Functional Implications for Storage and Retrieval of
Olfactory Information." The authors were *neurobiology* graduate students of
one of my professors, Lewis B.  Haberly.

	Based on anatomical and physiological data, we have 
	developed a computer simulation of piriform (olfactory)
	cortex which is capable of reproducing spatial and
	temporal patterns of actual cortical activity under a 
	variety of conditions. [...]  We have shown that 
	different representations can be stored with minimal
	interference, and that following learning these
	representations are resistant to input degradation,
	allowing reconstruction of a representation following
	only a partial presentation of an original training
	stimulus.  Further, we have demonstrated that the
	degree of overlap of cortical representations for
	different stimuli can also be modulated.  For instance
	similar input patterns can be induced to generate
	distinct cortical representations (discrimination),
	while dissimilar inputs can be induced to generate
	overlapping representations (accommodation).  Both
	features are presumably important in classifying
	olfactory stimuli.
	
This quote is reproduced without permission.  At the time the paper was
written, the authors could be reached at the Computation and Neural Systems
Program, Division of Biology, California Institute of Technology, Pasadena,
CA 91125

> So while [ANNs] are real neat computational tools, they are far
>from representing real neural processes.

     I disagree; I feel that the above paper proves my point.  One
interesting point, however, is that the neural network used in the above
model used artificial neurons that modeled behavior of individual neurons in
the piriform cortex, complete with considerations of membrane potential,
delay due to the velocity of the signal through the axon, and time course,
amplitude, and waveform due to particular ionic channel types (of which Na+,
Cl-, and K+ channel types were included in the model).  In other words, the
model was *NOT* a simple neural network based on simple "units" or
McCulloch-Pitts neurons.  However, it *was* a neural network, although its
artificial neurons were more complex than most used today.
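
[[ Moderator's note: the Wilson-Bower model is far too large to
reproduce here, but the Python sketch below of a single leaky
integrate-and-fire unit with a conduction delay gives a flavor of what
"more complex than a McCulloch-Pitts unit" means: membrane potential
and spike timing evolve in (discretized) continuous time.  All the
constants are illustrative, not physiological. ]]

# A leaky integrate-and-fire unit with an axonal conduction delay.
# Much simpler than the conductance-based units in the model quoted
# above, but already richer than a McCulloch-Pitts threshold gate.
import numpy as np

dt      = 0.1     # ms, integration step
tau_m   = 10.0    # ms, membrane time constant
v_rest  = -65.0   # mV, resting potential
v_th    = -50.0   # mV, spike threshold
v_reset = -70.0   # mV, reset after a spike
delay   = 2.0     # ms, axonal conduction delay

t = np.arange(0.0, 100.0, dt)
# injected drive, already scaled by membrane resistance (so in mV/ms),
# switched on between 20 ms and 80 ms
drive = np.where((t > 20) & (t < 80), 2.0, 0.0)

v = v_rest
arrivals = []                       # spike arrival times downstream
for k, now in enumerate(t):
    v += dt * (-(v - v_rest) / tau_m + drive[k])
    if v >= v_th:                   # threshold crossed: fire and reset
        arrivals.append(now + delay)
        v = v_reset

print("spikes arrive downstream at (ms):",
      [round(s, 1) for s in arrivals])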

				.oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp


------------------------------

Subject: Flexibility of nervous systems
From:    Carol Freinkel <sco!carolf@uunet.uu.net>
Organization: The Santa Cruz Operation, Inc.
Date:    06 Mar 89 20:14:20 +0000 


>the human brain is pretty much uniform. This 
>fact becomes dramatically obvious in the cases of people who have
>had accidents resulting in the damage of sections of the brain.
>If the damaged section performed a specialized function, then
>for awhile, the person will not be able to perform that action.
>After some time, the rest of the brain is able to assimilate
>the functions performed by the damaged section and the person
>is able to function normally again. 

This is only partially true.  There are many areas of the brain which cannot
be replaced if damaged.  If the vision-processing region at the back of the
brain is removed, the person will be blind.  Also, if both sides of the
hippocampus are removed, the person will not be able to retain long-term
memory anymore.  (This operation was performed only once; when the damage
it causes was realized, it was never done again.  I read about this case
in a neurobiology class.  This man lives in a perpetual present.  If you
were to visit him, leave the room, and walk back in, he wouldn't know you.)
And the human brain is definitely *not* uniform.  There is an elaborate
architecture on both the macroscopic and microscopic level.  The list of
names which describes these structures is frighteningly long.

There are many areas of the brain which are mostly inflexible.  On the other
hand, it is true that people can sustain large amounts of damage to the
frontal lobes with (apparently) minimal effects.  Also, some children born
with brains compressed/damaged from hydrocephaly (water on the brain) are
quite intelligent.  When damage occurs at a younger age, adaptation is more
likely to occur.

Generally speaking, animals with larger brains have more flexibility.  If
the eye of a newt is rotated, the newt will perpetually move its head up to
reach food which is below it and vice versa.  When this experiment was done
on kittens, the kittens eventually adapted to the change and were able to
move appropriately.  With some animals, the circuitry is essentially
"hard-wired."  (As one anatomy professor put it, there might as well be
pulleys and levers in there.)  Creatures with more complex nervous systems
have more flexibility and possibility of "reprogramming."

Carol Freinkel
carolf@sco.COM
...!uunet!sco!carolf

------------------------------

Subject: Re: Re: NN Question (how can a few neurons mimic the brain?)
From:    ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date:    Tue, 07 Mar 89 01:00:06 +0000 

   It is indeed correct that the brain is capable of changing the functions
of some of its different parts to a limited extent (the classical example is
loss of a nerve going to the skin of the hand, and neurons which originally
were connected strongly to the part of the skin served by that nerve connect
themselves to nerves going to other parts of the hand).

   However, the brain -does- have a great deal of differentiation (just
look at cerebellum vs brain stem vs cerebral cortex).  In addition, large
enough damage is irreparable (such as damage to Broca's area, involved in
speech production, leading to Broca's aphasia).

   Moreover, after learning, neurons "differentiate" across the network.
Look at the hidden units of a feedforward backpropagated NN.  Each hidden
unit will tend to code for a certain part of the input signal.  If we excise
a neuron or two, we typically have enough distributed representation for the
NN to still work.  If we excise more, we have to re-teach the network.
Eventually, if we excise enough neurons, the network will not be able to
work at all (with size depending on the complexity of the problem, which is
also closely related to the number of patterns to be coded for and the size
of the input field).  There is, by the way, a whole science to figuring out
how many hidden units to excise from a network to maintain the minimum
number of neurons and still have the NN operate properly.
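
[[ Moderator's note: the excision experiment is easy to try at home.
The Python sketch below trains a deliberately oversized backprop net on
XOR, then zeroes out hidden units one at a time without retraining;
error typically stays low for the first few excisions and then
collapses.  The task and all sizes are my own illustrative choices. ]]

# Train an oversized backprop net on XOR, then "excise" hidden units.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)        # XOR targets

H = 8                                            # deliberately oversized
W1 = rng.normal(0, 1, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                           # plain backprop
    h = sig(X @ W1 + b1); out = sig(h @ W2 + b2)
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

def error(mask):
    """Mean absolute error with hidden units where mask == 0 excised."""
    h = sig(X @ W1 + b1) * mask
    return float(np.abs(sig(h @ W2 + b2) - Y).mean())

mask = np.ones(H)
print("units left   error")
for k in range(H):
    print(f"{int(mask.sum()):10d}   {error(mask):.3f}")
    mask[k] = 0.0                  # excise one more unit (index order)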

   (I personally have a gut feeling that genetic algorithms will help NN
researchers "evolve" a lot of NN structure, in a similar way to what
happened to humans.)

>I believe that a neural network should be able to learn to do 
>anything and still remain flexible enough to deal with abrubt 
>changes as the human brain is capable of doing. 

      Ah, it all depends on the learning algorithm.  In fact, it may be that
there are meta-learning rules in the brain (i.e. a network which is taught
using neuron-level learning rules to "learn" on a larger scale, including
input selectivity, some amount of theorem proving, and a lot of other
"symbolic AI" stuff that people think NN's will replace, albeit on a
massively-parallel fault-tolerant scale).

Thomas Edwards
ins_atge@jhuvms (BITNET)
tedwards@nrl-cmf.arpa

#include<disclaimer.hs>   /* ported to connection machine */

------------------------------

Subject: Re: Re: bottom-up (was Re: NN Question)
From:    ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date:    Tue, 07 Mar 89 01:18:57 +0000 

> Neural nets take inputs and associate them with outputs, nothing more.
> They do not reflect even the simplest levels of cognition!

   While it is definitely true that we haven't even gotten anywhere close
to a 10^13 neuron device like humans, one could very well argue the brain is
also a device which associates inputs with memory and produces an output.
Mind you, the transfer function is very complex :-).  Recurrent neural
networks are capable of holding memories in neural "loops," and there are
also algorithms for learning in a continually running NN (Williams and
Zipser, "A Learning Algorithm for Continually Running Fully Recurrent
Neural Networks," UCSD ICS 8805, Oct. 1988).
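
[[ Moderator's note: not the Williams-Zipser algorithm itself, but the
"memory in a loop" idea fits in a few lines of Python.  One unit with a
strong self-connection latches a transient input and holds it until a
reset pulse arrives; the weights are set by hand here, not learned. ]]

# A one-unit recurrent "loop" acting as a set/reset latch.
import math

sig = lambda z: 1.0 / (1.0 + math.exp(-z))

w_self  =  10.0     # strong positive self-connection sustains activity
w_set   =  10.0     # transient "set" input
w_reset = -20.0     # transient "reset" input
bias    =  -5.0

set_in   = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]   # set pulse at t = 2
reset_in = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]   # reset pulse at t = 7

a = 0.0
for t in range(10):
    a = sig(w_self * a + w_set * set_in[t]
            + w_reset * reset_in[t] + bias)
    print(f"t={t}  activation={a:.3f}")
# activation jumps to ~1 at t=2, holds, and drops to ~0 at t=7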
 
[[ Regarding discrete vs. continuous automata ]]

Pineda, in "Dynamics and Architecture for Neural Computation", Journal of
Complexity, Sept. 1988, points out that time is very important to NN's,
especially if we want to store multiple pattern associations in them.  He
proposes a formalization for recurrent NN's dealing with them as dynamical
systems, and can thus bring them into continuous time instead of discrete
time.  (I think people working with recurrent nets should look at this
paper...It didn't seem to draw the attention it deserved).

Another major drawback to current neural networks is that human NN's are a
product of evolutionary search.  There are, however, a large bunch of people
working with "neuro-evolution" now, and maybe we'll see some neat stuff.
Also there is a lot of neat recurrent stuff now which people who have only
read PDP have missed out on.  Someone needs to write a good book aimed at
Joe Programmer concerning these issues (or has someone, and I have just
missed it?)

Thomas Edwards
ins_atge@jhuvms (BITNET)
tedwards@nrl-cmf.arpa

#include<disclaimer.hs>   

------------------------------

Subject: Re: NN Question
From:    The Usenet <news@psuvax1.cs.psu.edu>
Organization: Penn State University
Date:    07 Mar 89 17:01:17 +0000 

I think [sim!brp is] guilty of over-stating the case for [his] discipline.
Real neural systems are real neural systems.  They are not "nonlinear
continuous-time multi-dimensional vector spaces", although it may be
constructive to model them as such.

Real neural systems can also be modelled as (borrowing your terminology)
"discrete time automata".  One must distinguish between reality and the
scientific model of choice.  I believe that you meant to say that modelling
real neural systems as "nonlinear continuous-time multi- dimensional vector
spaces" leads to a better understanding of real neural systems than
modelling them as "discrete time automata".

The discrete vs continuous competition is not new.  You sit on the same side
of the fence as many distinguished people.  I lean towards the discrete side
myself, although I am open to argument.

I have not seen any arguments which convince me that the analog behaviour
that we observe in real neural systems is of fundamental computational
importance.  Some of the arguments that I have seen have been based on the
premise that the real world is analog.  Unfortunately, the real world
appears to be discrete.  By this I mean that scientific models which are
based on discrete units (atoms, quarks etc.) give a good understanding of
observable phenomena.  Real numbers, continuous functions etc., are
abstractions which help us deal with the fact that the number of discrete
units is larger than we can deal with comfortably.

There are (at least) two objections to the classical automata-theoretic
view of neural systems.  One is that neural systems are not clocked (I
presume that this is what you mean by "continuous time"); the other is
that neurons have analog behaviour.  Two burning questions which, in my
mind, are among
the most important open questions in neural networks research are:

1.  Is unclocked behaviour important?  Was the non-availability
    of a system clock something that Nature had to fight to overcome,
    or did it bring inherent advantages?
2.  Is analog behaviour important?  If I restrict neuron excitation
    values to 6 decimal places, will the networks still function
    correctly?  More importantly, how does the precision scale with
    the number of neurons and/or connections?

Needless to say, these questions are not new.  I am not claiming to be the
first person to have thought of them.  Some information is known.  I am
planning two papers this year (not yet written up) which address aspects of
them.  The Truth (if it exists) still remains to be found.
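
[[ Moderator's note: question 2 invites a cheap experiment.  The Python
sketch below rounds a network's weights and activations to d decimal
places and measures how far the outputs move.  Random weights stand in
for a trained net, so this says nothing about how precision must scale
with network size; it only shows how one might begin to probe the
question. ]]

# Probe: how much do a net's outputs move when every weight and
# activation is rounded to d decimal places?
import numpy as np

rng = np.random.default_rng(2)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = 10, 50, 5
W1 = rng.normal(0, 1, (n_in, n_hid))    # random stand-in for a
W2 = rng.normal(0, 1, (n_hid, n_out))   # trained network
x = rng.normal(0, 1, (100, n_in))       # 100 probe inputs

def forward(decimals=None):
    if decimals is None:
        q = lambda a: a                 # exact arithmetic
    else:
        q = lambda a: np.round(a, decimals)
    h = q(sig(x @ q(W1)))               # quantize weights + activations
    return q(sig(h @ q(W2)))

exact = forward()
for d in (6, 4, 2, 1):
    dev = np.abs(forward(d) - exact).max()
    print(f"{d} decimal places: max output deviation {dev:.2e}")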

-------------------------------------------------------------------------------
			Ian Parberry
  "The bureaucracy is expanding to meet the needs of an expanding bureaucracy"
  ian@psuvax1.cs.psu.edu  ian@psuvax1.BITNET  ian@psuvax1.UUCP  (814) 863-3600
 Dept of Comp Sci, 333 Whitmore Lab, Penn State Univ, University Park, Pa 16802

------------------------------

Subject: Re: Flexibility of nervous systems
From:    Jonathan Eckrich <astroatc!johne@speedy.wisc.edu>
Organization: Astronautics Technology Cntr, Madison, WI
Date:    14 Mar 89 21:21:47 +0000 

(Sean Brunnock) writes:

>>the human brain is pretty much uniform. This 

(Carol Freinkel) replies:

>This is only partially true.  There are many areas of the brain 
>which cannot be replaced if damaged.  If the vision-processing
>region at the back of the brain is removed, the person will be
>blind. 

I recently read an article (Sorry, but I cannot recall the name) that
discussed operations performed on infant ferrets.  The optic nerves were
rerouted to what should be the part of the brain that handles hearing.

As the baby ferrets grew and experienced their environment, they developed
essentially normal sight - I don't know the quality of the surgeon's work in
reattaching the optic nerves to the auditory section of the brain.

This suggests to me that certain parts of the brain are uniform at birth,
but as experiences accumulate, new synaptic connections are made, and that
these parts of the brain become specialized by virtue of the unique
processing that they must learn.

						Jon Eckrich
				   (rutgers, ames)!uwvax!astroatc!johne
					    nicmad!astroatc!johne

------------------------------

Subject: Re: Flexibility of nervous systems
From:    vickroy@mis.ucsf.edu (Chip Vick Roy)
Organization: UCSF Medical Information Sciences
Date:    Wed, 15 Mar 89 16:49:46 +0000 

The article you refer to is:
	"Experimentally Induced Visual Projections into Auditory
	Thalamus and Cortex", by Mriganka Sur, Preston E. Garraghty
	and Anna W. Roe, Science v242, Dec 9, 1988, p1437-41.

This is a marvelous study which demonstrates significant plasticity 
of the developing nervous system, even across different sensory modalities.

------------------------------

Subject: Re: Re: bottom-up (was Re: NN Question)
From:    brp@sim.uucp (bruce raoul parnas)
Organization: University of California, Berkeley
Date:    Wed, 15 Mar 89 17:28:45 +0000 


Actually my discipline is more neurobiology than it is nonlinear systems, 
although i do think they are a good model.  you are right, though, that this
is only a model.  what i meant to say was that i believed that this was a
better modelling approach than automata theory.

The world is (possibly) discrete on a very fine level.  first, it seems to
me that researchers keep finding yet smaller particles into which matter is
subdivided: maybe it really is a continuum?  second, even assuming that it
is discrete, this exists on such a fine level that i believe it is
irrelevant here.  modelling of neural systems in terms of their atomic
properties is, i believe, quite the unenviable task!

>Real numbers, continuous functions etc., are abstractions which help
>us deal with the fact that the number of discrete units is larger
>than we can deal with comfortably.

right.  and in most physical systems we may, for our understanding, treat them
as essentially analog since we simply can't deal with the complexity presented
by the true (?) discrete nature.

>There are (at least) two objections to the classical automata-
>theoretic view of neural systems.  One is that neural systems
>are not clocked (I presume that this is what you mean by
>"continuous time"), and that neurons have analog behaviour.

that is precisely what i meant.  neurons each evolve on their own, independent
of system clocks.

i believe that a system clock would be more of a hindrance than a help.  
studies with central pattern generators and pacemaker activity (re: the heart)
show clearly that system clocks are not unavailable.  if evolution had found
a neural system clock advantageous, one could have been created.  i feel,
however, that the continuous-time evolution of neural systems imbues them
with their remarkable properties.

>2.  Is analog behaviour important?  If I restrict neuron excitation
>    values to 6 decimal places, will the networks still function
>    correctly?  More importantly, how does the precision scale with
>    the number of neurons and/or connections?

I don't think that such a fine level of precision is necessary in neural
function, i.e. six places would likely be enough.  but since digital circuitry
is actually made from analog circuit elements limited to certain regions of
operation, why go to this trouble in real neural systems when analog seems
to work just fine?

bruce
brp@sim

------------------------------

Subject: Re: Re: bottom-up (was Re: NN Question)
From:    brp@sim.uucp (bruce raoul parnas)
Organization: University of California, Berkeley
Date:    Wed, 15 Mar 89 17:51:14 +0000 


>     This all depends on what you claim is "human behavior."  

By "behavior" i refer to the underlying strategy, if you will, governing the
actions, not simply the actions themselves.  Given a set of inputs and a set
of outputs it is quite easy to construct, for example, a simple digital
circuit made from combinational logic which can perform the required tasks,
yet no one would argue that this, in any way, represents the brain.
Cognition is something we do not yet understand and we can do little more
than model the responses rather than the process.  A small child can repeat
words that he/she cannot understand; is this an understanding of the
language?

I think we're interpreting the word "level" in the original posting
differently.  I believed it referred to levels of interpretation of a
cognitive model as opposed to modeling of lower-level functions.  i do agree
that some of these latter functions are quite well understood and have been
modeled well.  Prime examples of this are the mechanisms in the sensory
periphery (see, for example, Feld, et al in Advances in Neural Information
Processing Systems due around April).  I think that models of cognition,
however, are not very useful at any level toward an understanding of the
"big picture" yet, although i hope that further work will change this.

>     As a practical example, I offer this quote from the abstract of a

[quote concerning modeling of the olfactory system]

the paper you reference (removed for brevity) is quite interesting.  i still
feel that it models the results rather than the cause of the behavior, but
it is, i believe, a step in the right direction.  the inclusion of the
temporal aspect of neurons is crucial to a realistic model.

>> So while [ANNs] are real neat computational tools, they are far
>>from representing real neural processes.

>     I disagree; I feel that the above paper proves my point.  
> ... In other words, the model was
>*NOT* a simple neural network based on simple "units" or
>McCulloch-Pitts neurons.  However, it *was* a neural network, although
>its artificial neurons were more complex than most used today.

I misspoke.  what i meant to say was that neural networks are far from
modeling COGNITIVE neural processes such as memory and the like.  the
peripheral sensory system, including olfaction, is quite a bit easier to
model (as mentioned above), and the quote you reproduced corroborates this.
i have no argument against these models, only those of higher cortical
function.

bruce (brp@sim)


------------------------------

Subject: Re: Re: bottom-up (was Re: NN Question)
From:    andrew@nsc.nsc.com (andrew)
Organization: National Semiconductor, Santa Clara
Date:    Wed, 15 Mar 89 20:24:46 +0000 

[[ Regarding system clocks ]]

Having just browsed through "Fractals Everywhere" by Barnsley, I'm reminded
of a comment about the heart and clocks in there. Loosely paraphrased, it is
stated that a healthy heart exhibits a measurable degree of chaotic
behaviour - i.e. the fractal dimension of some representation of the
heartbeat over time - whereas a low or zero fractal dimension (a very steady
beat) is an excellent indicator that something unhealthy - an attack or
arrhythmia - is imminent.

This may say something in general about organic systems as you've been
discussing; that exact synchronisation is not something desirable.

Further, I believe Walter Friedman has presented recently on information
processing _in vivo_ where he postulates that chaotic attractors are a key
element in biological information processing. I'm afraid that's as much
detail as I have - I'm not "into chaos".

Therefore, although, as you say, locality of processing tends to exclude a
system clock approach, the above give perhaps stronger reasons as to why a
man-made ANN would actually be inferior, were it to use a system clock.

While I'm here, I'll mention something else from biology, which filled me
with great dismay(!) - this month's Scientific American's feature on the
brain's star-like "astrocyte" cells. Their role becomes important in direct
proportion to the amount of time they are investigated; akin to glial cells,
I believe.  Now the diagram of how the astrocytes connect to the neuron net
is frightening: they hook in everywhere (neuron body, node of Ranvier on
the axon myelin sheath, the bare axon, capillaries, and the cells at both
the surface (meningeal) and the centre (water-bearing) of the whole brain).
This means computationally that it's a whole new ball game, I imagine...
anyone have any comments?
==========================================================================
	DOMAIN: andrew@logic.sc.nsc.com  
	ARPA:   nsc!logic!andrew@sun.com
	USENET: ...{amdahl,decwrl,hplabs,pyramid,sun}!nsc!logic!andrew

	Andrew Palfreyman, MS D3969		PHONE:  408-721-4788 work
	National Semiconductor				408-247-0145 home
	2900 Semiconductor Dr.			there's many a slip
	P.O. Box 58090				'twixt cup and lip
	Santa Clara, CA  95052-8090
==========================================================================

------------------------------

Subject: Re: Re: bottom-up (was Re: NN Question)
From:    ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date:    Thu, 16 Mar 89 21:44:38 +0000 

In article <10192@nsc.nsc.com> andrew@nsc.nsc.com (andrew) writes:
[concerning clocked NN's]

There is a big concern over synchronicity of NN's.  Two points come to mind:

1) Back-prop in particular is an approximation of gradient-descent of
   the error surface, and there are a few problems caused by finitely
   small quanta of learning steps...but that's what you get for not
   spending the time to search the entire error surface!
   But it would be nice if a method can be determined which allows for
   infinitely-small learning steps at a reasonable speed.
   Pineda claims his recurrent learning algorithm is "presented
   in a formalism appropriate for implementation as a physical
   nonlinear dynamical system," and thus he is able to avoid
   "certains kinds of oscillations which occur in discrete time
   models usually associated with backpropogation."  

2) To a limited extent, using "delay neurons," a synchronous neural
   network can approach a non-synchronous one.  (A small sketch of why
   clocking matters is given below.)
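
[[ Moderator's note: why clocking matters at all shows up in even a
two-unit Hopfield-style net, sketched in Python below.  With symmetric
weights, unclocked one-unit-at-a-time updates always settle, while
lockstep synchronous updates can oscillate forever.  The net is a
textbook toy, not anything from the postings above. ]]

# Synchronous vs. asynchronous update in a two-unit "flip-flop."
import numpy as np

W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])             # mutual inhibition, symmetric
sign = lambda z: np.where(z >= 0, 1, -1)

s = np.array([1, 1])                    # a state both units want to leave
print("synchronous (lockstep):")
for t in range(4):
    s = sign(W @ s)                     # all units update at once
    print(" ", s)                       # oscillates: [-1,-1], [1,1], ...

rng = np.random.default_rng(3)
s = np.array([1, 1])
print("asynchronous (one random unit per step):")
for t in range(4):
    i = rng.integers(2)
    s[i] = sign(W @ s)[i]               # unclocked single-unit update
    print(" ", s)                       # settles at [1,-1] or [-1,1]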

>While I'm here, I'll mention something else from biology, which filled me
>with great dismay(!) - this month's Scientific American's feature on the
>brain's star-like "astrocyte" cells. Their role becomes important in direct
>proportion to the amount of time they are investigated; akin to glial cells,
>I believe.

Ah, the important thing to remember is that NN's are based upon mathematical
solutions to the problem of getting the proper output from a network for a
certain input by changing the network weights...they might at some level of
abstraction resemble real neural networks, but lack neuropharmacology (which
is _very_ important to human cognition!), and a whole host of other
qualities.  (The brain also has many different styles of neurons!).
   This is _not_ to say that human brain study is irrelevant to NN's, but
that NN's are going to be a simpler structure than the brain because they
exist (currently...this may change) in the realm of information instead of
being physical things which need support, oxygen, nutrients, immune systems,
etc.

-Thomas Edwards

------------------------------

Subject: Re: Re: bottom-up (was Re: NN Question)
From:    ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date:    Thu, 16 Mar 89 21:51:19 +0000 

gack....the Pineda reference is
  "Dynamics and Architecture for Neural Compuation", Fernando J.
Pineda, Journal of Complexity 4,216-245 (1988)  Academic Press
(Harcourt Brace Jovanovich)

------------------------------

Subject: Re: Re: bottom-up (was Re: NN Question)
From:    brp@sim.uucp (bruce raoul parnas)
Organization: University of California, Berkeley
Date:    Thu, 16 Mar 89 22:13:35 +0000 

In article <10192@nsc.nsc.com> andrew@nsc.nsc.com (andrew) writes:
>Further, I believe Walter Friedman has presented recently on information
                             ^ ^   (Freeman)
>processing _in vivo_ where he postulates that chaotic attractors are a
>key element in biological information processing. I'm afraid that's as much

This is all presuming that you believe it is possible to experimentally
distinguish between chaos and noise, which is also assumed to be present in
the nervous system.  Personally i don't have much faith in freeman's 
assertions concerning chaos, but i'm also not an expert in the area.

bruce
(brp@sim)

------------------------------

End of Neurons Digest
*********************

ian@shire.cs.psu.edu (Ian Parberry) (04/12/89)

Bruce, thanks for your interesting reply.  I have been away from the net
for a while (system installation), sorry if I am a bit out-of-date.

>Subject: Re: Re: bottom-up (was Re: NN Question)
>From:    brp@sim.uucp (bruce raoul parnas)
>Organization: University of California, Berkeley
>Date:    Wed, 15 Mar 89 17:28:45 +0000 

I think we are basically agreed that a statement like "the world is
discrete" or "the world is analog" gives us little reason to model
neural networks as discrete or analog.

>>Real numbers, continuous functions etc., are abstractions which help
>>us deal with the fact that the number of discrete units is larger
>>than we can deal with comfortably.
>
>right.  and in most physical systems we may, for our understanding, treat them
>as essentially analog since we simply can't deal with the complexity presented
>by the true (?) discrete nature.

I'm not convinced.  Computational complexity theory gives us tools for
dealing with discrete resources (time, memory, hardware) which are
too large to handle individually.  There is no need to treat them as
continuous.

>>There are (at least) two objections to the classical automata-
>>theoretic view of neural systems.  One is that neural systems
>>are not clocked (I presume that this is what you mean by
>>"continuous time"), and that neurons have analog behaviour.
>
>that is precisely what i meant.  neurons each evolve on their own, independent
>of system clocks.

Yes?  I didn't think the evidence was in on that.  I recently heard of
a paper that claimed a large amount of synchronicity in neuron firings.
I don't remember the author.  I'll send you email if I remember.

>i believe that a system clock would be more of a hindrance than a help.  
>studies with central pattern generators and pacemaker activity (re: the heart)
>show clearly that system clocks are not unavailable.  if evolution had found
>a neural system clock advantageous, one could have been created.  i feel,
>however, that the continuous-time evolution of neural systems imbues them
>with their remarkable properties.

You are entitled to your opinion.  You are reasoning by analogy here.
Could there REALLY be a wetware system clock?  You may be missing
implementation details that make it impossible.  For example, could
the correct period (milliseconds) be achieved?  And could it be
communicated reliably and in small hardware to all neurons?
I think the remarkable properties of neural networks come from other
sources; or perhaps we have different definitions of "remarkable".

Here is another way of looking at it.  When one neuron fires and its
neighbour is not receptive (building up charge) there is a fault.
Faults are relatively infrequent (receptive time is larger than
nonreceptive time).  The architecture is fault-tolerant.  That's
why we observe that the brain is fault-tolerant when some of its
neurons are destroyed.  It has to be in order to get around the lack
of a system clock.  Neural architectures are better at fault-tolerance
than von Neumann ones (at least, we can prove this when the thresholding
is physically separated from the summation of weights, as seems to be
the case for biological neurons).

>>2.  Is analog behaviour important?  If I restrict neuron excitation
>>    values to 6 decimal places, will the networks still function
>>    correctly?  More importantly, how does the precision scale with
>>    the number of neurons and/or connections?
>
>I don't think that such a fine level of precision is necessary in neural
>function, i.e. six places would likely be enough.  but since digital circuitry
>is made actaully from analog circuit elements limited to certain regions of
>operation, why go to this trouble in real neural systems when analog seems
>to work just fine?

If six decimal places is enough, then we can model everything as integers.
Why do this?  It is easier to analyze.  Combinatorics is easier than
analysis (despite Hecht-Nielsen's claim in the first San Diego NN
conference that the opposite is true).  I don't care if the real neural
systems seem to behave in an analog fashion.  If it seems that the
_computationally important_ things going on are really discrete (and
you seem to have agreed that this is the case), then our model should
reflect this.  I'm not necessarily saying that we should _build_ them
that way.  That's another question.  But perhaps we ought to _think_ of
them that way.  To use an analogy, we don't usually think of a computer
as having infinite memory, but it certainly helps to program one as
if it were the case.  For a complexity theorist, infinite means "adequate
for day-to-day use".  This is where the classical attack on theoretical
computer science (my TRaSh-80 is not a Turing machine) breaks down.
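
[[ Ed.: the "six decimal places means integers" point can be made
concrete with fixed-point arithmetic: store each value as an integer
count of 10^-6 units and do the weighted sum entirely in integers.  A
toy Python sketch, with values chosen only for illustration: ]]

# Fixed-point weighted sum: 6 decimal places as scaled integers.
SCALE = 10 ** 6                     # 10^-6 resolution

def to_fixed(v):
    return round(v * SCALE)         # real -> integer count of 1e-6

def weighted_sum(weights, inputs):
    # all-integer inner product; products are in units of 1e-12,
    # one division at the end brings us back to units of 1e-6
    acc = sum(w * x for w, x in zip(weights, inputs))
    return acc // SCALE             # floor division; a real design
                                    # would round instead

w = [to_fixed(v) for v in (0.25, -1.5, 0.333333)]
x = [to_fixed(v) for v in (0.8, 0.1, 0.5)]

s = weighted_sum(w, x)
print("integer result:", s)             # in units of 1e-6
print("as a real     :", s / SCALE)     # ~0.216666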

I think that, despite the bad press that theoretical computer science
gets from some NN researchers (I've heard many unprofessional statements
made in conference presentations by people who should know better),
complexity theory has something to contribute.  So do other disciplines.
I'm just a little tired of people closing doors in my face.  It has
become fashionable to disparage TCS (following the bad examples mentioned
three sentences ago).  Sorry if my knee-jerk reaction to your posting
was a little harsh.
-------------------------------------------------------------------------------
			Ian Parberry
  "The bureaucracy is expanding to meet the needs of an expanding bureaucracy"
  ian@theory.cs.psu.edu  ian@psuvax1.BITNET  ian@psuvax1.UUCP  (814) 863-3600
 Dept of Comp Sci, 333 Whitmore Lab, Penn State Univ, University Park, Pa 16802