[comp.ai.neural-nets] NN Question

kirlik@hms3 (Alex Kirlik) (03/02/89)

For those interested in the potential psychological/
physiological significance of neural-net models:

Has anyone else been puzzled by the following phenomenon?
(I haven't found it discussed in the literature).

Why should a net with only a few dozen neural units be
successful at mimicking human behavior that is presumably
the result of the activation of a tremendous number of
neurons?  That is, why should a small number of units
be successful at simulating the behavior of a large 
number of neurons?

I know that the validity of this question depends upon the
"level" at which we interpret our models, but, after all,
these units are modeled to mimic the behavior of individual
neurons, aren't they?  I am aware of the drastic simplifications
that are made but this doesn't change the intended referents of
our theoretical objects.

One answer would seem to be that there is a tremendous amount
of additional processing in the brain that is extraneous to
the processing critical to the task being modeled, yet we are
only modeling this "critical" segment. For many reasons (that
could be discussed if necessary) I do not find this answer
particularly compelling.

A second answer might be that neural processing has 
self-similar properties.  That is, the behavior of neural
collectives shares properties with the behavior of individual
neurons. I find this answer to be interesting and attractive,
yet I know of no evidence for it.

A third answer might be to suggest that this is all unreasoned
drivel, since we don't want to interpret these models
realistically, anyway. 

It seems OK to go this way, but for those who don't, I suggest
that the question merits consideration. Or does it?

Thanks for reading,

Alex


Alex Kirlik

UUCP:	kirlik@chmsr.UUCP
        {backbones}!gatech!chmsr!kirlik
INTERNET:	kirlik@chmsr.gatech.edu

sbrunnoc@hawk.ulowell.edu (Sean Brunnock) (03/03/89)

From article <32125@gt-cmmsr.GATECH.EDU>, by kirlik@hms3 (Alex Kirlik):
> 
> Why should a net with only a few dozen neural units be
> successful at mimicking human behavior that is presumably
> the result of the activation of a tremendous number of
> neurons? 

  I don't see why not: programs such as Doctor, Racter, and
Eliza are also successful at mimicking human behavior without
the need for nets at all. The point that I am trying to make 
is that these programs simply mimic; they do not emulate the
human brain.

  I find that there are some people who are under the impression
that by linking together many specialized programs (a vision 
processor, a language processor,...), they will be able to create 
something akin to the human mind. I do not subscribe to this
theory because the human brain is pretty much uniform. This 
fact becomes dramatically obvious in the cases of people who have
had accidents resulting in the damage of sections of the brain.
If the damaged section performed a specialized function, then
for a while, the person will not be able to perform that action.
After some time, the rest of the brain is able to assimilate
the functions performed by the damaged section and the person
is able to function normally again. 

  I look at the market and current research and I see a lot of
neural network expert systems, handwriting recognizers, and
image processors. The term neural network here is very misleading.
I believe that a neural network should be able to learn to do 
anything and still remain flexible enough to deal with abrupt 
changes as the human brain is capable of doing. 

				Sean Brunnock

brp@sim.uucp (bruce raoul parnas) (03/03/89)

In article <32125@gt-cmmsr.GATECH.EDU> kirlik@hms3.gatech.edu (Alex Kirlik) writes:
>Why should a net with only a few dozen neural units be
>successful at mimicking human behavior that is presumably
>the result of the activation of a tremendous number of
>neurons?  That is, why should a small number of units

I beg to differ substantially on this claim.  No man-made neural networks have
yet come close to modelling/mimicking human behavior, no matter what the level
of abstraction we assume.  They do not reflect the temporal properties, and are
totally incapable of *MANY* of the things humans can do.  Neural nets take 
inputs and associate them with outputs, nothing more.  They do not reflect even
the simplest levels of cognition!

>
>I know that the validity of this question depends upon the
>"level" at which we interpret our models, but, after all,

At no level is this valid, i believe.
>
>One answer would seem to be that there is a tremendous amount
>of additional processing in the brain that is extraneous to
>the processing critical to the task being modeled, yet we are
>only modeling this "critical" segment. For many reasons (that

Natural selection would eliminate a great deal of "extraneous" processing.

I think that a great many people view neural networks as good models for what
goes on inside our heads.  Since these models are, mainly, discrete time
automata they do not reflect the fact that real neural systems are,
essentially, nonlinear continuous-time multi-dimensional vector spaces in
which the neurons evolve in time.  So while they are real neat computational
tools, they are far from representing real neural processes.

bruce (brp@sim)

andrew@nsc.nsc.com (andrew) (03/03/89)

Your question is both specious and deep, simultaneously! You might, in the
former case, ask what exactly is contributed to a lead guitar solo by the
wincing of the soloist... you'd be quite happy to hear the record (this goes
for cello, violin, etc., of course!), which implies that a robot might get
the final resultant nuances without the contributions of the total emotional
structure of the performer. I personally find the excruciating expressions
of rock groups hilarious, supporting the "specious" view!
Conversely, you can look at this as Carver Mead's plenary speech at the
IEEE Conference did - the tip of an iceberg of an immense cognitive system.
That's why, to avoid the "AI trap", it's maybe best to start bottom-up,
rather than the heretofore conventional psychological/serial-symbolic
approach of top-down (macroscopic) behavioural analysis. 
I guess a lot of this has to do with how interested you are with the
actual dynamics of learning. Once learned, things get quite trivial.

===========================================================================
** NOTE - DO NOT USE HEADER FOR REPLY, BUT THIS SIGNATURE ** 
	Andrew Palfreyman, MS D3969		PHONE:  408-721-4788 work 
	National Semiconductor				408-247-0145 home 
	2900 Semiconductor Dr.			there's many a slip
	P.O. Box 58090				'twixt cup and lip
	Santa Clara, CA  95052-8090

	DOMAIN: andrew@logic.sc.nsc.com  
	ARPA:   nsc!logic!andrew@sun.com
	USENET: ...{amdahl,decwrl,hplabs,pyramid,sun}!nsc!logic!andrew
===========================================================================

u-jmolse%sunset.utah.edu@wasatch.UUCP (John M. Olsen) (03/03/89)

kirlik@hms3.gatech.edu (Alex Kirlik) writes:
>Why should a net with only a few dozen neural units be
>successful at mimicking human behavior that is presumably
>the result of the activation of a tremendous number of
>neurons?  That is, why should a small number of units
>be successful at simulating the behavior of a large 
>number of neurons?

>A second answer might be that that neural processing has 
>self-similar properties.

>Alex Kirlik  UUCP: kirlik@chmsr.gatech.edu {backbones}!gatech!chmsr!kirlik

I've noticed that many natural things have a self-similar property which
looks quite a bit like what you are talking about.  Just as a very simple 
example, look at some birds flocking as they fly.  Each one is a distinct 
entity, yet they perform flying maneuvers as if they were each part of one 
larger entity.  If you're interested in this, see the ACM SIGGRAPH '87 
conference proceedings, and look up Craig W. Reynolds' paper on "Flocks, 
Herds, and Schools:  A Distributed Behavior Model" where he models how 
critters group as they move.
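The effect is easy to reproduce in miniature: each critter follows a few
local rules and the flock emerges with no global controller.  A toy Python
sketch of the idea (mine, not Reynolds' actual model; all steering gains
and constants are arbitrary):

# Toy boids-style flock: each bird steers by three local rules
# (cohesion, separation, alignment).  Hypothetical sketch, not
# Reynolds' implementation.
import random

def step(birds, dt=0.1):
    new = []
    for i, (pos, vel) in enumerate(birds):
        others = [b for j, b in enumerate(birds) if j != i]
        # cohesion: steer gently toward the flock centroid
        cx = sum(p[0] for p, _ in others) / len(others)
        cy = sum(p[1] for p, _ in others) / len(others)
        ax, ay = (cx - pos[0]) * 0.01, (cy - pos[1]) * 0.01
        # separation: steer away from very close neighbors
        for p, _ in others:
            dx, dy = pos[0] - p[0], pos[1] - p[1]
            if dx * dx + dy * dy < 1.0:
                ax += dx * 0.05
                ay += dy * 0.05
        # alignment: match the average heading of the others
        vx = sum(v[0] for _, v in others) / len(others)
        vy = sum(v[1] for _, v in others) / len(others)
        ax += (vx - vel[0]) * 0.05
        ay += (vy - vel[1]) * 0.05
        nv = (vel[0] + ax, vel[1] + ay)
        new.append(((pos[0] + nv[0] * dt, pos[1] + nv[1] * dt), nv))
    return new

birds = [((random.random() * 10, random.random() * 10),
          (random.random() - 0.5, random.random() - 0.5))
         for _ in range(20)]
for _ in range(100):
    birds = step(birds)
print("first bird after 100 steps:", birds[0])

No bird knows about the flock; the grouping is entirely a side effect of
the local rules.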

Just as an off-the-cuff observation, it looks to me like any gathering of
(supposedly) independently behaving things (birds, people, models of 
neurons) can be looked at as a larger entity.  Such an entity could 
actually seem to be less complex than its parts if they interact in 
just a small number of ways.

/\/\ /|  |    /||| /\|       | John M. Olsen, 1547 Jamestown Drive  /\/\
\/\/ \|()|\|\_ |||.\/|/)@|\_ | Salt Lake City, UT  84121-2051       \/\/
/\/\  |  u-jmolse%ug@cs.utah.edu or ...!utah-cs!utah-ug!u-jmolse    /\/\
\/\/             "A full mailbox is a happy mailbox"                \/\/

demers@beowulf.ucsd.edu (David E Demers) (03/03/89)

In article <32125@gt-cmmsr.GATECH.EDU> kirlik@hms3.gatech.edu (Alex Kirlik) writes:
->Has anyone else been puzzled by the following phenomenon?
->Why should a net with only a few dozen neural units be
->successful at mimicking human behavior that is presumably
->the result of the activation of a tremendous number of
->neurons?  That is, why should a small number of units
->be successful at simulating the behavior of a large 
->number of neurons?

I don't believe that much is known about how human behavior
results from the action of neurons or collections of neurons.
The fact that connectionist systems can do pattern recognition
does not mean that they are doing it in the way humans do.
Thus it shouldn't necessarily be surprising that "similar"
tasks can be done by both nets and brains.  Many pattern
recognition/mapping networks appear to be doing interpolation;
is that what WE do?  Maybe... 
But you do ask a question worthy of study.

->I know that the validity of this question depends upon the
->"level" at which we interpret our models, but, after all,
->these units are modeled to mimic the behavior of individual
->neurons, aren't they?  

Not generally.  Some are (on a crude scale), but again,
very little is known about the way nets built from meat work.
 
->I am aware of the drastic simplifications
->that are made but this doesn't change the intended referents of
->our theoretical objects.

Many if not most researchers are not attempting to model the
brain, but are trying to see if highly parallel and distributed
processing can produce useful and interesting computational
systems.  It is known, for example, that networks with one
hidden layer and feedforward architecture can approximate
any Borel-measurable function from R^n to R^m to any degree
of accuracy (given sufficiently many hidden units). [Hornik,
Stinchcombe & White, 1988]  Can brains do that?  Anyone know?
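For concreteness, the object the theorem talks about is just this (a
hypothetical Python sketch; the weights below are arbitrary, since the
theorem only asserts that SOME choice of weights comes within any epsilon
of the target function):

# One-hidden-layer feedforward net of the kind covered by the
# universal-approximation result: sigmoid hidden units, linear output.
# Hypothetical sketch; these particular weights are arbitrary.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def net(x, W, b, v, c):
    # x: input vector in R^n; W, b: hidden weights/biases;
    # v, c: output weights/bias.  Output is in R (R^m is m copies).
    hidden = [sigmoid(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
              for wi, bi in zip(W, b)]
    return sum(vk * hk for vk, hk in zip(v, hidden)) + c

# Example: 3 hidden units on a 2-dimensional input.
W = [[1.0, -2.0], [0.5, 0.5], [-1.0, 3.0]]
b = [0.0, -1.0, 0.5]
v = [2.0, -1.0, 1.5]
print(net([0.3, 0.7], W, b, v, c=0.1))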

->One answer would seem to be that there is a tremendous amount
->of additional processing in the brain that is extraneous to
->the processing critical to the task being modeled, yet we are
->only modeling this "critical" segment. For many reasons (that
->could be discussed if necessary) I do not find this answer
->particulary compelling.

Or perhaps the brain just has a lot to do, with a lot of
redundancy built in for safety.  The brain is built
from material that is not robust and does not have high
precision, and does not operate faster than maybe 10ms/step.
But there are perhaps 10^10 neurons with about 1000-10000
connections each.  Our models can be built from pretty reliable
and fast stuff, operating 1000 or more times faster per step.
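Taking those numbers at face value, the arithmetic is quick (a back-of-
envelope Python sketch; every constant is a rough assumption from the
paragraph above):

# Rough capacity estimate from the figures quoted above; all of
# these constants are assumptions, not measurements.
neurons = 1e10
connections_per_neuron = 1e3      # low end of the 1000-10000 range
step_time = 10e-3                 # seconds per neural "step"
synapses = neurons * connections_per_neuron
updates_per_second = synapses / step_time
print("synapses: %.0e" % synapses)                          # ~1e13
print("connection-updates/sec: %.0e" % updates_per_second)  # ~1e15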

->A second answer might be that that neural processing has 
->self-similar properties.  That is, the behavior of neural
->collectives share properties with the behavior of individual
->neurons. I find this answer to be interesting and attractive,
->yet I know of no evidence for it.

I suppose a "collective" could
be considered to be a higher order unit, processing a more
sophisticated function than threshold logic.  This is an
efficiency issue, I believe, not a fundamental issue of
computational complexity.

Jack Cowan recently suggested at a workshop in San Diego that
we should all read (or re-read) David Marr's early work.  I
plan to do so soon... even if I'm not trying to model the
brain, nature sure did build some wonderful mechanisms to 
learn from.

->Alex Kirlik
->UUCP:	kirlik@chmsr.UUCP
->        {backbones}!gatech!chmsr!kirlik
->INTERNET:	kirlik@chmsr.gatech.edu

Dave DeMers			demers@cs.ucsd.edu
Computer Science & Engineering 
UCSD
La Jolla, CA 92093

news@nsc.nsc.com (Usenet Administration) (03/03/89)

u-jmolse%sunset.utah.edu@wasatch.UUCP (John M. Olsen) writes:
> Just as an off-the-cuff observation, it looks to me like any gathering of
> (supposedly) independently behaving things (birds, people, models of 
> neurons) can be looked at as a larger entity.  Such an entity could 
> actually seem to be less complex than its parts if they interact in 
> just a small number of ways.

I don't think so. Just read the chapter on Ants in "Goedel, Escher, Bach"
by Doug Hofstadter to see how the low-bandwidth connection between simple
interacting elements (ants) can lead to a complex result like arch-building.
Conversely, see how a brain-dead idea like Fascism can overtake the
consciousness of millions of interacting complex elements (us).

There is a dualism at work here, as with all material phenomena. Forgive
the universalist, holistic, mystic, schizoid appearance of this view!
The simple may combine to be complex, and vice versa in fact.
And all colours in between.
===========================================================================
** NOTE: USE THIS SIGNATURE - NOT HEADER - FOR REPLY **

	Andrew Palfreyman, MS D3969		PHONE:  408-721-4788 work
	National Semiconductor				408-247-0145 home
	2900 Semiconductor Dr.			there's many a slip
	P.O. Box 58090				'twixt cup and lip
	Santa Clara, CA  95052-8090

	DOMAIN: andrew@logic.sc.nsc.com  
	ARPA:   nsc!logic!andrew@sun.com
	USENET: ...{amdahl,decwrl,hplabs,pyramid,sun}!nsc!logic!andrew
===========================================================================

bwk@mbunix.mitre.org (Barry W. Kort) (03/06/89)

In article <32125@gt-cmmsr.GATECH.EDU> kirlik@hms3.gatech.edu
(Alex Kirlik) writes:

 > Why should a net with only a few dozen neural units be
 > successful at mimicking human behavior that is presumably
 > the result of the activation of a tremendous number of
 > neurons?  That is, why should a small number of units
 > be successful at simulating the behavior of a large 
 > number of neurons?
 > 
 >   ...
 > 
 > One answer would seem to be that there is a tremendous amount
 > of additional processing in the brain that is extraneous to
 > the processing critical to the task being modeled, yet we are
 > only modeling this "critical" segment.

Most interesting computer algorithms have a small section of
code where the "real work" is done.  The rest of the code
is cruft which handles the user interface and rarely occurring
exception conditions.

When I post a response (such as this one) most of my activity
is in the mechanics (reading and typing, using the vi editor,
and converting my ideas into parsable English).  The idea
itself consumes very little of my neural-network capacity.

--Barry Kort

dror@infmx.UUCP (Dror Matalon) (03/07/89)

In article <11945@swan.ulowell.edu> sbrunnoc@hawk.ulowell.edu (Sean Brunnock) writes:
>  I find that there are some people who are under the impression
>that by linking together many specialized programs (a vision 
>processor, a language processor,...), they will be able to create 
>something akin to the human mind. I do not subscribe to this
>theory because the human brain is pretty much uniform. This 
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

	Not true.

>fact becomes dramatically obvious in the cases of people who have
>had accidents resulting in the damage of sections of the brain.
>If the damaged section performed a specialized function, then
>for a while, the person will not be able to perform that action.
>After some time, the rest of the brain is able to assimilate
>the functions performed by the damaged section and the person
>is able to function normally again. 
>

	While it is true that the basic processors (neurons) function pretty 
much the same way throughout the brain, there are specialized areas.
When one of the speech areas -- Broca's, Wernicke's -- is destroyed in an adult 
brain, the person's speech is impaired for life.
	

	Dror


Dror Matalon                        Informix Software Inc.		
{pyramid,uunet}!infmx!dror          4100 Bohannon drive			
                                    Menlo Park, Ca. 94025
                                    415 322-4100    

The opinions expressed here Are mine and probably 
Do not reflect Informix Software Inc.

lishka@uwslh.UUCP (Fish-Guts) (03/07/89)

In article <10624@pasteur.Berkeley.EDU> brp@sim.UUCP (bruce raoul parnas) writes:
>In article <32125@gt-cmmsr.GATECH.EDU> kirlik@hms3.gatech.edu (Alex Kirlik) writes:
>>Why should a net with only a few dozen neural units be
>>successful at mimicking human behavior that is presumably
>>the result of the activation of a tremendous number of
>>neurons?  That is, why should a small number of units
>
>I beg to differ substantially on this claim.  No man made neural networks have
>yet come close to modelling/mimicking human behavior, no matter what the level
>of abstraction we assume.  They do not reflect the temporal properties, and are
>totally incapable of *MANY* of the things humans can do.  Neural nets take 
>inputs and associate them with outputs, nothing more.  They do not reflect even
>the simplest levels of cognition!

     This all depends on what you claim is "human behavior."  Below is a
quote taken from a paper in which the authors describe a neural
network that they use to model the piriform (olfactory) cortex.   The
neural network contained about 300 artificial neurons, whereas the
piriform cortex of a rat contains about 10^6 neurons.  In the paper,
they show that their model does reproduce certain key characteristics
of piriform cortex (which is also found in humans, but is usually
studied in animals).  Presumably, this "behavior" of piriform cortex
also occurs in humans.  They have modeled this on a relatively coarse
level. 

     Granted, this may not be what most consider "human behavior" as
we all see it, but it is behavior of the human brain (IMHO).  Although
I think models of this sort are rare at this point in time, I would
expect that more will appear in the future.

>>I know that the validity of this question depends upon the
>>"level" at which we interpret our models, but, after all,
>
>At no level is this valid, i believe.

     As a student of AI, with a couple semesters of neurobiology under
my belt, I disagree.  At certain "lower" levels there have been
some interesting neural nets that model certain low-level behaviors in
animals.  

     As a practical example, I offer this quote from the abstract of a
paper by Matthew A. Wilson and James M. Bower titled "A Computer
Simulation of Olfactory Cortex with Functional Implications for
Storage and Retrieval of Olfactory Information." The authors were
*neurobiology* graduate students of one of my professors, Lewis B.
Haberly. 

	Based on anatomical and physiological data, we have 
	developed a computer simulation of piriform (olfactory)
	cortex which is capable of reproducing spatial and
	temporal patterns of actual cortical activity under a 
	variety of conditions. [...]  We have shown that 
	different representations can be stored with minimal
	interference, and that following learning these
	representations are resistant to input degradation,
	allowing reconstruction of a representation following
	only a partial presentation of an original training
	stimulus.  Further, we have demonstrated that the
	degree of overlap of cortical representations for
	different stimuli can also be modulated.  For instance
	similar input patterns can be induced to generate
	distinct cortical representations (discrimination),
	while dissimilar inputs can be induced to generate
	overlapping representations (accommodation).  Both
	features are presumably important in classifying
	olfactory stimuli.
	
This quote is reproduced without permission.  At the time the paper
was written, the authors could be reached at the Computation and
Neural Systems Program, Division of Biology, California Institute of
Technology, Pasadena, CA 91125

>I think that a great many people view neural networks as good models for what
>goes on inside our heads.  Since these models are, mainly, discrete time
>automata they do not reflect the fact that real neural systems are,
>essentially, nonlinear continuous-time multi-dimensional vector spaces
>in which the neurons evolve in time.  So while they are real neat
>computational tools, they are far from representing real neural processes.

     I disagree; I feel that the above paper proves my point.  One
interesting point, however, is that the neural network used in the
above model used artificial neurons that modeled behavior of
individual neurons in the piriform cortex, complete with
considerations of membrane potential, delay due to the velocity of the
signal through the axon, and time course, amplitude, and waveform due
to particular ionic channel types (of which Na+, Cl-, and K+
types were included in the model).  In other words, the model was
*NOT* a simple neural network based on simple "units" or
McCulloch-Pitts neurons.  However, it *was* a neural network, although
its artificial neurons were more complex than most used today.
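For flavor, here is roughly what "more complex than McCulloch-Pitts" can
mean, in a hypothetical Python sketch of a leaky integrate-and-fire neuron
with a conduction delay.  The Wilson/Bower neurons are far richer than this
(ionic channel types, waveforms), but even this toy keeps the membrane
potential and the timing that simple units discard; all parameter values
are invented:

# Leaky integrate-and-fire neuron with a fixed axonal delay -- vastly
# simpler than the Wilson/Bower piriform model (no ionic channels),
# but it keeps membrane potential and timing, which McCulloch-Pitts
# units throw away.  Hypothetical sketch.
def simulate(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, axon_delay_steps=20):
    v = v_rest
    spikes_at_soma, spikes_at_terminal = [], []
    for t, i_in in enumerate(input_current):
        # membrane potential decays toward rest, driven by input
        v += dt / tau * ((v_rest - v) + i_in)
        if v >= v_thresh:                  # threshold crossing
            spikes_at_soma.append(t)
            # the spike reaches the axon terminal only after the
            # conduction delay -- the temporal structure that most
            # artificial units ignore
            spikes_at_terminal.append(t + axon_delay_steps)
            v = v_reset
    return spikes_at_soma, spikes_at_terminal

soma, terminal = simulate([30.0] * 500)
print(len(soma), "spikes; first reaches terminal at step",
      terminal[0] if terminal else None)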

>bruce (brp@sim)

				.oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

		 "I'm not aware of too many things...
		  I know what I know if you know what I mean"
		    -- Edie Brickell & the New Bohemians

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (03/07/89)

In article <11945@swan.ulowell.edu> sbrunnoc@hawk.ulowell.edu (Sean Brunnock) writes:

>  I find that there are some people who are under the impression
>that by linking together many specialized programs (a vision 
>processor, a language processor,...), they will be able to create 
>something akin to the human mind. I do not subscribe to this
>theory because the human brain is pretty much uniform. This 
>fact becomes dramatically obvious in the cases of people who have
>had accidents resulting in the damage of sections of the brain.
>If the damaged section performed a specialized function, then
>for a while, the person will not be able to perform that action.
>After some time, the rest of the brain is able to assimilate
>the functions performed by the damaged section and the person
>is able to function normally again. 

   It is indeed correct that the brain is capable of changing the
functions of some of its different parts to a limited extent
(the classical example is loss of a nerve going to the skin of the
hand, where neurons which originally were connected strongly to the
part of the skin served by that nerve connect themselves to nerves
going to other parts of the hand).
   However, the brain -does- have a great deal of differentiation
(just look at cerebellum vs brain stem vs cerebral cortex).
In addition, large enough damage does produce irreparable damage
(such as damage to Broca's area, involved in speech production,
leading to Broca's aphasia).
   Moreover, after learning, neurons "differentiate" across the
network.  Look at the hidden units of a feedforward backpropagated
NN.  Each hidden unit will tend to code for a certain part of
the input signal.  If we excise a neuron or two, we typically have
enough distributed representation for the NN to still work.  If
we excise more, we have to re-teach the network.  Eventually, if we
excise enough neurons, the network will not be able to work at all
(with size depending on the complexity of the problem, which is also
closely related to the number of patterns to be coded for and size of
input field); a toy demonstration of this follows below.  There is,
by the way, a whole science to figuring out how many hidden units to
excise from a network to maintain the minimum number of neurons and
still have the NN operate properly.
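Here is the excision effect in toy form (a hypothetical Python sketch, not
from any paper): a deliberately oversized net is trained on XOR by plain
online backprop, then hidden units are zeroed out one by one.  Error
typically stays low for the first few excisions, then climbs:

# Toy "excision" demo: train a small feedforward net on XOR by
# backprop, then knock out hidden units and watch the error grow.
# Hypothetical sketch, pure Python; all hyperparameters arbitrary.
import math, random

random.seed(1)
H = 8                         # deliberately oversized hidden layer
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = random.uniform(-1, 1)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
sig = lambda x: 1 / (1 + math.exp(-x))

def forward(x, alive):
    # dead units are clamped to zero output ("excised")
    h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
         if alive[j] else 0.0 for j in range(H)]
    return h, sig(sum(W2[j] * h[j] for j in range(H)) + b2)

for _ in range(5000):         # plain online backprop, squared error
    for x, t in data:
        h, y = forward(x, [True] * H)
        d_out = (y - t) * y * (1 - y)
        for j in range(H):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])
            W2[j] -= 0.5 * d_out * h[j]
            for i in range(2):
                W1[j][i] -= 0.5 * d_h * x[i]
            b1[j] -= 0.5 * d_h
        b2 -= 0.5 * d_out

def error(alive):
    return sum((forward(x, alive)[1] - t) ** 2 for x, t in data)

for k in range(H + 1):        # excise k units, in index order
    alive = [j >= k for j in range(H)]
    print("units removed:", k, "error:", round(error(alive), 4))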
   (I personally have a gut feeling that genetic algorithms will help
NN researchers "evolve" a lot of NN structure, in a similar way to what
happened to humans).
>  I look at the market and current research and I see a lot of
>neural network expert systems, handwriting recognizers, and
>image processors. The term neural network here is very misleading.
>I believe that a neural network should be able to learn to do 
>anything and still remain flexible enough to deal with abrupt 
>changes as the human brain is capable of doing. 
      Ah, it all depends on the learning algorithm.  In fact, it may be
that there are meta-learning rules in the brain (i.e. a network which
is taught using neuron-level learning rules to "learn" on a larger scale,
including input selectivity, some amount of theorem proving, and
a lot of other "symbolic AI" stuff that people think NN's will replace,
albeit on a massively-parallel fault-tolerant scale).
-Thomas Edwards
ins_atge@jhuvms (BITNET)
tedwards@nrl-cmf.arpa

#include<disclaimer.hs>   /* ported to connection machine */

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (03/07/89)

In article <10624@pasteur.Berkeley.EDU> brp@sim.UUCP (bruce raoul parnas) writes:
> Neural nets take
> inputs and associate them with outputs, nothing more.  They do not reflect even
> the simplest levels of cognition!

   While it is definitely true that we haven't even gotten anywhere close
to a 10^13 synapse device like the human brain, one could very well argue
the brain is also a device which associates inputs and memory, and produces
an output.  Mind you, the transfer function is very complex :-).
Recurrent neural networks are capable of holding memories in neural "loops,"
and there are also algorithms for learning in a continually running NN
(Williams, Zipser, "A Learning Algorithm for Continually Running
Fully Recurrent Neural Networks," UCSD ICS 8805, Oct. 1988).
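The simplest possible "loop" memory is a single self-exciting unit, which
latches a brief input pulse (a hypothetical Python toy; the Williams/Zipser
algorithm is about learning such dynamics, not hand-wiring them as here):

# One self-exciting unit as a one-bit "loop" memory: a brief input
# pulse flips it on, and the recurrent connection keeps it on after
# the input is gone.  Hypothetical toy; weights chosen for bistability.
import math

def sig(x):
    return 1 / (1 + math.exp(-x))

w_self, bias = 10.0, -5.0   # strong self-excitation, negative bias
y = 0.0
for t in range(30):
    x = 8.0 if t == 10 else 0.0     # input pulse at t = 10
    y = sig(w_self * y + bias + x)
    print(t, round(y, 3))
# y stays near 0 before the pulse and latches near 1 afterwards.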
 
>Since these models are, mainly, discrete time
>automata they do not reflect the fact that real neural systems are,
>essentially, nonlinear continuous-time multi-dimensional vector spaces
>in which the neurons evolve in time.  So while they are real neat
>computational tools, they are far from representing real neural processes.

Pineda, in "Dynamics and Architecture in Neural Computation", Journal
of Complexity, Sept. 1988, points out that time is very
important to NN's, especially if we want to store multiple pattern
associations in them.  He proposes a formalization for recurrent
NN's, dealing with them as dynamical systems, and can thus bring them
into continuous time instead of discrete time.  (I think people
working with recurrent nets should look at this paper...it didn't
seem to draw the attention it deserved.)
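Schematically, the dynamical-systems view replaces the clocked update rule
with coupled differential equations integrated through time (a hypothetical
Python sketch using Euler steps; these are not Pineda's equations verbatim,
and the weights are arbitrary):

# Continuous-time recurrent net as a dynamical system, integrated
# with Euler steps:  dy_i/dt = -y_i + sum_j w_ij*sigma(y_j) + I_i.
# Hypothetical sketch of the general idea only.
import math

def sigma(x):
    return 1 / (1 + math.exp(-x))

W = [[0.0, 2.0, -1.0],
     [-2.0, 0.0, 2.0],
     [1.0, -2.0, 0.0]]
I = [0.5, 0.0, -0.5]
y = [0.0, 0.0, 0.0]
dt = 0.01
for step in range(5000):   # evolve the state in (near-)continuous time
    dy = [-y[i] + sum(W[i][j] * sigma(y[j]) for j in range(3)) + I[i]
          for i in range(3)]
    y = [y[i] + dt * dy[i] for i in range(3)]
print([round(v, 4) for v in y])   # state after 50 time units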

Another major drawback of current neural networks is that, unlike human
NN's, they are not the product of an evolutionary search.  There are,
however, a large bunch of people working on "neuro-evolution" now, and
maybe we'll see some neat stuff.  Also there is a lot of neat recurrent
stuff now which people who have only read PDP have missed out on.
Someone needs to write a good book aimed at Joe Programmer concerning
these issues (or has someone, and I have just missed it?).

-Thomas Edwards
ins_atge@jhuvms (BITNET)
tedwards@nrl-cmf.arpa

#include<disclaimer.hs>   

news@psuvax1.cs.psu.edu (The Usenet) (03/08/89)

In article <10624@pasteur.Berkeley.EDU> sim!brp writes:

> I think that a great many people view neural networks as good
> models for what goes on inside our heads.  Since these models
> are, mainly, discrete time automata they do not reflect the
> fact that real neural systems are, essentially, nonlinear
> continuous-time multi-dimensional vector spaces in which
> the neurons evolve in time.  So while they are real neat
> computational tools, they are far from representing real
> neural processes.

I think you are guilty of over-stating the case for your discipline.
Real neural systems are real neural systems.  They are not "nonlinear
continuous-time multi-dimensional vector spaces", although it may be
constructive to model them as such.

Real neural systems can also be modelled as (borrowing your terminology)
"discrete time automata".  One must distinguish between reality and
the scientific model of choice.  I believe that you meant to say that
modelling real neural systems as "nonlinear continuous-time multi-
dimensional vector spaces" leads to a better understanding of real
neural systems than modelling them as "discrete time automata".

The discrete vs continuous competition is not new.  You sit on the
same side of the fence as many distinguished people.  I lean towards
the discrete side myself, although I am open to argument.

I have not seen any arguments which convince me that the analog
behaviour that we observe in real neural systems is of fundamental
computational importance.  Some of the arguments that I have seen
have been based on the premise that the real world is analog.
Unfortunately, the real world appears to be discrete.  By this I mean
that scientific models which are based on discrete units (atoms,
quarks etc.) give a good understanding of observable phenomena.
Real numbers, continuous functions etc., are abstractions which help
us deal with the fact that the number of discrete units is larger
than we can deal with comfortably.

There are (at least) two objections to the classical automata-
theoretic view of neural systems.  One is that neural systems
are not clocked (I presume that this is what you mean by
"continuous time"); the other is that neurons have analog behaviour.
Two burning questions which, in my mind, are among the
most important open questions in neural networks research are:
1.  Is unclocked behaviour important?  Was the non-availability
    of a system clock something that Nature had to fight to overcome,
    or did it bring inherent advantages?
2.  Is analog behaviour important?  If I restrict neuron excitation
    values to 6 decimal places, will the networks still function
    correctly?  More importantly, how does the precision scale with
    the number of neurons and/or connections?
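Question 2 is at least easy to poke at numerically (a hypothetical Python
sketch, not a real study: a fixed random net with every unit's excitation
rounded to d decimal places, compared against the full-precision run):

# Crude experiment for question 2: push an input through a fixed
# random net, rounding every unit's output to d decimal places, and
# measure drift from the full-precision output.  Hypothetical sketch.
import math, random

random.seed(0)
N = 50                                    # units per layer
L = 10                                    # layers
W = [[[random.gauss(0, 1 / math.sqrt(N)) for _ in range(N)]
      for _ in range(N)] for _ in range(L)]

def forward(x, places=None):
    for layer in W:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
             for row in layer]
        if places is not None:            # limited-precision excitations
            x = [round(v, places) for v in x]
    return x

x0 = [random.uniform(-1, 1) for _ in range(N)]
exact = forward(x0)
for d in (1, 2, 4, 6):
    approx = forward(x0, places=d)
    drift = max(abs(a - b) for a, b in zip(exact, approx))
    print(d, "decimal places -> max drift", drift)

How that drift scales as N and L grow is exactly the open question.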

Needless to say, these questions are not new.  I am not claiming to
be the first person to have thought of them.  Some information is known.
I am  planning two papers this year (not yet written up) which address
aspects of them.  The Truth (if it exists) still remains to be found.
-------------------------------------------------------------------------------
			Ian Parberry
  "The bureaucracy is expanding to meet the needs of an expanding bureaucracy"
  ian@psuvax1.cs.psu.edu  ian@psuvax1.BITNET  ian@psuvax1.UUCP  (814) 863-3600
 Dept of Comp Sci, 333 Whitmore Lab, Penn State Univ, University Park, Pa 16802

bloch@sequoya.ucsd.edu (Steve Bloch) (03/09/89)

In article <10624@pasteur.Berkeley.EDU> brp@sim.UUCP (bruce raoul parnas) writes:
>In article <32125@gt-cmmsr.GATECH.EDU> kirlik@hms3.gatech.edu (Alex Kirlik) writes:
>>One answer would seem to be that there is a tremendous amount
>>of additional processing in the brain that is extraneous to
>>the processing critical to the task being modeled, yet we are
>>only modeling this "critical" segment. 
>
>Natural selection would eliminate a great deal of "extraneous" processing

Not necessarily.  Natural selection is a good improviser, but a terrible
designer, and in particular it's very reluctant to throw away something
just because it no longer serves its original purpose.  In addition, some
of the hypothesized "extraneous processing" might be what a designer would
call "redundancy for fault-tolerance", which is selected FOR within
reasonable limits.

"The above opinions are my own.  But that's just my opinion."
Stephen Bloch

efrethei@afit-ab.arpa (Erik J. Fretheim) (03/09/89)

In article <6082@sdcsvax.UCSD.Edu> bloch@sequoya.UUCP (Steve Bloch) writes:
>In article <10624@pasteur.Berkeley.EDU> brp@sim.UUCP (bruce raoul parnas) writes:
>>In article <32125@gt-cmmsr.GATECH.EDU> kirlik@hms3.gatech.edu (Alex Kirlik) writes:
>>>One answer would seem to be that there is a tremendous amount
>>>of additional processing in the brain that is extraneous to
>>>the processing critical to the task being modeled, yet we are
>>>only modeling this "critical" segment. 
>>
>>Natural selection would eliminate a great deal of "extraneous" processing
>
>Not necessarily.  Natural selection is a good improviser, but a terrible
>designer, and in particular it's very reluctant to throw away something
>just because it no longer serves its original purpose.  In addition, some
>of the hypothesized "extraneous processing" might be what a designer would
>call "redundancy for fault-tolerance", which is selected FOR within
>reasonable limits.


Agreed that natural selection would not trim extraneous processing; in fact,
as you mention, it would tend to enhance it as redundant systems.
Take, for example, pilots: natural selection tends to enhance the numbers
who fly airplanes with redundant systems - especially when external stresses
are induced.

carter@sloth.gatech.edu (Carter Bullard) (03/09/89)

>>>
>>>Natural selection would eliminate a great deal of "extraneous" processing
>>
>>Not necessarily.  Natural selection is a good improviser, but a terrible
>
>Agreed that natural selection would not trim extraneous processing, in fact

Maybe you guys should look at the book, Neural Darwinism.  

I must say that i don't agree with these opinions concerning the 
capabilities of natural selection.  To presume that you have an 
understanding of CNS function to the point where you can predict
what influence natural selection has had on the development of 
the process is somewhat, hum, how shall i say, premature.

Carter Bullard
School of Information & Computer Science, Georgia Tech, Atlanta GA 30332
uucp:	...!{decvax,hplabs,ihnp4,linus,rutgers}!gatech!carter
Internet:	carter@gatech.edu

andrew@nsc.nsc.com (andrew) (03/11/89)

In article <971@afit-ab.arpa>, efrethei@afit-ab.arpa (Erik J. Fretheim) writes:
> 
> Agreed that natural selection would not trim extraneous processing; in fact,
> as you mention, it would tend to enhance it as redundant systems.
> Take, for example, pilots: natural selection tends to enhance the numbers
> who fly airplanes with redundant systems - especially when external stresses
> are induced.

You make a good point, but I think you've been in the Air Force too long!
You can't compare a man-made system with its concomitant catastrophic
failure modes (a la expert systems), like an aircraft, with a failure-tolerant,
gracefully-degrading, adaptive system like an organism. It is precisely these
features, borne out of millennia of ad hoc adaptation and evolution, which
reduce the need to kludge on "triply-redundant catastrophically-failing"
stuff like we do in our little designs in the year 1989.
From this perspective, I tend not to see a particularly strong selection
force in favour of redundancy in organisms.
I guess what tickled me was the image of a gene coding for "the ability
to fly a jet fighter"! The timescales of our tech revolutions compared
with that of genetic modification are so out of kilter that it seemed
sort of absurd to use the word "natural" in this context!
===========================================================================
USE EMAIL ADR BELOW ONLY...
	Andrew Palfreyman, MS D3969		PHONE:  408-721-4788 work
	National Semiconductor				408-247-0145 home
	2900 Semiconductor Dr.			there's many a slip
	P.O. Box 58090				'twixt cup and lip
	Santa Clara, CA  95052-8090

	DOMAIN: andrew@logic.sc.nsc.com  
	ARPA:   nsc!logic!andrew@sun.com
	USENET: ...{amdahl,decwrl,hplabs,pyramid,sun}!nsc!logic!andrew

brp@sim.uucp (bruce raoul parnas) (03/16/89)

In article <18130@gatech.edu> carter%sloth@gatech.edu (Carter Bullard) writes:
>I must say that i don't agree with these opinions concerning the
>capabilities of natural selection.  To presume that you have an 
>understanding of CNS function to the point where you can predict
>what influence natural selection has had on the development of 
>the process is somewhat, hum, how shall i say, premature.

In my original posting I did not claim that I had any idea whatsoever what
natural selection was doing to the CNS specifically.  As you state, that would
be quite presumptuous on my part.  I only said that I believed that natural
selection would work in such a way as to reduce the levels of what was termed
earlier as "extraneous" processing.  I have no idea what this processing may
be or how it might be eliminated, only that for the brain to perform the vast
amount of computations and other functions that it does, there must not be too
much of this "extraneous" stuff going on.  BTW, redundancy and fault-
tolerance are not examples of, at least by my definition, "extraneous"
processing.

bruce
brp@sim

brp@sim.uucp (bruce raoul parnas) (03/16/89)

In article <8903071701.AA12290@shire.cs.psu.edu> news@psuvax1.cs.psu.edu (The Usenet) writes:
>In article <10624@pasteur.Berkeley.EDU> sim!brp writes:
>
>> I think that a great many people view neural networks as good
>> models for what goes on inside our heads.  Since these models
>> are, mainly, discrete time automata they do not reflect the
>> fact that real neural systems are, essentially, nonlinear
>> continuous-time multi-dimensional vector spaces in which
>> the neurons evolve in time.  So while they are real neat
>> computational tools, they are far from representing real
>> neural processes.
>
>I think you are guilty of over-stating the case for your discipline.
>Real neural systems are real neural systems.  They are not "nonlinear
>continuous-time multi-dimensional vector spaces", although it may be
>constructive to model them as such.

actually my discipline is more neurobiology than it is nonlinear systems, 
although i do think they are a good model.  you are right, though, that this
is only a model.  what i meant to say was that i believed that this was a
better modelling approach than automata theory.

>I have not seen any arguments which convince me that the analog
>behaviour that we observe in real neural systems is of fundamental
>computational importance.  Some of the arguments that I have seen
>have been based on the premise that the real world is analog.
>Unfortunately, the real world appears to be discrete.  By this I mean
>that scientific models which are based on discrete units (atoms,
>quarks etc.) give a good understanding of observable phenomena.

the world is (possibly) discrete on a very fine level.  first, it seems to me
that researchers keep finding yet smaller particles into which matter is sub-
divided: maybe it really is a continuum?  second, even assuming that it is
discrete, this exists on such a fine level that i believe it is irrelevant
here.  modelling of neural systems in terms of their atomic properties is, i
believe, quite the unenviable task!

>Real numbers, continuous functions etc., are abstractions which help
>us deal with the fact that the number of discrete units is larger
>than we can deal with comfortably.

right.  and in most physical systems we may, for our understanding, treat them
as essentially analog since we simply can't deal with the complexity presented
by the true (?) discrete nature.

>There are (at least) two objections to the classical automata-
>theoretic view of neural systems.  One is that neural systems
>are not clocked (I presume that this is what you mean by
>"continuous time"), and that neurons have analog behaviour.

that is precisely what i meant.  neurons each evolve on their own, independent
of system clocks.

>Two burning questions which, in my mind, are among the
>most important open questions in neural networks research are:
>1.  Is unclocked behaviour important?  Was the non-availability
>    of a system clock something that Nature had to fight to overcome,
>    or did it bring inherent advantages?

i believe that a system clock would be more of a hindrance than a help.  
studies with central pattern generators and pacemaker activity (re: the heart)
show clearly that system clocks are not unavailable.  if evolution had found
a neural system clock advantageous, one could have been created.  i feel,
however, that the continuous-time evolution of neural systems imbues them
with their remarkable properties.

>2.  Is analog behaviour important?  If I restrict neuron excitation
>    values to 6 decimal places, will the networks still function
>    correctly?  More importantly, how does the precision scale with
>    the number of neurons and/or connections?

I don't think that such a fine level of precision is necessary in neural
function, i.e. six places would likely be enough.  but since digital circuitry
is actually made from analog circuit elements limited to certain regions of
operation, why go to this trouble in real neural systems when analog seems
to work just fine?
>
>Needless to say, these questions are not new.  I am not claiming to
>be the first person to have thought of them.  Some information is known.

>I am  planning two papers this year (not yet written up) which address
>aspects of them.  The Truth (if it exists) still remains to be found.

I would be very interested in getting preprints of this work when it becomes
available.  i, too, am open to argument for my views.


bruce
brp@sim

brp@sim.uucp (bruce raoul parnas) (03/16/89)

In article <418@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>In article <10624@pasteur.Berkeley.EDU> brp@sim.UUCP (bruce raoul parnas) writes:
!In article <32125@gt-cmmsr.GATECH.EDU> kirlik@hms3.gatech.edu (Alex Kirlik) writes:
!>>Why should a net with only a few dozen neural units be
!>>successful at mimicking human behavior that is presumably
!>>the result of the activation of a tremendous number of
!>>neurons?  That is, why should a small number of units


!>I beg to differ substantially on this claim.  No man made neural networks have
!>yet come close to modelling/mimicking human behavior, no matter what the level
!>of abstraction we assume.  They do not reflect the temporal properties, and are
!>totally incapable of *MANY* of the things humans can do.  Neural nets take 
!>inputs and associate them with outputs, nothing more.  They do not reflect even
!>the simplest levels of cognition!


!     This all depends on what you claim is "human behavior."  Below is

By "behavior" i refer to the underlying strategy, if you will, governing the
actions, not simply the actions themselves.  Given a set of inputs and a set
of outputs it is quite easy to construct, for example, a simple digital
circuit made from combinational logic which can perform the required tasks, 
yet no one would argue that this, in any way, represents the brain.  Cognition
is something we do not yet understand and we can do little more than model
the responses rather than the process.  A small child can repeat words that
he/she cannot understand; is this an understanding of the language?
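The combinational-logic point can be made painfully concrete: over a finite
input domain, any observed input-output behavior can be "performed" by a
lookup table, which models the responses and nothing of the process (a
hypothetical Python sketch; the table entries are arbitrary):

# Input-output association reduced to absurdity: any fixed behavior
# over a finite domain can be reproduced by a lookup table, and
# nobody would call the table cognitive.  Hypothetical sketch.
truth_table = {
    (0, 0, 0): 1, (0, 0, 1): 0, (0, 1, 0): 0, (0, 1, 1): 1,
    (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 1, (1, 1, 1): 0,
}

def behave(inputs):
    # reproduces the observed behavior exactly, with no model of
    # the process that generated it
    return truth_table[inputs]

print(behave((1, 0, 1)))   # -> 1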

!>>I know that the validity of this question depends upon the
!>>"level" at which we interpret our models, but, after all,

>>At no level is this valid, i believe.

!     As a student of AI, with a couple semesters of neurobiology under
!my belt, I disagree.  At certain "lower" levels there have been been
!some interesting neural nets that model certain low-level behaviors in
!animals.  
I think we're interpreting the word "level" in the original posting 
differently.  I believed it referred to levels of interpretation of a cognitive
model as opposed to modeling of lower-level functions.  i do agree that some of
these latter functions are quite well understood and have been modeled well.
Prime examples of this are the mechanisms in the sensory periphery (see, for
example, Feld, et al in Advances in Neural Information Processing Systems due
around April).  I think that models of cognition, however, are not very useful
at any level toward an understanding of the "big picture" yet, although i hope
that further work will change this.

!     As a practical example, I offer this quote from the abstract of a

[quote concerning modeling of the olfactory system]

the paper you reference (removed for brevity) is quite interesting.  i still
feel that it models the results rather than the cause of the behavior, but it
is, i believe, a step in the right direction.  the inclusion of the temporal
aspect of neurons is crucial to a realistic model.

!>I think that a great many people view neural networks as good models for what
!>goes on inside our heads.  Since these models are, mainly, discrete time
!>automata they do not reflect the fact that real neural systems are, 
!>essentially, nonlinear continuous-time multi-dimensional vector spaces
!>in which the neurons evolve in time.  So while they are real neat
!>computational tools, they are far
!>from representing real neural processes.

!     I disagree; I feel that the above paper proves my point.  One
!interesting point, however, is that the neural network used in the
!above model used artificial neurons that modeled behavior of
!individual neurons in the piriform cortex, complete with
!considerations of membrane potential, delay due to the velocity of the
!signal through the axon, and time course, amplitude, and waveform due
!to particular ionic channel types (of which Na+, Cl-, and K+
!types were included in the model).  In other words, the model was
!*NOT* a simple neural network based on simple "units" or
!McCulloch-Pitts neurons.  However, it *was* a neural network, although
!its artificial neurons were more complex than most used today.

I misspoke.  what i meant to say was that neural networks are far from modeling
COGNITIVE neural processes such as memory and the like.  the peripheral
sensory system, including olfaction, is quite a bit easier to model (as
mentioned above), and the quote you reproduced corroborates this.  i have no
argument against these models, only those of higher cortical function.
!>bruce (brp@sim)
!
!				.oO Chris Oo.

andrew@nsc.nsc.com (andrew) (03/16/89)

In article <11114@pasteur.Berkeley.EDU>, brp@sim.uucp (bruce raoul parnas) writes:
> neurons each evolve on their own, independent of system clocks.
> 
> >Two burning questions which, in my mind, are among the
> >most important open questions in neural networks research are:
> >1.  Is unclocked behaviour important?  Was the non-availability
> >    of a system clock something that Nature had to fight to overcome,
> >    or did it bring inherent advantages?
> 
> i believe that a system clock would be more of a hindrance than a help.  
> studies with central pattern generators and pacemaker activity (re: the heart)
> show clearly that system clocks are not unavailable.  if evolution had found
> a neural system clock advantageous, one could have been created.  i feel,
> however, that the continuous-time evolution of neural systems imbues them
> with their remarkable properties.
> 
Having just browsed through "Fractals Everywhere" by Barnsley, I'm reminded
of a comment about the heart and clocks in there. Loosely paraphrased, it is
stated that a healthy heart exhibits a measurable degree of chaotic behaviour -
i.e. the fractal dimension of some representation of the heartbeat over time -
whereas a low or zero fractal dimension (a very steady beat) is an excellent
indicator that something unhealthy - an attack or arrhythmia - is imminent.

This may say something in general about organic systems as you've been
discussing; that exact synchronisation is not something desirable.

Further, I believe Walter Friedman has presented recently on information
processing _in vivo_ where he postulates that chaotic attractors are a
key element in biological information processing. I'm afraid that's as much
detail as I have - I'm not "into chaos".

Therefore, although, as you say, locality of processing tends to exclude
a system clock approach, the above gives perhaps stronger reasons as to why
a man-made ANN would actually be inferior, were it to use a system clock.

While I'm here, I'll mention something else from biology, which filled me
with great dismay(!) - this month's Scientific American's feature on the
brain's star-like "astrocyte" cells. Their role becomes important in direct
proportion to the amount of time they are investigated; akin to glial cells,
I believe. Now the diagram of how the astrocytes connect to the neuron net
is frightening .. they hook up everywhere: the neuron body, the node-of-Ranvier
tap points on the axon's myelin sheath, the bare axon, capillaries, and the
cells at both the surface (meningeal) and the centre (water-bearing) of
the whole brain. This means computationally that it's a whole new ball
game, I imagine... anyone have any comments?
==========================================================================
	DOMAIN: andrew@logic.sc.nsc.com  
	ARPA:   nsc!logic!andrew@sun.com
	USENET: ...{amdahl,decwrl,hplabs,pyramid,sun}!nsc!logic!andrew

	Andrew Palfreyman, MS D3969		PHONE:  408-721-4788 work
	National Semiconductor				408-247-0145 home
	2900 Semiconductor Dr.			there's many a slip
	P.O. Box 58090				'twixt cup and lip
	Santa Clara, CA  95052-8090
==========================================================================

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (03/17/89)

In article <10192@nsc.nsc.com> andrew@nsc.nsc.com (andrew) writes:
[concerning clocked NN's]

There is a big concern over synchronicity of NN's.  Two points come to mind,
1) Back-prop in particular is an approximation of gradient-descent of
   the error surface, and there are a few problems caused by finitely
   small quanta of learning steps...but that's what you get for not
   spending the time to search the entire error surface!
   But it would be nice if a method can be determined which allows for
   infinitely-small learning steps at a reasonable speed.
   Pineda claims his recurrent learning algorithm is "presented
   in a formalism appropriate for implementation as a physical
   nonlinear dynamical system," and thus he is able to avoid
   "certains kinds of oscillations which occur in discrete time
   models usually associated with backpropogation."  

2) To a limited extent, using "delay neurons," a synchronous neural
   network can approach a non-synchronous one; a toy sketch follows below.
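A toy version of point 2 (hypothetical Python sketch; topology, weights and
delays are invented): each connection gets its own FIFO of "delay neurons,"
so a globally clocked update rule approximates asynchronous arrival times:

# Clocked net with per-connection delays: every connection routes
# through a FIFO of "delay neurons," so signals arrive at different
# ticks even though updates are synchronous.  Hypothetical sketch.
import math
from collections import deque

def sig(x):
    return 1 / (1 + math.exp(-x))

# (source, target, weight, delay_in_ticks)
connections = [(0, 1, 1.5, 1), (1, 2, -2.0, 3), (0, 2, 0.8, 5)]
n_units = 3
# one FIFO per connection plays the role of its delay neurons
pipes = {c: deque([0.0] * c[3], maxlen=c[3]) for c in connections}
state = [0.0] * n_units

for tick in range(20):
    ext = [1.0 if tick < 5 else 0.0, 0.0, 0.0]  # drive unit 0 briefly
    incoming = [0.0] * n_units
    for c in connections:
        src, dst, w, _ = c
        incoming[dst] += w * pipes[c][0]        # oldest value arrives now
    new_state = [sig(incoming[i] + ext[i]) for i in range(n_units)]
    for c in connections:
        pipes[c].append(new_state[c[0]])        # enqueue fresh output
    state = new_state
    print(tick, [round(v, 3) for v in state])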

>While I'm here, I'll mention something else from biology, which filled me
>with great dismay(!) - this month's Scientific American's feature on the
>brain's star-like "astrocyte" cells. Their role becomes important in direct
>proportion to the amount of time they are investigated; akin to glial cells,
>I believe.

Ah, the important thing to remember is that NN's are based upon mathematical
solutions to the problem of getting the proper output from a network
for a certain input by changing the network weights...they might at some
level of abstraction resemble real neural networks, but lack
neuropharmacology (which is _very_ important to human cognition!),
and a whole host of other qualities.  (The brain also has many different
styles of neurons!).  
   This is _not_ to say that human brain study is irrelevant to NN's,
but that NN's are going to be a simpler structure than the brain
because they exist (currently...this may change) in the realm of
information instead of being physical things which need support,
oxygen, nutrients, immune systems, etc.

-Thomas Edwards

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (03/17/89)

gack....the Pineda reference is
  "Dynamics and Architecture for Neural Computation", Fernando J.
Pineda, Journal of Complexity 4, 216-245 (1988)  Academic Press
(Harcourt Brace Jovanovich)

brp@sim.uucp (bruce raoul parnas) (03/17/89)

In article <10192@nsc.nsc.com> andrew@nsc.nsc.com (andrew) writes:

>Further, I believe Walter Friedman has presented recently on information
                             ^ ^   (Freeman)
>processing _in vivo_ where he postulates that chaotic attractors are a
>key element in biological information processing. I'm afraid that's as much

This is all presuming that you believe it is possible to experimentally
distinguish between chaos and noise, which is also assumed to be present in
the nervous system.  Personally i don't have much faith in freeman's 
assertions concerning chaos, but i'm also not an expert in the area.

bruce
(brp@sim)

andrew@nsc.nsc.com (andrew) (03/17/89)

In article <1163@jhunix.HCF.JHU.EDU>, ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) writes:
> 1) Back-prop in particular is an approximation of gradient-descent of
>    the error surface, and there are a few problems caused by finitely
>    small quanta of learning steps...but that's what you get for not
>    spending the time to search the entire error surface!

I believe that there exists no formal proof of global convergence for 
conventional backprop when the quanta are not "infinitely small". This might
be seen as a drawback!
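The finite-step problem is visible even on a one-dimensional quadratic
error surface (a hypothetical Python sketch):

# Why finite learning steps break convergence guarantees: plain
# gradient descent on f(w) = w^2 converges for small steps and
# oscillates or diverges once the step passes a critical size.
def descend(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w          # gradient of w^2 is 2w
    return w

for lr in (0.1, 0.5, 0.9, 1.1):
    print("lr =", lr, "-> w after 20 steps:", descend(lr))
# Each step multiplies w by (1 - 2*lr): lr = 0.1 converges smoothly,
# lr = 1.1 diverges, and a nonconvex error surface only makes
# matters worse.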
> 
> 
> Ah, the important thing to remember is that NN's are based upon mathematical
> solutions to the problem of getting the proper output from a network...
> ... NN's are going to be a simpler structure than the brain
> because they exist (currently...this may change) in the realm of
> information instead of being physical things which need support,
> oxygen, nutrients, immune systems, etc.
> 
> -Thomas Edwards

Agreed, but I was concentrating on the richness of interconnection; I
neglected to mention that the synapse itself is one of the "hookup" points
for these cells. Although it's often possible to redraw a complex circuit
in a simpler fashion by "lumping" elements to create locally more involved
transfer functions, this generally obscures the simpler structure (_vide_
feedback-type circuits). The astrocytes, being ubiquitous and highly-
connected, explode the parallelism even more than was thought - and that   
must have an impact, at some level, on future modeling with fidelity.
Andrew Palfreyman	nsc!logic!andrew