[comp.ai.neural-nets] Self-similarity in neural nets?

kirlik@hms2 (Alex Kirlik) (02/25/89)

For those interested in the potential psychological/
physiological significance of neural-net models:

Has anyone else been puzzled by the following phenomenon?
(I haven't found it discussed in the literature).

Why should a net with only a few dozen neural units be
successful at mimicking human behavior that is presumably
the result of the activation of a tremendous number of
neurons?  That is, why should a small number of units
be successful at simulating the behavior of a large 
number of neurons?

I know that the validity of this question depends upon the
"level" at which we interpret our models, but, after all,
these units are modeled to mimic the behavior of individual
neurons, aren't they?  I am aware of the drastic simplifications
that are made, but this doesn't change the intended referents of
our theoretical objects.

One answer would seem to be that there is a tremendous amount
of additional processing in the brain that is extraneous to
the processing critical to the task being modeled, yet we are
only modeling this "critical" segment. For many reasons (that
could be discussed if necessary) I do not find this answer
particularly compelling.

A second answer might be that neural processing has
self-similar properties.  That is, the behavior of neural
collectives shares properties with the behavior of individual
neurons.  I find this answer interesting and attractive,
yet I know of no evidence for it.
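
To make the self-similarity idea concrete, here is a minimal
numerical sketch (an illustration, not evidence; the pool size,
gains, and thresholds are invented for the example).  If you drive
a large pool of sigmoid "neurons" with a shared input and average
their activity, the pooled response is again a smooth, saturating
curve -- qualitatively the same input/output shape as a single
model unit.  In that limited sense, one unit in a small net could
stand in for a whole collective:

    # Sketch only: does the pooled activity of many sigmoid
    # "neurons" look like the response of a single sigmoid unit?
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n = 10_000                        # neurons in one hypothetical pool
    w = rng.normal(1.0, 0.3, n)       # each neuron's gain on the shared input
    b = rng.normal(0.0, 0.5, n)       # each neuron's threshold offset

    for x in np.linspace(-6.0, 6.0, 13):
        pooled = sigmoid(w * x + b).mean()   # collective (mean) activity
        print(f"x = {x:+5.1f}   pooled activity = {pooled:.3f}")

    # The printed curve rises monotonically from near 0 to near 1,
    # i.e. the collective behaves like one larger sigmoid unit.

Whether real cortical pools coarse-grain this cleanly is exactly
the open question; the sketch only shows the idea is coherent.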

A third answer might be to suggest that this is all unreasoned
drivel, since we don't want to interpret these models
realistically anyway.

It seems OK to go this way, but for those who don't want to, I
suggest that the question merits consideration. Or does it?

Thanks for reading,

Alex


Alex Kirlik

UUCP:	kirlik@chmsr.UUCP
        {backbones}!gatech!chmsr!kirlik
INTERNET:	kirlik@chmsr.gatech.edu