[net.ai] Parallelism and Physiology

rik@UCLA-CS@sri-unix.UUCP (10/01/83)

From:  Rik Verstraete <rik@UCLA-CS>

I would like to comment on your message that was printed in AIList Digest
V1#63, and I hope you don't mind if I send a copy to the discussion list
"self-organization" as well.

        Date: 23 Sep 1983 0043-PDT
        From: FC01@USC-ECL
        Subject: Parallelism

        I thought I might point out that virtually no machine built in the
        last 20 years is actually lacking in parallelism. In reality, just as
        the brain has many neurons firing at any given time, computers have
        many transistors switching at any given time. Just as the cerebellum
        is able to maintain balance without the higher brain functions in the
        cerebrum explicitly controlling the IO, most current computers have IO
        controllers capable of handling IO while the CPU does other things.

The issue here is granularity, as discussed in general terms by E. Harth
("On the Spontaneous Emergence of Neuronal Schemata," pp. 286-294 in
"Competition and Cooperation in Neural Nets," S. Amari and M.A. Arbib
(eds), Springer-Verlag, 1982, Lecture Notes in Biomathematics # 45).  I
certainly recommend his paper.  I quote:

        One distinguishing characteristic of the nervous system is
        thus the virtually continuous range of scales of tightly
        intermeshed mechanisms reaching from the macroscopic to the
        molecular level and beyond.  There are no meaningless gaps
        of just matter.

I think Harth has a point, and applying his ideas to the issue of parallel
versus sequential clarifies some aspects.

The human brain seems to be parallel at ALL levels.  Not only are large
numbers of neurons firing at the same time; groups of neurons, groups of
groups of neurons, etc. are also active in parallel at any moment.  The
whole neural network is a totally parallel structure, at all levels.

You pointed out (correctly) that in modern electronic computers a large
number of gates "work" in parallel on a tiny piece of the problem, and
that I/O and the CPU also run in parallel (some systems even have more
than one CPU).  However, the CPU itself is a finite-state machine,
meaning it operates as a time-sequence of small steps.  This level is
inherently sequential.  It therefore looks like there is a discontinuity
between the gate level and the CPU/IO level.
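
To make the point concrete, here is a deliberately modern toy sketch in
Python (the three-instruction machine and its operations are invented
purely for illustration): at this level of description, a CPU is just a
loop performing one state transition per step, no matter how many gates
switch in parallel underneath to implement each step.

```python
# Toy fetch-execute loop: the machine advances strictly one discrete
# step at a time, which is what makes this level sequential.

def run(program, x=0):
    """Each instruction is one state transition of an invented 3-op ISA."""
    pc, acc = 0, x                       # program counter, accumulator
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "JNZ" and acc != 0:   # jump if accumulator nonzero
            pc = arg
            continue
        pc += 1                          # exactly one step per iteration
    return acc

result = run([("ADD", 3), ("MUL", 2)], x=1)   # computes (1 + 3) * 2
```

However parallel the hardware realizing `ADD` may be internally, the
machine's behavior at this level is defined only by the sequence of
states it passes through.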

I would even extend this idea to machine learning, although I'm largely
speculating now.  I have the impression that brains not only WORK in
parallel at all levels of granularity, but also LEARN in that way.  Some
computers have implemented a form of learning, but it is almost exclusively
at a very high level (most current AI work on learning is at this level),
or only at a very low level (cf. the Perceptron).  A spectrum of adaptation
is needed.
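
As a reminder of what that very low level of adaptation looks like, here
is a minimal sketch of the classic perceptron learning rule in modern
Python (purely illustrative; the AND task is an arbitrary example):

```python
# Minimal perceptron rule (Rosenblatt): adjust the weights only when
# the current weights misclassify a training example.

def train_perceptron(examples, n_inputs, lr=0.1, epochs=20):
    """examples: list of (inputs, target) pairs with target in {0, 1}."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            output = 1 if activation > 0 else 0
            error = target - output          # -1, 0, or +1
            if error:
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

# Learn the (linearly separable) AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data, n_inputs=2)
```

The adaptation here lives entirely in one layer of weights; there is no
intermediate granularity between this and symbol-level learning programs,
which is exactly the gap in the spectrum noted above.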

Maybe the distinction between the words learning and self-organization is
only a matter of granularity too. (??)

        Just as people have faster short term memory than long term memory but
        less of it, computers have faster short term memory than long term
        memory and use less of it. These are all results of cost/benefit
        tradeoffs for each implementation, just as I presume our brains and
        bodies are.

I'm sure most people will agree that brains do not have separate memory
neurons and processing neurons or modules (or even groups of neurons).
Memory and processing are completely integrated in a human brain.
Certainly, there are not physically two types of memories, LTM and STM.
The concept of LTM/STM is only a paradigm (no doubt a very useful one), but
when it comes to implementing the concept, there is a large discrepancy
between brains and machines.

        Don't be so fast to think that real computer designers are
        ignorant of physiology.

Indeed, a lot of people I know in Computer Science do have some idea of
physiology.  (I am a CS major with some background in neurophysiology.)
Furthermore, much of the early CS emerged from neurophysiology, and was an
explicit attempt to build artificial brains (at a hardware/gate level).
However, although "real computer designers" may not be ignorant of
physiology, it doesn't mean that they actually manage to implement all the
concepts they know.  We still have a long way to go before we have
artificial brains...

        The trend towards parallelism now is more like
        the human social system of having a company work on a problem. Many
        brains, each talking to each other when they have questions or
        results, each working on different aspects of a problem. Some people
        have breakdowns, but the organization keeps going. Eventually it comes
        up with a product, although it may not really solve the problem posed
        at the beginning, it may have solved a related problem or found a
        better problem to solve.

Again, working in parallel at this level doesn't mean everything is
parallel.

                Another copyrighted excerpt from my not yet finished book on
        computer engineering modified for the network bboards, I am ever
        yours,
                                                Fred


All comments welcome.

        Rik Verstraete <rik@UCLA-CS>

PS: It may sound like I am convinced that parallelism is the only way to
go.  Parallelism is indeed very important, but still, I believe sequential
processing plays an important role too, even in brains.  But that's a
different issue...

brucec@orca.UUCP (Bruce Cohen) (10/06/83)

-------
Re the article posted by Rik Verstraete <rik@UCLA-CS>:

In general, I agree with your statements, and I like the direction of
your thinking.  If we conclude that each level of organization in a
system (e.g. a conscious mind) is based in some way on the next lower
level, it seems reasonable to suppose that there is in some sense a
measure of detail, a density of organization if you will, which has a
lower limit for a given level before it can support the next level.
Thus there would be, in the same sense, a median density over the
levels of the system (mind), and a standard deviation, which I
conjecture would be bounded in any successful system (only the top
level is likely to be wildly different in density, and then only by
being lower than the median).

	Maybe the distinction between the words learning and
	self-organization is only a matter of granularity too. (??)

I agree.  I think that learning is simply a sophisticated form of
optimization of a self-organizing system in a *very* large state
space.  Maybe I shouldn't have said "simply."  Learning at the level of
human beings is hardly trivial.
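
One toy way to picture "learning as optimization of a self-organizing
system in a very large state space" (a sketch in modern Python; the
bit-string space and scoring function are invented for illustration) is
local search that keeps any change scoring at least as well:

```python
import random

# Toy view of learning as optimization: wander a large state space,
# keeping changes that score no worse than the current state.

def hill_climb(score, state, neighbors, steps=1000, seed=0):
    rng = random.Random(seed)            # fixed seed for reproducibility
    best = state
    for _ in range(steps):
        candidate = rng.choice(neighbors(best))
        if score(candidate) >= score(best):
            best = candidate
    return best

# State space: 8-bit strings; objective: match a fixed target pattern.
target = (1, 0, 1, 1, 0, 1, 0, 0)
score = lambda s: sum(a == b for a, b in zip(s, target))
flip = lambda s: [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

best = hill_climb(score, (0,) * 8, flip)
```

Human learning is of course vastly more structured than blind local
search, but the sketch shows the sense in which "sophisticated
optimization" and "self-organization" can be the same kind of process
at different granularities.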

	Certainly, there are not physically two types of memories, LTM
	and STM.  The concept of LTM/STM is only a paradigm (no doubt a
	very useful one), but when it comes to implementing the concept,
	there is a large discrepancy between brains and machines.

Don't rush to decide that there aren't two mechanisms.  The concepts of
LTM and STM were developed as a result of observation, not from theory.
There are fundamental functional differences between the two.  They
*may* be manifestations of the same physical mechanism, but I don't
believe there is strong evidence to support that claim.  I must admit
that my connection to neurophysiology is some years in the past
so I may be unaware of recent research.  Does anyone out there have
references that would help in this discussion?