[net.arch] Computational ability of houseflies

jer@peora.UUCP (J. Eric Roskos) (04/29/86)

> With all the talk about performance metrics, consider this:
>
> How many MIPS does a single brain neuron have?
>
>
> I ask this because it seems we don't need to compute faster, but
> to compute better.  After all, brain cells have switching times in
> the MILLIsecond range.  How does the brain do it?  We should probably
> start small, so how about the question:
>
> Does any hardware currently exist that matches the real-time computational
> ability of a housefly?

Funny you should ask this... it's a very interesting subject.

The thing is, the brain doesn't seem to do computing the way current-day
machines do; in particular, it seems to contradict a lot of the
logic-based approaches to artificial intelligence.  Think about how people
do arithmetic operations, for example... they do it by table lookup!  Of
course, they also follow algorithms (add this column of 1-digit numbers,
put the "carry" on top of the next column, etc.), but the basic arithmetic
operations don't work the way they do in computers; at some early time
they memorized "two times two is four; three times two is six," etc., and
now recall these discrete facts whenever they do arithmetic.
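For concreteness, the division of labor might be sketched like this (a rough
Python sketch of my own; the table and function names are invented purely for
illustration): the single-digit facts sit in a lookup table, and the schoolbook
column-and-carry procedure is the algorithm layered on top of them.

# The "memorized" single-digit addition facts.
ADD_FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def column_add(x: str, y: str) -> str:
    """Add two decimal numbers column by column, recalling facts from the table."""
    x, y = x.zfill(len(y)), y.zfill(len(x))         # pad to equal length
    result, carry = [], 0
    for dx, dy in zip(reversed(x), reversed(y)):
        s = ADD_FACTS[(int(dx), int(dy))] + carry   # recall a memorized fact
        result.append(str(s % 10))
        carry = s // 10                             # "put the carry on top"
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(column_add("478", "964"))    # -> 1442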

Recent research seems to suggest that in general a lot of human
"computation" also works this way, with the interesting enhancement that,
if you think of it in terms of a table, table entries tend to "attract"
nearby guesses, so that from an approximation you get pulled into the
memorized answer. (Likewise, if you make an initial guess that is nearer
to another (wrong) answer, you may get pulled to that one instead and have
trouble finding the right answer as a result.) Very simple published
algorithms (albeit slow ones on a sequential machine) exist for modelling
simple forms of this operation, although other research has suggested that
a variety of specialized "functional units" exist in the brain which
aren't covered by that model. (Incidentally, some very interesting
research in cognitive psychology shows that some classes of problem
solving can be modeled in terms of n-dimensional spaces, and you can even
produce unexpected artifacts of this spatiality -- for example, people
categorizing things using attributes of the objects that are highly
nonobvious, seemingly based entirely on this spatial distance -- which
of course really isn't spatial per se, but is probably an artifact of
the number of partitions of the "bits" used to store the data.)
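Going back to the "attract nearby guesses" behavior: a caricature of it is
easy to write down (a Python sketch of my own, not any of the published
algorithms; the stored patterns and labels are invented).  Recall simply pulls
a guess to whichever memorized entry it is closest to, for better or worse.

MEMORIES = {                   # memorized "table entries", as bit strings
    "six":    "001100",
    "eight":  "110011",
    "twelve": "111000",
}

def hamming(a: str, b: str) -> int:
    """Distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def recall(guess: str) -> str:
    """Return the label of the stored entry that 'attracts' this guess."""
    return min(MEMORIES, key=lambda name: hamming(MEMORIES[name], guess))

print(recall("001101"))   # a slightly noisy "six" is pulled back to "six"
print(recall("101001"))   # a badly garbled "six" lands nearer "twelve",
                          # so it gets pulled to the wrong answer instead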

On the other hand, what does it mean for something to "compute better"?  A
lot of the things current-day computers do, people don't do so well -- for
example, memorizing and organizing extremely large numbers of very similar
things very quickly, performing fast numerical computation, etc.
Likewise, human beings tend to be more inexact, but also more
fault-tolerant (which is a property of the above model) and able to
perceive abstract properties of things (which in fact may be the result of
non-consequential thinking -- which runs somewhat counter to the way
people describe their thought processes, actually.  How many
mathematicians really admit "I was just sitting eating lunch and idly
thinking about how to prove this theorem, and suddenly it occurred to me
out of nowhere."?)

Nevertheless, these new devices based on neural research (there is an
article now almost every week on the subject in EE Times) are one of the
more interesting things going on today. (my opinion, of course!)
-- 
E. Roskos

carl%ci-dandel@ci-dandel.UUCP (05/02/86)

In article <2121@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
>if you think of it in terms of a table, table entries tend to "attract"
>nearby guesses, so that from an approximation you get pulled into the
>memorized answer. (Likewise, if you make an initial guess that is nearer
>to another (wrong) answer, you may get pulled to that one instead and have
>trouble finding the right answer as a result.) Very simple published
>algorithms (albeit slow ones on a sequential machine) exist for modelling
>simple forms of this operation, although other research has suggested that
>a variety of specialized "functional units" exist in the brain which
>aren't covered by that model. (Incidentally, some very interesting
>research in cognitive psychology shows that some classes of problem
>solving can be modeled in terms of n-dimensional spaces, and you can even
>produce unexpected artifacts of this spatiality -- for example, people
>categorizing things using attributes of the objects that are highly
>nonobvious, seemingly based entirely on this spatial distance -- which
>of course really isn't spatial per se, but is probably an artifact of
>the number of partitions of the "bits" used to store the data.)

For more on this subject, I would recommend _Parallel Models of
Associative Memory_ by Geoffrey Hinton and James Anderson.  Some of
the chapters include:

	"Models of Information Processing in the Brain"
	"A Connectionist Model of Visual Memory"
		by J. A. Feldman
	"Holography, Associative Memory, and Inductive Generalization"
		by David Willshaw
	"Implementing Semantic Networks in Parallel Hardware"
	"Catagorization and Selective Neurons"
		by James Anderson and Michael Mozer

	The book is published by Lawrence Erlbaum Associates (1981),
and is available in most moderately disreputable bookstores.

=================================================================================
UUCP: ...mit-eddie!ci-dandelion!carl	
BITNET: CARL@BROWNVM
=================================================================================

peters%cubsvax@cubsvax.UUCP (05/02/86)

In article <peora.2121> jer@peora.UUCP (J. Eric Roskos) writes:
>> With all the talk about performance metrics, consider this:
>>
>> How many MIPS does a single brain neuron have?
>>
>>
>> I ask this because it seems we don't need to compute faster, but
>> to compute better.  After all, brain cells have switching times in
>> the MILLIsecond range.  How does the brain do it?  We should probably
>> start small, so how about the question:
>>
>> Does any hardware currently exist that matches the real-time computational
>> ability of a housefly?
>
>The thing is, the brain doesn't seem to do computing the way current-day
>machines do;...  				... Think about how people
>do arithmetic operations, for example... they do it by table lookup!...
>
>if you think of it in terms of a table, table entries tend to "attract"
>nearby guesses, so that from an approximation you get pulled into the
>memorized answer. (Likewise, if you make an initial guess that is nearer
>to another (wrong) answer, you may get pulled to that one instead and have
>trouble finding the right answer as a result.) Very simple published
>algorithms (albeit slow ones on a sequential machine) exist for modelling
>simple forms of this operation...

Which brings to mind the question:  if we designed a computer as good as a
brain, would it also be as bad as a brain?

>							...How many
>mathematicians really admit "I was just sitting eating lunch and idly
>thinking about how to prove this theorem, and suddenly it occurred to me
>out of nowhere."?)

The chemist Kekule several times described his 1857 discovery of the structure
of benzene as having come to him in a vision, while gazing at a fire.
(Benzene is a ring;  he "saw" the ancient alchemical symbol of the ouroboros,
a snake swallowing its tail.)  Recently, John Wotiz, a chemistry professor at
Southern Illinois University, has ridiculed the idea that this is the way it
happened, claiming that Kekule derived the structure by "hard work" instead
of mystical insight.  (Personally, I see no contradiction between the two;
answers to hard questions usually occur to me while I'm driving home after
a hard-working, frustrating day of getting nowhere with the problem.)

Peter S. Shenkin	 Columbia Univ. Biology Dept., NY, NY  10027
{philabs,rna}!cubsvax!peters		cubsvax!peters@columbia.ARPA

msb@lsuc.UUCP (Mark Brader) (05/04/86)

Peter S. Shenkin (peters@cubsvax.UUCP) writes:
> The chemist Kekule several times described his 1857 discovery of the structure
> of benzene as having come to him in a vision, while gazing at a fire.
> (Benzene is a ring;  he "saw" the ancient alchemical symbol of the ouroboros,
> a snake swallowing its tail.)

I can't let this go uncorrected.  It was 1865, and more important,
he wasn't gazing at a fire; he was riding a bus.  Really now!

Mark Brader, transit fan

david@ztivax.UUCP (05/05/86)

>...Think about how people
>do arithmetic operations, for example... they do it by table lookup!  

>Recent research seems to suggest that in general a lot of human
>"computation" also works this way, with the interesting enhancement that,
>if you think of it in terms of a table, table entries tend to "attract"
>nearby guesses, so that from an approximation you get pulled into the
>memorized answer. (Likewise, if you make an initial guess that is nearer
>to another (wrong) answer, you may get pulled to that one instead and have
>trouble finding the right answer as a result.) 

I was wondering:  How do I detect errors in thinking?  By seeing what
other paths lead to the same conclusion, and checking whether those
"conditions" are also "true".

Now, let's say we can implement a state machine (in software) which can
do these table lookups (perhaps the table is associative, to enable
"guesses").  By remembering the state of the local processing (assuming
parallel processing), it should be possible to check the result while
letting the "reasoning" carry on.  If a fault is detected, it MIGHT be
possible to pull back the reasoning which has occurred since, but
certainly not always and not too reliably (side effects would be
difficult to undo).

This seems to be similar to how people reason.  Side effects are often
difficult to eradicate, even if the basis which originally started the
line of reasoning is later found to be false.

Also, this models the way the brain operates without a central PC, and
how processing on different fronts proceeds as long as new "inferences"
are drawn and "reasonable" concepts are coalesced into conclusions.

Limiting things to something like current software technology, and to
an organism like a fly which has a known, finite set of responses (does
not "create"), let's say the state machine is described using an
optimizable grammar and built using some kind of hyper-yacc, which
collapses equivalent states and keeps information with each state that
points back at the read/push/reduce tables, so that all the "reasons"
for reaching a state can be seen if only the state is known.
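A toy version of those two ingredients might look like the sketch below
(Python; the states, stimuli, and the one-pass merging are all invented for
illustration, and no real parser generator is involved): states with identical
outgoing behavior get folded together, and each surviving state keeps a record
of every way of reaching it.

from collections import defaultdict

# transition table: state -> {stimulus: next state}
TRANS = {
    "idle":     {"light": "orient", "touch": "flee"},
    "resting":  {"light": "orient", "touch": "flee"},   # behaves just like "idle"
    "orient":   {"light": "approach", "touch": "flee"},
    "approach": {"touch": "flee"},
    "flee":     {"calm": "idle"},
}

def collapse(trans):
    """Merge states with identical rows (one pass only; a real minimizer
    would iterate) and record the 'reasons' for entering each state."""
    groups = defaultdict(list)
    for state, row in trans.items():
        groups[tuple(sorted(row.items()))].append(state)
    rep = {s: members[0] for members in groups.values() for s in members}

    merged = {rep[s]: {stim: rep[nxt] for stim, nxt in row.items()}
              for s, row in trans.items()}
    reasons = defaultdict(set)          # state -> {(previous state, stimulus)}
    for s, row in merged.items():
        for stim, nxt in row.items():
            reasons[nxt].add((s, stim))
    return merged, reasons

merged, reasons = collapse(TRANS)
print(sorted(merged))        # "resting" has been folded into "idle"
print(reasons["flee"])       # every way of ending up in the "flee" state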

On input of stimuli, state transitions occur.  On every state
transition, a new process is spawned to perform a reasonableness
check, if multiple transitions could have caused this state.  If the
reasonableness check fails, then the process group of the
reasonableness check gets killed (all the subsequent processing and
reasonableness checks).  Now, probing around spatially close states
may find a state for which the reasonableness checks will succeed, and
the state is then changed, and processing continues from this point.
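A strictly sequential caricature of that loop (Python again; the transition
table, the "reasonableness" rule, and the notion of "nearby" are all made up
for illustration, and real spawning and killing of processes is just simulated
by ordinary control flow):

STATES = ["idle", "orient", "approach", "feed", "flee"]

def transition(state: str, stimulus: str) -> str:
    """The table lookup; a plain dict here, with one bad entry on purpose."""
    table = {("idle", "light"): "orient",
             ("orient", "odor"): "approach",
             ("approach", "odor"): "flee"}      # the deliberately bad entry
    return table.get((state, stimulus), state)

def reasonable(state: str, stimulus: str) -> bool:
    """Check the result by an independent route (here just a fixed rule)."""
    return not (state == "flee" and stimulus == "odor")  # fleeing from food is suspect

def nearby(state: str):
    """States 'spatially close' to the given one (neighbours in a list)."""
    i = STATES.index(state)
    return [s for s in STATES[max(0, i - 1):i + 2] if s != state]

def step(state: str, stimulus: str) -> str:
    nxt = transition(state, stimulus)
    if reasonable(nxt, stimulus):
        return nxt
    # check failed: abandon that line of reasoning, probe close-by states
    for candidate in nearby(nxt):
        if reasonable(candidate, stimulus):
            return candidate
    return state                                # give up, stay where we were

s = "idle"
for stim in ["light", "odor", "odor"]:
    s = step(s, stim)
    print(stim, "->", s)
# light -> orient, odor -> approach, odor -> feed  (the bad "flee" entry is overridden)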

But how are states arranged spatially in a nice way?  Guessing does
not work, because the flies will not survive long enough to "evolve"
the correct spatial orientation of states.  In humans (as was
mentioned in the article I am responding to), "attributes" are used,
although they may be obscure.  Any ideas?

	- David
seismo!unido!ztivax!david

jer@peora.UUCP (J. Eric Roskos) (05/05/86)

> Which brings to mind the question:  if we designed a computer as good as a
> brain, would it also be as bad as a brain?

This reminds me of a colleague of mine back when I briefly worked for an AI
company while I was in graduate school; he maintained that it would be a bad
thing to make an artificially-intelligent computer really work like the
human brain, because it would then also have its shortcomings -- as an
example, he cited some AI systems that were prone to "superstition," i.e.,
incorrectly assuming causality from random events (the post hoc ergo
propter hoc fallacy, that event A caused event B because A occurred
just before B).

> The chemist Kekule several times described his 1857 discovery of the
> structure of benzene as having come to him in a vision, while gazing at
> a fire. (Benzene is a ring; he "saw" the ancient alchemical symbol of the
> ouroboros, a snake swallowing its tail.) Recently, John Wotiz, a
> chemistry professor at Southern Illinois University, has ridiculed the
> idea that this is the way it happened, claiming that Kekule derived the
> structure by "hard work" instead of mystical insight.

Actually, Kekule's description would seem to me to be in keeping with
these spatial or "dimensional" models of memory -- thinking of the snake
swallowing its tail might have essentially created a "guess" (in terms of
the image of the ring) sufficiently close to the information he had
collected in his mind on benzene that the guess then gravitated towards
the "correct" structure for benzene in the way the model describes (recall
that it says that if you make a guess sufficiently close to a memorized
item, then the memorized item will draw your guess to it -- furthermore
the models from cognitive psychology say that if you give a person a piece
of information that is related in nonobvious ways to other things they
already know of, they will tend to "discover" the nonobvious relations
even though there is no evident, rational reason for their doing so).
-- 
E. Roskos

hsu@eneevax.UUCP (Dave Hsu) (05/06/86)

In article <1196@lsuc.UUCP> msb@lsuc.UUCP (Mark Brader) writes:
>Peter S. Shenkin (peters@cubsvax.UUCP) writes:
>> The chemist Kekule ... described his 1857 discovery of the structure
>> of benzene as having come to him in a vision, while gazing at a fire.
>> ...  he "saw" the ancient alchemical symbol of the ouroboros,
>> a snake swallowing its tail.)
>
>I can't let this go uncorrected.  It was 1865, and more important,
>he wasn't gazing at a fire; he was riding a bus.  Really now!
>
>Mark Brader, transit fan

This is all well and good, and not unlike the peculiar mathematical solutions
by Ramanujan that Douglas Hofstadter relates in GEB: an EGB.  But then again,
what does this have to do with his computational ability?  Did he suspect
that benzene was a ring?  Is this really closer to saying "the brothers
Montgolfier discovered the hot-air balloon while watching clothes dry over
a fire" than it is to say, "I discovered the structure of the modern high-
performance jet fighter by gazing at golf-balls", or maybe "I saw the
structure of the 32-bit processor while gazing at a 1960 map of Manhattan"?

-dave
-- 
David Hsu  (301)454-1433 || -8798  <insert fashionably late disclaimer here>
Communication & Signal Processing Lab / Engineering Computer Facility
The University of Maryland   -~-   College Park, MD 20742
ARPA:hsu@eneevax.umd.edu  UUCP:[seismo,allegra,rlgvax]!umcp-cs!eneevax!hsu

"No way, eh?  Radiation has made me an enemy of civilization!"

abc@brl-smoke.ARPA (Brint Cooper ) (05/06/86)

In article <2121@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
>>
>> Does any hardware currently exist that matches the real-time computational
>> ability of a housefly?
>
>Funny you should ask this... it's a very interesting subject.
>
>The thing is, the brain doesn't seem to do computing the way current-day
>machines do; in particular, it seems to contradict a lot of the
>logic-based approaches to artificial intelligence.  Think about how people
>do arithmetic operations, for example... they do it by table lookup!  Of
>course, they also follow algorithms (add this column of 1-digit numbers,
>put the "carry" on top of the next column, etc.), but the basic arithmetic
>operations don't work the way they do in computers; at some early time
>they memorized "two times two is four; three times two is six," etc., and
>now recall these discrete facts whenever they do arithmetic.
>
>Recent research seems to suggest that in general a lot of human
>"computation" also works this way, with the interesting enhancement that,
>if you think of it in terms of a table, table entries tend to "attract"
>nearby guesses, so that from an approximation you get pulled into the
>memorized answer.

This is a fascinating idea.  A variant of it may be to consider that the
discrete facts which we as children memorized form the 'primitives' of
our CPU in a manner analogous to the primitive operations (add, carry,
store, test, set) in digital computer hardware.  Obviously, if the
primitives are at a 'higher level,' we can afford for them to take
longer if they are the proper set for solving our more complex problems.

Perhaps computer designers need to consider more imaginatively just what
their hardware primitives should do.

-- 
Brint Cooper

	 ARPA:  abc@brl-bmd.arpa
	 UUCP:  ...{seismo,unc,decvax,cbosgd}!brl-bmd!abc

jqj@gvax.cs.cornell.edu (J Q Johnson) (05/06/86)

In article <171@ci-dandelion.UUCP> carl@ci-dandelion.UUCP writes:
>For more on this subject, I would recommend _Parallel Models of
>Associative Memory_ by Geoffrey Hinton and James Anderson.  

The interested reader should also see more recent work by Anderson.  It
should be noted that these models are the subject of substantial debate
in cognitive psychology, and should not be taken as gospel.  It is not
even clear that they are Turing-complete.  In general, my view is that
they probably do provide a plausible model for memory and for some
types of cognition, but do not really address the issues of perception
at all (one gets the impression that perception depends more on
mode-specific hardwired processes, "special purpose I/O firmware", if
you will).

Further discussion in the above vein might better move to a different
news group.  It belongs here only to the extent that it offers specific
computer architectural ideas.  Note, however, that the mind is far from
the only (or most accessible) source for such novel ideas; perhaps we
should study more carefully the information processing mechanisms and
communications patterns in hive animals such as bees to see if we can
find any useful ideas THERE for multiprocessor systems!

jer@peora.UUCP (J. Eric Roskos) (05/07/86)

> Now, let's say we can implement a state machine (in software) which can
> do these table lookups (perhaps the table is associative, to enable
> "guesses").

That's correct, I think... these are associative memories we are talking
about (as someone else pointed out)...

> But how are states arranged spatially in a nice way?  Guessing does
> not work, because the flies will not survive long enough to "evolve"
> the correct spatial orientation of states.  In humans (as was
> mentioned in the article I am responding to), "attributes" are used,
> although they may be obscure.  Any ideas?

That is something I have wondered a lot about.  I asked a cognitive
psychologist (who is in fact somewhere on the Usenet, but probably not
reading net.arch) about this, because I was wondering whether people come
"preconfigured" with something that causes the initial inputs they
receive to get stored in a spatially satisfactory manner -- i.e., in a
way such that different categories are spread uniformly through the state
space rather than being lumped together in one place, where adjacent
memories would tend to interact and confuse one another.  I don't think
the person I asked ever answered the question exactly, though, other than
to mention that the first few categories people are exposed to do seem to
have an influence on the way that they categorize other later things.

I presently tend to suspect (but haven't yet reached any real opinion)
that possibly in humans there is a hierarchical arrangement of information
storage, such that some "top-level" set of remembered states (maybe some
way of looking at an input and categorizing it based on some salient
attributes) is used to determine how to encode the things about the input
that will be remembered.  For example, I have a tendency not to be able to
remember people in terms of what they look like; I've decided that this is
because the set of things I tend to automatically remember about a person
when I first see them are not particularly good distinguishing features
(the color of their hair, how tall they are, etc; for some reason I never
remember whether or not a person has a beard, for example) -- I
hypothesize that this is because when I see "A Person", the way I encode
their attributes is in terms of hair color and height.  Probably, I
suspect, there would also be nonobvious pieces of information involved
about how to encode this information -- for example, the set of distinct
hair colors, a set of height-classifiers ("As tall as Alf"*, "real tall,"
"kind of tall," "about average," "short," "as short as Sarah"*, <heights
based on ages of children>), etc. -- which might also be managed by this
top-level information-encoding (categorizing) system.

Obviously, that is only a guess.
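Still, the guess can be made a bit more concrete.  In the sketch below
(Python; every attribute, anchor, and threshold is invented purely for
illustration), a top-level categorizer decides which encoders run at all, and
the encoders bin raw observations against remembered exemplars:

def hair_color(obs):       # salient, but a weakly distinguishing attribute
    return obs.get("hair", "unknown")

def height_class(obs):     # bin a raw height against remembered exemplars
    anchors = [(155, "as short as Sarah"), (165, "kind of short"),
               (178, "about average"), (188, "kind of tall"),
               (999, "as tall as Alf")]
    cm = obs.get("height_cm", 0)
    return next(label for limit, label in anchors if cm <= limit)

# the "top-level" system: category -> which attributes get encoded at all
ENCODERS = {
    "Person": [hair_color, height_class],
    # other categories would carry their own, different attribute sets
}

def encode(category, obs):
    return tuple(f(obs) for f in ENCODERS[category])

print(encode("Person", {"hair": "brown", "height_cm": 190, "beard": True}))
# -> ('brown', 'as tall as Alf')  -- note that the beard never gets encoded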

---------
* Notice how the two items marked by stars -- which I noticed yesterday
  seem to be real attributes I apply to people -- suggest that the categories
  are *not* predefined, since obviously "Alf" and "Sarah" mean something
  different to you than to me.  However, the names might actually be just
  convenient tags stuck on the predefined categories by association. At
  present I tend to doubt this, however.  (The names have been changed
  to protect the category-representatives.)
-- 
E. Roskos

Eat your orts!