[comp.ai.neural-nets] Neuron Digest V4 #25

neuron-request@HPLABS.HP.COM (Neuron-Digest Moderator Peter Marvit) (11/18/88)

Neuron Digest   Thursday, 17 Nov 1988
                Volume 4 : Issue 25

Today's Topics:
                           Response to Handelman
                             Neuron Resolution
            Re: Neuron Resolution (warning: this baby is long!)
          E. Tzanakou to speak on ALOPEX: an optimization method
                           Relaxation labelling


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: Response to Handelman
From:    bharucha@eleazar.Dartmouth.EDU (Jamshed Bharucha)
Date:    Tue, 15 Nov 88 15:57:02 -0500 

This is a response to Eliot Handelman's recent outburst about work on
neural net modeling of music.

Mr. Handelman makes a number of allegations that have more the tone of the
recent Presidential campaign than of a debate about research.  I'm not sure
what nerve I have touched that has caused him to lash out with sarcasm.
Whatever it is, his allegations and insinuations are false.

He alleges that "Mr. Bharucha is content to inform us of the power of his
various architectures and is apparently unwilling to let us judge for
ourselves.... if he is doing some sort of music research, he might credit
us with some curiosity as to the basis of his claims."

No, Mr. Handelman, I am not simply content to inform you of the power of
these architectures. Yes, please judge for yourself, but do so after
reading my entire corpus of work and after making a genuine attempt to
understand what I am doing. The details of my constraint satisfaction
network are given in the proceedings of the Cognitive Science Society,
1987, published by Erlbaum. The equations given there are sufficient to
implement the model if you wish to do so.  Some further perspective on this
model is given in the journal Music Perception (vol 5, no.1).
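
For readers who want the flavor of such a model without the cited papers at
hand, here is a generic constraint satisfaction sketch. To be clear, the
weights, unit count, and update rule below are invented for illustration;
they are NOT Bharucha's published equations, only the general idea of
relaxing activations under symmetric compatibility constraints:

```python
import numpy as np

# Hypothetical 4-unit network; the symmetric weights encode pairwise
# compatibility constraints (positive = mutually supportive, negative
# = mutually inhibitory). These values are illustrative only.
W = np.array([[ 0.0,  0.5, -0.5,  0.0],
              [ 0.5,  0.0,  0.0, -0.5],
              [-0.5,  0.0,  0.0,  0.5],
              [ 0.0, -0.5,  0.5,  0.0]])

def settle(a, W, steps=200, rate=0.1, decay=0.1):
    """Relax activations toward a state satisfying the constraints."""
    for _ in range(steps):
        net = W @ a
        # Interactive-activation style update: excitatory input pushes a
        # unit toward 1, inhibitory input toward 0, plus passive decay.
        delta = np.where(net > 0, net * (1 - a), net * a) - decay * a
        a = np.clip(a + rate * delta, 0.0, 1.0)
    return a

a0 = np.array([0.9, 0.1, 0.1, 0.1])   # initial evidence favors unit 0
a_final = settle(a0, W)
print(a_final.round(2))
```

Units that support each other settle to high activation together, while
incompatible units are suppressed.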

I infer from some of Mr. Handelman's comments that what irked him was the
paper in the proceedings of the AAAI workshop on Music and AI. This paper
reports some work beyond the constraint satisfaction model, in particular,
work on sequential expectancies.  This paper was designed to summarize, for
the workshop format, the work my collaborators and I have recently been
doing, condensed into the page limit assigned to contributors. If you would
like more details, Mr. Handelman, I would be happy to provide them.
Details of the Jordan sequential architecture can be found in Jordan's
paper in the 1986 proceedings of the Cognitive Science Society, which I
cite in the AAAI paper. One of my collaborators, Peter Todd
(todd@galadriel.stanford.edu), has given some details of the musical
implementation of this architecture in the proceedings of the 1988
Connectionist Summer School.  Some of the material reported at AAAI is very
recent and it is quite the norm in science to quickly report brief
summaries of recent research pending a more expanded publication.
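
For those unfamiliar with the Jordan architecture cited above, a minimal
sketch may help. The layer sizes, random weights, and decay constant below
are invented for illustration; this is only the generic output-feedback
scheme of Jordan (1986), not the musical model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class JordanNet:
    """Generic Jordan (1986)-style sequential network: the previous
    output is fed back, with exponential decay, into state units that
    are appended to the input on the next time step."""
    def __init__(self, n_in, n_hidden, n_out, decay=0.5):
        self.W_ih = rng.normal(0.0, 0.5, (n_hidden, n_in + n_out))
        self.W_ho = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.decay = decay
        self.state = np.zeros(n_out)

    def step(self, x):
        h = sigmoid(self.W_ih @ np.concatenate([x, self.state]))
        y = sigmoid(self.W_ho @ h)
        self.state = self.decay * self.state + y   # integrate past outputs
        return y

net = JordanNet(n_in=4, n_hidden=8, n_out=4)
for t in range(3):                 # run the (untrained) net forward
    y = net.step(np.eye(4)[t])     # one-hot inputs, e.g. pitch classes
    print(t, y.round(3))
```

Because the state units carry a decaying trace of past outputs, the
network's response depends on the sequence so far, which is what makes it
suitable for modeling sequential expectancies.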

No, Mr. Handelman, my network has not learned Trauerzug of Act II of
Parsifal, nor is it likely to.  If you want to know what aspects of Western
music I am addressing and what predictions I am referring to vis-a-vis a
tonal context, I suggest you take the time to read ALL my work, as well as
the major sources I cite.  This includes my papers reporting psychological
experiments on specific aspects of harmony and also includes earlier work
with Krumhansl as well as Krumhansl's own well known work. We have
addressed SPECIFIC aspects of Western (as well as Indian) music, and all
the caveats are there. We cannot repeat every detail and every caveat in
every paper. We neither claim nor imply that we have captured music in all
or even a major part of its complexity and subtlety.  Simplifications have
to be made in science, and they do not necessarily reflect a belief that
nothing else is important.  Please join us in trying to make sure that the
variables we isolate are meaningful. We welcome your constructive input,
but not your venom.

Finally, Mr. Handelman alleges that: "If Mr. Bharucha is claiming to do
original research into the architecture of temporal associators, he is at
least 7 years out of date".  There are false accusations implicit in this
statement, Mr. Handelman, and intellectual integrity compels you to offer a
swift retraction and apology on this bboard.  I cite Jordan (1986), and
Jordan cites the paper you mention by Kohonen et al (1981).  Jordan's
architecture is now widely acknowledged to be one of the most important
recent contributions on sequential networks, so Jordan is indeed the most
appropriate author for me to cite.

It is not uncommon for musicians to read more into the claims made by
psychologists and computer scientists than is intended, and probably vice versa, because of
the different histories, code words and writing styles of the different
fields. Mr. Handelman's agitation is not the first of its kind. I think it
is much more constructive, when conducting an interdisciplinary debate, to
make a concerted and GENUINE attempt to understand the meaning of an author
in another field before drawing inferences that readers in that field might
not draw.  Mr. Handelman is to be applauded for taking neural net research
on music seriously, and I would welcome the benefit of his musical
expertise while we explore the possibilities - and, yes, the limitations too
- of neural net models of music.

Jamshed Bharucha
Department of Psychology
Dartmouth College
Hanover, NH 03755
bharucha@eleazar.dartmouth.edu

------------------------------

Subject: Neuron Resolution
From:    rao@enuxha.eas.asu.edu (Arun Rao)
Organization: Arizona State Univ, Tempe
Date:    10 Nov 88 15:28:28 +0000 


        Here's the only useful reply I received to my posting (actually, I
received two from Chris Lishka, and this is his second). I got the book he
talks about from the library. It is authoritative (Kuffler was apparently a
leading light in experimental neurophysiology) and has a textbook flavor to
it. It gets into more detail than I need, but is definitely readable.
However, there is no mention of the kind of information I was looking for.
The trouble seems to be that engineers need information that
neurophysiologists never think of obtaining.
        Another useful book on the human visual system is "The Eye and
the Brain" (title ?) by David Hubel. It is the most readable book 
on the subject that I have yet encountered.
        Hope you find this useful.
 - Arun Rao

ARPA: rao@enuxha.asu.edu or rao%enuxha.asu.edu@relay.cs.net.
BITNET: agaxr@asuacvax.bitnet.

P.S.: Chris Lishka's e-mail address is lishka%uwslh@cs.wisc.edu.
 ___________________________________________________________________

Subject: Re: Synaptic Strengths and Other Neurobiological Issues

                Thanks a lot for the reply.

     You're welcome!

        I was thinking after I made the posting yesterday, and I
        realized that what I really need for the kind of studies
        I'm making is the variance in synaptic strength, rather
        than in firing rate.  Conventional neural-net wisdom
        suggests that a neuron's output is binary - i.e. it either
        fires or doesn't fire. One is not supposed to have to worry
        at all about firing rates. 
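
The "conventional wisdom" unit described in the quoted paragraph, an
all-or-none threshold device in the McCulloch-Pitts tradition, amounts to
something like the following (the weights and threshold are arbitrary):

```python
def binary_neuron(inputs, weights, threshold):
    """All-or-none unit: fires (1) iff the weighted sum of its inputs
    reaches the threshold; there is no notion of a firing rate."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With these (arbitrary) weights the unit computes a logical AND:
print(binary_neuron([1, 1], [0.6, 0.6], 1.0))   # fires
print(binary_neuron([1, 0], [0.6, 0.6], 1.0))   # does not fire
```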

     The most important lesson I learned in the Neurobiology classes I
took was that the "conventional neural-net wisdom" most AI researchers
hold is not very relevant to real nervous systems.  There are
too many important differences, and the AI models are still much too
different from the Neurobiological models to be effectively related.  My
semester project for a graduate level AI course was to try and find a
fairly close tie between AI and Neurobiology.  I succeeded in finding
interesting correlations in Associative Memory theories, but I was also
very disappointed to find so little in common between
Connectionism/Neural-Nets and Neurobiology besides the very thin
correlation between Connectionist neurons and real neurons. 

        I get the feeling that the
        measurement of synaptic strength is probably a considerably
        more difficult task - have people done it at all ? I'd
        appreciate any comments that you may have. In the meantime,
        I'll look into the kind of references you suggest. 
        
     Oh yeah!  This area is a big area in terms of research!  Realize that
the synaptic strength typically depends on the types of Neurotransmitters
that travel across the synapses.  One fairly standard fact that aids this
research is that neurons almost always release only *one* type of
Neurotransmitter (also called an NT).  However, many different terminals
(each possibly containing a different NT) can synapse at the same point on
the same dendrite, so there can be very interesting combinational effects.
Furthermore, different neurons will react differently to incoming NT's!
Not to mention the role of Calcium channels in regulating the release of
NT's.  Firing rate also figures into this as well.  Synaptic strength is a
real complex area!

     As for references, *the* introductory textbook which is highly
regarded here at the UW is:

        _From_Neuron_to_Brain_ by S. Kuffler, J. Nichols, and A. R. Martin
        Published by Sinauer Associates, Inc. (1984)

This is a textbook intended for biologists, so if you are not up on
biology, it may take a while to get through these chapters (don't worry, my
background is light on biology too!).  A quick browse through the chapters
yields the following ones which relate to synapses:

        Chap. Six:      Control of Membrane Permeability
        Chap. Eight:    Active Transport of Ions
**      Chap. Nine:     Synaptic Transmission
**      Chap. Ten:      Release of Chemical Transmitters
**      Chap. Eleven:   Microphysiology of Chemical Transmission
        Chap. Twelve:   The Search for Chemical Transmitters
**      Chap. Sixteen:  Transformation of Information by Synaptic Action
                        in Individual Neurons

The starred chapters (**'s) are likely to be very relevant.  Realize that
this is over a third of the book, so you can see that Synaptic Transmission
is a hot area!

     There are certainly other books.  One which I do not own and cannot
remember the name of was a huge, comprehensive book covering the entire
range of Neurobiology, with beautiful illustrations.  I would search the
nearest medical library for more information (we are blessed here with
having a wonderful medical lib.).  Also, if you can reach some Professors
in Neurobiology, they should certainly be able to help you.  I am but an
undergrad! 

                                                .oO Chris Oo.

------------------------------

Subject: Re: Neuron Resolution (warning: this baby is long!)
From:    lishka@uwslh.UUCP (Fish-Guts)
Organization: U of Wisconsin-Madison, State Hygiene Lab
Date:    11 Nov 88 22:15:15 +0000 

In article <183@enuxha.eas.asu.edu> rao@enuxha.eas.asu.edu (Arun Rao) writes:
>
>       Here's the only useful reply I received to my posting (actually, I 
>received two from Chris Lishka, and this is his second). 

Hello, this is Chris speaking....

>I got the
>book he talks about from the library. It is authoritative (Kuffler
>was apparently a leading light in experimental neurophysiology) 

     If I remember my neurobiologists correctly, Kuffler was one of the big
names in the study of the human visual system, especially the retina.
Hence I found that the Kuffler book was heavy on the visual system; other
books are not.

>and
>has a textbook flavor to it. It gets into more detail than I need,
>but is definitely readable. However, there is no mention of the
>kind of information I was looking for. 

     I am sorry to hear that.  You will probably need to hunt up some
actual research papers that deal with your area more specifically.

>The trouble seems to be that
>engineers need information that neurophysiologists never think of
>obtaining.

     Actually, this was originally my opinion, *before* I took the
neurobiology courses.  Afterwards, it dawned on me that engineers
(both in software and hardware) are looking for "information" that is
too simple and vague.  There is too much going on in the nervous
system (even in single neurons) to consider just the firing rate
variance, or just the variance in the number of synapses onto a single
neuron.  The types of information that engineers want have probably
been considered by neurobiologists many years ago. 

     One of the biggest lessons I learned was that there is no such thing
as a "typical" neuron.  The nervous systems of living creatures
(especially humans) are much too varied, and the structures and types
of neurons take on many, many different forms.  If one considers this,
then much of the simple data (e.g. typical firing rates, variance in
dendrite trees, variance in the length and width of axons, etc.) is
fairly meaningless unless one is looking at a very specific area in
the nervous system.  And so much goes on in the nervous system that it
is really hard to determine whether or not a particular characteristic
of a group of neurons is a contributing factor in why it works the way
it does.  Simple data is usually only good for defining very general
characteristics about neurons (e.g. the fact that some axons are
myelinated allows signals to travel much faster down the axon body).

     A good example lies in the study of the retina.  From what the
professors taught us, early on the neurobiology community began to
study the structure of the retina because it was thought that it was
composed of fairly "typical" neurons.  Besides this, it was easier to
study the retina because (a) the input source which the visual system
interpreted (i.e. light!) was the easiest to measure of all the
senses and (b) it is easier to look at retinas in other animals than
to study the inner ear or skin responses.  Since the early
studies, a great amount of work has been performed on the retina, and
much is known about the layered structure, the types of neurons, and
the variety of interconnections between retinal neurons (there is
still much to learn, though).  However, the neurobiologists also
discovered that the neurons in the retina tended to be much different
from other neurons, and were not "typical" as was once hoped.  Also,
neurobiologists now tend to believe that the retina serves as a sort
of "preprocessing" stage to the visual cortex, which is believed to
handle the "higher order" interpretations of the visual inputs
(although it is very possible that the retina serves other purposes,
such as an Associative Memory for images). 

     The "moral" of the above story is that even though research into
retinal neurobiology *has* defined much of what goes on in the visual
system, it hasn't shed all that much light on what happens in other
areas of the human brain.  The many sections of the nervous system can
be very different in structure, and vary from massive layers of highly
organized neurons in very regular connection patterns to other areas
where there are incredibly different types of neurons that are
connected in a more "random" fashion.  Do not take any area in the
nervous system to be a "typical" area; all are fairly specialized to
the function that they serve. 

     It is for these reasons that I believe current AI
"neural-network" theories are much too simple to be used as models for
biological neural-networks.  They should not be taken as such, but
instead should be respected for what they are: interesting studies
into massively connected networks of simple elements.  I feel that
Connectionism is a wonderful study of the characteristics and power
available using massively connected parallel networks.  Real nervous
systems will share some (but probably not all) of these
characteristics.  However, current neural networks are not very good
models of real biological neural networks, because the Connectionist
models are much too simplistic and "generic" to be that effective.
Therefore, I would also be careful about taking specific measurements
(such as the variance in firing rates, the number of connections in
real neural systems, etc.) and applying them to Connectionist models
and expecting them to be useful.  It seems to me that at the stage
that artificial neural-networks are at, only really basic
characteristics should be considered.

     I will get down off of my soapbox now!  Sorry about the length,
but it is a topic I feel somewhat strongly about.  I think that both
neurobiology and AI (especially Connectionism) are two amazing fields
of science, and each has a lot to learn from the other.  Each field
should be respected for what it is.  Here's to many more years of
fruitful research and cooperation between the two!

>P.S.: Chris Lishka's e-mail address is lishka%uwslh@cs.wisc.edu.

     This address may or may not work (isn't email great? ;-)  See my
signature below for more information.

                                        .oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp
                                     ----
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."
                         - Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"


------------------------------

Subject: E. Tzanakou to speak on ALOPEX: an optimization method
From:    pratt@paul.rutgers.edu (Lorien Y. Pratt)
Organization: Rutgers Univ., New Brunswick, N.J.
Date:    09 Nov 88 19:39:51 +0000 



                                 Fall, 1988  
                     Neural Networks Colloquium Series 
                                 at Rutgers  

                     ALOPEX: Another optimization method
                     -----------------------------------

                                E. Tzanakou
                   Rutgers University Biomedical Engineering

                      Room 705 Hill center, Busch Campus  
                    Friday November 18, 1988 at 11:10 am 
                      Refreshments served before the talk


                                   Abstract   


The ALOPEX process was developed in the early 70's by Harth and Tzanakou as
an automated method of mapping Visual Receptive Fields in the Visual
Pathway of animals. Since then it has been used as a "universal"
optimization method that lends itself to a number of optimization problems.

The method optimizes a cost function over a large number of parameters
that are adjusted simultaneously. It is iterative and stochastic in nature
and tends to avoid local extrema.

Computing times largely depend on the number of iterations required for
convergence and on times required to compute the cost function. As such
they are problem dependent. On the other hand, ALOPEX has a unique inherent
feature: it can be run in parallel, which reduces computing times.
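
As a rough sketch of the flavor of the method: the variant below is written
from the description above and is NOT the exact Harth-Tzanakou update rule;
the step sizes, noise level, and toy cost function are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def alopex_minimize(cost, x, step=0.05, noise=0.02, iters=5000):
    """Schematic ALOPEX-style iteration. All parameters move at once;
    each parameter keeps its previous direction if the cost fell and
    reverses it if the cost rose, plus a small random term that lets
    the walk escape local extrema."""
    dx = rng.choice([-step, step], size=x.size)
    c_prev = cost(x)
    for _ in range(iters):
        x = x + dx
        c = cost(x)
        # Correlation rule: sign of (previous move) x (cost change).
        dx = step * np.sign(dx) * np.sign(c_prev - c)
        dx = dx + rng.normal(0.0, noise, x.size)   # stochastic term
        c_prev = c
    return x, c_prev

# Toy cost function: a quadratic bowl with its minimum at (1, -2).
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
x_best, c_best = alopex_minimize(f, np.array([4.0, 3.0]))
print(x_best.round(2), round(c_best, 4))
```

Note that the update never uses gradients of the cost, only whether the
last simultaneous move helped or hurt, which is what makes the scheme both
problem independent and easy to parallelize.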

Several applications of the method in physical, physiological and pattern
recognition problems will be discussed.

Lorien Y. Pratt                            Computer Science Department
pratt@paul.rutgers.edu                     Rutgers University
                                           Busch Campus
(201) 932-4634                             Piscataway, NJ  08854

------------------------------

Subject: Relaxation labelling
From:    raja@frith.egr.msu.edu ()
Organization: Michigan State University, Engineering, E. Lansing
Date:    12 Nov 88 17:15:03 +0000 

I need references to implementation of relaxation labelling algorithms on
neural networks. It seems such a thing should be possible, since relaxation
labelling is also intimately connected to optimization of a compatibility
function (Hummel & Zucker, IEEE PAMI, 1983), similar to Energy functions in
feedback networks.
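
The connection is visible in the standard nonlinear relaxation labelling
update (in the Rosenfeld-Hummel-Zucker style), sketched below; the
two-object toy problem and the compatibility values are invented for
illustration:

```python
import numpy as np

def relax(p, r, iters=50):
    """Nonlinear relaxation labelling (Rosenfeld-Hummel-Zucker style).
    p: (n_objects, n_labels) label probabilities; each row sums to 1.
    r: (n_obj, n_obj, n_lab, n_lab) compatibilities in [-1, 1].
    Labels receiving positive support from their neighbours are
    boosted; rows are renormalized after every sweep."""
    for _ in range(iters):
        # Support: q[i, lam] = sum_j sum_mu r[i, j, lam, mu] * p[j, mu]
        q = np.einsum('ijlm,jm->il', r, p)
        p = p * (1.0 + q)
        p = p / p.sum(axis=1, keepdims=True)
    return p

# Toy problem: two objects, two labels; the compatibilities favour
# both objects taking the SAME label (all values invented).
r = np.zeros((2, 2, 2, 2))
r[0, 1] = r[1, 0] = np.array([[ 1.0, -1.0],
                              [-1.0,  1.0]])
p0 = np.array([[0.6, 0.4],    # object 0 leans toward label 0
               [0.5, 0.5]])   # object 1 is undecided
labels = relax(p0, r)
print(labels.round(3))
```

Each sweep increases the average local consistency of the labelling, which
is the same kind of quantity a feedback network's energy function measures.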

Please post on the bboard, or mail to :

raja@frith.egr.msu.edu


This is URGENT !!!  Thanks,


------------------------------

End of Neurons Digest
*********************