[comp.ai.neural-nets] Neuron Digest V6 #43

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (07/14/90)

Neuron Digest   Friday, 13 Jul 1990
                Volume 6 : Issue 43

Today's Topics:
                        Re: Neuron Digest V6 #42
                        Re: Neuron Digest V6 #42
                            neural inhibition
                  Re: A question about neural inhibition
                            Kohonen's network
                       Protein analysis using ANN
                               References
                           Answer to K. Morse
                      how our genes give us brains
                     Anybody in Poland working in NN
             Cohen, Dunbar, & McClelland already published?
                               job opening
                       Evolving Networks - New TR
                         Tech Reports Available
                    Report on Non Linear Optimization
   Call for Participation in Connectionist Natural Language Processing
                    Apologies on Call for Participation
                    Technology Transfer Mailing List


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Re: Neuron Digest V6 #42
From:    "Meyer E. Nigri" <M.Nigri@Cs.Ucl.AC.UK>
Date:    Tue, 10 Jul 90 10:04:50 +0100


David McKee writes:

<Digressing a moment, I have been thinking about creating a standard for a
<software simulation/hardware description language for neural nets (I work
<with simulation languages and HDLs, specifically VHDL). 

I would like to point out that such an idea has already been
realized in the PYGMALION project.

Meyer.
+--------------------------+-----------------------------------------------+
|Meyer Elias Nigri         |   JANET:mnigri@uk.ac.ucl.cs                   |
|Dept. of Computer Science |  BITNET:mnigri%uk.ac.ucl.cs@UKACRL            |
|University College London |Internet:mnigri%cs.ucl.ac.uk@nsfnet-relay.ac.uk|
|Gower Street              | ARPANet:mnigri@cs.ucl.ac.uk                   |
|London WC1E 6BT           |    UUCP:...!mcvax!ukc!ucl-cs!mnigri           |
+--------------------------+-------------------------+---------------------+
|Tel: +44 (071)-387-7050   | Fax: +44 (071)-387-1397 |    Telex: 28722     |
|               ext. 3701  |                         |                     |
+--------------------------+-------------------------+---------------------+

[[ Editor's Note: Perhaps someone would care to explain what the
PYGMALION project is or was? -PM ]]

------------------------------

Subject: Re: Neuron Digest V6 #42
From:    J. P. Letellier <jp@radar.nrl.navy.mil>
Date:    Tue, 10 Jul 90 17:50:02 -0400

[[ In previous Digest, "DAVE MCKEE" <mckee@tisss.radc.af.mil> writes: ]]

> I would like to pose a question to the list about inhibiting neurons.  

Dave,

        Why don't you try your simulations in VHDL?  (-: It is optimized
for interconnects and for duplication of identical parts, so you only
need to embed whatever non-linear function you intend to demonstrate
inside the cell.  You could actually build a library of various cells,
and compare (or even mix) their actions on different functions and
problems.  Since VHDL is event driven, it should be optimal for this
task!!  Sounds like fun.

        If you really don't want to work with VHDL, then I would suggest
writing your simulation language in either Ada or C++.  Either would
allow you to focus better on the objects and their actions.  C may be
powerful, but you easily get lost in the code and miss the intuition you
are trying to establish.

jp 

------------------------------

Subject: neural inhibition
From:    tony@helmholtz.sdsc.edu (Tony Bell)
Date:    Tue, 10 Jul 90 23:50:06 +0100


Dave McKee's question about inhibition that can veto firing is an
interesting one.  What he is talking about is a non-linear form of
inhibition called 'shunting' or 'silent' inhibition.  This has been
postulated by Christof Koch to be a component of direction selectivity in
retinal ganglion cells [1], though Rodney Douglas et al. cast doubt on
its importance in visual cortex [2].  Anatomically, though, it is
interesting that supposedly inhibitory cortical synapses tend to
predominate on the cell body, the thick dendrites, and the start of the
axon, where they can have maximal effect in vetoing, or at least
modulating, the results of excitatory input.
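
To make the distinction concrete, here is a steady-state sketch of a
one-compartment conductance model (parameter values are illustrative
only): with the inhibitory reversal potential at rest, inhibition is
'silent' on its own but divisively vetoes excitation, whereas a
hyperpolarizing input subtracts.

#include <stdio.h>

/* Steady-state potential of a conductance-based compartment:
 * weighted average of reversal potentials, weighted by conductance. */
static double v_steady(double g_l, double e_l,
                       double g_e, double e_e,
                       double g_i, double e_i)
{
    return (g_l * e_l + g_e * e_e + g_i * e_i) / (g_l + g_e + g_i);
}

int main(void)
{
    double g_l = 1.0,  e_l = -70.0;  /* leak: rest at -70 mV */
    double g_e = 0.5,  e_e =   0.0;  /* excitatory input     */
    double e_shunt = -70.0;          /* shunting: E_i = rest */
    double e_hyper = -90.0;          /* hyperpolarizing E_i  */

    printf("excitation alone:      %6.2f mV\n",
           v_steady(g_l, e_l, g_e, e_e, 0.0, e_shunt));
    printf("plus shunting g_i=4:   %6.2f mV\n",   /* pulled back to rest */
           v_steady(g_l, e_l, g_e, e_e, 4.0, e_shunt));
    printf("plus hyperpol. g_i=4:  %6.2f mV\n",   /* driven below rest   */
           v_steady(g_l, e_l, g_e, e_e, 4.0, e_hyper));
    return 0;
}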

  I agree with Dave that this is the kind of thing that should
be in the connectionist repertoire, especially if we want to take clues
from the brain.  You can look up my attempt in [3] if you like.  It
includes a learning algorithm, called Artificial Dendritic Learning, able
to deal with non-linearities at a synapse or dendritic branch point.
Unfortunately it's static.

  If anyone is interested, two good books approach these biological
issues from an introductory (but very detailed) viewpoint [4] and from a
computational viewpoint [5].

Tony Bell

Refs.

[1] Koch et al, Retinal ganglion cells: a functional interpretation
of dendritic morphology. Phil. Trans. R. Soc. London [Biol] 298:
227-264 (1982).
[2] Douglas et al, Nature 332:642-644 (1988)
[3] Bell T, Higher-order learning in 'Artificial Dendritic Trees', in
Touretzky (ed), Advances in Neural Information Processing Systems 2
(1990)
[4] The Synaptic Organisation of the Brain, 3rd edition, ed. Gordon
Shepherd, 1990. Now in paperback (~$30), Oxford Univ. Press
[5] Methods in Neuronal Modeling, eds, Koch C & Segev I, MIT press,
1989


------------------------------

Subject: Re: A question about neural inhibition
From:    Jonathan Delatizky <delatizk@BBN.COM>
Date:    Wed, 11 Jul 90 10:21:38 -0400

David McKee asks some interesting questions about inhibitory mechanisms
in the dendritic fields of real neurons.

I'm no longer involved in work on these topics, so there may be newer
work about which I'm unaware.  For the same reason, I won't be able to
give precise citations.

Wilfrid Rall developed an enormous body of material on models of cable
conduction in dendritic fields.  Both excitatory and inhibitory inputs
are supported in his methodology.  The models are detailed, so the
computational complexity of solving them for realistic spatial
configurations takes a lot of CPU power.  In addition to papers in the
biophysical and neurobiological literature, he published at least one
monograph on the subject.
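
For readers who want the starting point, the standard passive cable
equation underlying this line of modeling (stated from general
knowledge, not from a specific Rall paper) is, in LaTeX notation:

    \lambda^2 \frac{\partial^2 V}{\partial x^2}
        = \tau_m \frac{\partial V}{\partial t} + V,
    \qquad \lambda = \sqrt{r_m / r_i}, \qquad \tau_m = r_m c_m

where V is the membrane potential relative to rest, \lambda the space
constant, and \tau_m the membrane time constant.  Synaptic inputs enter
as local conductance terms; an inhibitory conductance with reversal
potential near rest simply increases the local leak, which is one way to
see the "short-circuit" effect described below.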

The idea that some forms of inhibitory input may act to short-circuit a
region of the dendritic tree is also well established, though I can give
you no names or citations to back this up.

Finally, Jerry Lettvin's group at MIT investigated the influence of the
configuration of axon terminal branches and their prior history of
activation on failure of conduction at axonal terminal branch points.
Not quite the same mechanism, and a very different location, but also
relevant to the general nature of the query.

There is no question, of course, that the "neurons" included in almost
all so-called neural network models ignore the complex computations that
are certainly performed by subthreshold interactions in dendritic fields.
This does not invalidate the results of such work; rather, it suggests
that the field has made some poor terminological choices.

------------------------------

Subject: Kohonen's network
From:    JJ Merelo <jmerelo@ugr.es>
Date:    06 Jul 90 12:49:00 +0200


        I am working on Kohonen's network.  I was previously using a
network with 15 inputs and an 8x8 output layer, with the parameters
stated in Kohonen's 1984 book (also given in Aleksander's book).  I
switched to 16 inputs and a 9x9 output layer, with the same parameters,
and everything got screwed up.  Should I use the same parameters?  How
should they be changed?  What do they depend on?

        Please, somebody answer, or I'll throw my Sun workstation
through the window.


                                JJ
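
As a point of reference for this question: the parameters at issue are
the gain (learning rate) and the neighbourhood radius, and common
practice (not a claim about Kohonen's exact 1984 values) is to scale the
initial radius with the map side and shrink both over training, so a 9x9
map generally wants a larger starting neighbourhood and more steps than
an 8x8 one.  A minimal C sketch of one training step, with purely
illustrative schedules:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define IN    16            /* input dimension    */
#define SIDE   9            /* map is SIDE x SIDE */

static double w[SIDE][SIDE][IN];

static void train_step(const double *x, long t, long t_max)
{
    /* Linearly decaying gain and neighbourhood radius; the starting
     * radius scales with the map side. */
    double alpha  = 0.9 * (1.0 - (double)t / t_max);
    double radius = (SIDE / 2.0) * (1.0 - (double)t / t_max);
    int bi = 0, bj = 0;
    double best = HUGE_VAL;

    /* Find the best-matching unit (smallest Euclidean distance). */
    for (int i = 0; i < SIDE; i++)
        for (int j = 0; j < SIDE; j++) {
            double d = 0.0;
            for (int k = 0; k < IN; k++) {
                double diff = x[k] - w[i][j][k];
                d += diff * diff;
            }
            if (d < best) { best = d; bi = i; bj = j; }
        }

    /* Move every unit inside the neighbourhood toward the input. */
    for (int i = 0; i < SIDE; i++)
        for (int j = 0; j < SIDE; j++)
            if (abs(i - bi) <= radius && abs(j - bj) <= radius)
                for (int k = 0; k < IN; k++)
                    w[i][j][k] += alpha * (x[k] - w[i][j][k]);
}

int main(void)
{
    long t_max = 10000;
    double x[IN];

    /* Random initial weights, then random training inputs. */
    for (int i = 0; i < SIDE; i++)
        for (int j = 0; j < SIDE; j++)
            for (int k = 0; k < IN; k++)
                w[i][j][k] = (double)rand() / RAND_MAX;

    for (long t = 0; t < t_max; t++) {
        for (int k = 0; k < IN; k++)
            x[k] = (double)rand() / RAND_MAX;
        train_step(x, t, t_max);
    }
    printf("trained %dx%d map for %ld steps\n", SIDE, SIDE, t_max);
    return 0;
}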

------------------------------

From:    BRUNAK@nbivax.nbi.dk
Date:    Sun, 08 Jul 90 17:02:00 +0200
Subject: Protein analysis using ANN

Title:

``Analysis of the Secondary Structure of the Human Immunodeficiency
Virus (HIV) Proteins p17, gp120, and gp41 by Computer Modeling Based on
Neural Network Methods''

H. Andreassen, H. Bohr, J. Bohr, S. Brunak, T. Bugge, 
R.M.J. Cotterill, C. Jacobsen, P. Kusk, B. Lautrup, 
S.B. Petersen, T. Saermark, and K. Ulrich, in 
     
Journal of Acquired Immune Deficiency Syndromes (AIDS),
vol. 3, 615-622, 1990.


------------------------------

Subject: References
From:    demelerb@bionette.CGRB.ORST.EDU (Borries Demeler - Biochem)
Date:    Tue, 10 Jul 90 12:54:52 -0700


Hi,

I would like to find out whether neural networks have been used, in any
way, for sequence analysis of nucleic acids, i.e., RNA and DNA.  If you
have some information or references, I would greatly appreciate it if you
could share them with me by e-mailing to:

demelerb@bionette.cgrb.orst.edu

(Borries Demeler, Dept. of Biochemistry and Biophysics, Oregon State
University, Corvallis, Or. 97331-6503)

Thank you very much,
 -Borries-

------------------------------

Subject: Answer to K. Morse
From:    JJ Merelo <jmerelo@ugr.es>
Date:    11 Jul 90 13:19:00 +0200


        As a tentative answer to your last question, I think (and it
comes from "The Enchanted Loom" by R. Jastrow) that the brain's structure
comes from the evolutionary past of man.  The brain is composed of
several layers, the inner ones corresponding to the more distant past (as
in reptiles) and the outer ones to the present (as in man).  Thus we
would have a reptilian brain layer (roughly corresponding to the brain
stem), a mammalian brain layer (roughly the limbic system), and an outer
"human" layer (the gray matter of the neocortex, I think).  Anyway, if
you can get that book, it is quite interesting, as it proposes adding
another, cybernetic, layer.

        About the other question, I think the difficult thing is to
measure the quality of the brain.  You cannot say, as is the case with
peas, that the brain is yellow, or big, or small.  The only thing you can
do is give some benchmarks, like the several IQ measures.  So the first
step before measuring genetic inheritance in the brain is to establish a
set of parameters by which to measure it.

        I don't know if this answers any of your questions, but in any
case it may also be a matter for discussion.

                        JJ Merelo
                        JMERELO@UGR.ES


------------------------------

Subject: how our genes give us brains
From:    Stephen Smoliar <smoliar@vaxa.isi.edu>
Date:    Thu, 12 Jul 90 11:38:35 -0700

Kingsley Morse's question of how our genes ultimately provide us with
brains structured the way they are is one which has been pursued by
Gerald Edelman.  His book NEURAL DARWINISM makes a case for the argument
that ALL physiological structure arises from a variety of selective
processes.  His concrete examples are concerned with chick feathers, but
he extrapolates the argument to include not only the general architecture
of the brain but also the neuronal wiring therein.  This material is
apparently discussed at greater length in his subsequent book,
TOPOBIOLOGY; but I have not yet had a chance to look at that one.


------------------------------

Subject: Anybody in Poland working in NN
From:    JJ Merelo <jmerelo@ugr.es>
Date:    11 Jul 90 16:07:00 +0200


        I am going to Poland very soon on a tourist visit, and I would
like to make contact with anybody working on the same subject.  If there
is anybody in Cracow or Warsaw working on any aspect of neural networks,
please e-mail to

                        JJ Merelo
                        JMERELO@UGR.ES



------------------------------

Subject: Cohen, Dunbar, & McClelland already published?
From:    Sven Blankenberger <I3160903%DBSTU1.BITNET@CUNYVM.CUNY.EDU>
Organization: Dept. of Psychology, University of Braunschweig
Date:    Wed, 11 Jul 90 11:17:14 -0500


In their 1989 Psychological Review article Seidenberg & McClelland
cited the following:

Cohen, J., Dunbar, K., & McClelland, J. L. (1989) On the control
   of automatic processes: A parallel distributed processing model
   of the Stroop task. Manuscript submitted for publication.

Does anyone know where to find this article?  If so, would you please
e-mail the complete reference; otherwise, would you please e-mail the
address of one of the authors.

Many thanks
   Sven

Sven Blankenberger       (e-mail: i3160903@dbstu1.bitnet)
Dept. of Psychology
University of Braunschweig
Spielmannstr. 19
3300 Braunschweig, West Germany

------------------------------

Subject: job opening
From:    Mike Cohen <mike@speech.sri.com>
Date:    Wed, 11 Jul 90 14:06:18 -0700

There is a job opening on the research staff at SRI to participate
in research on neural nets applied to computer speech recognition.
The qualifications, in order of priority, include:

   Background in neural nets
   Strong C programming skills
   Background in speech recognition
   MS/PhD

For information, contact:

  Dr. Michael H. Cohen
  SRI International, Rm EK182
  333 Ravenswood Ave.
  Menlo Park, CA 94025

  (415) 859-5977

  mcohen@speech.sri.com



------------------------------

Subject: Evolving Networks - New TR
From:    rbelew@UCSD.EDU (Rik Belew)
Date:    Tue, 26 Jun 90 05:26:18 -0700

                          EVOLVING NETWORKS:
                     USING THE GENETIC ALGORITHM
                     WITH CONNECTIONIST LEARNING
                                   
                           Richard K. Belew
                            John McInerney
                         Nicol N. Schraudolph
                                   
              Cognitive Computer Science Research Group
               Computer Science & Engr. Dept. (C-014)
                    Univ. California at San Diego
                          La Jolla, CA 92093
                           rik@cs.ucsd.edu
                                   
                    CSE Technical Report #CS90-174
                              June, 1990
                                   
                               ABSTRACT

It is appealing to consider hybrids of neural-network learning algorithms
with evolutionary search procedures, simply because Nature has so
successfully done so.  In fact, computational models of learning and
evolution offer theoretical biology new tools for addressing questions
about Nature that have dogged that field since Darwin.  The concern of
this paper, however, is strictly artificial: Can hybrids of connectionist
learning algorithms and genetic algorithms produce more efficient and
effective algorithms than either technique applied in isolation?  The
paper begins with a survey of recent work (by us and others) that
combines Holland's Genetic Algorithm (GA) with connectionist techniques
and delineates some of the basic design problems these hybrids share.
This analysis suggests the dangers of overly literal representations of
the network on the genome (e.g., encoding each weight explicitly).  A
preliminary set of experiments that use the GA to find unusual but
successful values for BP parameters (learning rate, momentum) is also
reported.  The focus of the report is a series of experiments that use
the GA to explore the space of initial weight values, from which two
different gradient techniques (conjugate gradient and back propagation)
are then allowed to optimize.  We find that use of the GA provides much
greater confidence in the face of the stochastic variation that can
plague gradient techniques, and can also allow training times to be
reduced by as much as two orders of magnitude.  Computational trade-offs
between BP and the GA are considered, including discussion of a software
facility that exploits the parallelism inherent in GA/BP hybrids.  This
evidence leads us to conclude that the GA's GLOBAL SAMPLING
characteristics complement connectionist LOCAL SEARCH techniques well,
leading to efficient and reliable hybrids.
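
To make the GA side of the hybrid concrete, here is a minimal
generational GA sketch (a hypothetical fragment, not the authors' code):
each genome is a candidate parameter vector, such as a set of initial
weights, and the fitness call, which in the report's setting would run a
few epochs of backprop from that starting point, is replaced here by a
stand-in quadratic so the sketch is self-contained.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POP   20
#define GENES  8
#define GENS  50

static double genome[POP][GENES];

static double frand(void) { return (double)rand() / RAND_MAX; }

/* Stand-in fitness: here, the negative distance from an arbitrary
 * optimum; in the report's setting this would be the negative training
 * error after a few epochs of backprop started from this genome. */
static double fitness(const double *g)
{
    double err = 0.0;
    for (int i = 0; i < GENES; i++)
        err += (g[i] - 0.5) * (g[i] - 0.5);
    return -err;
}

int main(void)
{
    for (int p = 0; p < POP; p++)
        for (int i = 0; i < GENES; i++)
            genome[p][i] = 4.0 * frand() - 2.0;   /* random initial pop */

    for (int gen = 0; gen < GENS; gen++) {
        double child[POP][GENES];
        for (int c = 0; c < POP; c++) {
            /* Binary tournament selection for two parents. */
            int a = rand() % POP, b = rand() % POP;
            int ma = fitness(genome[a]) > fitness(genome[b]) ? a : b;
            a = rand() % POP;  b = rand() % POP;
            int pa = fitness(genome[a]) > fitness(genome[b]) ? a : b;

            /* One-point crossover plus occasional small mutations. */
            int cut = rand() % GENES;
            for (int i = 0; i < GENES; i++) {
                child[c][i] = (i < cut) ? genome[ma][i] : genome[pa][i];
                if (frand() < 0.05)
                    child[c][i] += 0.2 * (frand() - 0.5);
            }
        }
        memcpy(genome, child, sizeof genome);
    }

    double best = fitness(genome[0]);
    for (int p = 1; p < POP; p++)
        if (fitness(genome[p]) > best)
            best = fitness(genome[p]);
    printf("best fitness after %d generations: %f\n", GENS, best);
    return 0;
}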

        --------------------------------------------------

If possible, please obtain a postscript version of this technical report
from the pub/neuroprose directory at cheops.cis.ohio-state.edu.
Here are the directions:

/***    Note:  This file is not yet in place.  Give us a few days,      ***/
/***    say until after 4th of July weekend, before you try to get it.  ***/

unix> ftp cheops.cis.ohio-state.edu          # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): neuron
ftp> cd pub/neuroprose
ftp> type binary
ftp> get
(remote-file) evol-net.ps.Z
(local-file) foo.ps.Z
ftp> quit
unix> uncompress foo.ps.Z
unix> lpr -P(your_local_postscript_printer) foo.ps

If you do not have access to a postscript printer, copies of this
technical report can be obtained by sending requests to:

        Kathleen Hutcheson
        CSE Department (C-014)
        Univ. Calif. -- San Diego
        La Jolla, CA 92093

Ask for CSE Technical Report #CS90-174, and enclose $3.00 to cover
the cost of publication and postage.




------------------------------

Subject: Tech Reports Available
From:    Eduardo Sontag <sontag@hilbert.RUTGERS.EDU>
Date:    Wed, 27 Jun 90 16:27:35 -0400

The following report is now available:

          "On the recognition capabilities of feedforward nets"
         by Eduardo D. Sontag, SYCON Center, Rutgers University.

ABSTRACT: In this note we deal with the recognition capabilities of
various feedforward neural net architectures, analyzing the effect of
direct input to output connections and comparing Heaviside (threshold)
with sigmoidal response units.  The results state, roughly, that allowing
direct connections or allowing sigmoidal responses doubles the
recognition power of the standard architecture (no connections, Heaviside
responses) which is often assumed in theoretical studies.  Recognition
power is expressed in terms of various measures, including worst-case and
VC-dimension, though in the latter case, only results for subsets of the
plane are proved (the general case is still open).  There is also some
discussion of Boolean recognition problems, including the example of
computing N-bit parity with about N/2 sigmoids.
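
For context on the parity example: the classical construction with
Heaviside units computes N-bit parity with N hidden units, where hidden
unit k fires when at least k inputs are on and the output sums them with
alternating signs.  A minimal C sketch of that baseline construction
(stated from general knowledge; the report's roughly-N/2 sigmoid
construction is not reproduced here):

#include <stdio.h>

#define N 5

static int heaviside(double x) { return x >= 0.0; }

static int parity_net(const int *x)
{
    int s = 0;
    double out = 0.0;
    for (int i = 0; i < N; i++)
        s += x[i];                         /* number of active inputs   */
    for (int k = 1; k <= N; k++)           /* hidden unit k: H(s-k+0.5) */
        out += (k % 2 ? 1.0 : -1.0) * heaviside(s - k + 0.5);
    return out > 0.5;
}

int main(void)
{
    /* Check the net against true parity on all 2^N inputs. */
    for (int m = 0; m < (1 << N); m++) {
        int x[N], p = 0;
        for (int i = 0; i < N; i++) {
            x[i] = (m >> i) & 1;
            p ^= x[i];
        }
        if (parity_net(x) != p) {
            printf("mismatch at input %d\n", m);
            return 1;
        }
    }
    printf("parity correct on all %d inputs\n", 1 << N);
    return 0;
}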

 ---------------------------------------------------------------------------
To obtain copies of the postscript file, please use Jordan Pollack's service:

Example:
unix> ftp cheops.cis.ohio-state.edu          # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): <ret>
ftp> cd pub/neuroprose
ftp> binary
ftp> get
(remote-file) sontag.capabilities.ps.Z
(local-file) foo.ps.Z
ftp> quit
unix> uncompress foo.ps.Z
unix> lpr -P(your_local_postscript_printer) foo.ps

 ----------------------------------------------------------------------------
If you have any difficulties with the above, please send e-mail to
sontag@hilbert.rutgers.edu.   DO NOT "reply" to this message, please.


------------------------------

Subject: Report on Non Linear Optimization
From:    Manoel Fernando Tenorio <tenorio@ee.ecn.purdue.edu>
Date:    Mon, 02 Jul 90 11:06:05 -0500


This report is now available from Purdue University.  There is a fee for
overseas hardcopies.  An electronic copy will soon be available for ftp
from the Ohio database.  Please send your request to Jerry Dixon
(jld@ee.ecn.purdue.edu).  Do not reply to this message.

 
                    COMPUTATIONAL PROPERTIES OF
                   GENERALIZED HOPFIELD NETWORKS
                 APPLIED TO NONLINEAR OPTIMIZATION
 
                       Athanasios G. Tsirukis
                               and
                       Gintaras V. Reklaitis
                   School of Chemical Engineering
 
                         Manoel F. Tenorio
                  School of Electrical Engineering
 
                    Technical Report TREE 89-69
                Parallel Distributed Structures Laboratory
                  School of Electrical Engineering
                         Purdue University
 
 
                              ABSTRACT
 
     A nonlinear neural framework, called the Generalized Hopfield
Network (GHN), is proposed which is able to solve, in a parallel
distributed manner, systems of nonlinear equations.  The method is
applied to the general optimization problem.  We demonstrate GHNs
implementing the three most important optimization algorithms, namely the
Augmented Lagrangian, Generalized Reduced Gradient, and Successive
Quadratic Programming methods.
 
     The study results in a dynamic view of the optimization problem
and offers a straightforward model for the parallelization of the
optimization computations, thus significantly extending the practical
limits of problems that can be formulated as optimization problems and
that can gain from the introduction of nonlinearities in their structure
(e.g., pattern recognition, supervised learning, design of
content-addressable memories).
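
As a generic illustration of this dynamic view (not the authors' GHN
formulation): the state of the network flows downhill on an energy
function built from the objective and constraints, so the optimum becomes
a stable equilibrium of the dynamics.  A minimal Euler-integration sketch
on a toy quadratic energy:

#include <stdio.h>

int main(void)
{
    /* Energy E(x) = 0.5*(x1-1)^2 + 0.5*(x2+2)^2, minimum at (1,-2). */
    double x1 = 5.0, x2 = 5.0;    /* arbitrary starting state */
    double dt = 0.1;              /* Euler step size          */

    for (int t = 0; t < 200; t++) {
        /* Gradient flow: dx/dt = -dE/dx. */
        double g1 = x1 - 1.0;
        double g2 = x2 + 2.0;
        x1 -= dt * g1;
        x2 -= dt * g2;
    }
    printf("settled at (%f, %f), expected (1, -2)\n", x1, x2);
    return 0;
}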

------------------------------

Subject: Call for Participation in Connectionist Natural Language Processing
From:    cpd@aic.hrl.hac.com
Date:    Tue, 10 Jul 90 10:52:42 -0700


                        AAAI Spring Symposium
            Connectionist Natural Language Processing

Recent results have led some researchers to propose that connectionism 
is an alternative to AI/Linguistic approaches to natural language 
processing, both as a cognitive model and for practical applications.  
This symposium will bring together both critics and proponents of 
connectionist NLP to discuss its strengths and weaknesses.  

This symposium will cover a number of areas, spanning from new phonology 
models to connectionist treatments of anaphora and discourse issues.  
Participants should address what is new that connectionism brings to the 
study of language. The purpose of the symposium is to examine this issue 
from a range of perspectives including: 

        Spoken language understanding/generation
        Parsing
        Semantics
        Pragmatics
        Language acquisition
        Linguistic and representational capacity issues
        Applications

Some of the questions expected to be addressed include:

        What mechanisms/representations from AI/Linguistics
        are necessary for connectionist NLP?  Why?

        Can connectionism help integrate signal processing 
        with knowledge of language?

        What does connectionism add to other theories
        of semantics?

        Do connectionist theories have implications for 
        psycholinguistics?

Prospective participants are encouraged to contact a member of the 
program committee to obtain a more detailed description of the 
symposium's goals and issues.  Those interested in participating in this 
symposium are asked to submit a 1-2 page position paper abstract and a 
list of relevant publications.  Abstracts of work in progress are 
encouraged, and potential participants may also include 3 copies of a 
full length paper describing previous work. Submitted papers or 
abstracts will be included in the symposium working notes, and 
participants will be asked to participate in panel discussions.

Three (3) copies of each submission should be sent to arrive by November 
16, 1990 to:

Charles Dolan, Hughes Research Laboratories, RL96, 3011 Malibu Canyon 
Road, Malibu CA, 90265

All submissions will be promptly acknowledged.

E-Mail inquiries may be sent to:
cpd@aic.hrl.hac.com

Program Committee: Robert Allen, Charles Dolan (chair),
James McClelland, Peter Norvig, and Jordan Pollack.

------------------------------

Subject: Apologies on Call for Participation
From:    cpd@aic.hrl.hac.com
Date:    Tue, 10 Jul 90 16:52:56 -0700


My previous message left out some information.

                        AAAI Spring Symposium
                            March 26-28
                        Stanford University
                Connectionist Natural Language Processing

... from previous message...

All submissions will be promptly acknowledged and registration materials
will be sent to authors who submit abstracts.

The Spring Symposium Series is an annual event of AAAI.  Members of AAAI
will receive a mailing about it from the association.

Attendance is by submission and acceptance of materials only, except
that AAAI will fill available spaces if the program committee does not
select enough people ("enough" being 30-40).

No proceedings will be published, but working papers, with submissions
from attendees, will be distributed at the Symposium.

Sorry for the momentary confusion.

 -Charlie Dolan

------------------------------

Subject: Technology Transfer Mailing List
From:    Bill Hefley <weh@SEI.CMU.EDU>
Date:    Tue, 10 Jul 90 21:13:39 -0400


The Technology Applications group of the Software Engineering Institute is
pleased to announce the creation of a new electronic mailing list:
technology-transfer-list.  This mailing list, focused on technology transfer
and related topics, is intended to foster discussion among researchers and
practitioners from government and industry who are working on technology
transfer and innovation.  

Relevant topics include:

 -- organizational issues (structural and behavioral) 

 -- techno-economic issues

 -- business and legal issues, such as patents, licensing, copyright, and
   commercialization

 -- technology transfer policy

 -- technology maturation to support technology transition

 -- lessons learned

 -- domestic and international technology transfer

 -- transition of technology from R&D to practice

 -- planning for technology transition

 -- models of technology transfer

 -- studies regarding any of these topics

The technology-transfer-list is currently not moderated, but may be
moderated or digested in the future if the volume of submissions warrants.
The electronic mail address for submissions is:

        technology-transfer-list@sei.cmu.edu

To request to be added to or dropped from the list, please send mail to:

        technology-transfer-list-request@sei.cmu.edu

Please include the words "ADD" or "REMOVE" in your subject line.

Other administrative matters or questions should also be addressed to:

        technology-transfer-list-request@sei.cmu.edu

The SEI is pleased to provide the facilities to make this mailing list
possible.  The technology-transfer-list is the result of two SEI activities:

 -- transitioning technology to improve the general practice of software
   engineering 

 -- collaborating with the Computer Resource Management Technology program
   of the U.S. Air Force to transition technology into Air Force practice

The SEI is a federally funded research and development center sponsored by
the U.S. Department of Defense under contract to Carnegie Mellon University.


------------------------------

End of Neuron Digest [Volume 6 Issue 43]
****************************************