[comp.ai.neural-nets] Neuron Digest V5 #30

neuron-request@HPLABS.HP.COM ("Neuron-Digest Moderator Peter Marvit") (07/14/89)

Neuron Digest	Thursday, 13 Jul 1989
		Volume 5 : Issue 30

Today's Topics:
			  Gradient descent updates
			 Pointer to Bill Polkingham
		References wanted for non-iterative training
			   Request for ART2 info
	   request for literature on visualization of neural nets
       Special Interest Group Meetings in Winter IJCNN 90 Conference
		       Spin Glass and Neural Networks
		     Re: Spin Glass and Neural Networks
		     Re: Spin Glass and Neural Networks
		     Re: Spin Glass and Neural Networks
		Submission - List of Neural Network Methods
	Summary: Help: Neural Nets/Cell-Automata/Dynamic Systems ...
	  Re: Help: Neural Nets/Cell-Automata/Dynamic Systems ...
	  RE: Help: Neural Nets/Cell-Automata/Dynamic Systems ...


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: Gradient descent updates
From:    leary@Luac.Sdsc.Edu (Bob Leary)
Date:    Fri, 07 Jul 89 23:03:54 +0000 

With regard to Francesco Camargo's recent question concerning whether in
gradient descent procedures such as back-propagation it is better to cycle
through all input-output pairs before making updates, or to update after
each pair:

There is a special case where precise analytical results are known.
Consider the solution of n linear equations in n unknowns where the
coefficient matrix is diagonally dominant - i.e. the coefficient of the ith
variable in the ith equation is much larger in magnitude than the other
coefficients in that equation (how much is "much" can be found in any
numerical analysis text).  Two classical iterative methods, Jacobi and
Gauss-Seidel, are essentially gradient descent techniques that minimize the
sum of the squares of the errors contributed by each equation, where the
error is the difference between the desired right hand side and what you
actually get by plugging the current approximate solution into the left hand
side. Each equation is presented in turn in a cyclic pattern.  When the ith
equation is presented, an adjustment is made in the ith variable so as to
make the error for that equation equal to zero (of course, this adjustment
also affects the errors contributed by all the other equations).  The only
difference between the two methods is that with Jacobi, the errors are only
updated after a complete cycle, while with Gauss-Seidel, the updating is
done after each presentation.  The upshot of all this is that both methods
converge under exactly the same conditions, and that the convergence rates
are known (again, consult that numerical analysis text) - the Gauss-Seidel
method wins by a landslide.
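
For readers who want to experiment, here is a minimal sketch (Python/NumPy,
using a toy diagonally dominant system of my own rather than an example from
any text) contrasting the two update schedules:

    import numpy as np

    def jacobi_step(A, b, x):
        # Jacobi: compute every correction from the *old* x, then apply them
        # together (analogous to accumulating updates over a full cycle).
        x_new = x.copy()
        for i in range(len(b)):
            residual = b[i] - A[i] @ x
            x_new[i] = x[i] + residual / A[i, i]
        return x_new

    def gauss_seidel_step(A, b, x):
        # Gauss-Seidel: apply each correction immediately, so later equations
        # see the already-updated variables (analogous to per-pattern updates).
        x = x.copy()
        for i in range(len(b)):
            residual = b[i] - A[i] @ x
            x[i] = x[i] + residual / A[i, i]
        return x

    A = np.array([[10.0,  1.0, 2.0],
                  [ 1.0, 12.0, 1.0],
                  [ 2.0,  1.0, 9.0]])   # diagonally dominant
    b = np.array([1.0, 2.0, 3.0])
    x_j, x_gs = np.zeros(3), np.zeros(3)
    for _ in range(20):
        x_j = jacobi_step(A, b, x_j)
        x_gs = gauss_seidel_step(A, b, x_gs)
    print(x_j, x_gs, np.linalg.solve(A, b))

Both runs converge to the exact solution, with Gauss-Seidel getting there in
noticeably fewer sweeps on this kind of system.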
 
 
Bob Leary
San Diego Supercomputer Center
leary@sds.sdsc.edu

------------------------------

Subject: Pointer to Bill Polkingham
From:    cditi!sdp@uunet.UU.NET (Steve Poling)
Date:    Tue, 11 Jul 89 10:57:53 -0400 


Peter,
  Many thanx for your neural networks digest.  You referred to a quarterly
service where recently awarded patents on neural networks are forwarded to
subscribers.  The service is offered by Bill Polkinghorn, son of the fellow
who was running the INNS sign-up desk.  The company is called "MIPPS" in
capital Greek letters.  The address of this service is:

Bill Polkinghorn
P.O. Box 15226
Arlington, VA 22215

Aside from the similarity in last names (Poling and Polkinghorn), I have no
connection to, or commercial interest in, this company.

Cheers,
Steve Poling
CDI Technologies

------------------------------

Subject: References wanted for non-iterative training
From:    Kemp@DOCKMASTER.NCSC.MIL
Date:    Fri, 07 Jul 89 20:33:00 -0400 

A recent trade publication (EE Times, June 26, p.36) had an article about a
commercial NN product that trains "1000 times quicker" than iterative
methods, using techniques from applied mathematics to solve for weights
directly.  From the article (without permission):

"The first pass throught the data builds a giant matrix mapping all of the
inputs.  The second pass performs a non-linear inversion of that matrix,
resulting in the weight matrix for the neural network." .  .  .

"Back propogation networks are only a little bit non-linear, and we were
able to find several non-linear variants of matrix inversion that do the job
for most cases."

 Is there any published work describing applications of non-linear algebra
to NN training, and the types of problems to which it might be suited?
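
The article gives no algorithmic detail, so the following is only a sketch of
one well-known way to trade gradient iterations for a direct linear solve:
fix a nonlinear hidden layer and solve for the output weights by least
squares.  This is not the commercial method described above; the names and
sizes below are made up for illustration (Python/NumPy):

    import numpy as np

    def train_noniterative(X, T, n_hidden=50, seed=0):
        # One pass: build a hidden-activation matrix, then solve for the output
        # weights directly by linear least squares -- no gradient iterations.
        rng = np.random.default_rng(seed)
        W_h = rng.normal(size=(X.shape[1], n_hidden))   # fixed random hidden weights
        H = 1.0 / (1.0 + np.exp(-(X @ W_h)))            # sigmoid hidden activations
        W_o, *_ = np.linalg.lstsq(H, T, rcond=None)     # direct solve for output weights
        return W_h, W_o

    def predict(X, W_h, W_o):
        H = 1.0 / (1.0 + np.exp(-(X @ W_h)))
        return H @ W_o

    # Toy check on XOR-like data
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W_h, W_o = train_noniterative(X, T)
    print(predict(X, W_h, W_o).round(2))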

 Dave Kemp <Kemp@dockmaster.ncsc.mil>

------------------------------

Subject: Request for ART2 info
From:    Jeff Kowing <kowing%nasa-jsc.csnet@RELAY.CS.NET>
Date:    Mon, 12 Jun 89 16:55:28 -0500 

I am looking for literature analyzing, either mathematically or empirically,
the performance of Grossberg's ART2 paradigm.  If anyone is aware of such
literature please send references to

          kowing@nasa.jsc.gov

Thanks a lot, I appreciate your time and help!
                                Jeff Kowing
                                NASA/Johnson Space Center

P.S.  I already have the ART2 article from the Dec. 1987 issue of Applied
Optics.



------------------------------

Subject: request for literature on visualization of neural nets
From:    mcvax!ethz!mr@uunet.uu.net
Date:    28 Jun 89 18:22:00 -0800 

I'm looking for any work done in the field of visualization of neural nets,
i.e. any help in intuitively seeing what a neural net is doing, how it
learns, what features it detects, etc.

Please e-mail to me; I'll summarize if there is enough interest.  Thanks in
advance.
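
As a trivial example of the kind of display I mean, a weight matrix can be
shown as a grey-level image so that structure (dead units, duplicated
features, etc.) becomes visible at a glance.  A small sketch in
Python/matplotlib, using placeholder random weights:

    import numpy as np
    import matplotlib.pyplot as plt

    W = np.random.default_rng(1).normal(size=(10, 25))   # placeholder weight matrix
    plt.imshow(W, cmap="gray", interpolation="nearest")  # one grey cell per weight
    plt.xlabel("input unit")
    plt.ylabel("hidden unit")
    plt.colorbar(label="weight value")
    plt.title("weight matrix as an image")
    plt.show()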

Marc Raths, Swiss Federal Institute of Technology  UUCP/EUNET: mr@ethz.uucp or 
Rigistrasse 53                 ...!{uunet,seismo,ukc}!mcvax!cernvax!ethz!mr
CH-8006 Zurich, Switzerland  CSNET/ARPA : mr%ethz.uucp%ifi.ethz.ch@RELAY.CS.NET
Voice : +41-1-361 5575       BITNET/EARN: mr%ethz.uucp@cernvax.BITNET
                             

------------------------------

Subject: Special Interest Group Meetings in Winter IJCNN 90 Conference
From:    fong@sun.soe.clarkson.edu
Organization: Clarkson University, Potsdam, NY
Date:    Tue, 27 Jun 89 13:41:13 +0000 

At the upcoming International Joint Conference on Neural Networks (IJCNN) in
January in Washington, DC, there will be time set aside for Special Interest
Group meetings (during one of the lunch or dinner periods).

The purpose of this posting is to call the attention of all SIG points of
contact who have formed a SIG during or after the Boston INNS Annual Meeting
to this meeting, and to ask them to disseminate the information to their
members.

1. Please submit a one-page proposal of activity for your SIG meeting to
Harold Szu, Local Organizing Chair, IJCNN 90.  His FAX numbers are:
             202-767-4277 or 202-767-1494.

2. You can contact Maureen Caudill at 619-485-1809 for room arrangement for
your group meeting.  If you have difficulty contacting Maureen, relay the
information to Harold Szu.

3. Please remind your group members and colleagues that several Nobel
Laureates have been invited to give talks at this meeting, and that there
are several technical tracks (applications, biology, and theory) for paper
submission and presentation.

Please plan to participate in this exciting conference.


                                                David Yushan Fong

------------------------------

Subject: Spin Glass and Neural Networks
From:    hitomi@ogccse.ogc.edu (Hitomi Ohkawa)
Organization: Oregon Graduate Center, Beaverton, OR
Date:    Fri, 09 Jun 89 19:26:33 +0000 



I am interested in finding out if there are any on-going research efforts
that take a field-theoretic approach to studying behaviors of neural
networks.  I recently had a chance to look through a book titled "Spin
Glasses and Other Frustrated Systems" by D. Chowdhury, and was intrigued by
the wide applicability of spin-glass systems, from the travelling salesman
problem to neural networks.  I used to study physics, and once tried to
apply the Ising model to a certain physical system.  Now that I am in the
field of computer science (though not a neural network specialist), it is
very interesting to see a concept from physics familiar to me being applied
to a certain form of computer architecture.  Any information regarding this
particular area
(research efforts, references, good textbooks on the subject, etc.) is
greatly appreciated.  Below is my address.

Hitomi Ohkawa
Dept. of Computer Science and Engineering
Oregon Graduate Center
19600 N.W. Von Neumann Drive
Beaverton, OR. 97006-1999
(503) 690-1151

hitomi@cse.ogc.edu (CSNET)

Thank you very much in advance.
 

------------------------------

Subject: Re: Spin Glass and Neural Networks
From:    giant@lindy.Stanford.EDU (Buc Richards)
Organization: Stanford University
Date:    Fri, 09 Jun 89 20:51:30 +0000 

The analogy to spin glasses is used in the stochastic neural networks of
Boltzmann machines.  I do not know how to give a quick explanation, but this
is discussed in Emile Aarts and Jan Korst's excellent new book, "Simulated
Annealing and Boltzmann Machines," starting on page 148.  However, depending
on your previous knowledge of simulated annealing, Boltzmann machines, and
spin glasses, it may take more than a few pages of reading to appreciate the
analogy.  The book has just been published by John Wiley & Sons (New York),
so it may not yet be available at your library.
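
Very roughly, the formal link is that a network of binary (+1/-1) units with
symmetric weights has an energy function of exactly the Ising/spin-glass
form, and a Boltzmann machine samples states from the corresponding Gibbs
distribution.  A toy sketch (Python/NumPy, with made-up random couplings,
not taken from the book):

    import numpy as np
    rng = np.random.default_rng(0)

    n = 8
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2                    # symmetric "couplings" between units
    np.fill_diagonal(W, 0.0)
    s = rng.choice([-1, 1], size=n)      # "spins" = unit states

    def energy(s):
        # Same form as the Ising/spin-glass Hamiltonian: E = -1/2 sum_ij W_ij s_i s_j
        return -0.5 * s @ W @ s

    def glauber_step(s, T=1.0):
        # Stochastic (Boltzmann-machine style) update of one randomly chosen
        # unit; as T -> 0 this becomes the deterministic Hopfield update.
        i = rng.integers(n)
        field = W[i] @ s
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field / T))
        s[i] = 1 if rng.random() < p_up else -1
        return s

    for _ in range(200):
        s = glauber_step(s)
    print("final energy:", energy(s))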

        Rob Richards
        Stanford University

------------------------------

Subject: Re: Spin Glass and Neural Networks
From:    bph@buengc.BU.EDU (Blair P. Houghton)
Organization: Boston Univ. Col. of Eng.
Date:    Sat, 10 Jun 89 18:04:11 +0000 


In addition, you could investigate the articles by Amit, Gutfreund, and
Sompolinsky, ca. 1984 (I think; it could be anywhere from 1982 to 1987),
which appeared (I can't remember where.  It could have been IEEE Systems,
Man, and Cybernetics, or it could have been a physics journal.  The Science
Citation Index would definitely have them, since that's where I originally
found the reference.)

There were a few of them, and they were inspired by Hopfield's paper on the
travelling salesman problem (which means it couldn't be 1982...)

				--Blair
				  "...I have it here _somewhere_..."

------------------------------

Subject: Re: Spin Glass and Neural Networks
From:    wine@maui.cs.ucla.edu (David Wine)
Organization: UCLA Computer Science Department
Date:    Wed, 14 Jun 89 00:34:00 +0000 

reference:

Haim Sompolinsky, Statistical Mechanics of Neural Networks,
Physics Today, December, 1988

------------------------------

Subject: Submission - List of Neural Network Methods
From:    David Kanecki <kanecki@vacs.uwp.wisc.edu>
Date:    Fri, 07 Jul 89 22:20:04 -0500 

Dear Peter,

Enclosed is an article summarizing the neural network methods in use.  This
list was compiled from the articles I have read and from responses received
by e-mail.

Also, I found the articles on neural networks in chemistry from Chemical and
Engineering News and the Proceedings of the National Academy of Sciences the
most informative as to method, setup, and results.  I think it's great that
the Neuron Digest can be the sentinel, source, and sounding board for new
ideas and concepts.

Keep up the good work.

David Kanecki
kanecki@vacs.uwp.wisc.edu
P.O. Box 93
Kenosha, WI 53141

Article Enclosure:

 From the articles I have read and the responses I have received from
 various people, I have compiled a list of neural network methods
 currently in use.
 
 The methods in use are:
   1. Generalized Delta Rule, digital neuron
          B(I) = sum over J of A(J)*W(I,J)
 
   2. Delta with transfer function 1, digital and
      analog neuron (see the short code sketch after
      method 6 below)
 
          B2(I) = sum over J of A(J)*W(I,J)
          B(I)  = 1/(1+exp(-B2(I)))
 
   3. Skeletonization, digital and analog neuron
      The neuron that does not change the state of
      the output neuron is turned off in the update
      procedure. Thus, only neurons that affect the
      state of the output neurons are included in 
      the error E(I) used in the update procedure
      to produce a new matrix W(I,J).
 
   4. Genetic methods, 9 classes, digital and analog
      Based on a test using 1 of the 9 methods, the
      genetic method modifies the values in the
      error vector using a specific criterion.  This is
      analogous to the proof-reading done by DNA
      polymerase in the cell.
 
      Based on experiments I have performed using 1 of the
      9 genetic methods, the genetic method causes a noisy
      or conflicting network to converge faster than the
      generalized delta rule does.
 
      But on a network where the noise or conflict is
      minimal, the genetic method converges more slowly than
      the delta rule.  I considered a network to have
      minimal noise or conflict if the error rate after
      50 trials was less than 0.8 percent.
 
   5. Winner take all, digital and analog
      The output neuron with the biggest output is
      allowed to fire and be updated. The other neurons
      are turned off.
 
      This method has been used by a group of researchers
      to model biological neural activity in the olfactory
      region of the brain.
 
   
   6. Back propagation, digital and analog
      This method uses the delta rule but updates the W(I,J)
      matrix by propagating the primary error vector E(I)
      backwards through the network, scaled by the first
      derivative of each neuron's transfer function.
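
 As an illustration of methods 1 and 2 above, here is a minimal sketch
 (Python/NumPy): a single layer of analog neurons learning the AND function,
 with variable names chosen to match the B, A, W, and E notation above.
 This is only my own toy example, not part of the simulator mentioned below.

     import numpy as np
     rng = np.random.default_rng(0)

     A = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns A(J)
     D = np.array([[0], [0], [0], [1]], dtype=float)              # desired outputs (AND)
     W = rng.normal(scale=0.1, size=(2, 1))                       # weight matrix W(J,I)
     bias = np.zeros(1)
     lr = 0.5

     for epoch in range(2000):
         B2 = A @ W + bias                     # B2(I) = sum over J of A(J)*W(J,I)
         B = 1.0 / (1.0 + np.exp(-B2))         # transfer function, method 2
         E = D - B                             # error vector E(I)
         grad = B * (1 - B) * E                # delta rule with sigmoid derivative
         W += lr * A.T @ grad                  # update after a full pass through the data
         bias += lr * grad.sum(axis=0)

     print(np.round(B, 2))                     # approaches [0, 0, 0, 1]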
 
    
  If you would like to discuss these methods or have
  comments, please contact me by e-mail or write.
 
  I offer custom analysis services and programming.  If you
  are interested in this type of work, send me the details
  and an overview for a free estimate.
 
  Currently, I am a recent graduate of the University of
  Wisconsin with degrees in Applied Computer Science and 
  Biological Science. I am looking for work in computer science,
  mathematics, biology, statistics, artificial intelligence,
  or neural networks.  
 
  Lastly, I would be interested in conducting a survey of which
  neural network methods are being used, and I would provide a
  tabulation of the results as they are received.
 
  I can be contacted at the address below:
 
  David H. Kanecki
  P.O. Box 93
  Kenosha, WI 53141
 
  Bitnet: kanecki@vacs.uwp.wisc.edu
    

P.S. I have developed an analog and digital neural network programmer and
simulator available for mainframe, mini, and microcomputers [[...]].  For
data sheets, product information [[ ,PRICE LIST, AND CREDIT INFORMATION ]]
please write to the address above.

[[ Editor's Note: This Digest, in keeping with ARPANET guidelines, attempts
to be as non-commercial as possible while still providing information which
will benefit the greatest number of people.  As I've mentioned before, I
exercise some limited editing on "commercials," and have trimmed some of
the text above accordingly.  Please contact David for more information about
his products and services.  If anyone has comments on my (admittedly
arbitrary) editing policy, either pro or con, please send e-mail. -PM ]]

------------------------------

Subject: Summary: Help: Neural Nets/Cell-Automata/Dynamic Systems ...
From:    Darrell Schiebel <unmvax!aplcen!haven!h.cs.wvu.wvnet.edu!cerc.wvu.wvnet.edu!drs@UCBVAX.BERKELEY.EDU>
Date:    06 Jul 89 16:19:58 +0000 


Some time ago I posted a message asking for help:

>I am interested in the Following:
>
>           Computing with:
>              Neural Networks
>              Cellular Automata
>              Dynamic Systems
>              Self Organizing Systems
> etc ...

I received several responses, and several people asked for a posting of a
summary.

I would like to thank the people who responded with the insightful
information which follows; it was a great help to me, and perhaps it will
aid others.

	Tim Swenson (tswenson@daitc.mil)
        Charlie Sorsby (sorsby@hi.unm.edu)
        Tony Meadors (meadors@cogsci.ucsd.edu)
        Hal Hardenbergh (236) (hal@vicom.com)
        Cliff Joslyn (vu0112@bingvaxu.cc.binghamton.edu)
        Dave Hiebeler (hiebeler@cs.rpi.edu)
        Sue Worden (worden@ut-emx.UUCP)
        Russell Beale (russell@minster.york.ac.uk)

- --------------------------------------------------------------------

The EECE Dept. at the University of New Mexico is doing some work in the NN
area.  The two professors who are doing NN stuff in the Dept. are Dr. Don
Hush and Dr. Victor Bolie.

- --------------------------------------------------------------------

These areas are quite broad and differ widely in character depending on the
particular projects to which they are applied.  What I mean is that the
relevance of particular books, classes, or mathematical methods is almost
entirely dependent on what you intend to UNDERSTAND or ENGINEER.

The practical reason for including your research intention so early in the
process is this: if you go to someone in, say, the applied engineering
department and ask how you should learn about control systems, they will
gladly point one direction, the physics department another, the psychology
department differently yet, and so on.  And even within those disciplines,
say psychology, control system principles (just my example) underlie models
which otherwise have little or no relation to one another (say, the study of
reaching for moving targets vs. models of motivation).

The bottom line...go as directly as possible to the literature relevant to
what you wish to accomplish and from that you will learn what background and
related topics you need to master.

- --------------------------------------------------------------------

This is the best available practical book on neural nets:
   "Neural Computing: Theory and Practice,"  Philip D. Wasserman
    Van Nostrand-Reinhold 1989

This slender paperback contains a reprint of Richard Lippmann's tutorial
(April 1987 IEEE ASSP Magazine) and is recommended for that reason:
   "Artificial Neural Networks:  Theoretical Concepts,"  V. Vemuri editor
   (IEEE) Computer Society Press, 1988
   Computer Society order #855;  IEEE Catalog # EH 0279-0

The next book is a huge $55 volume, and is utterly invaluable if you are
interested in the historical background of ANNs.  If you aren't interested
in the historical background, don't buy it.
   "Neurocomputing:  Foundations of Research,"  Anderson and Rosenfeld, editors
   MIT Press, 1988

Once you decide you are serious about ANNs, these two collections of technical
papers will collectively set you back about $60, and are worth it:
   "Proceedings of the 1988 Connectionist Models Summer School
   (Carnegie-Mellon)" Morgan Kaufmann Publishers 1989  Paperback

   "Advances in Neural Information Processing Systems, Vol 1,"  
   D.S. Touretzky ed  Morgan Kaufmann Publishers 1989  Hardcover

This is a $10 paperback which is a reprint of the (1988?) MIT house organ
"DAEDALUS" magazine special issue on AI.  The leadoff article by Papert is
hilarious if you are a backprop fan.  Many, and diverse, opinions, some of them
frankly hostile to AI:

   "The Artificial Intelligence Debate," Stephen R. Graubard editor
   MIT Press 1988

There is the hardware and there is the wetware.  The dividing line between
anns and cognitive sciences is not well defined.  The papers in this book
lean in the direction of wetware.

   "Neural Networks and Natural Intelligence,"  Stephen Grossberg editor

- --------------------------------------------------------------------

Two good books about cellular automata:
  Cellular Automata Machines: A New Environment for Modeling
  by T. Toffoli and N. Margolus, MIT Press, 1987
  -- good intro to CA applied to physical modeling, and CA in general

  Theory and Applications of Cellular Automata
  edited by S. Wolfram, World Scientific, 1986
  -- a collection of articles (many by Wolfram) about just what the
     title says.  Not as light reading as the first book.

- --------------------------------------------------------------------

I hardly ever see the following book referenced, but I think it might
provide a reasonable introduction to some of your areas of interest:

  Glorioso, Robert M. and Fernando C. Colon Osorio
  ENGINEERING INTELLIGENT SYSTEMS : Concepts, Theory, and Applications
  Digital Equipment Corporation; Bedford, Massachusetts; 1980
  ISBN 0-932376-06-1; 472 pages

  Abbreviated Table of Contents

  Chap  1 : Computers and Intelligence
  Chap  2 : Game Playing and Machines
  Chap  3 : Reason, Logic, and Mathematics
  Chap  4 : Computers and Automata
  Chap  5 : Adaption, Learning, Self-Repair, and Self-Organization
  Chap  6 : Stochastic Automata Models
  Chap  7 : Adaptive, Learning, and Self-Organizing Controllers
  Chap  8 : Cybernetic Techniques in Communication Systems
  Chap  9 : Stochastic Automata Models in Computer and Communication Networks
  Chap 10 : Reliability and Repair
  Chap 11 : Neurons and Neural Models
  Chap 12 : Threshold Logic
  Chap 13 : Pattern Recognition
  Chap 14 : Computer Vision
  Chap 15 : Robotics

From your posting, I gather that your orientation is toward a blend of
computer science, computer engineering, and linear/non-linear systems theory
and engineering.  That in itself indicates that you are probably seeking a
university with faculty/student/program crossovers between appropriate
academic departments.  The University of Texas at Austin is one such
university. For details, write:

   Dean of Graduate Studies
   Main Building 101
   The University of Texas at Austin
   Austin, Texas  78712

Finally, in whatever graduate program you finally choose, I encourage you to
set aside a few course hours for psychology (cognitive science),
neuroanatomy/neurophysiology, et cetera.  The organic perspective gained on
our technological pursuits is invaluable.

- --------------------------------------------------------------------

 Parallel Distributed Processing: Explorations in the Microstructure of Cognition
          D. E. Rumelhart, J. L. McClelland
          MIT Press, Cambridge, Mass., 1986.
          3 vols, excellent.

 An Introduction to Computing with Neural Nets
          Richard P. Lippmann
          IEEE ASSP Magazine, pp. 4-22, April 1987
          Keywords: Hopfield, Hamming, Carpenter/Grossberg, perceptron,
                    self-organizing (Kohonen) maps, introduction
          Good clear introduction, with many references.

 An Introduction to Neural Computing
          Teuvo Kohonen
          Neural Networks, Vol. 1, No. 1, pp. 3-16, 1988

 Perceptrons
          M. Minsky, S. Papert
          MIT Press, 1969
          Contains criticism of single-layer networks.

- --------------------------------------------------------------------

Sci. American, either the Computer Recreations or Mathematical Games column:
	August 1988,  May 1985,  Mar 1984,  Feb 1971,  Oct 1970
Wolfram, Stephen, "Cellular Automata", Los Alamos Science, Fall 1983
Cooper, Necia, "From Turing and von Neumann to the Present",
	Los Alamos Science, Fall 1983
Buckingham, David, "Some Facts of Life", Byte, Dec. 1979

The Wolfram article is very good.

------------------------------

End of Neurons Digest
*********************