[comp.ai.neural-nets] Neuron Digest V7 #21

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (04/21/91)

Neuron Digest   Saturday, 20 Apr 1991
                Volume 7 : Issue 21

Today's Topics:
                            Retina Simulator
                       generalization power of NNs
 Re: Rigorous results on Fault Tolerance and Robustness (Are there any?)
                         lack of generalization
                 Neural Nets in Autonomous Land Vehicles
       Proceedings of Third NN and PDP Conference, Indiana-Purdue
                     Neural Computation Vol 3 Issue 1
                  TR available: Catastrophic forgetting
         preprint by Lumer & Huberman: "Binding Hierarchies:..."
                         Neural Network Seminar
                     Neural Nets Workshop (IWANN 91)


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Retina Simulator
From:    Robert Siminoff <siminoff@ifado.uucp>
Date:    Wed, 27 Mar 91 10:31:32 -0800

For the cost of duplication and postage ($20), I can make available a
computer simulation of the retina, written in Fortran 77. The model has
been accepted for publication in BIOLOGICAL CYBERNETICS, and a preprint of
the paper can be provided.  For more information, send an e-mail message
to siminoff@ifado.uucp.

                          Robert Siminoff

------------------------------

Subject: generalization power of NNs
From:    phil neal <phil@iris.iphc.washington.edu>
Date:    Tue, 02 Apr 91 12:58:59 -0800

Dear NN people,

I posted this to comp.ai.neural-nets as well. But I thought I might reach
more/different people through this venue.

I have a problem with the ability of a neural net to generalize.  I have
600 observations, each a vector of 6 predictor variables, and I want to
classify these observations into 1 of 4 groups.

I break the data into a 400 observation training set and a 200
observation test set.

I use a simple linear discriminant function with separate covariance
matrices and compare it against an NN with 6 input, 12 hidden, and 4
output nodes. Here's what I get for correct classification rates (%):

                        LDF     NN
train                   48.5    59.0
test                    42.0    37.0

No matter how long I let the NN run, and no matter how many hidden-layer
nodes I use, I always get about the same results.

So, what's the deal? Is my sample size too small? Are there any good
papers that cover this kind of problem?

I know I am violating the rule of thumb to have 10 times more training
data than nodes in the net. But hey, data is expensive.

Thanks,

Phil Neal
phil@iris.iphc.washington.edu direct to my workstation

------------------------------

Subject: Re: Rigorous results on Fault Tolerance and Robustness (Are there any?)
From:    mamisra%pollux.usc.edu@usc.edu (Manavendra Misra)
Date:    Tue, 02 Apr 91 14:52:09 -0800


L. Belfore at the Dept of Elec and Comp Engg, Marquette Univ, Milwaukee,
WI 53233 and B. Johnson at the Dept of Elec Engg, Univ of Virginia,
Charlottesville, VA 22901 are doing work in this field. Unfortunately, I
do not have their email addresses.

Manav.

------------------------------

Subject: lack of generalization
From:    phil neal <phil@iris.iphc.washington.edu>
Date:    Tue, 02 Apr 91 16:00:01 -0800


Dear NN people,

This is a follow-up to my posting on the lack of generalization of NNs.
One cure for this problem that I have heard about is the following:

I have heard of workers creating synthetic data from the data set they
had. From what I understand, it goes something like this:

        1. For each predictor variable in the training set:
                a. Assume a distribution.
                b. Find the empirical parameters for that distribution
                   using all the records in the training set,
                   e.g. a mean and s.d. for a normal or uniform.


        2. For as many passes through the training data as you
           want/deem necessary:

                a. Take a record and add noise, using a random
                   number generator, to (some/all/one?) predictor
                   variable(s), based on the mean and s.d.
                b. Train on this imaginary record.

Now, I am not too sure that this is "statistically" acceptable, and I
haven't tried it myself, but it seems to me that this would cause the NN
to train on a more diverse range of data. Thus, whatever system of
weights it came up with would be better able to handle the unseen data
coming at it in the test set. It might also be a form of "smoothing" in
the NN weight space. Alas, I can't remember where I saw this idea, nor do
I remember the results.
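
[[ Editor's Note: A minimal Python sketch of the procedure as described,
assuming a normal distribution for every predictor; the function name and
the jitter scale noise_frac are illustrative choices, not from the
posting. Labels are simply repeated for each pass, and training then
proceeds on the enlarged set. -PM ]]

  import numpy as np

  def augment_with_noise(X_train, y_train, passes=5, noise_frac=0.1, seed=0):
      """Step 1: estimate a per-predictor mean and s.d. from the training set.
      Step 2: on each pass, jitter every predictor of every record with
      Gaussian noise scaled to a fraction of that predictor's s.d."""
      rng = np.random.default_rng(seed)
      sd = X_train.std(axis=0)              # empirical s.d. per predictor
      X_parts, y_parts = [X_train], [y_train]
      for _ in range(passes):
          noise = rng.normal(0.0, noise_frac * sd, size=X_train.shape)
          X_parts.append(X_train + noise)   # imaginary records...
          y_parts.append(y_train)           # ...keep their original labels
      return np.vstack(X_parts), np.concatenate(y_parts)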

Half the time, for me, implementation in code takes up more time than the
experiment. So I always look with trepidation on new ideas. It's the old
"up to your ass in alligators" problem.


So, has anybody done this, or read any reports of anybody doing this?
Please let me know.

Thanks,
Phil Neal
phil@iris.iphc.washington.edu

------------------------------

Subject: Neural Nets in Autonomous Land Vehicles
From:    kagan@computervision.bristol.ac.uk
Date:    Fri, 05 Apr 91 15:53:33 +0100

 I am looking for all kinds of information, software, etc. about neural
network applications in autonomous land vehicles. I am particularly
interested in steering and velocity control of the vehicle.
 Thanks in advance.

 Kagan Ozerhan <K.Ozerhan@bristol.ac.uk>
 University of Bristol
 University Walk
 Queens Building
 Room 1.65
 Bristol BS8 1TR
 U.K.

------------------------------

Subject: Proceedings of Third NN and PDP Conference, Indiana-Purdue
From:    SAYEGH@CVAX.IPFW.INDIANA.EDU
Date:    Wed, 10 Apr 91 20:41:22 -0400

The proceedings of the THIRD Conference on Neural Networks and Parallel
Distributed Processing, held in April 1990 at Indiana-Purdue University in
Ft Wayne, can be obtained by writing to:

Ms. Sandra Fisher
Physics Department 
Indiana University-Purdue University
Ft Wayne, IN 46805

and including $5 + $1 for mailing and handling.  Checks should be made payable
to The Indiana-Purdue Foundation.

The 109-page proceedings contain the following articles:


INTEGRATED AUTONOMOUS NAVIGATION BY ADAPTIVE
NEURAL NETWORKS

Dean A. Pomerleau
Department of Computer Science
Carnegie Mellon University

APPLYING A HOPFIELD-STYLE NETWORK TO DEGRADED
PRINTED TEXT RESTORATION
 
Arun Jagota
Department of Computer Science
State University of New York at Buffalo

RECENT STUDIES WITH PARALLEL, SELF-ORGANIZING, HIERARCHICAL
NEURAL NETWORKS
 
O.K. Ersoy & D. Hong
School of Electrical Engineering
Purdue University

INEQUALITIES, PERCEPTRONS AND ROBOTIC PATH-PLANNING

Samir I. Sayegh
Department of Physics 
Indiana University-Purdue University 

GENETIC ALGORITHMS FOR FEATURE SELECTION FOR
COUNTERPROPAGATION NETWORKS

F.Z. Brill & W.N. Martin
Department of Computer Science
University of Virginia

MULTI-SCALE VISION-BASED NAVIGATION ON DISTRIBUTED-MEMORY
MIMD COMPUTERS

A.W. Ho & G.C. Fox
Caltech Concurrent Computation Program
California Institute of Technology

A NEURAL NETWORK WHICH ENABLES SPECIFICATION
OF PRODUCTION RULES

N. Liu & K.J. Cios
The University of Toledo

PIECE-WISE LINEAR ESTIMATION OF MECHANICAL
PROPERTIES OF MATERIALS WITH NEURAL NETWORKS

I.H. Shin, K.J. Cios, A. Vary* & H.E. Kautz*
The University of Toledo & NASA Lewis Research Center*


INFLUENCE OF THE COLUMN STRUCTURE ON INTRACORTICAL
LONG RANGE INTERACTIONS

E. Niebur & F. Worgotter
California Institute of Technology


LEARNING BY GRADIENT DESCENT IN FUNCTION
SPACE

Ganesh Mani
University of Wisconsin-Madison


REAL TIME DYNAMIC RECOGNITION OF SPATIAL
TEMPORAL PATTERNS

M. F. Tenorio
School of Electrical Engineering
Purdue University


A NEURAL ARCHITECTURE FOR COGNITIVE MAPS

Martin Sonntag
Cognitive Science & Machine Intelligence Lab
University of Michigan



P.S. The Fourth Conference is scheduled to start April 11, 1991, at 6 pm in
the Classroom Medical Building, CM159, of the Fort Wayne campus of Indiana
and Purdue.  A previous announcement of this conference was made on the list.


------------------------------

Subject: Neural Computation Vol 3 Issue 1
From:    Terry Sejnowski <tsejnowski@UCSD.EDU>
Date:    Sat, 13 Apr 91 22:57:22 -0700

NEURAL COMPUTATION - Volume 3 Issue 1 - Spring 1991

Review:

Deciphering the Brain's Codes
        Masakazu Konishi

Letters:

Synchronization of Bursting Action Potential Discharge in a Model Network 
of Neocortical Neurons
        Paul Bush and Rodney Douglas

Parallel Activation of Memories in an Oscillatory Neural Network
        D. Horn and M. Usher

Organization of Binocular Pathways:  Modeling and Data Related to Rivalry
        Sidney R. Lehky

Dynamics and Formation of Self-Organizing Maps
        Jun Zhang

A Method for Reducing Computation in Networks with Separable Radial 
Basis Functions
        Terrence D. Sanger

Adaptive Mixtures of Local Experts
        Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and
        Geoffrey E. Hinton

Efficient Training of Artificial Neural Networks for Autonomous Navigation
        Dean A. Pomerleau

Sequence Manipulation Using Parallel Mapping
        David S. Touretzky and Deirdre W. Wheeler

Parsing Complex Sentences with Structured Connectionist Networks
        Ajay N. Jain

Rules and Variables in Neural Nets
        Venkat Ajjanagadde and Lokendra Shastri

TAG:  A Neural Network Model for Large-Scale Optical Implementation
        Hyuek-Jae Lee, Soo-Young Lee, and Sang-Yung Shin


SUBSCRIPTIONS - VOLUME 3 

______ $35     Student
______ $55     Individual
______ $110    Institution

Add $18 for postage and handling outside the USA.

(Back issues are available for $28 each.)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
        (617) 253-2889.

MasterCard and VISA accepted

------------------------------

Subject: TR available: Catastrophic forgetting
From:    Bob French <french@cogsci.indiana.edu>
Date:    Fri, 29 Mar 91 22:03:45 -0500

The following brief technical report is available from the Center for
Research on Concepts and Cognition at Indiana University:

    USING SEMI-DISTRIBUTED REPRESENTATIONS TO OVERCOME CATASTROPHIC 
                FORGETTING IN CONNECTIONIST NETWORKS
                         
                        Robert M. French
                          
          Center for Research on Concepts and Cognition
                      Indiana University
                       510 North Fess 
                    Bloomington, IN 47408
              e-mail: french@cogsci.indiana.edu 

In connectionist networks, newly-learned information rapidly destroys
previously-learned information unless the network is continually
retrained on the old information.  This behavior, known as catastrophic
forgetting, is unacceptable both for practical purposes and as a model of
mind.  This paper advances the claim that catastrophic forgetting is a
direct consequence of the overlap of distributed representations and can
be reduced by reducing this overlap.  It is also suggested that there is an
inevitable trade-off between generalization and forgetting.  A simple
algorithm is presented that allows a standard feedforward backpropagation
network to develop "semi-distributed representations", thereby
significantly reducing the problem of catastrophic forgetting.
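
[[ Editor's Note: The algorithm itself is in the report; purely to
illustrate the flavor of a "semi-distributed representation", here is a
toy Python sketch that sharpens a hidden activation vector so that only
the k most active units stay strongly on. The function name, k, and the
sharpening rate alpha are illustrative choices, not taken from the
paper. -PM ]]

  import numpy as np

  def sharpen(hidden, k=2, alpha=0.2):
      """Nudge the k most active hidden units toward 1 and the rest toward
      0, reducing overlap between the representations of different inputs."""
      target = np.zeros_like(hidden)
      target[np.argsort(hidden)[-k:]] = 1.0   # top-k units pushed fully on
      return hidden + alpha * (target - hidden)

  h = np.array([0.71, 0.12, 0.65, 0.48, 0.09])
  print(sharpen(h))   # activations move toward a sparser, less overlapping code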


TO OBTAIN A COPY OF THIS PAPER:

  unix>ftp cogsci.indiana.edu        (or ftp 129.79.238.6) 
  ftp>user: anonymous
  ftp>passwd: ident 
  ftp>cd pub
  ftp>binary 
  ftp>get french.forgetting.ps.Z
  ftp>quit
  unix>uncompress french.forgetting.ps.Z 
  unix>lpr -P(your laser printer) french.forgetting.ps


Hard copies may be requested by sending e-mail to:
french@cogsci.indiana.edu 
or by writing directly to C.R.C.C. at the address indicated above.


------------------------------

Subject: preprint by Lumer & Huberman: "Binding Hierarchies:..."
From:    Andreas Weigend <andreas%psych@Forsythe.Stanford.EDU>
Date:    Wed, 10 Apr 91 21:48:16 -0700


        The following preprint is available in hardcopy form.
        It can be obtained by sending e-mail to: lumer@parc.xerox.com
        Please do NOT reply to this message.
______________________________________________________________________

                         Binding Hierarchies:
               A Basis for Dynamic Perceptual Grouping

                      E. Lumer and B. A. Huberman
                  Stanford University and Xerox PARC


                               Abstract

Since it has been suggested that the brain binds its fragmentary
representations of perceptual events via phase-locking of stimulated
neuron oscillators, it is important to determine how extended
synchronization can occur in a clustered organization of cells possessing
a distribution of firing rates. In order to answer that question, we
establish the basic conditions for the existence of a binding mechanism
based on phase-locked oscillations. In addition, we present a simple
hierarchical architecture of feedback units which not only induces robust
synchronization within and segregation between perceptual groups, but
also serves as a generic binding machine.
______________________________________________________________________
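
[[ Editor's Note: As a generic illustration of phase-locking (not the
authors' model, which is in the preprint), here is a toy Kuramoto-style
simulation in Python: oscillators with a spread of natural frequencies,
coupled with strength K, pull one another toward a common phase, and the
order parameter r approaches 1 when the group is "bound". -PM ]]

  import numpy as np

  rng = np.random.default_rng(1)
  n, K, dt, steps = 20, 2.0, 0.01, 2000
  omega = rng.normal(1.0, 0.1, n)        # spread of natural firing rates
  theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases

  for _ in range(steps):
      # each oscillator is pulled toward the phases of the others
      coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
      theta += dt * (omega + K * coupling)

  r = abs(np.exp(1j * theta).mean())     # r near 1 => phase-locked group
  print("coherence r = %.3f" % r)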



------------------------------

Subject: Neural Network Seminar
From:    noordewi@cs.RUTGERS.EDU
Date:    Fri, 12 Apr 91 13:00:24 -0400


                          RUTGERS UNIVERSITY
            Dept. of Computer Science/Dept. of Mathematics

          Neural Networks Colloquium Series --- Spring 1991

                             C. L. Giles
                        NEC Research Institute

                  Teaching Recurrent Neural Networks
                to be Finite State Machines (Digraphs)

                               Abstract

Recurrent neural networks are natural models for encoding and learning
temporal sequences. If these temporal sequences are strings from the
languages of formal grammars, then teaching a neural network to learn
these sequences is a form of grammatical inference. We demonstrate how to
train second-order recurrent networks with real-time learning algorithms
to be finite state machines. In particular, we present extensive
simulation results which show that simple regular grammars are easy to
learn. We devise and use heuristic clustering algorithms which extract
finite state machines or digraphs from recurrent neural networks during
and after training. The resultant finite state machines usually have
large numbers of states and can be reduced in complexity to minimal
finite state machines using a standard minimization algorithm. Depending
on the training method and type of training set, different minimal finite
state machines emerge. If the grammar is well learned, then identical
finite state machines are produced in the minimization process. These
finite state machines constitute an equivalence class of neural networks
which covers different numbers of neurons and different initial
conditions. This can be interpreted as a measure of how well a set of
strings and its generative grammar are learned. We present a video of the
learning process and show the emergent finite state machines during and
after training.
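
[[ Editor's Note: The "standard minimization algorithm" referred to above
is classical DFA state minimization. As a compact illustration (my own toy
Python code, not from the talk), here is Moore's partition-refinement
scheme: states are repeatedly split until no two states in a block
disagree on which block each input symbol leads to. -PM ]]

  def minimize_dfa(states, alphabet, delta, accepting):
      """Moore's partition refinement for DFA minimization."""
      part = {s: (s in accepting) for s in states}   # accepting vs. not
      while True:
          # signature = own block plus the block reached on each symbol
          sig = {s: (part[s],) + tuple(part[delta[s][a]] for a in alphabet)
                 for s in states}
          if len(set(sig.values())) == len(set(part.values())):
              return sig   # stable: equal signatures mark equivalent states
          part = sig

  # toy 3-state DFA over {0,1}; states 1 and 2 come out equivalent
  delta = {0: {"0": 1, "1": 2}, 1: {"0": 0, "1": 2}, 2: {"0": 0, "1": 1}}
  print(minimize_dfa([0, 1, 2], ["0", "1"], delta, accepting={0}))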


                            April 17, 1991
               Busch Campus --- 4:30 p.m., room 217 SEC

                 host: Mick Noordewier (201/932-3698)
   finger noordewi@cs.rutgers.edu for further schedule information

[[ Editor's Note: Ooops, this went out AFTER the talk. Sorry about that. -PM ]]

------------------------------

Subject: Neural Nets Workshop (IWANN 91)
From:    Ignacio Bellido Montes <ibm@dit.upm.es>
Date:    Mon, 08 Apr 91 13:08:01 +0200

[[ Editor's Note: In the interest of space, I deleted the LaTeX version
of the announcement. -PM ]]

There will be a Neural Nets Workshop in Granada (Spain) next September. I
think this is the first international workshop of this kind to be held in
Spain, and I hope all of you can send contributions and participate.

Here is the electronic version of the CFP. If you have any questions
about the workshop, please ask me or the people at the workshop
secretariat. I'm not part of the organization, but I can help with
finding people, etc.

About Granada, I can tell you it is one of the most wonderful cities in
Spain and ... in the world. It holds one of the most famous and wonderful
Arab contributions to culture, "La Alhambra", and the small, quiet lanes
of the old side of the city, the "Albaicin", leave the visitor with a
very good feeling.

I hope you are able to participate and enjoy... See you in Granada.

Gregorio Fernandez
Dpt. Ingenieria de Sistemas Telematicos
ETSI Telecomunicacion- UPM
Ciudad Universitaria
28040 Madrid, Spain
E-mail: gfernandez@dit.upm.es

=-----------------------------------------------------------------------------

                     INTERNATIONAL WORKSHOP
                               ON
                   ARTIFICIAL NEURAL NETWORKS

                            IWANN'91

              First Announcement and Call for Papers

                         Granada, Spain
                      September 17-19, 1991

                   ORGANISED AND SPONSORED BY
        Spanish Chapter of the Computer Society of the IEEE,
               AEIA (IEEE Affiliate Society), and
        Department of Electronic and Computer Technology.
                  University of Granada. Spain.

SCOPE
Artificial Neural Networks (ANN) were first developed as structural or
functional models of natural ones, featuring the ability to perform
problem-solving tasks. They can be thought of as computing arrays
consisting of series of repetitive uniform processors (neuron-like
elements) placed on a grid.  Learning is achieved by changing the
interconnections between these processing elements. Hence, these systems
are also called connectionist models.

    ANNs have become a subject of widespread interest: they offer an
unconventional, scheme-based programming standpoint and exhibit higher
computing speeds than conventional von Neumann architectures, thus easing
or even enabling the handling of complex tasks such as artificial vision,
speech recognition, information recovery in noisy environments, or
general pattern recognition.

    In ANN systems, collective information management is achieved by
means of the parallel operation of neuron-like elements, among which
information processing is distributed. The aim is to exploit this highly
parallel processing capability as far as possible in complex
problem-solving tasks.

    Cross-fertilization between the domains of artificial and real neural
nets is desirable. The more genuine problems of biological computation
and information processing in the nervous system still remain open, and
contributions along these lines are more than welcome. Methodology,
theoretical frameworks, structural and organizational principles in
neuroscience, self-organizing and cooperative processes, and
knowledge-based descriptions of neural tissue are all relevant topics to
bridge the gap between the artificial and natural perspectives.

    The workshop is intended to serve as a meeting place for engineers and
scientists working in this area, so that existing contacts and
relationships can be further strengthened. The workshop will comprise two
complementary activities:
    .   scientific and technical conferences, and
    .   scientific communications sessions.

TOPICS 
The workshop is open to all aspects of artificial neural networks, including:
 1. Neural network theories. Neural models.
 2. Biological perspectives.
 3. Neural network architectures and algorithms.
 4. Software developments and tools.
 5. Hardware implementations.
 6. Applications.

LOCATION
Facultad de Ciencias
Campus Universitario de Fuentenueva
Universidad de Granada
18071 GRANADA. (SPAIN)

LANGUAGES
English and Spanish will be the official working languages, with English
preferred.

CALL FOR PAPERS 
The Programme Committee seeks original papers in the six above-mentioned
areas. Survey papers on the various available approaches or on particular
application domains are also sought.
    In their submitted papers, authors should pay particular attention to
explaining the theoretical and technical choices involved, making clear
the limitations encountered, and describing the current state of
development of their work.

INSTRUCTIONS TO AUTHORS 
Three copies of submitted papers (not exceeding 8 pages of 21 x 29.7 cm
(DIN A4) paper, with 1.6 cm left, right, top and bottom margins) should be
received by the Programme Chairman at the address below before June 20,
1991.
    The headlines should be centered and include:
    .   the title of paper in capitals
    .   the name(s) of author(s)
    .   the address(es) of author(s), and
    .   a 10 line abstract.
    Three blank lines should be left between each of the above items, and
four between the headlines and the body of the paper, which should be
written in English, single-spaced, and within the 8-page limit.
    All papers received will be refereed by the Programme Committee. The
Committee will communicate their decision to the authors on July 10.
Accepted papers will be published in the proceedings to be distributed to
workshop participants.
    In addition to the paper, one sheet should be attached including the
following information:
    .   the title of the paper,
    .   the name(s) of author(s),
    .   a list of five keywords,
    .   a reference to which of the six topics the paper concerns,
        and
    .   postal address of one of the authors, with phone and fax
        numbers, and E-mail (if available).
    We intend to get in touch with various international publishers (such
as Springer-Verlag and Prentice-Hall) for the final version of the
proceedings.

Contributions to be sent to:
Prof. Jose Mira
Dpto. Informatica y Automatica
UNED
C/Senda del Rey s/n
28040 MADRID (Spain)
Tel. (34) 1 5 44 60 00
Fax: (34) 1 5 44 67 37

ACCOMMODATION 
A list of available hotels will be sent on registration. Hotel
reservations can be made directly by each participant with the local
agency below. All requests should be addressed to:
               Viajes Internacional Expreso (VIE)
                       Galerias Preciados
                     Carrera del Genil, s/n
                      18005 GRANADA (Spain)
           Tel. (34) 58-22.44.95, -22.75.86, -224944
                          Telex: 78525
We can only guarantee to accept reservations received by July 25.

REGISTRATION FEE
.   Regular fee:                                    35.000 ptas.
.   IEEE, AEIA and ATI members fee:                 30.000 ptas.
.   Scholarship holders fee:                         5.000 ptas.

Registration payments:
Transfer to:       IWANN'91
                   account number: 16.142.512
                   Caja Postal (Code: 2088-2037.1)
                   Camino de Ronda, 138
                   18003 GRANADA (SPAIN)
or alternatively, cheque made out to: 
               IWANN'91 (16.142.512)

Secretariat address:
Departamento de Electronica y Tecnologia de Computadores
Facultad de Ciencias
Universidad de Granada
18071 GRANADA (SPAIN)
FAX: 34-58-24.32.30 or 34-58-27.42.58
Phone: 34-58-24.32.26
E-Mail:    jmerelo@ugr.es         aprieto@ugr.es

PROGRAM AND ORGANIZATION COMMITTEE
Senen Barro                                     Univ. de Santiago
Joan Cabestany                           Univ. Pltca. de Catalunya
Jose Antonio Corrales                               Univ. Oviedo.
Gregorio Fernandez                         Univ. Pltca. de Madrid
J. Simoes da Fonseca                              Univ. de Lisboa
Antonio Lloris                                      Univ. Granada
Javier Lopez Aligue                          Univ. de Extremadura.
Jose Mira        (Programme Chairman)                UNED. Madrid
Roberto Moreno                       Univ Las Palmas Gran Canaria
Alberto Prieto   (Organization Chairman)            Univ. Granada
Francisco Sandoval                                Univ. de Malaga
Carmen Torras          Instituto de Cibernetica. CSIC. Barcelona
Elena Valderrama                 CNM- Univ. Autonoma de Barcelona

LOCAL ORGANIZING COMMITTEE  (Universidad de Granada)
Juan Julian Merelo
Julio Ortega
Francisco J. Pelayo
Begona del Pino
Alberto Prieto 


(To be completed and returned as soon as possible to:
Departamento de Electronica. Facultad de Ciencias. Univ. de Granada. 
18071 GRANADA (SPAIN). FAX (34)-58-24.32.30) 
Block letters, please

Name:

Company/Organization:

Address:

State/Country:
E-mail:                Phone:              Fax:

Please tick as appropriate:

I intend to:
   attend the workshop
   submit a paper

      Name(s) of Author(s):

      Provisional Title:

------------------------------

End of Neuron Digest [Volume 7 Issue 21]
****************************************