[comp.ai.neural-nets] Neuron Digest V6 #58

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (10/05/90)

Neuron Digest   Thursday,  4 Oct 1990
                Volume 6 : Issue 58

Today's Topics:
                         Neural Computation 2:3
                        MLP classifiers == Bayes
                    TR - MasPar Performance Estimates
                        NIPS PROGRAM --Correction


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Neural Computation 2:3
From:    Terry Sejnowski <tsejnowski@UCSD.EDU>
Date:    Sat, 29 Sep 90 15:55:49 -0700

NEURAL COMPUTATION   Volume 2, Number 3

Review:

Parallel Distributed Approaches to Combinatorial Optimization -- Benchmark
Studies on the Traveling Salesman Problem
        Carsten Peterson

Note:

Faster Learning for Dynamical Recurrent Backpropagation
        Yan Fang and Terrence J. Sejnowski

Letters:

A Dynamical Neural Network Model of Sensorimotor Transformations
in the Leech
        Shawn R. Lockery, Yan Fang, and Terrence J. Sejnowski

Control of Neuronal Output by Inhibition At the Axon Initial Segment
        Rodney J. Douglas and Kevan A. C. Martin

Feature Linking Via Synchronization Among Distributed Assemblies:  
Results From Cat Visual Cortex and From Simulations
        R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. Dicke

Toward a Theory of Early Visual Processing
        Joseph J. Atick and A. Norman Redlich

Derivation of Hebbian Equations From a Nonlinear Model
        Kenneth D. Miller

Spontaneous Development of Modularity in Simple Cortical Models
        Alex Chernjavsky and John Moody

The Bootstrap Widrow-Hoff Rule As a Cluster-Formation Algorithm
        Geoffrey E. Hinton and Steven J. Nowlan

The Effects of Precision Constraints in a Back-Propagation Learning Network
        Paul W. Hollis, John S. Harper, and John J. Paulos

Exhaustive Learning
        D. B. Schwartz, Sarah A. Solla, V. K. Samalam, and J. S. Denker

A Method for Designing Neural Networks Using Non-Linear Multivariate
Analysis: Application to Speaker-Independent Vowel Recognition
        Toshio Irino and Hideki Kawahara

SUBSCRIPTIONS: Volume 2

______ $35     Student
______ $50     Individual
______ $100    Institution

Add $12 for surface-mail postage outside the USA and Canada.
Add $18 for air mail.

(Back issues of volume 1 are available for $25 each.)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
        (617) 253-2889.


------------------------------

Subject: MLP classifiers == Bayes
From:    John.Hampshire@SPEECH2.CS.CMU.EDU
Date:    Sun, 30 Sep 90 20:28:16 -0400

EQUIVALENCE PROOFS FOR MULTI-LAYER PERCEPTRON CLASSIFIERS AND
          THE BAYESIAN DISCRIMINANT FUNCTION

   John B. Hampshire II       and     Barak A. Pearlmutter
                Carnegie Mellon University

             --------------------------------

  We show the conditions necessary for an MLP classifier to
yield (optimal) Bayesian classification performance.

Background:
==========

  Back in 1973, Duda and Hart showed that a simple perceptron
trained with the Mean-Squared Error (MSE) objective function would
minimize the squared approximation error to the Bayesian discriminant
function.  If the two-class random vector (RV) being classified were
linearly separable, then the MSE-trained perceptron would produce outputs
that converged to the a posteriori probabilities of the RV, given an
asymptotically large set of statistically independent training samples of
the RV.  Since then, a number of connectionists have re-stated this proof
in various forms for MLP classifiers.
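
[[ Editor's note: For readers who want the Duda & Hart result in one
step, here is a minimal sketch (mine, not the authors'), written in
LaTeX notation.  For a two-class problem with target $t \in \{0,1\}$,
the expected squared error of an output $y(x)$ decomposes as
$$ E\big[(y(x)-t)^2\big] \;=\; E\big[(y(x)-E[t\mid x])^2\big]
   \;+\; E\big[\mathrm{Var}(t\mid x)\big], $$
and the second term does not depend on $y$.  The minimizer is therefore
$y^*(x) = E[t\mid x] = P(\mathrm{class}\ 1 \mid x)$, the a posteriori
probability; thresholding $y^*$ at $1/2$ gives exactly the Bayesian
discriminant. -PM ]]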


What's new:
==========

  We show (in painful mathematical detail) that the proof holds not just
for MSE-trained MLPs; it also holds for MLPs trained with either of two
broad classes of objective functions.  The number of classes associated
with the input RV is arbitrary, as is the dimensionality of the RV, and
the specific parameterization of the MLP.  Again, we state the conditions
necessary for Bayesian equivalence to hold.

  The first class of "reasonable error measures" yields Bayesian
performance by producing MLP outputs that converge to the a posteriori
probabilities of the RV.  MSE and a number of information-theoretic
learning rules leading to the Cross Entropy objective function are
familiar examples of reasonable error measures.  The second class of
objective functions, known as Classification Figures of Merit (CFM),
yields (theoretically limited) Bayesian performance by producing MLP
outputs that reflect the identity of the largest a posteriori
probability of the input RV.
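
[[ Editor's note: The short program below is an illustrative sketch
only -- every name and parameter in it is mine, not the authors' --
showing the claimed convergence on a toy problem: a single sigmoid
unit trained with either MSE or cross entropy on two overlapping 1-D
Gaussian classes drives its output toward the true a posteriori
probability, which for these class densities is sigmoid(2x). -PM ]]

import numpy as np

rng = np.random.default_rng(0)

# Two 1-D Gaussian classes (means -1 and +1, unit variance, equal
# priors); the true posterior is P(class 1 | x) = sigmoid(2x).
n = 5000
x = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(+1.0, 1.0, n)])
t = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(objective, epochs=2000, lr=0.5):
    """Batch gradient descent on a single sigmoid unit y = s(w*x + b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        y = sigmoid(w * x + b)
        if objective == "mse":
            grad = (y - t) * y * (1.0 - y)  # d/dz of 0.5*(y - t)^2
        else:                               # cross entropy: gradient
            grad = y - t                    # w.r.t. z simplifies to y - t
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)
    return w, b

for objective in ("mse", "cross-entropy"):
    w, b = train(objective)
    xs = np.linspace(-3.0, 3.0, 7)
    # Deviations of the learned output from the true posterior; these
    # should be small and shrink with more data and training.
    print(objective, np.round(sigmoid(w * xs + b) - sigmoid(2.0 * xs), 2))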


How to get a copy:
=================

  To appear in the "Proceedings of the 1990 Connectionist Models Summer
School," Touretzky, Elman, Sejnowski, and Hinton, eds., San Mateo, CA:
Morgan Kaufmann, 1990.  This text will be available at NIPS in late
November.

  If you can't wait, pre-prints may be obtained from the OSU
connectionist literature database using the following procedure:

% ftp cheops.cis.ohio-state.edu  (or, ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get hampshire.bayes90.ps.Z
261245 bytes sent in 9.9 seconds (26 Kbytes/s)
ftp> quit
% uncompress hampshire.bayes90.ps.Z
% lpr hampshire.bayes90.ps

------------------------------

Subject: TR - MasPar Performance Estimates
From:    Kamil A Grajski <kamil@wdl1.wdl.fac.com>
Date:    Mon, 01 Oct 90 10:53:48 -0700

To receive a copy of the following tech report, send your physical
address to kamil@wdl1.fac.ford.com (TCP/IP #137.249.32.102).


   NEUROCOMPUTING USING THE MasPar MP-1 MASSIVELY PARALLEL PROCESSOR

                           Kamil A. Grajski
                            Ford Aerospace
                Advanced Development Department / MSX-22
                        San Jose, CA 95161-9041

                            (408) 473 - 4394

                               ABSTRACT

We present an evaluation of neurocomputing using the MasPar MP-1
massively parallel processor.  Performance figures are obtained on a 2K
processor element (PE) machine.  Scaling behavior is evaluated for
certain cases on a 4K and an 8K PE machine.  Extrapolated performance
figures are given for the full 16K PE machine.  Specific neural networks
evaluated are: a.) "vanilla" back-propagation, yielding approximately 10
MCUPS real-time learning, (16K machine), for a 256-128-256 network; b.)
an Elman-type recurrent network (256-128-256, 1 time delay, 16K machine)
yielding approximately 9.5 MCUPS real-time learning; and c.)  Kohonen
self-organizing feature map yielding 1335 10-dimensional patterns per
second on a 2K PE machine only (2048 units), or 27.3 MCUPS.  The
back-prop networks are mapped as one weight per processor.  The Kohonen
net is mapped as one unit per PE.  The resultant performance figures
suggest that for back-prop networks, a single copy, many weights per
processor mapping should increase performance.  Last, we present basic
data transfer and arithmetic benchmarks useful for a priori estimates of
machine performance on problems of interest in neurocomputing.
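
[[ Editor's note: For readers unfamiliar with the MCUPS unit (millions
of connection updates per second), here is a back-of-the-envelope check
of the quoted back-prop figure.  The snippet is mine, not the report's,
and assumes one weight update per connection per training pattern. -PM ]]

# Connection count of the 256-128-256 network quoted in the abstract.
layers = (256, 128, 256)
connections = sum(a * b for a, b in zip(layers, layers[1:]))  # 65,536

mcups = 10.0  # quoted rate for the full 16K-PE machine
patterns_per_sec = mcups * 1e6 / connections
print(connections, patterns_per_sec)  # 65536 connections, ~152.6 patterns/s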

 ------------------------------------------------------------------------

If you wish to receive additional information on the machine and
benchmarks for other types of problems, e.g., image processing, please
contact MasPar directly.  Or, if you specifically ask me to, I'll pass
along your name and area of interest to the right folks over there.

------------------------------

Subject: NIPS PROGRAM --Correction
From:    jose@learning.siemens.com (Steve Hanson)
Date:    Fri, 28 Sep 90 08:04:54 -0400


[[ Editor's note: Luckily, the first version was in queue for the Digest
when the correction came in. -PM ]] 

We had inadvertently excluded some of the posters from the preliminary
program.  We apologize for any confusion this may have caused.

                --Steve Hanson

Below is a complete and correct version of the NIPS preliminary program.


================================================
NIPS 1990 Preliminary Program, November 26-29, Denver, Colorado 

Monday, November 26, 1990

12:00 PM:       Registration  Begins                            
6:30 PM:        Reception and Conference Banquet                              
8:30 PM:        After Banquet Talk,  "Cortical Memory Systems in Humans", by Antonio Damasio. 

Tuesday, November 27, 1990

7:30 AM:        Continental Breakfast
8:30 AM:        Oral Session 1:  Learning and Memory
10:30 AM:       Break                              
11:00 AM:       Oral Session 2:  Navigation and Planning
12:35 PM:       Poster Preview Session I, Demos                              
2:30 PM:        Oral Session 3:  Temporal and Real Time Processing
4:10 PM:        Break
4:40 PM:        Oral Session 4:  Representation, Learning, and Generalization I
6:40 PM:        Free 
7:30 PM:        Refreshments and Poster Session I
  
Wednesday, November 28, 1990
                          
7:30 AM:        Continental Breakfast                              
8:30 AM:        Oral Session 5:  Visual Processing
10:20 AM:       Break
10:50 AM:       Oral Session 6:  Speech Processing
12:20 PM:       Poster Preview Session II, Demos                              
2:30 PM:        Oral Session 7: Representation, Learning, and
                Generalization II
4:10 PM:        Break                              
4:40 PM:        Oral Session 8:  Control
6:40 PM:        Free
7:30 PM:        Refreshments and Poster Session II 

Thursday, November 29, 1990
                          
7:30 AM:        Continental Breakfast                              
8:30 AM:        Oral Session 9:  Self-Organization and Unsupervised Learning
10:20 AM:       Break
10:50 AM:       Session Continues                              
12:10 PM:       Conference Adjourns
5:00 PM:        Reception and Registration for Post-Conference Workshop (Keystone, CO)

Friday, November 30 -- Saturday, December 1, 1990
Post-Conference Workshops at Keystone
                          
                          

 ----------------------------------------------------------------

ORAL PROGRAM


Monday, November 26, 1990

12:00 PM:       Registration  Begins                            

6:30 PM:        Reception and Conference Banquet                              

8:30 PM:        After Banquet Talk, "Cortical Memory Systems in Humans",
by Antonio Damasio.

Tuesday, November 27, 1990

7:30 AM:        Continental Breakfast


ORAL SESSION 1:  LEARNING AND MEMORY 
Session Chair: John Moody, Yale University.              

8:30 AM:        "Multiple Components of Learning and Memory in Aplysia",
by Thomas Carew.

9:00 AM:        "VLSI Implementations of Learning and Memory Systems: A
Review", by Mark Holler.

9:30 AM:        "A Short-Term Memory Architecture for the Learning of
Morphophonemic Rules", by Michael Gasser and Chan-Do Lee.

9:50 AM "Short Term Active Memory: A Recurrent Network Model of the
Neural Mechanism", by David Zipser.

10:10 AM        "Direct Memory Access Using Two Cues: Finding the
Intersection of Sets in a Connectionist Model", by Janet Wiles, Michael
Humphreys and John Bain.

10:30 AM        Break                              


ORAL SESSION 2:  NAVIGATION AND PLANNING
Session Chair: Lee Giles, NEC Research.

11:00 AM        "Real-Time Autonomous Robot Navigation Using VLSI Neural
Networks", by Alan Murray, Lionel Tarassenko and Michael Brownlow.

11:20 AM        "Planning with an Adaptive World Model", by Sebastian B.
Thrun, Knut Moller and Alexander Linden.

11:40 AM        "A Connectionist Learning Control Architecture for
Navigation", by Jonathan Bachrach.

12:00 PM        Spotlight on Language: Posters La1 and La3.

12:10 PM        Spotlight on Applications: Posters App1, App6, App7,
App10, and App11.

12:35 PM        Poster Preview Session I, Demos                              


ORAL SESSION 3:  TEMPORAL AND REAL TIME PROCESSING
Session Chair: Josh Alspector, Bellcore               

2:30 PM "Learning and Adaptation in Real Time Systems", by Carver Mead.

3:00 PM "Applications of Neural Networks in Video Signal Processing", by
John Pearson.

3:30 PM "Predicting the Future: A Connectionist Approach", by Andreas S.
Weigend, Bernardo Huberman and David E. Rumelhart.

3:50 PM "Algorithmic Musical Composition with Melodic and Stylistic
Constraints", by Michael Mozer and Todd Soukup.

4:10 PM Break
                              

ORAL SESSION 4: 
REPRESENTATION, LEARNING, AND GENERALIZATION I
Session Chair: Gerry Tesauro, IBM Research Labs.                

4:40 PM "An Overview of Representation and Convergence Results for
Multilayer Feedforward Networks", by Hal White .

5:10 PM "A Simplified Linear-Threshold-Based Neural Network Pattern
Classifier", by Terrence L. Fine.

5:30 PM "A Novel approach to predicition of the 3-dimensional structures
of protein backbones by neural networks", by H. Bohr, J. Bohr, S. Brunak,
R.M.J. Cotterill, H. Fredholm, B. Lautrup and S.B. Petersen.

5:50 PM "On the Circuit Complexity of Neural Networks", by Vwani
Roychowdhury, Kai- Yeung Siu, Alon Orlitsky and Thomas Kailath .

6:10 PM Spotlight on Learning and Generalization: Posters LG2, LG3, LG8,
LS2, LS5, and LS8.

6:40 PM Free 
                             
7:30 PM Refreshments and Poster Session I
  

Wednesday, November 28, 1990
                          
7:30 AM Continental Breakfast                              

ORAL SESSION 5:  VISUAL PROCESSING
Session Chair: Yann Le Cun, AT&T Bell Labs

8:30 AM "Neural Dynamics of Motion Segmentation", by Ennio Mingolla. 

9:00 AM "VLSI Implementation of a Network for Color Constancy", by Andrew
Moore, John Allman, Geoffrey Fox and Rodney Goodman.

9:20 AM "Optimal Filtering in the Salamander Retina", by Fred Rieke,
Geoffrey Owen and William Bialek.

9:40 AM "Grouping Contour Elements Using a Locally Connected Network", by
Amnon Shashua and Shimon Ullman.

10:00 AM        Spotlight on Visual Motion Processing: Posters VP3, VP6,
VP9, and VP12.
 
10:20 AM        Break


ORAL SESSION 6:  SPEECH PROCESSING
Session Chair: Richard Lippmann, MIT Lincoln Labs

10:50 AM        "From Speech Recognition to Understanding: Development of
the MIT SUMMIT and VOYAGER Systems", by James Glass.

11:20 AM        "Speech Recognition using Connectionist Approaches", by
K. Chouki, S. Soudoplatoff, A. Wallyn, F. Bimbot and H. Valbret.
 
11:40 AM        "Continuous Speech Recognition Using Linked Predictive
Neural Networks", by Joe Tebelskis, Alex Waibel and Bojan Petek.
 
12:00 PM        Spotlight on Speech and Signal Processing: Posters Sig1,
Sig2, Sp2, and Sp7.

12:20 PM        Poster Preview Session II, Demos                              


ORAL SESSION 7:
REPRESENTATION, LEARNING AND GENERALIZATION II
Session Chair: Steve Hanson, Siemens Research.                

2:30 PM "Learning and Understanding Functions of Many Variables Through
Adaptive Spline Networks", by Jerome Friedman.

3:00 PM "Connectionist Modeling of Generalization and Classification", by
Roger Shepard.

3:30 PM "Bumptrees for Efficient Function, Constraint, and Classification
Learning", by Stephen M.Omohundro.

3:50 PM "Generalization Properties of Networks using the Least Mean
Square Algorithm", by Yves Chauvin.

4:10 PM Break                              

ORAL SESSION 8:  CONTROL
Session Chair: David Touretzky, Carnegie-Mellon University.

4:40 PM "Neural Network Application to Diagnostics and Control of Vehicle
Control Systems", by Kenneth Marko.

5:10 PM "Neural Network Models Reveal the Organizational Principles of
the Vestibulo- Ocular Reflex and Explain the Properties of its
Interneurons", by T.J. Anastasio.

5:30 PM "A General Network Architecture for Nonlinear Control Problems",
by Charles Schley, Yves Chauvin, Van Henkle and Richard Golden.

5:50 PM "Design and Implementation of a High Speed CMAC Neural Network
Using Programmable CMOS Logic Cell Arrays", by W. Thomas Miller, Brain A.
Box, Erich C. Whitney and James M. Glynn.

6:10 PM Spotlight on Control: Posters CN2, CN6, and CN7.

6:25 PM Spotlight on Oscillations: Posters Osc1, Osc2, and Osc3.
                             
6:40 PM Free
                              
7:30 PM Refreshments and Poster Session II 

Thursday, November 29, 1990
                          
                            
7:30 AM Continental Breakfast                              


ORAL SESSION 9:
SELF ORGANIZATION AND UNSUPERVISED LEARNING
Session Chair: Terry Sejnowski, The Salk Institute.

8:30 AM "Self-Organization in a Developing Visual Pattern", by Martha
Constantine-Paton.

9:00 AM "Models for the Development of Eye-Brain Maps", by Jack Cowan.

9:20 AM "VLSI Implementation of TInMANN", by Matt Melton, Tan Pahn and
Doug Reeves.
 
9:40 AM "Fast Adaptive K-Means Clustering", by Chris Darken and John
Moody.

10:00 AM        "Learning Theory and Experiments with Competitive
Networks", by Griff Bilbro and David Van den Bout.

10:20 AM        Break
                              
10:50 AM        "Self-Organization and Non-Linear Processing in
Hippocampal Neurons", by Thomas H. Brown, Zachary Mainen, Anthony Zador
and Brenda Claiborne.
 
11:10 AM        "Weight-Space Dynamics of Recurrent Hebbian
Networks", by Todd K. Leen.
    
11:30 AM        "Discovering and Using the Single Viewpoint Constraint",
by Richard S. Zemel and Geoffrey Hinton.

11:50 AM        "Task Decomposition Through Competition in a Modular
Connectionist Architecture: The What and Where Vision Tasks", by Robert
A. Jacobs, Michael Jordan and Andrew Barto.

12:10 PM         Conference Adjourns

5:00 PM Post-Conference Workshop Begins (Keystone, CO)                           

  -----------------------------------------------------------------


POSTER PROGRAM

POSTER SESSION I 
Tuesday, November 27
(* denotes poster spotlight)


APPLICATIONS

App1*   "A B-P ANN Commodity Trader", by J.E. Collard.

App2    "Analog Neural Networks as Decoders", by Ruth A. Erlanson and
Yaser Abu-Mostafa.

App3    "Proximity Effect Corrections in Electron Beam Lithography Using
a Neural Network", by Robert C. Frye, Kevin Cummings and Edward Rietman.

App4    "A Neural Expert System with Automated Extraction of Fuzzy
IF-THEN Rules and Its Application to Medical Diagnosis", by Yoichi
Hayashi.

App5    "Integrated Segmentation and Recognition of Machine and
Hand-printed Characters", by James D. Keeler, Eric Hartman and Wee-Hong
Leow.

App6*   "Training Knowledge-Based Neural Networks to Recognize Genes in
DNA Sequences", by Michael O. Noordewier, Geoffrey Towell and Jude
Shavlik.

App7*   "Seismic Event Identification Using Artificial Neural Networks",
by John L. Perry and Douglas Baumgardt.

App8    "Rapidly Adapting Artificial Neural Networks for Autonomous
Navigation", by Dean A. Pomerleau.

App9    "Sequential Adaptation of Radial Basis Function Neural Networks
and its Application to Time-series Prediction", by V. Kadirkamanathan, M.
Niranjan and F. Fallside.

App10*  "EMPATH: Face, Emotion, and Gender Recognition Using Holons", by
Garrison W. Cottrell and Janet Metcalfe.

App11*  "Sexnet: A Neural Network Identifies Sex from Human Faces", by
B. Golomb, D. Lawrence and T.J. Sejnowski.


EVOLUTION AND LEARNING

EL1     "Using Genetic Algorithms to Improve Pattern Classification
Performance", by Eric I. Chang and Richard P. Lippmann.

EL2     "Evolution and Learning in Neural Networks: The Number and
Distribution of Learning Trials Affect the Rate of Evolution", by Ron
Keesing and David Stork.


LANGUAGE

La1*    "Harmonic Grammar", by Geraldine Legendre, Yoshiro Miyata and
Paul Smolensky.

La2     "Translating Locative Prepositions", by Paul Munro and Mary Tabasko.

La3*    "Language Acquisition via Strange Automata", by Jordan B. Pollack.

La4     "Exploiting Syllable Structure in a Connectionist Phonology
Model", by David S. Touretzky and Deirdre Wheeler.


LEARNING AND GENERALIZATION

LG1     "Generalization Properties of Radial Basis Functions", by Sherif
M. Botros and C.G. Atkeson.

LG2*    "Neural Net Algorithms That Learn In Polynomial Time From
Examples and Queries", by Eric Baum.

LG3*    "Looking for the gap: Experiments on the cause of exponential
generalization", by David Cohn and Gerald Tesauro.

LG4     "Dynamics of Generalization in Linear Perceptrons", by A. Krogh
and John Hertz.

LG5     "Second Order Properties of Error Surfaces, Learning Time, and
Generalization", by Yann LeCun, Ido Kanter and Sara Solla.

LG6     "Kolmogorov Complexity and Generalization in Neural Networks", by
Barak A. Pearlmutter and Ronald Rosenfeld.
 
LG7     "Learning Versus Generalization in a Boolean Neural Network", by
Jonathan Shapiro.

LG8*    "On Stochastic Complexity and Admissible Models for Neural
Network Classifiers", by Padhraic Smyth.

LG9     "Asymptotic slowing down of the nearest-neighbor classifier", by
Robert R. Snapp, Demetri Psaltis and Santosh Venkatesh.

LG10    "Remarks on Interpolation and Recognition Using Neural Nets", by
Eduardo D. Sontag.

LG11    "Epsilon-Entropy and the Complexity of Feedforward Neural
Networks", by Robert C. Williamson.


LEARNING SYSTEMS

LS1     "Analysis of the Convergence Properties of Kohonen's LVQ", by
John S. Baras and Anthony LaVigna.

LS2*    "A Framework for the Cooperation of Learning Algorithms", by Leon
Bottou and Patrick Gallinari.

LS3     "Back-Propagation is Sensitive to Initial Conditions", by John F.
Kolen and Jordan Pollack.

LS4     "Discovering Discrete Distributed Representations with Recursive
Competitive Learning", by Michael C. Mozer.

LS5*    "From Competitive Learning to Adaptive Mixtures of Experts", by
Steven J. Nowlan and Geoffrey Hinton.

LS6     "ALCOVE: A Connectionist Model of Category Learning", by John K.
Kruschke.

LS7     "Transforming NN Output Activation Levels to Probability
Distributions", by John S. Denker and Yann LeCun.

LS8*    "Closed-Form Inversion of Backpropagation Networks: Theory and
Optimization Issues", by Michael L. Rossen.


LOCALIZED BASIS FUNCTIONS

LBF1    "Computing with Arrays of Bell Shaped Functions, Bernstein
Polynomials and the Heat Equation", by Pierre Baldi.

LBF2    "Function Approximation Using Multi-Layered Neural Networks with
B-Spline Receptive Fields", by Stephen H. Lane, David Handelman, Jack
Gelfand and Marshall Flax.

LBF3    "A Resource-Allocating Neural Network for Function Interpolation",
by John Platt.

LBF4    "Adaptive Range Coding", by B.E. Rosen, J.M. Goodwin and J.J.
Vidal.

LBF5    "Oriented Nonradial Basis Function Networks for Image Coding and
Analysis", by Avi Saha, Jim Christian, D.S. Tang and Chuan-Lin Wu.

LBF6    "A Tree-Structured Network for Approximation on High-Dimensional
Spaces", by T. Sanger.

LBF7    "Spherical Units as Dynamic Reconfigurable Consequential Regions
and their Implications for Modeling Human Learning and Generalization",
by Stephen Jose Hanson and Mark Gluck.

LBF8    "Feedforward Neural Networks: Analysis and Synthesis Using
Discrete Affine Wavelet Transformations", by Y.C. Pati and P.S.
Krishnaprasad.

LBF9    "A Network that Learns from Unreliable Data and Negative
Examples", by Federico Girosi, Tomaso Poggio and Bruno Caprile.

LBF10   "How Receptive Field Parameters Affect Neural Learning", by
Bartlett W. Mel and Stephen Omohundro.


MEMORY SYSTEMS

MS1     "The Devil and the Network: What Sparsity Implies to Robustness
and Memory", by Sanjay Biswas and Santosh Venkatesh.

MS2     "Cholinergic modulation selective for intrinsic fiber synapses
may enhance associative memory properties of piriform cortex", by Michael
E. Hasselmo, Brooke Anderson and James Bower.

MS3     "Associative Memory in a Network of 'Biological' Neurons", by
Wulfram Gerstner.

MS4     "A Learning Rule for Guaranteed CAM Storage of Analog Patterns
and Continuous Sequences in a Network of 3N^2 Weights", by William Baird.



VLSI IMPLEMENTATIONS

VLSI1   "A Highly Compact Linear Weight Function Based on the use of
EEPROMs", by A. Kramer, C.K. Sin, R. Chu and P.K. Ko.

VLSI2   "Back Propagation Implementation on the Adaptive Solutions
Neurocomputer Chip", by Hal McCartor.

VLSI3   "Analog Non-Volatile VLSI Neural Network Chip and
Back-Propagation Training", by Simon Tam, Bhusan Gupta, Hernan A. Castro
and Mark Holler.
 
VLSI4   "An Analog VLSI Splining Circuit", by D.B. Schwartz and V.K.
Samalam.

VLSI5   "Reconfigurable Neural Net Chip with 32k Connections", by
H.P. Graf and D. Henderson.

VLSI6   "Relaxation Networks for Large Supervised Learning Problems", by
Joshua Alspector, Robert Allen and Anthony Jayakumar.
        
 

POSTER SESSION II
Wednesday, November 28
(* denotes poster spotlight)


CONTROL AND NAVIGATION

CN1     "A Reinforcement Learning Variant for Control Scheduling", by
Aloke Guha.

CN2*    "Learning Trajectory and Force Control of an Artificial Muscle
Arm by Parallel-Hierarchical Neural Network Model", by Masazumi Katayama
and Mitsuo Kawato.

CN3     "Identification and Control of a Queueing System with Neural
Networks", by Rodolfo A. Milito, Isabelle Guyon and Sara Solla.

CN4     "Conditioning And Spatial Learning Tasks", by Peter Dayan.

CN5     "Reinforcement Learning in Non-Markovian Environments", by Jurgen
Schmidhuber.

CN6*    "A Model for Distributed Sensorimotor Control of the Cockroach
Escape Turn", by Randall D. Beer, Gary Kacmarcik, Roy Ritzman and Hillel
Chiel.

CN7*    "Flight Control in the Dragonfly: A Neurobiological Simulation",
by W.E. Faller and M.W. Luttges.

CN8     "Integrated Modeling and Control Based on Reinforcement Learning
and Dynamic Programming", by Richard S. Sutton.


DEVELOPMENT

Dev2    "Interaction Among Ocular Dominance, Retinotopic Order and
On-Center/Off-Center Pathways During Development", by Shigeru Tanaka.

Dev3    "Simple Spin Models for the development of Ocular Dominance and
Iso-Orientation Columns", by Jack Cowan.

NEURODYNAMICS

        
ND1     "Reduction of Order for Systems of Equations Describing the
Behavior of Complex Neurons", by T.B. Kepler, L.F. Abbott and E. Marder.

ND2     "An Attractor Neural Network Model of Recall and Recognition", by
E. Ruppin and Y. Yeshurun.
        
ND3     "Stochastic Neurodynamics", by Jack Cowan.

ND4     "A Method for the Efficient Design of Boltzmann Machines for
Classification Problems", by Ajay Gupta and Wolfgang Maass.

ND5     "Analog Neural Networks that are Parallel and Stable", by C.M.
Marcus, F.R. Waugh and R.M. Westervelt.

ND6     "A Lagrangian Approach to Fixpoints", by Eric Mjolsness and
Willard Miranker.

ND7     "Shaping the State Space Landscape in Recurrent Networks", by
Patrice Y. Simard, Jean Pierre Raysz and Bernard Victorri.

ND8     "Adjoint-Operators and non-Adiabatic Learning Algorithms in
Neural Networks", by N. Toomarian and J. Barhen.


OSCILLATIONS

Osc1*   "Connectivity and Oscillations in Two Dimensional Models of
Neural Populations", by Daniel M. Kammen, Ernst Niebur and Christof Koch.


Osc2*   "Oscillation Onset in Neural Delayed Feedback", by Andre Longtin.

Osc3*   "Analog Computation at a Critical Point: A Novel Function for
Neuronal Oscillations?", by Leonid Kruglyak.


PERFORMANCE COMPARISONS

PC1     "Comparison of three classification techniques, CART, C4.5 and
multi-layer perceptrons", by A.C. Tsoi and R.A. Pearson.

PC2     "A Comparative Study of the Practical Characteristics of Neural
Network and Conventional Pattern Classifiers", by Kenny Ng and Richard
Lippmann.

PC3     "Time Trials on Second-Order and Variable-Learning-Rate
Algorithms", by Richard Rohwer.

PC4     "Kohonen Networks and Clustering: Comparative Performance in
Color Clustering", by Wesley Snyder, Daniel Nissman, David Van den Bout
and Griff Bilbro.




SIGNAL PROCESSING

Sig1*   "Natural Dolphin Echo Recognition Using An Integrator Gateway
Network", by H.L. Roitblat, P.W.B. Moore, R.H. Penner and P.E.
Nachtigall.

Sig2*   "Signal Processing by Multiplexing and Demultiplexing in
Neurons", by David C. Tam.


SPEECH PROCESSING

Sp1     "A Temporal Neural Network for Word Identification from
Continuous Phoneme Strings", by Robert B. Allen and Candace Kamm.

Sp2*    "Connectionist Approaches to the use of Markov Models for Speech
Recognition", by H. Bourlard and N. Morgan.

Sp3     "The Tempo 2 Algorithm: Adjusting Time-Delays by Supervised
Learning", by Ulrich Bodenhausen.

Sp4     "Spoken Letter Recognition", by Mark Fanty and Ronald A. Cole.

Sp5     "Speech Recognition Using Demi-Syllable Neural Prediction Model",
by Ken-ichi Iso and Takao Watanabe.

Sp6     "RECNORM: Simultaneous Normalisation and Classification Applied
to Speech Recognition", by John S. Bridle and Steven Cox.

Sp7*    "Exploratory Feature Extraction in Speech Signals", by Nathan
Intrator.

Sp8     "Detection and Classification of Phonemes Using
Context-Independent Error Back-Propagation", by Hong C. Leung, James R.
Glass, Michael S. Phillips and Victor W. Zue.

TEMPORAL PROCESSING

TP1     "Modeling Time Varying Systems Using a Hidden Control Neural
Network Architecture", by Esther Levin.

TP2     "A New Neural Network Model for Temporal Processing", by Bert
de Vries and Jose Principe.

TP3     "ART2/BP architecture for adaptive estimation of dynamic
processes", by Einar Sorheim.

TP4     "Statistical Mechanics of Temporal Association in Neural Networks
with Delayed Interaction", by Andreas V.M. Herz, Zhaoping Li, Wulfram
Gerstner and J. Leo van Hemmen.

TP5     "Learning Time Varying Concepts", by Anthony Kuh and Thomas Petsche.

TP6     "The Recurrent Cascade-Correlation Architecture" by Scott E. Fahlman. 


VISUAL PROCESSING

VP1     "Stereopsis by Neural Networks Which Learn the Constraints", by
Alireza Khotanzad and Ying-Wung Lee.

VP2     "A Neural Network Approach for Three-Dimensional Object
Recognition", by Volker Tresp.

VP3*    "A Multiresolution Network Model of Motion Computation in
Primates", by H. Taichi Wang, Bimal Mathur and Christof Koch.

VP4     "A Second-Order Translation, Rotation and Scale Invariant Neural
Network", by Shelly D.D. Goggin, Kristina Johnson and Karl Gustafson.

VP5     "Optimal Sampling of Natural Images: A Design Principle for the
Visual System?", by William Bialek, Daniel Ruderman and A. Zee.

VP6*    "Learning to See Rotation and Dilation with a Hebb Rule", by
Martin I. Sereno and Margaret E. Sereno.

VP7     "Feedback Synapse to Cone and Light Adaptation", by Josef
Skrzypek.

VP8     "A Four Neuron Circuit Accounts for Change Sensitive Inhibition
in Salamander Retina", by J.L. Teeters, F. H. Eeckman, G.W. Maguire, S.D.
Eliasof and F.S. Werblin.

VP9*    "Qualitative structure from motion", by Daphna Weinshall.

VP10    "An Analog VLSI Chip for Finding Edges from Zero-Crossings", by
Wyeth Bair.

VP11    "A CCD Parallel Processing Architecture and Simulation of CCD
Implementation of the Neocognitron", by Michael Chuang.

VP12*   "A Correlation-based Motion Detection Chip", by Timothy Horiuchi,
John Lazzaro, Andy Moore and Christof Koch.


------------------------------

End of Neuron Digest [Volume 6 Issue 58]
****************************************