[comp.ai.neural-nets] Neuron Digest V4 #17

neuron-request@HPLABS.HP.COM (Neuron-Digest Moderator Peter Marvit) (10/26/88)

Neuron Digest	Tuesday, 25 Oct 1988
		Volume 4 : Issue 17

Today's Topics:

			       Administrivia
		    Congress on Cybernetics and Systems
	     Report: Markov Models and Multilayer Perceptrons
				 AAAIC '88
		3rd Intl. Conference on Genetic Algorithms
		 Abstract: ANNs and Radial Basis Functions
      Abstract: A Dynamic Connectionist Model For Phoneme Recognition
	 tech report: Learning Algorithm for Fully Recurrent ANNs
		       Paper from nEuro'88 in Paris
     Cary Kornfeld to speak on neural networks and bitmapped graphics
		   Neural Network Symposium Announcement
			NIPS Student Travel Awards


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: Administrivia
From:    "Neuron-Digest Moderator -- Peter Marvit" <neuron@hplms2>
Date:    Tue, 25 Oct 88 15:00:08 -0700 

[[ As you all noticed, nearly everyone received a duplicate of the last issue
of the Digest.  I've traced it to a machine which has too few CPU cycles
available.  We're waiting for a faster one, but in the meantime I'm sending
this through a different route.  Thank you to all who sent me headers so I
could trace the problem.

This issue contains all paper and conference announcements.  I'll send the
next one with discussions and requests.  I'm saving the discussion of
Consciousness for the issue after that.

Keep those cards and letters coming. -PM ]]

------------------------------

Subject: Congress on Cybernetics and Systems
From:    SPNHC@CUNYVM.CUNY.EDU (Spyros Antoniou)
Organization: The City University of New York - New York, NY
Date:    08 Oct 88 03:28:19 +0000 


             WORLD ORGANIZATION OF SYSTEMS AND CYBERNETICS

         8 T H    I N T E R N A T I O N A L    C O N G R E S S

         O F    C Y B E R N E T I C S    A N D   S Y S T E M S

 JUNE 11-15, 1990 at Hunter College, City University of New York, USA

     This triennial conference is supported by many international
groups  concerned with  management, the  sciences, computers, and
technology systems.

      The 1990  Congress  is the eighth in a series, previous events
having been held in  London (1969),  Oxford (1972), Bucharest (1975),
Amsterdam (1978), Mexico City (1981), Paris (1984) and London (1987).

      The  Congress  will  provide  a forum  for the  presentation
and discussion  of current research. Several specialized  sections
will focus on computer science, artificial intelligence, cognitive
science, biocybernetics, psychocybernetics  and sociocybernetics.
Suggestions for other relevant topics are welcome.

      Participants who wish to organize a symposium or a section
are requested to submit a proposal (sponsor, subject, potential
participants, very short abstracts) as soon as possible, but not
later than September 1989.  All submissions and correspondence
regarding this conference should be addressed to:

                    Prof. Constantin V. Negoita
                         Congress Chairman
                   Department of Computer Science
                           Hunter College
                    City University of New York
             695 Park Avenue, New York, N.Y. 10021 U.S.A.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|   Spyros D. Antoniou  SPNHC@CUNYVM.BITNET  SDAHC@HUNTER.BITNET    |
|                                                                   |
|      Hunter College of the City University of New York U.S.A.     |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

------------------------------

Subject: Report: Markov Models and Multilayer Perceptrons
From:    prlb2!welleken@uunet.UU.NET (Wellekens)
Date:    Sat, 08 Oct 88 18:00:24 +0100 

The following report is available free of charge from 

Chris.J.Wellekens, Philips Research Laboratory Brussels,
2 Avenue van Becelaere, B-1170 Brussels,Belgium.
Email wlk@prlb2.uucp

      LINKS BETWEEN MARKOV MODELS AND MULTILAYER PERCEPTRONS
                   H.Bourlard and C.J.Wellekens    
                Philips Research Laboratory Brussels

                               ABSTRACT

Hidden Markov models are widely used for automatic speech recognition.  They
inherently incorporate the sequential character of the speech signal and are
statistically trained.  However, the a priori choice of a model topology
limits the flexibility of HMMs. Another drawback of these models is
their weak discriminating power.

Multilayer perceptrons are now promising tools in the connectionist approach
for classification problems and have already been successfully tested on
speech recognition problems.  However, the sequential nature of the speech
signal remains difficult to handle in that kind of machine.

In this paper, a discriminant hidden Markov model is defined and it is shown
how a particular multilayer perceptron with contextual and extra feedback
input units can be considered as a general form of such Markov models.
Relations with other recurrent networks commonly used in speech recognition
are also pointed out.
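
For readers who want a concrete picture of such a machine, here is a
minimal Python sketch of an MLP with contextual and extra feedback input
units.  It only illustrates the architecture named in the abstract, not
the authors' model; the frame dimension, context width, number of states
and hidden size are all assumptions.

# Minimal sketch (not the authors' model) of an MLP whose input combines
# contextual speech frames with its own fed-back output.
import numpy as np

rng = np.random.default_rng(0)
F, C, S, H = 16, 2, 10, 32    # frame dim, context frames per side, states, hidden

n_in = F * (2 * C + 1) + S    # windowed frames plus fed-back state estimate
W1 = rng.normal(0, 0.1, (H, n_in))
W2 = rng.normal(0, 0.1, (S, H))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def label_sequence(frames):
    """Assign a state posterior to every frame of an utterance."""
    padded = np.vstack([np.zeros((C, F)), frames, np.zeros((C, F))])
    prev = np.full(S, 1.0 / S)                  # uniform initial state estimate
    out = []
    for t in range(len(frames)):
        ctx = padded[t:t + 2 * C + 1].ravel()   # contextual input units
        x = np.concatenate([ctx, prev])         # extra feedback input units
        h = np.tanh(W1 @ x)
        prev = softmax(W2 @ h)                  # state posteriors, fed back
        out.append(prev)
    return np.array(out)

posteriors = label_sequence(rng.normal(size=(50, F)))   # 50 random frames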

                                           Chris

------------------------------

Subject: AAAIC '88
From:    wilsonjb%avlab.dnet@AFWAL-AAA.ARPA
Organization: The Internet
Date:    08 Oct 88 18:20:00 +0000 

Aerospace Applications of Artificial Intelligence (AAAIC) '88 

		    Special Emphasis
 		           On
 	      Neural Network Applications 



LOCATION:	Stouffer Dayton Plaza Hotel
 		Dayton, OH 

DATES:		Monday, 24 Oct - Friday, 28 Oct 88 


PLENARY SESSION		Tuesday Morning

	Lt General John M. Loh,
	  Commander, USAF Aeronautical Systems Division

	Dr. Stephen Grossberg,
	  President, International Neural Network Society


TECHNICAL SESSIONS	Tuesday - Thursday  (in parallel)

 	I.  Neural Network Aerospace Applications
		Integrating Neural Networks and Expert Systems
 		Neural Networks and Signal Processing
 		Neural Networks and Man-Machine Interface Issues 
		Parallel Processing and Neural Networks
		Optical Neural Networks
 		Back Propagation with Momentum, Shared Weights and Recurrence
 		Cybernetics 

	II. AI Aerospace Applications
 		Developmental Tools and Operational and Maintenance Issues
		  Using Expert Systems 
		Real Time Expert Systems
		Automatic Target Recognition
		Data Fusion/Sensor Fusion
		Combinatorial Optimization for Scheduling and Resource Control
		Machine Learning, Cognition, and Avionics Applications
		Advanced Problem Solving Techniques
		Cooperative and Competitive Network Dynamics in Aerospace

Tutorials

	I.   Introduction to Neural Nets		Mon 8:30 - 11:30
	II.  Natural Language Processing		    8:30 - 11:30
	III. Conditioned Response in Neural Nets	    1:30 -  4:30
	IV.  Verification and Validation of Knowledge	    1:30 -  4:30
		Based Systems

Workshops

	I.   Robotics, Vision, and Speech		Fri 8:30 - 11:30
	II.  AI and Human Engineering Issues		    8:30 - 11:30
	III. Synthesis of Intelligence			    1:30 -  4:30
	IV.  A Futurist's View of AI			    1:30 -  4:30


REGISTRATION INFORMATION
					(after 30 Sept)

	Conference			$225 
	Individual Tech Session (ea)	$ 50
	Tutorials (ea)			$ 50
	Workshops (ea)			$ 50
	

Conference Registration includes:	Plenary Session
					Tuesday Luncheon
					Wednesday Banquet
					All Technical Sessions
					Proceedings

Tutorials and Workshops are extra.

For more information, contact:

	AAAIC '88
	Dayton SIGART
	P.O. Box 31434
	Dayton, OH 45431

	Darrel Vidrine
	(513) 255-2446

Hotel information:

	Stouffer Dayton Plaza Hotel
	(513) 224-0800

	Rates:		Govt		Non-Govt

		Single	$55		$75

		Double	$60		$80

------------------------------

Subject: 3rd Intl. Conference on Genetic Algorithms
From:    gref@AIC.NRL.NAVY.MIL
Organization: The Internet
Date:    08 Oct 88 18:20:00 +0000 


                              Call for Papers

         The Third International Conference on Genetic Algorithms
                                 (ICGA-89)


     The Third International Conference on Genetic Algorithms (ICGA-89)
     will be held on June 4-7, 1989 at George Mason University
     near Washington, D.C.  Authors are invited to submit papers on
     all aspects of Genetic Algorithms, including: foundations of
     genetic algorithms, search, optimization, machine learning using
     genetic algorithms, classifier systems, apportionment of credit
     algorithms, relationships to other search and learning paradigms.
     Papers discussing specific applications (e.g., OR, engineering,
     science, etc.) are encouraged.


     Important Dates:

             10 Feb 89:      Submissions must be received by program chair
             10 Mar 89:      Notification of acceptance or rejection
             10 Apr 89:      Camera ready revised versions due
             4-7 Jun 89:     Conference Dates


     Authors are requested to send four copies (hard copy only) of a
     full paper by February 10, 1989 to the program chair:


                            Dr. J. David Schaffer
                            Philips Laboratories
                            345 Scarborough Road
                            Briarcliff Manor, NY 10510
                            ds1@philabs.philips.com
                            (914) 945-6168


     Conference Committee:

     Conference Chair:       Kenneth A. De Jong, George Mason University
     Local Arrangements:     Lashon B. Booker, Naval Research Lab
     Program Chair:          J. David Schaffer, Philips Laboratories
     Program Committee:      Lashon B. Booker
                             Lawrence Davis, Bolt, Beranek and Newman, Inc.
                             Kenneth A. De Jong
                             David E. Goldberg, University of Alabama
                             John J. Grefenstette, Naval Research Lab
                             John H. Holland, University of Michigan
                             George G. Robertson, Xerox PARC
                             J. David Schaffer
                             Stephen F. Smith, Carnegie Mellon University
                             Stewart W. Wilson, Rowland Institute for Science

------------------------------

Subject: Abstract: ANNs and Radial Basis Functions
From:    "M. Niranjan" <niranjan%digsys.engineering.cambridge.ac.uk@NSS.Cs.Ucl.AC.UK>
Date:    Mon, 10 Oct 88 11:59:27 -0000 

Here is an extended summary of a Tech report now available. Apologies for
the incomplete de-TeXing.

niranjan

PS: Remember, reprint requests should be sent to
    "niranjan@dsl.eng.cam.ac.uk"

=============================================================================


	     NEURAL NETWORKS AND RADIAL BASIS FUNCTIONS
		IN CLASSIFYING STATIC SPEECH PATTERNS

		   Mahesan Niranjan & Frank Fallside

		      CUED/F-INFENG/TR 22

University Engineering Department
Cambridge, CB2 1PZ, England
Email: niranjan@dsl.eng.cam.ac.uk

SUMMARY

This report compares the performances of three non-linear pattern classifiers
in the recognition of static speech patterns. Two of these classifiers are
neural networks (Multi-layered perceptron and the  modified Kanerva model
(Prager & Fallside, 1988)). The third is the method of radial basis functions
(Broomhead & Lowe, 1988).

The high performance of neural-network based pattern classifiers shows
that simple linear classifiers are inadequate to deal with complex patterns
such as speech. The Multi-layered perceptron (MLP) gives a mechanism to
approximate an arbitrary classification boundary (in the feature space) to a
desired precision. Due to this power and the existence of a simple learning
algorithm (error back-propagation), this technique is in very wide use
nowadays.

The modified Kanerva model (MKM) for pattern classification is derived from
a model of human memory (Kanerva, 1984). It attempts to take advantage of
certain mathematical properties of binary spaces of large dimensionality.
The modified Kanerva model works with real valued inputs. It compares an
input feature vector with a large number of randomly populated `location
cells' in the input feature space; associated with every cell is a `radius'.
Upon comparison, the cell outputs value 1 if the input vector lies within
a volume defined by the radius; its output is zero otherwise. The
discriminant function of the Modified Kanerva classifier is a weighted
sum of these location-cell outputs. It is trained by a gradient descent
algorithm.
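
To make the description concrete, here is a rough Python sketch of such a
classifier; this is one reading of the summary rather than the authors'
code, and the shared radius, cell count and toy problem are assumptions.

# Sketch of a modified-Kanerva-style classifier: random location cells
# with a radius give binary outputs, and only the output weights train.
import numpy as np

rng = np.random.default_rng(1)
d, n_cells, n_classes = 2, 200, 2
centres = rng.uniform(-1, 1, (n_cells, d))   # randomly populated location cells
radius = 0.4                                 # one shared radius, for brevity

def cell_outputs(x):
    # a cell outputs 1 if x lies within its radius, 0 otherwise
    return (np.linalg.norm(centres - x, axis=1) < radius).astype(float)

def train(W, X, y, lr=0.1, epochs=20):
    for _ in range(epochs):
        for x, t in zip(X, y):
            a = cell_outputs(x)
            g = W @ a                          # weighted sum of cell outputs
            W += lr * np.outer(np.eye(n_classes)[t] - g, a)   # gradient descent

W = np.zeros((n_classes, n_cells))
X = rng.uniform(-1, 1, (100, d))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # a toy non-linear problem
train(W, X, y)
predicted = np.argmax(W @ cell_outputs(X[0]))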

The method of radial basis functions (RBF) is a technique for non-linear
discrimination. RBFs have been used by Powell (1985) in  multi-variable
interpolation. The non-linear discriminant function in this method is of the
form,

g(x) = sum_{j=1}^{N} lambda_j phi(||x - x_j||)

Here, x is the feature vector. The lambda_j are weights associated with each
of the given training examples x_j. phi is a kernel function defining the
range of influence of each data point on the class boundary. For a particular
choice of the phi function, and a set of training data {x_j,f_j}, j=1,...,N,
the solution for the lambda_j is closed-form. Thus this technique is
computationally simpler than most neural networks. When used as a
non-parametric technique, each computation at the classification stage involves the
use of all the training examples. This, however, is not a disadvantage since
much of this computing can be done in parallel.
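
Since the closed-form solution carries the computational claim, a small
Python sketch of it may help; the Gaussian form of phi and its width are
illustrative assumptions (the report's particular choice is not reproduced
here).

# Closed-form RBF fit: with phi fixed, the lambda_j come from one linear
# solve of Phi lambda = f against the training targets f_j.
import numpy as np

def rbf_fit(X, f, width=1.0):
    """Solve Phi lambda = f, where Phi[i, j] = phi(||x_i - x_j||)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = np.exp(-(D / width) ** 2)            # assumed Gaussian kernel
    return np.linalg.solve(Phi, f)

def rbf_eval(X, lam, x, width=1.0):
    """g(x) = sum_j lambda_j phi(||x - x_j||)."""
    d = np.linalg.norm(X - x, axis=1)
    return np.exp(-(d / width) ** 2) @ lam

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))                   # 30 training vectors, 4 features
f = (X[:, 0] > 0).astype(float)                # binary class targets f_j
lam = rbf_fit(X, f)
g0 = rbf_eval(X, lam, X[0])                    # interpolates the training data

Note how classifying a new point touches every training example x_j, which
is the parallelizable cost mentioned above.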

In this report, we compare the performance of these classifiers on speech
signals.  Several techniques similar to the method of radial basis functions
are reviewed. The properties of the class boundaries generated by the MLP,
MKM and RBF are derived for simple two-dimensional examples, and an experimental
comparison with speech data is given.

============================================================================

------------------------------

Subject: Abstract: A Dynamic Connectionist Model For Phoneme Recognition
From:    Tony Robinson <ajr@DSL.ENG.CAM.AC.UK>
Date:    Wed, 12 Oct 88 11:29:55 -0000 

For those people who did not attend the nEuro'88 connectionists conference
in Paris, our contribution is now available, abstract included below.

Tony Robinson

PS: Remember, reprint requests should be sent to
    "ajr@dsl.eng.cam.ac.uk" 

==============================================================================

          A DYNAMIC CONNECTIONIST MODEL FOR PHONEME RECOGNITION

                         A J Robinson, F Fallside
               Cambridge University Engineering Department
                  Trumpington Street, Cambridge, England
                          ajr@dsl.eng.cam.ac.uk

                                 ABSTRACT

This paper describes the use of two forms of error propagation net trained
to ascribe phoneme labels to successive frames of speech from multiple
speakers.  The first form places a fixed-length window over the speech and
labels the central portion of the window.  The second form uses a dynamic
structure in which the successive frames of speech and a state vector
containing context information are used to generate the output label.  The
paper concludes that the dynamic structure gives a higher recognition rate
both in comparison with the fixed context structure and with the
established k nearest neighbour technique.
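
To make the first (fixed-window) form concrete, a hypothetical Python
fragment follows; window width, frame dimension and label count are
assumptions.  The second, dynamic form would instead carry a state vector
forward from frame to frame, much as in the feedback sketch after the
Bourlard and Wellekens abstract earlier in this issue.

# Sketch of fixed-window training pairs: each pair is a window of speech
# frames and the phoneme label of the window's central frame.
import numpy as np

def windowed_pairs(frames, labels, half_width=3):
    """Build (window, central-frame label) pairs for a frame labeller."""
    X, y = [], []
    for t in range(half_width, len(frames) - half_width):
        X.append(frames[t - half_width:t + half_width + 1].ravel())
        y.append(labels[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(3)
frames = rng.normal(size=(100, 12))     # 100 frames of 12 coefficients
labels = rng.integers(0, 40, size=100)  # 40 phoneme classes (random here)
X, y = windowed_pairs(frames, labels)   # feed to any error propagation net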

============================================================================

------------------------------

Subject: tech report: Learning Algorithm for Fully Recurrent ANNs
From:    farrelly%ics@ucsd.edu (Kathy Farrelly)
Date:    Wed, 12 Oct 88 14:58:00 -0700 

If you'd like a copy of the following tech report, please write, call,
or send e-mail to:

Kathy Farrelly
Cognitive Science, C-015
University of California, San Diego
La Jolla, CA 92093-0115
(619) 534-6773
farrelly%ics@ucsd.edu


Report Info:

         A LEARNING ALGORITHM FOR CONTINUALLY RUNNING 
                FULLY RECURRENT NEURAL NETWORKS

          Ronald J. Williams, Northeastern University
       David Zipser, University of California, San Diego

The exact form of a  gradient-following  learning  algorithm  for
completely recurrent networks running in continually sampled time
is derived. Practical learning algorithms based  on  this  result
are shown to learn complex tasks requiring recurrent connections.
In the recurrent networks studied here, any unit can be connected
to  any  other,  and  any  unit can receive external input. These
networks run continually in the  sense  that  they  sample  their
inputs  on  every  update cycle, and any unit can have a training
target on any cycle. The storage required and computation time on
each  step  are independent of time and are completely determined
by the size of the network, so no prior knowledge of the temporal
structure of the task being learned is required. The algorithm is
nonlocal in the sense that each unit must have knowledge  of  the
complete  recurrent weight matrix and error vector. The algorithm
is computationally intensive in sequential computers, requiring a
storage  capacity  of  order the 3rd power of the number of units
and computation time on each cycle of order the 4th power of the
number of units.  The simulations include examples in which
networks are taught tasks not possible with tapped  delay  lines;
that  is,  tasks that require the preservation of state. The most
complex example of this kind is  learning  to  emulate  a  Turing
machine  that  does a parenthesis balancing problem. Examples are
also given of networks  that  do  feedforward  computations  with
unknown  delays,  requiring  them  to  organize  into the correct
number of layers. Finally, examples are given in  which  networks
are  trained  to  oscillate in various ways, including sinusoidal
oscillation.
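
For the curious, here is a compact Python sketch of the gradient-following
recursion the abstract describes (my own rendering, not the authors' code);
the network size, learning rate and delayed-copy task are illustrative.
The sensitivity array p holds order n^3 numbers and its update costs order
n^4 operations per cycle, matching the bounds quoted above.

# Real-time sketch: p[k, i, j] tracks d y_k / d W[i, j] as the net runs.
import numpy as np

rng = np.random.default_rng(4)
n_in, n = 1, 4                          # external inputs, fully recurrent units
W = rng.normal(0, 0.5, (n, n + n_in))   # every unit sees every unit and input
p = np.zeros((n, n, n + n_in))          # sensitivities, O(n^3) storage
y = np.zeros(n)
lr = 0.05

def step(x, target=None):
    """One continually-running update; unit 0 is trained toward target."""
    global y, p, W
    z = np.concatenate([y, x])
    y_new = np.tanh(W @ z)
    fp = 1 - y_new ** 2                               # tanh derivative
    p_new = np.einsum('kl,lij->kij', W[:, :n], p)     # recurrent part, O(n^4)
    p_new[np.arange(n), np.arange(n), :] += z         # delta_{ki} z_j term
    p_new *= fp[:, None, None]
    if target is not None:
        e = np.zeros(n)
        e[0] = target - y_new[0]                      # error on unit 0 only
        W += lr * np.einsum('k,kij->ij', e, p_new)    # gradient step
    y, p = y_new, p_new

xs = rng.choice([0.0, 1.0], size=200)
for t, x in enumerate(xs):                # task: reproduce input delayed by 2
    step(np.array([x]), target=xs[t - 2] if t >= 2 else None)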


------------------------------

Subject: Paper from nEuro'88 in Paris
From:    Orjan Ekeberg <mcvax!bion.kth.se!orjan@uunet.UU.NET>
Date:    Thu, 13 Oct 88 09:48:35 +0100 

The following paper, presented at the nEuro'88 conference in Paris,
has been sent for publication in the proceedings. Reprint requests
can be sent to orjan@bion.kth.se

===============

AUTOMATIC GENERATION OF INTERNAL REPRESENTATIONS IN A
PROBABILISTIC ARTIFICIAL NEURAL NETWORK

Orjan Ekeberg, Anders Lansner

Department of Numerical Analysis and Computing Science
The Royal Institute of Technology, S-100 44 Stockholm, Sweden

ABSTRACT

In a one-layer feedback perceptron-type network, the connections can be
viewed as coding the pairwise correlations between activity in the
corresponding units. This can then be used to make statistical inference
by means of a relaxation technique based on Bayesian inference.

When such a network fails, it might be because the regularities are not
visible as pairwise correlations. One cure would then be to use a different
internal coding where selected higher-order correlations are explicitly
represented. A method for generating this representation automatically is
presented, with special focus on the network's ability to generalize properly.
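
A loose Python sketch of the pairwise-correlation idea may be useful; the
log-odds weight form and the sigmoid relaxation below are my assumptions
in the Bayesian spirit of the abstract, not details taken from the paper.

# Connections code pairwise correlations estimated from binary patterns;
# a relaxation pass then fills in the unclamped units of a partial input.
import numpy as np

rng = np.random.default_rng(5)
data = rng.integers(0, 2, size=(500, 8)).astype(float)  # binary training set

p_i = data.mean(axis=0)                                 # unit marginals
p_ij = (data.T @ data) / len(data)                      # pairwise co-activation
W = np.log(np.clip(p_ij, 1e-3, None) /
           np.clip(np.outer(p_i, p_i), 1e-6, None))     # correlation weights
np.fill_diagonal(W, 0.0)                                # no self-connections
bias = np.log(np.clip(p_i / (1 - p_i), 1e-3, None))     # prior log odds

def relax(clamped, steps=20):
    """Settle unit activities given a partial pattern (NaN = unknown)."""
    a = np.where(np.isnan(clamped), 0.5, clamped)
    for _ in range(steps):
        s = 1 / (1 + np.exp(-(bias + W @ a)))
        a = np.where(np.isnan(clamped), s, clamped)     # keep known units fixed
    return a

pattern = np.array([1, 0, 1, np.nan, np.nan, 0, 1, np.nan])
completed = relax(pattern)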

------------------------------

Subject: Cary Kornfeld to speak on neural networks and bitmapped graphics
From:    pratt@zztop.rutgers.edu (Lorien Y. Pratt)
Organization: Rutgers Univ., New Brunswick, N.J.
Date:    13 Oct 88 19:22:14 +0000 


			   Fall, 1988  
	       Neural Networks Colloquium Series 
			   at Rutgers  

	       Bitmap Graphics and Neural Networks
	       -----------------------------------

			  Cary Kornfeld
		     AT&T Bell Laboratories

	       Room 705 Hill Center, Busch Campus
	       Monday October 31, 1988 at 11:00 AM        
	    NOTE DAY AND TIME ARE DIFFERENT FROM USUAL
	       Refreshments served before the talk


        From the perspective of system architecture and  hardware
design,  bitmap  graphics  and  neural  networks are surprisingly
alike.

        I will describe two key components of a graphics processor
designed and fabricated at Xerox PARC; this engine is based on Leo
Guibas's Bitmap Calculus.  While implementing that machine I got
interested in building tiny, experimental flat panel displays.  In the
second part of this talk, I will describe a few of the early prototypes
and (if facilities permit) will show a short video clip of their
operation.
        When I arrived at Bell Labs three years ago I began building
larger display panels using amorphous-silicon thin-film transistors on
glass substrates.  It was this display work that gave birth to the idea
of fabricating large neural networks using light-sensitive synaptic
elements.  In May of this year we demonstrated working prototypes of
these arrays in an experimental neuro-computer at the Atlanta COMDEX
show.

        This is one of the first neuro-computers built and is among the
largest.  Each of its 14,000 synapses is independently programmable over
a continuous range of connection strength that can theoretically span
more than five orders of magnitude (we've measured about three in our
first-generation arrays).  The computer has an animated, graphical user
interface that enables the operator to both monitor and control its
operation.  This machine is "programmed" to solve a pattern
reconstruction problem.  (Again, facilities permitting) I will show a
video tape of its operation and will demonstrate the user interface on a
color SUN 3.
--
-------------------------------------------------------------------
Lorien Y. Pratt                            Computer Science Department
pratt@paul.rutgers.edu                     Rutgers University
                                           Busch Campus
(201) 932-4634                             Piscataway, NJ  08854

------------------------------

Subject: Neural Network Symposium Announcement
From:    RCE1@APLVM.BITNET (RUSS EBERHART)
Organization: The Internet
Date:    15 Oct 88 17:10:39 +0000 


               ANNOUNCEMENT AND CALL FOR ABSTRACTS

    SYMPOSIUM ON THE BIOMEDICAL APPLICATIONS OF NEURAL NETWORKS
    ***********************************************************
                      Saturday, April 22, 1989
                         Parsons Auditorium
      The Johns Hopkins University Applied Physics Laboratory
                          Laurel, Maryland

The study and application of neural networks has increased significantly
in the past few years.  This applications-oriented symposium focuses on
the use of neural networks to solve biomedical tasks such as the
classification of biopotential signals.

Abstracts of not more than 300 words may be submitted prior to January
31, 1989.  Accepted abstracts will be allotted 20 minutes for oral
presentation.

Registration fee is $20.00 (U.S.); $10.00 for full-time students.
Registration fee includes lunch.  For more information and/or to
register, contact Russ Eberhart (RCE1 @ APLVM), JHU Applied Physics
Lab., Johns Hopkins Road, Laurel, MD 20707.

The Symposium is sponsored by the Baltimore Chapter of the IEEE Engineering
in Medicine and Biology Society.  Make check for registration fee payable
to "EMB Baltimore Chapter".

------------------------------

Subject: NIPS Student Travel Awards
From:    terry@cs.jhu.edu (Terry Sejnowski)
Date:    Tue, 18 Oct 88 18:00:21 -0400 

We have around 80 student applications for travel awards for
the NIPS meeting in November.  All students who are presenting
an oral paper or poster will receive $250-500, depending on
their expenses.  Other students who have applied will very
likely receive at least $250, but this depends on what
registration looks like at the end of the month.
Official letters will go out on November 1.

The deadline for 30-day supersaver fares is coming up soon.
There is a $200+ savings for staying over Saturday night,
so students who want to go to the workshop can actually
save travel money by doing so.

Terry

-----

------------------------------

End of Neuron Digest
*********************