[comp.ai.neural-nets] Neuron Digest V5 #25

neuron-request@HPLABS.HP.COM ("Neuron-Digest Moderator Peter Marvit") (06/02/89)

Neuron Digest	Friday,  2 Jun 1989
		Volume 5 : Issue 25

Today's Topics:
			     Pinker and Prince
       Bruce McNaughton on Neural Net for Spatial Rep. in Hippocampus
       TR - Choosing Computational Architectures for Text Processing
		      Call for paper -- hybrid systems
	  Workshop on Neural Representation of Visual Information
			   Tech reports available
		    conference announcement - EMCSR 1990
	       TD Model of Conditioning -- Paper Announcement
	   Updated program info for: NEURAL NETWORKS for DEFENSE
			Watrous to speak at GTE Labs


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: Pinker and Prince
From:    marchman@amos.ling.ucsd.edu (Virginia Marchman)
Date:    Wed, 26 Apr 89 05:11:31 +0000 



I heard that there was a conversation going on about the Pinker & Prince
article, and thought that I would pass along an abstract from a recent Tech
Report.  Requests for hard copy should be sent to yvonne@amos.ucsd.edu
(ask for TR #8902).

 -virginia marchman



	Pattern Association in a Back Propagation Network:
	   Implications for Child Language Acquisition

       Kim Plunkett                      Virginia Marchman
University of Aarhus, Denmark     University of California, San Diego


			Abstract

A 3-layer back propagation network is used to implement a pattern
association task which learns mappings that are analogous to the present and
past tense forms of English verbs, i.e., arbitrary, identity, vowel change,
and suffixation mappings.  The degree of correspondence between
connectionist models of tasks of this type (Rumelhart & McClelland, 1986;
1987) and children's acquisition of inflectional morphology has recently
been highlighted in discussions of the general applicability of PDP to the
study of human cognition and language (Pinker & Mehler, 1988).  In this
paper, we attempt to eliminate many of the shortcomings of the R&M work and
adopt an empirical, comparative approach to the analysis of learning (i.e.,
hit rate and error type) in these networks.  In all of our simulations, the
network is given a constant 'diet' of input stems -- that is,
discontinuities are not introduced into the learning set at any point.

Four sets of simulations are described in which input conditions (class size
and token frequency) and the presence/absence of phonological
subregularities are manipulated.  First, baseline simulations chart the
initial computational constraints of the system and reveal complex
"competition effects" when the four verb classes must be learned
simultaneously.  Next, we explore the nature of these competitions given
different type (class sizes) and token frequencies (# of repetitions).
Several hypotheses about the input to children are tested, drawing on
dictionary counts and production corpora.  Results suggest that relative class size
determines which "default" transformation is employed by the network, as
well as the frequency of overgeneralization errors (both "pure" and
"blended" overgeneralizations).  A third series of simulations manipulates
token frequency within a constant class size, searching for the set of token
frequencies which results in "adult-like competence" and "child-like" errors
across learning. A final series investigates the addition of phonological
sub-regularities into the identity and vowel change classes.  Phonological
cues are clearly exploited by the system, leading to overall improved
performance.  However, overgeneralizations, U-shaped learning and
competition effects continue to be observed in similar conditions.  These
models establish that input configuration plays a role in determining the
types of errors produced by the network, including the conditions under
which "rule-like" behavior and "U-shaped" development will and will not
emerge.

The results are discussed with reference to behavioral data on children's
acquisition of the past tense and the validity of drawing conclusions about
the acquisition of language from models of this sort.
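
For readers who want a concrete feel for the setup, here is a minimal
sketch of a pattern-association task of this general type: a 3-layer
network trained by back propagation to map "stem" vectors onto "past
tense" vectors for an identity class and a suffixation class.  All
encodings, sizes, and parameter values are illustrative assumptions; the
TR's actual simulations differ.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes only; not the encodings used in the TR.
    n_in, n_hid, n_out = 20, 30, 20
    W1 = rng.uniform(-0.5, 0.5, (n_hid, n_in))
    W2 = rng.uniform(-0.5, 0.5, (n_out, n_hid))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy verb classes: identity (past = stem) and suffixation
    # (past = stem plus a fixed suffix pattern).
    stems = rng.integers(0, 2, (100, n_in)).astype(float)
    suffix = np.zeros(n_out)
    suffix[-3:] = 1.0
    targets = stems.copy()
    targets[50:] = np.clip(stems[50:] + suffix, 0.0, 1.0)

    lr = 0.5
    for epoch in range(2000):      # constant 'diet': same stems throughout
        for x, t in zip(stems, targets):
            h = sigmoid(W1 @ x)
            y = sigmoid(W2 @ h)
            dy = (y - t) * y * (1 - y)        # output-layer delta
            dh = (W2.T @ dy) * h * (1 - h)    # hidden-layer delta
            W2 -= lr * np.outer(dy, h)
            W1 -= lr * np.outer(dh, x)

    # "Hit rate": fraction of items whose thresholded output matches target.
    h = sigmoid(stems @ W1.T)
    y = sigmoid(h @ W2.T)
    print((np.round(y) == targets).all(axis=1).mean())

Error types (e.g., overgeneralization of the suffix to identity items early
in training) can be charted by logging the same comparison per class inside
the training loop.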

------------------------------

Subject: Bruce McNaughton on Neural Net for Spatial Rep. in Hippocampus
From:    Mark Gluck <netlist@PSYCH.STANFORD.EDU>
Date:    Thu, 27 Apr 89 08:50:52 -0700 


[[ Editor's Note: Although this talk has passed, I continue the practice of
including talks so readers (from all over the world) know what's going on
and who is doing it.  -PM ]]

            Stanford University Interdisciplinary Colloquium Series:
                   Adaptive Networks and their Applications

                      May 2nd (Tuesday, 3:30pm):

                           Room 380-380C

******************************************************************************

   Hebb-Steinbuch-Marr Networks and the Role of Movement in Hippocampal
                   Representations of Spatial Relations

                         Bruce L. McNaughton

                         Dept. of Psychology
                        University of Colorado
                           Campus Box 345
                         Boulder, CO  80309

******************************************************************************
     
                               Abstract

  Over 15 years ago, Marr proposed models for associative learning and
pattern completion in specific brain regions. These models incorporated
Hebb's postulate, the "learning matrix" concept of Steinbuch, recurrent
excitation, and the assumptions that a few excitatory synapses are
disproportionately powerful, and that inhibitory synapses divide
postsynaptic excitation by the total input. These ideas provide a basis for
understanding much of the circuitry and physiology of the hippocampus, and
will be used to suggest how spatial relationships are coded there by forming
conditional associations between location and movement representations
originating in the inferotemporal and parietal cortical systems
respectively.
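
The flavor of the Hebb-Steinbuch-Marr scheme can be conveyed with a small
sketch: binary patterns stored by a clipped Hebbian rule and recalled by
judging each unit's excitation against the total input activity, which
yields pattern completion from partial cues.  This is an illustrative
reconstruction, not code from Marr or McNaughton; sizes and the threshold
are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k, n_pats = 200, 20, 15   # units, active units per pattern, patterns

    # Sparse binary patterns stored with a clipped Hebbian outer product.
    pats = np.zeros((n_pats, n), dtype=int)
    for p in pats:
        p[rng.choice(n, k, replace=False)] = 1
    W = np.zeros((n, n), dtype=int)
    for p in pats:
        W |= np.outer(p, p)          # Hebb's postulate, binary synapses
    np.fill_diagonal(W, 0)

    def recall(cue):
        # Divisive inhibition: excitation is evaluated relative to the
        # total number of active inputs, so only units consistent with
        # (nearly) all active inputs fire.
        return (W @ cue >= 0.9 * cue.sum()).astype(int)

    # Pattern completion: cue with half of a stored pattern's active units.
    cue = pats[0].copy()
    cue[np.flatnonzero(cue)[: k // 2]] = 0
    print(np.array_equal(recall(cue), pats[0]))   # usually True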

References:
-----------
McNaughton, B. L. & Morris, R. G. M. (1987). Hippocampal synaptic enhancement and
  information storage within a distributed memory system. Trends in Neurosciences,
  10: 408-415.

McNaughton, B. L. & Nadel, L. (in press, 1989). Hebb-Marr networks and
  the neurobiological representation of action in space. To appear in
  M. Gluck & D. Rumelhart (Eds.), Neuroscience and Connectionist
  Models. Hillsdale, NJ: Erlbaum.


Additional Information:
----------------------

Location: Room 380-380C, which can be reached through the lower level
 between the Psychology and Mathematical Sciences buildings. 
Level: Technically oriented for persons working in related areas.
Mailing lists: To be added to the network mailing list, netmail to
 netlist@psych.stanford.edu with "addme" as your subject header.
 For additional information, contact Mark Gluck (gluck@psych.stanford.edu).

------------------------------

Subject: TR - Choosing Computational Architectures for Text Processing
From:    Mike Oaksford <mike%epistemi.edinburgh.ac.uk@NSFnet-Relay.AC.UK>
Date:    Fri, 05 May 89 09:34:17 +0100 

Note that in this advertisement we have cleaned up our communicative act;
thanks for being so tolerant of our strange driving habits.

      Choosing Computational Architectures for Text Processing

                Keith Stenning and Mike Oaksford

                 Centre for Cognitive Science,
                   University of Edinburgh

                   Tech Report EUCCS/RP-28

   (A shorter version is to appear in "Connectionist Approaches to Language",
                Reilly, R., and Sharkey, N. (eds).)

In this paper we investigate various criteria which bear on the choice of
computational architectures for text processing.  The principal role of the
computational or cognitive architecture is to provide mechanisms for
inference.  In the study of text processing two forms of inference are
fundamental, (i) the implicit elaborative inferences required for
interpretation and (ii) explicit inferences which can be the subject of a
text.  We suggest that the decision of what architecture to employ in
accounting for these inferential modes cannot be made *a priori*.  We argue
that classical cognitive architectures based on logic and proof theory,
although eminently suited to (ii), fail to provide tractable theories of (i),
while more recent proposals like PDP (Rumelhart & McClelland, 1986) and
Classifier systems (Holland, Holyoak, Nisbett & Thagard, 1986), seem to
offer new insights into (i) while leaving (ii) untouched.  We examine the
computational issues involved in a review of recent candidate architectures
beginning at one extreme with PROLOG, going through ACT* and Classifier
systems, and ending with PDP.  We then examine the empirical work on verbal
reasoning tasks involving conditional and syllogistic reasoning, arguing
that the grounds upon which to choose between architectures are largely *a
posteriori* and empirical, and moreover that satisfactory explanations of
these data must invoke both (i) and (ii).  In the process we proffer novel
interpretations both of conditional reasoning experiments, as being largely
inductive (and hence of scant relevance to assessing our facility for
logical thought), and of Johnson-Laird's theory of syllogisms, as providing
a heuristic theorem prover along the lines of any other practical
*implementation* of logic.  We believe that this allows the explanatory
burden for much of these data to be correctly located at the
*implementational* rather than the cognitive level.


Requests for this Tech Report to:

                   Betty Hughs,
                   Technical Report Librarian,
                   Centre for Cognitive Science,
                   University of Edinburgh,
                   1 & 2, Buccleuch Place,
                   Edinburgh, EH8 9LW,
                   Scotland, UK.

  e-mail: betty%epistemi.ed.ac.uk@nsfnet-relay.ac.uk



------------------------------

Subject: Call for paper -- hybrid systems
From:    hendler@icsi.berkeley.edu (James Hendler)
Organization: International Computer Science Institute
Date:    Tue, 09 May 89 20:34:48 +0000 


			CALL FOR PAPERS

		      CONNECTION SCIENCE
	    (Journal of Neural Computing, Artificial 
	      Intelligence and Cognitive Research)

		        Special Issue -- 
	   HYBRID SYMBOLIC/CONNECTIONIST SYSTEMS


Connectionism has recently seen a major resurgence of interest among both
artificial intelligence and cognitive science researchers.  The spectrum of
connectionist approaches is quite large, ranging from structured models, in
which individual network units carry meaning, through distributed models of
weighted networks with learning algorithms.  Very encouraging results,
particularly in ``low-level'' perceptual and signal processing tasks, are
being reported across the entire spectrum of these models.  Unfortunately,
connectionist systems have had more limited success in those ``higher
cognitive'' areas where symbolic models have traditionally shown promise:
expert reasoning, planning, and natural language processing.

While it may not be inherently impossible for purely connectionist
approaches to handle complex reasoning tasks someday, it will require
significant breakthroughs for this to happen.  Similarly, getting purely
symbolic systems to handle the types of perceptual reasoning that
connectionist networks perform well would require major advances in AI.  One
approach to the integration of connectionist and symbolic techniques is the
development of hybrid reasoning systems in which differing components can
communicate in the solving of problems.

This special issue of the journal Connection Science will focus on the state
of the art in the development of such hybrid reasoners.  Papers are
solicited which focus on:

	Current artificial intelligence systems which use
	connectionist components in the reasoning tasks they 
	perform.

	Theoretical or experimental results showing how symbolic
	computations can be implemented in, or augmented by,
	connectionist components.

	Cognitive studies which discuss the relationship between
	functional models of higher level cognition and the ``lower
	level'' implementations in the brain.

The special issue will give particular consideration to papers sharing the
primary emphases of the Connection Science journal, which include:

	1) Replicability of Results: results of simulation models
	should be reported in such a way that they are repeatable by
	any competent scientist in another laboratory.
	The journal will be sympathetic to the problems that 
	replicability poses for large complex artificial intelligence 
	programs.

	2) Interdisciplinary research: the journal is by nature
	multidisciplinary and will accept articles from a variety of
	disciplines such as psychology, cognitive science, computer
	science, language and linguistics, artificial intelligence,
	biology, neuroscience, physics, engineering and philosophy.
	It will particularly welcome papers which deal with issues
	from two or more subject areas (e.g. vision and language).

Papers submitted to the special issue will also be considered for
publication in later editions of the journal. All papers will be refereed.
The special issue is expected to appear as Volume 2(1), March 1990.

DEADLINES:
	Submission of papers	June 15, 1989
	Reviews/decisions	September 30, 1989
	Final rewrites due	December 15, 1989.

Authors should send four copies of the article to:
	Prof. James A. Hendler
	Associate Editor, Connection Science 
	Dept. of Computer Science
	University of Maryland
	College Park, MD 20742
	USA

Those interested in submitting articles are welcome to contact the editor
via e-mail (hendler@brillig.umd.edu - US Arpa or CSnet) or in writing at the
above address.

------------------------------

Subject: Workshop on Neural Representation of Visual Information
From:    rapaport@CS.BUFFALO.EDU (William J. Rapaport)
Organization: The Internet
Date:    Tue, 16 May 89 15:25:36 +0000 


                STATE UNIVERSITY OF NEW YORK AT BUFFALO

                            UB VISION GROUP
                                  and
   GRADUATE RESEARCH INITIATIVE IN COGNITIVE AND LINGUISTIC SCIENCES

                    invite you to attend a workshop:

              NEURAL REPRESENTATION OF VISUAL INFORMATION

                        June 9, 8:30 am to 10 pm
                        June 10, 8:30 am to 4 pm

              Lipschitz Room, CFS 126, Main Street Campus

Speakers:

    Dana Ballard, Computer Science, Rochester
    Robert Boynton, Psychology, UC San Diego
    Ennio Mingolla, Center for Adaptive Systems, Boston U.
    Ken Naka, National Inst. for Basic Biology, Japan, and NYU
    Hiroko Sakai, National Inst. for Basic Biology, Japan, and NYU
    Members of the UB Vision Group

If you are interested in attending, send your name and address with a check
for $40 to cover the cost of the five meals to:

    Dr. Deborah Walters
    Department of Computer Science
    SUNY Buffalo
    Buffalo, NY 14260

Graduate students may apply for a waiver of the meal fee.

For further information, contact Dr.  Walters, 636-3187, email:
walters@cs.buffalo.edu or walters@sunybcs.bitnet.

------------------------------

Subject: Tech reports available
From:    GINDI%GINDI@Venus.YCC.Yale.Edu
Date:    Fri, 19 May 89 09:54:00 -0400 


The following two tech reports are now available. Please send requests to
GINDI@VENUS.YCC.YALE.EDU or by physical mail to:

		Gene Gindi
		Yale University
		Department of Electrical Engineering
		P.O. Box 2157, Yale Station
		New Haven, CT 06520

______________________________________________________________________

		Yale University, Dept. Electrical Engineering
		Center for Systems Science
		TR- 8903


	Neural Networks for Object Recognition within Compositional 
		Hierarchies: Initial Experiments


        Joachim Utans, Gene Gindi *
	Dept. Electrical Engineering
	Yale University
	P.O. Box 2157, Yale Station
	New Haven CT 06520
       *(to whom correspondence should be addressed)	

        Eric Mjolsness, P. Anandan
	Dept. Computer Science
	Yale University
	New Haven CT 06520

			Abstract

We describe experiments with TLville, a neural network for object
recognition.  The task is to recognize, in a translation-invariant manner,
simple stick figures. We formulate the recognition task as the problem of
matching a graph of model nodes to a graph of data nodes. Model nodes are
simply user-specified labels for objects such as "vertical stick" or
"t-junction"; data nodes are parameter vectors, such as (x,y,theta), of
entities in the data. We use an optimization approach where an appropriate
objective function specifies both the graph-matching problem and an analog
neural net to carry out the optimization.  Since the graph structure of the
data is not known a priori, it must be computed dynamically as part of the
optimization. The match metrics are model-specific and are invoked
selectively, as part of the optimization, as various candidate matches of
model-to-data occur. The network supports notions of abstraction in that the
model nodes express compositional hierarchies involving object-part
relationships. Also, a data node matched to a whole object contains a
dynamically computed parameter vector which is an abstraction summarizing
the parameters of data nodes matched to the constituent parts of the whole.
Terms in the match metric specify the desired abstraction. In addition, a
solution to the problem of computing a transformation from retinal to
object-centered coordinates to support recognition is offered by this kind
of network; the transformation is contained as part of the objective
function in the form of the match metric. In experiments, the network
usually succeeds in recognizing single or multiple instances of a single
composite model amid instances of non-models, but it gets trapped in
unfavorable local minima of the 5th-order objective when multiple composite
objects are encoded in the database.
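
The general optimization strategy can be illustrated apart from the
specifics of TLville.  Below is a minimal sketch, with an invented
compatibility matrix standing in for the model-specific match metrics:
match "neurons" M[a, i] hypothesize assignments of model node a to data
node i, an objective rewards compatible matches while penalizing
violations of one-to-one matching, and squashed gradient descent plays the
role of the analog network dynamics.  The TR's actual objective is
5th-order and includes dynamically computed parameter vectors; none of
that is reproduced here.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical compatibility of data node i with model node a; in the
    # TR this comes from model-specific match metrics over (x, y, theta).
    n_model, n_data = 4, 4
    C = rng.uniform(0.0, 1.0, (n_model, n_data))
    A = 2.0   # penalty strength for the matching constraints

    # Objective:
    #   E(M) = -sum(C * M) + A*sum((row sums - 1)^2) + A*sum((col sums - 1)^2)
    u = np.zeros((n_model, n_data))   # net inputs of the match neurons
    for _ in range(1000):
        M = 1.0 / (1.0 + np.exp(-u))              # analog match values in (0, 1)
        dE_dM = (-C
                 + 2 * A * (M.sum(1, keepdims=True) - 1)
                 + 2 * A * (M.sum(0, keepdims=True) - 1))
        u -= 0.1 * dE_dM * M * (1 - M)            # descend E through the squashing
    print(np.round(1.0 / (1.0 + np.exp(-u)), 2))  # approximately a permutation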


______________________________________________________________________

		Yale University, Dept. Electrical Engineering
		Center for Systems Science
		TR- 8908

         Stickville: A Neural Net for Object Recognition via                   
                       Graph Matching 


        Grant Shumaker 
	School of Medicine, Yale University, New Haven, CT 06510

        Gene Gindi
	Department of Electrical Engineering, Yale University 
	P.O. Box 2157, Yale Station, New Haven, CT 06520
        (to whom correspondence should be addressed)

        Eric Mjolsness, P. Anandan
	Department of Computer Science, Yale University, New Haven, CT 06510

			Abstract

An objective function for model-based object recognition is formulated and
used to specify a neural network whose dynamics carry out the optimization,
and hence the recognition task.  Models are specified as graphs that capture
structural properties of shapes to be recognized.  In addition,
compositional (INA) and specialization (ISA) hierarchies are imposed on the
models as an aid to indexing and are represented in the objective function
as sparse matrices. Data are also represented as a graph.  The optimization
is a graph-matching procedure whose dynamical variables are ``neurons''
hypothesizing matches between data and model nodes.  The dynamics are
specified as a third-order Hopfield-style network augmented by hard
constraints implemented by ``Lagrange multiplier'' neurons.  Experimental
results are shown for recognition in Stickville, a domain of 2-D stick
figures.  For small databases, the network successfully recognizes both an
object and its specialization.
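
The "Lagrange multiplier" neurons deserve a note: the device is to add, for
each hard constraint g(x) = 0, a neuron whose activity lambda multiplies g
in the objective, and to let lambda perform gradient *ascent* while the
ordinary variables perform descent.  A toy version follows (not the
Stickville network itself, whose match dynamics are third-order; the
function and constraint here are invented for illustration):

    import numpy as np

    # Minimize f(x) = x0^2 + x1^2 subject to the hard constraint
    # g(x) = x0 + x1 - 1 = 0, via the Lagrangian L = f + lambda * g.
    x = np.array([0.0, 0.0])
    lam = 0.0
    lr = 0.05
    for _ in range(2000):
        g = x[0] + x[1] - 1.0
        x -= lr * (2 * x + lam)   # descend L in x
        lam += lr * g             # ascend L in lambda: enforces g = 0
    print(x, lam)                 # -> approx [0.5, 0.5], the constrained optimum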


------------------------------

Subject: conference announcement - EMCSR 1990
From:    mcvax!ai-vie!georg@uunet.UU.NET (Georg Dorffner)
Date:    Fri, 19 May 89 14:35:03 -0100 





                        Announcement and Call for Papers


                                    EMCSR 90
                             TENTH EUROPEAN MEETING
                                       ON
                        CYBERNETICS AND SYSTEMS RESEARCH

                               April 17-20, 1990
                         University of Vienna, Austria


                                   Session M:
                        Parallel Distributed Processing
                               in Man and Machine

                                    Chairs:
                 D.Touretzky (Carnegie Mellon, Pittsburgh, PA)
                          G.Dorffner (Vienna, Austria)



        Other Sessions at the meeting will be: 

        A: General Systems Methodology
        B: Fuzzy Sets, Approximate Reasoning and Knowledge-based Systems
        C: Designing and Systems
        D: Humanity, Architecture and Conceptualization
        E: Cybernetics in Biology and Medicine
        F: Cybernetics in Socio-Economic Systems
        G: Workshop: Managing Change: Institutional  Transition  in  the
           Private and Public Sector
        H: Innovation Systems in Management and Public Policy
        I: Systems Engineering and  Artificial  Intelligence  for  Peace
           Research
        J: Communication and Computers
        K: Software Development for Systems Theory
        L: Artificial Intelligence
        N: Impacts of Artificial Intelligence


        The  conference  is  organized  by  the  Austrian  Society   for
        Cybernetic Studies (chair: Robert Trappl).


        SUBMISSION OF PAPERS:

        For symposium  M,  all  contributions  in  the  fields  of  PDP,
        connectionism, and neural networks are welcome.

        Acceptance of contributions will be determined on the basis of
        Draft Final Papers. These papers must not exceed 7 single-spaced
        A4 pages (maximum 50 lines; final size will be 8.5 x 6 inches)
        and must be in English. They must contain the final text to be
        submitted; however, graphs and pictures need not be of
        reproducible quality.

        The Draft Final Paper must carry the title,  author(s)  name(s),
        and  affiliation  in this order. Please specify the symposium in
        which you would like to present the paper (one  of  the  letters
        above).  Each scientist shall submit only 1 paper.

        Please send  t h r e e  copies of the Draft Final Paper to:

                       EMCSR 90 - Conference Secretariat

                    Austrian Society for Cybernetic Studies
                                Schottengasse 3
                             A-1010 Vienna, Austria

        Deadline for submission: Oct 15, 1989

        Authors will be notified about acceptance no later than Nov  20,
        1989.  They will then be provided with the detailed instructions
        for the preparation of the Final Paper.

        Proceedings containing all accepted papers will be printed.


        For further information write to the above address, call +43 222
        535 32 810, or send email to: sec@ai-vie.uucp

        Questions   concerning   symposium   M   (Parallel   Distributed
        Processing)  can  be directed to Georg Dorffner (same address as
        secretariat), email: georg@ai-vie.uucp  


------------------------------

Subject: TD Model of Conditioning -- Paper Announcement
From:    Rich Sutton <rich@gte.com>
Date:    Fri, 19 May 89 15:01:15 -0400 


Andy Barto and I have just completed a major new paper relating
temporal-difference learning, as used, for example, in our pole-balancing
learning controller, to classical conditioning in animals.  The paper will
appear in the forthcoming book ``Learning and Computational Neuroscience,''
edited by J.W. Moore and M. Gabriel, MIT Press.  A preprint can be obtained
by emailing to rich%gte.com@relay.cs.net with your physical-mail address.
The paper has no abstract, but begins as follows:


	   TIME-DERIVATIVE MODELS OF PAVLOVIAN REINFORCEMENT

			   Richard S. Sutton
		     GTE Laboratories Incorporated

			    Andrew G. Barto
		      University of Massachusetts

This chapter presents a model of classical conditioning called the
temporal-difference (TD) model.  The TD model was originally developed as a
neuron-like unit for use in adaptive networks (Sutton & Barto, 1987; Sutton,
1984; Barto, Sutton & Anderson, 1983).  In this paper, however, we analyze
it from the point of view of animal learning theory.  Our intended audience
is both animal learning researchers interested in computational theories of
behavior and machine learning researchers interested in how their learning
algorithms relate to, and may be constrained by, animal learning studies.

We focus on what we see as the primary theoretical contribution to animal
learning theory of the TD and related models: the hypothesis that
reinforcement in classical conditioning is the time derivative of a
composite association combining innate (US) and acquired (CS) associations.
We call models based on some variant of this hypothesis ``time-derivative
models'', examples of which are the models by Klopf (1988), Sutton & Barto
(1981a), Moore et al. (1986), Hawkins & Kandel (1984), Gelperin, Hopfield &
Tank (1985), Tesauro (1987), and Kosko (1986); we examine several of these
models in relation to the TD model.  We also briefly explore relationships
with animal learning theories of reinforcement, including Mowrer's
drive-induction theory (Mowrer, 1960) and the Rescorla-Wagner model
(Rescorla & Wagner, 1972).

In this paper, we systematically analyze the inter-stimulus interval (ISI)
dependency of time-derivative models, using realistic stimulus durations and
both forward and backward CS--US intervals.  The models' behaviors are
compared with the empirical data for rabbit eyeblink (nictitating membrane)
conditioning.  We find that our earlier time-derivative model (Sutton &
Barto, 1981a) has significant problems reproducing features of these data,
and we briefly explore partial solutions in subsequent time-derivative
models proposed by Moore et al.  (1986), Klopf (1988), and Gelperin et al.
(1985).

The TD model was designed to eliminate these problems by relying on a
slightly more complex time-derivative theory of reinforcement.  In this
paper, we motivate and explain this theory from the point of view of animal
learning theory, and show that the TD model solves the ISI problems and
other problems with simpler time-derivative models.  Finally, we demonstrate
the TD model's behavior in a range of conditioning paradigms including
conditioned inhibition, primacy effects (Egger & Miller, 1962), facilitation
of remote associations, and second-order conditioning.
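
The core TD reinforcement computation is compact enough to sketch.  In the
fragment below, written loosely after the TD model (the exact response and
trace functions in the paper may differ, and all parameter values are
illustrative), reinforcement is lambda(t) + gamma*Y(t) - Y(t-1) and weight
changes are gated by an eligibility trace of the CS:

    import numpy as np

    # One CS-US forward pairing per trial; the CS acquires positive strength.
    T, n_trials = 20, 200
    gamma, alpha, trace_decay = 0.95, 0.1, 0.8   # illustrative parameters
    cs = np.zeros(T); cs[5:10] = 1.0    # CS present on steps 5-9
    us = np.zeros(T); us[9] = 1.0       # US (lambda) arrives at step 9

    w = 0.0                             # associative strength of the CS
    for _ in range(n_trials):
        xbar, y_prev = 0.0, 0.0         # eligibility trace, previous prediction
        for t in range(T):
            y = max(0.0, w * cs[t])                 # prediction Y(t), rectified
            delta = us[t] + gamma * y - y_prev      # TD reinforcement
            w += alpha * delta * xbar               # trace-gated weight change
            xbar = trace_decay * xbar + (1 - trace_decay) * cs[t]
            y_prev = y
    print(w)    # positive: conditioning has occurred

Varying the CS offset and US onset (the ISI) and re-running is the kind of
manipulation the paper analyzes systematically.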



------------------------------

Subject: Updated program info for: NEURAL NETWORKS for DEFENSE
From:    marvit@hplabs.hp.com
Date:    Wed, 24 May 89 08:51:26 -0700 

[[ Editor's Note: Again, the originator's asked me to forward this message.
Note Citizenship requirements. Please contact the folks listed at the end
for further info.  -PM ]]


UPDATED PROGRAM INFORMATION FOR:

                       ---------------------------
                       NEURAL NETWORKS for DEFENSE
                          A One-day Conference:
                       ---------------------------

              Saturday, June 17, 1989 (the day before IJCNN)
                              Washington, DC

          Conference Chair: Prof. Bernard Widrow (Stanford Univ.)
          -------------------------------------------------------

          A one-day conference on defense needs, applications, and
     opportunities for computing with neural networks, featuring
     key representatives from government and industry. It will
     take place in Washington, DC, right before the IEEE and INNS's
     International Joint Conference on Neural Networks (IJCNN).


INDUSTRY SESSION:

          The industry session will feature presentations of the
     current status of defense-oriented research, development, and
     applications of neural network technology from industry leaders.
     They will discuss their current, past, and future involvement in
     neural networks and defense technology, as well as the kinds of
     cooperative ventures in which they might be interested.

* Patrick Castelaz. HUGHES AEROSPACE, Command and Control:
     "Signal Processing Applications of Neural Networks"

* Richard Elsley. ROCKWELL INTERNATIONAL, Knowledge Systems Division:
     "Neural Network Research at Rockwell"

* Lawrence Seidman. FORD AEROSPACE, Advanced Technology:
     "Overview of Neural Network Applications at Ford Aerospace"

* James Anderson. BROWN UNIV., Dept. of Cognitive & Linguistic Sciences:
     "Adventures with TEXAS INSTRUMENTS and the U.S. Air Force in
      Radar Transmitter Categorization and Identification"

* Robert Dawes. MARTINGALE RESEARCH:
     "Adaptive Bayesian Estimation and Control with Neural Networks"

* Harold Stoll. NORTHROP, Integrated Optics Laboratory:
     "Optical Neural Networks for Automatic Target Recognition"

* Michael Buffa. NESTOR CORP.:
     "Military Target Recognition in Sonar, Radar, and Image-Based
      Systems with Actual Results"

* Fred Weingard. BOOZ, ALLEN, & HAMILTON, Neural Network Applications Group  
     "The Adaptive Network Sensor Procesor Program at Wright-Patterson
      Air Force Base"

* Patrick Simpson. GENERAL DYNAMICS:
     "Defense Applications of Neural Networks"

* John Dishon. SAIC, Emerging Technologies Division
     "Non-Linear Adaptive Control Systems in Neural Networks"

* Monndy Eshera. MARTIN MARIETTA: 
     "Systolic Array Neurocomputers: Synaptic Level Parallelism"

* Robert Hecht-Nielsen. HNC:
     "Near Term Defense Payoffs with Neurocomputing Technology"

* John Leonard. HUGHES AEROSPACE, Electro-Optical & Data Systems Group:
     "Neural Networks for Tactical Awareness & Target Recognition"

* Robert Willstadter. BOEING, Computer Services - AI Systems:
     "Defense & Aerospace Applications of Neural Networks at Boeing"

DEFENSE DEPARTMENT SESSION:

          The defense-department session will include program managers 
     from Department of Defense (DoD) agencies funding and conducting
     Neural Network research and development:

* Thomas McKenna. Scientific Officer, Cognitive & Neural Sciences:
      OFFICE OF NAVAL RESEARCH (ONR)

* David Andes, Chief of AI & Neural Network Research Programs:
      NAVAL WEAPONS CENTER, CHINA LAKE: 

* Edward Gliatti, Chief of Information Processing Technology Branch 
      WRIGHT-PATTERSON AIR FORCE BASE:

* Major Robert L. Russel, Jr. Assistant Division Chief for Image Systems
      ROME AIR DEVELOPMENT CENTER:

            ...plus others to be announced later.

KEYNOTE ADDRESS:

        The meeting chairman and keynote speaker at the
     conference is Professor Bernard Widrow, who directed the
     recent DARPA study evaluating the military and commercial
     potential of neural networks. He is a professor of EE at
     Stanford University, the current president of the INNS,
     co-inventor of the LMS algorithm (Widrow & Hoff, 1960), and 
     the president of Memistor Corp, the oldest neural network
     applications and development company, which Prof. Widrow
     founded in 1962.
     
WHO SHOULD COME:

     High-technology research & development personnel, research
     directors, and management from defense-oriented R&D companies
     and divisions.

     Defense Department personnel involved with, or directing,
     Neural Network research in areas of possible Neural Network
     applications, including: Automatic Target Recognition;
     Speech, Sonar, & Radar Classification; and Real-time Sensorimotor
     Control for Autonomous Robotic Applications.

     Neural Network researchers who wish to be aware of current and
     future defense needs and opportunities for technology
     transfer to applications.
                      *          *         *

        Program Committee: Mark Gluck (Stanford) & Ed Rosenfeld 
  Note: Attendance at "N. N. for Defense" is limited to U.S. Citizens Only

 ----------------------------------------------------------------------------
  For registration and information, call Anastasia Mills at (415) 995-2471 
        or FAX: (415) 543-0256, or write to: Neural Network Seminars,
        Miller-Freeman, 500 Howard St., San Fran., CA 94105
 ----------------------------------------------------------------------------


------------------------------

Subject: Watrous to speak at GTE Labs
From:    rich@GTE.COM (Rich Sutton)
Organization: GTE Laboratories, Waltham, MA
Date:    Thu, 25 May 89 14:57:28 +0000 


			 Seminar Announcement

	  PHONEME DISCRIMINATION USING CONNECTIONIST NETWORKS

			       R. Watrous

	      Dept. of Computer Science, Univ. of Toronto
	      Siemens Research and Technology Laboratories

The application of connectionist networks to speech recognition is assessed
using a set of representative phonetic discrimination problems chosen with
respect to a theory of phonetics. A connectionist network model called the
Temporal Flow Model is defined which represents temporal relationships using
delay links and permits general patterns of connectivity. It is argued that
the model has properties appropriate for time varying signals such as
speech.  Networks are trained using gradient descent methods of iterative
nonlinear optimization to reduce the mean squared error between the actual
and the desired response of the output units.

Separate network solutions are demonstrated for all eight phonetic
discrimination problems for one male speaker. The network solutions are
analyzed carefully and are shown in every case to make use of known acoustic
phonetic cues. The network solutions vary in the degree to which they make
use of context dependent cues to achieve phoneme recognition. The network
solutions were tested on data not used for training and achieved an average
accuracy of 99.5%. It is concluded that acoustic phonetic speech recognition
can be accomplished using connectionist networks.
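
As a rough illustration of the delay-link idea (an assumed toy, not
Watrous's Temporal Flow Model, which has recurrent links and multiple
layers), a single unit can read the signal through taps at several delays
and be trained by gradient descent on the squared error to separate two
synthetic signal classes:

    import numpy as np

    rng = np.random.default_rng(3)
    T, D = 40, 5                     # signal length, number of delay taps

    def make_signal(cls):
        # Two toy 'phoneme' classes distinguished by frequency.
        freq = 0.2 if cls == 0 else 0.5
        return np.sin(freq * np.arange(T)) + 0.1 * rng.standard_normal(T)

    w = rng.uniform(-0.1, 0.1, D)    # one weight per delay link
    b, lr = 0.0, 0.05
    for _ in range(2000):
        cls = int(rng.integers(0, 2))
        s, target = make_signal(cls), float(cls)
        for t in range(D, T):
            x = s[t - D:t]                            # delayed inputs
            y = 1.0 / (1.0 + np.exp(-(w @ x + b)))    # unit output at time t
            grad = (y - target) * y * (1 - y)         # d(sq. error)/d(net)
            w -= lr * grad * x
            b -= lr * grad

    # The unit's mean output over an utterance should differ by class.
    for cls in (0, 1):
        s = make_signal(cls)
        outs = [1.0 / (1.0 + np.exp(-(w @ s[t - D:t] + b)))
                for t in range(D, T)]
        print(cls, round(float(np.mean(outs)), 2))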

----------------------------------------------------------------------------
The talk will be at 11am on May 31 in the GTE Labs Auditorium.  For further
information contact Rich Sutton (Rich%gte.com@relay.cs.net or 617-466-4133).
Non-GTE people should arrive early to be escorted to the auditorium.

------------------------------

End of Neurons Digest
*********************