[comp.simulation] SIMULATION DIGEST V13 N1

simulation@uflorida.cis.ufl.edu (Moderator: Paul Fishwick) (12/19/89)

Volume: 13, Issue: 1, Tue Dec 19 09:22:12 EST 1989

+----------------+
| TODAY'S TOPICS |
+----------------+

(1) Yet Another GVT Algorithm
(2) RE: Overview of DMOD
(3) Discrete Event Simulation
(4) Wang Institute Conference

* Moderator: Paul Fishwick, Univ. of Florida
* Send topical mail to: simulation@bikini.cis.ufl.edu OR
  post to comp.simulation via USENET
* Archives available via FTP to bikini.cis.ufl.edu, login as
  'ftp', use your last name as the password, change
  directory to pub/simdigest.
* Simulation Tools available by doing above and changing the
  directory to pub/simdigest/tools.



-----------------------------------------------------------------------------

Via:        UK.AC.EX.CS; 14 DEC 89  0:34:35 GMT
From: Ming Chiang Xu <mxu%CS.EXETER.AC.UK@nervm.nerdc.ufl.edu>
Date:       Thu, 14 Dec 89 00:36:24 GMT
To: simulation@BIKINI.CIS.UFL.EDU, mxu@CS.EXETER.AC.UK
Subject:    Yet Another GVT Algorithm


Usually, GVT (Global Virtual Time) can be computed 
by determining the minimum of:

(a) all local clocks
(b) the send times of all messages "in transit"
(c) the send times of all messages in input queues which have not yet
    been processed.
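
To make that concrete, here is a minimal Python sketch (mine, purely
illustrative, not taken from any particular simulator; the argument
names are made up) of that minimum:

    def estimate_gvt(local_clocks, transit_send_times, unprocessed_send_times):
        # Conservative GVT estimate: the minimum over (a) local clocks,
        # (b) send times of messages in transit, and (c) send times of
        # messages not yet processed in input queues.
        candidates = (list(local_clocks)
                      + list(transit_send_times)
                      + list(unprocessed_send_times))
        return min(candidates) if candidates else float("inf")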

It always horrifies me to think that I have to go through all this
hassle only to calculate an "imprecise" GVT. Therefore, for ages, I
have had a negative attitude towards implementing a "decent" GVT
finding algorithm. Instead, I only looked at all the local clocks
and circulated a token carrying the minimum local clock value several
times before discarding the old "dead" states.

Sometimes it worries me, as I told Jason Lin, that this approach is
not robust. I can be in real trouble if I don't get it right.
Incidentally, I found that the Dijkstra termination detection
algorithm can be applied to finding a GVT, or rather, to bidding a GVT.

First of all, circulate a token to obtain the minimum local clock
value. This value may not be the GVT right now, but it may become so
later on. We therefore apply the Dijkstra algorithm to detect when it
is so.

The following discussion assumes that processes are organised as a
ring (or, equivalently, that there is a Hamiltonian cycle in the
process connection graph). One process is the token master process and
the rest are token slave processes. The master and slaves circulate a
token continuously. The token contains two fields: colour and time.
The time represents a proposed value for the GVT. The proposed GVT can
be the minimum clock value (or that value plus a predefined positive
offset).

The first round of the circulation only notifies the processes of the
proposed GVT, and the token is only passed on when the process
currently holding it has processed all the messages with send times
before the proposed GVT. A process is coloured white after it passes
the token to its successor.

A white process becomes black when it receives a message with a send
time before the proposed GVT. A black process passes a black token to
its successor, whereas a white process does not change the colour of
the token. This is where the Dijkstra algorithm comes in. The idea is
to circulate a token which is initially white; at the end of the
second circuit, if the colour of the token is still white, then the
messages before the proposed GVT can be discarded. Otherwise, start
another circulation, and repeat until the colour of the token at the
end of a circuit is white.
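
To make the token handling concrete, here is a rough Python sketch of
what each process does (my own reading of the scheme above, not
production code; the class and method names are mine, and message
processing itself is elided):

    WHITE, BLACK = "white", "black"

    class Process:
        def __init__(self):
            self.colour = WHITE
            self.input_queue = []            # (send_time, payload) pairs

        def receive_message(self, send_time, payload, proposed_gvt):
            self.input_queue.append((send_time, payload))
            # A message with a send time before the proposed GVT
            # invalidates the current round: the process turns black.
            if proposed_gvt is not None and send_time < proposed_gvt:
                self.colour = BLACK

        def receive_token(self, token_colour, proposed_gvt):
            # Precondition (not shown): all queued messages with send
            # times before proposed_gvt have already been processed.
            out_colour = BLACK if self.colour == BLACK else token_colour
            self.colour = WHITE              # white after passing the token on
            return out_colour                # token colour sent to the successor

    # The master starts each circuit with a white token and keeps
    # circulating until a circuit ends with the token still white; only
    # then are states older than the proposed GVT discarded.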

In the context of a general process communication structure, another
version of the Dijkstra algorithm is also available.

The algorithm does not generate many messages, and in fact it is not
computationally intensive. Moreover, it is flexible. The proposed GVT
can also be elected, if you like, by the processes whose message
queues exceed a threshold value. This means that states may not be
discarded at regular intervals. As mentioned earlier, I do not know
enough about other GVT finding algorithms to claim more credit for
mine. Therefore...

If anyone has any information or experience with calculating the GVT
and is willing to let me know, please contact me at the following
address:




Ming Q. Xu

Dept. of Computer Science
University of Exeter
Prince of Wales Road
Exeter EX4 4PT

England. U.K.

Email: mxu@uk.ac.ex.cs

------------------------------

Posted-Date: Thu, 14 Dec 89 10:21:58 CST
Date: Thu, 14 Dec 89 10:21:58 CST
From: steve@titan.tsd.arlut.utexas.edu (Steve Glicker)
To: simulation@bikini.cis.ufl.edu
Subject: Re:  Overview of DMOD

This is a response to the "Overview of DMOD" article submitted by
Sanjai Narain.  Sorry about the slow response time -- I have been busy
with a simulator.

I would like to get clarification on some terminology and several
points in the article:

As I understand it, events are treated as data, and rules are used to
represent causality relations (derive state information and schedule
events, as well as present state and event information to the user).
I assume that instances of simulated entities (e.g., machines and
materials) are represented as data which contain no state information.
All historical information is saved (all *events*, canceled or not,
and simulated entities), and simulation is driven by events and
possibly limited by some time constraint.  Please correct me if I am
wrong.

> DMOD is a first step in our attempts to achieve our goals.  It is based
> upon the following fundamental assumption:
>
>        If all event occurrences in a system till time T are known, the
>        state of the system can be computed at any point of time till T.
>
> Thus, simulation is regarded as computation of event occurrences. ...

Viewing DES as the computation of events, IMHO, is accepted and
requires no assumption.

I do think that there is a problem with the way this assumption is
stated.  Simulation models that have state usually require an initial
state before simulation can proceed.  Some default initial state
must be assumed (consider a simulation model of a closed system).

I think of an event occurrence as something that takes place at a
point in time (or a simulated something that takes place at a point in
simulation time).  I don't think of a canceled event (a descheduled
event) as one that occurred.  I get the impression the term "event
occurrence" is used to indicate that an event record (in terms of an
event record for a next-event or time queue) is created.

With clarification and/or confirmation on these items I should be able
to contribute some more comments.

Steve Glicker
(steve@titan.tsd.arlut.utexas.edu)


------------------------------

To: simulation-maillist@ufl.edu
Cc: narain%pluto@rand.org
Date: Mon, 18 Dec 89 13:21:27 PST
From: narain%pluto@rand.org


The event scheduling view of the discrete-event technique is generally
intended to simulate discrete systems, i.e. those whose state changes only
at discrete instants of time.  When continuous parameters are involved,
e.g. position, temperature, voltage, pressure, the technique is suitably
extended.  However, sometimes the extension is not obvious for situations
such as the one defined by the following rules:

        If a chip is put into the furnace at time T then it is cooked at
        time T+300, provided the temperature of the furnace never drops
        below 700 at any point of time between T and T+300.

	Temperature of furnace at time 0 is 800.

        When furnace is switched off, temperature drop is given by the
        continuous function f(t).  Similarly, when furnace is switched on,
        temperature rise is given by the continuous function g(t).

Now, suppose an event "chip put into furnace" occurs at time 0.  By the
discrete-event technique, we would schedule the event "chip cooked" for
time 300.  Now, suppose an event "furnace switched off" occurs at time
100.  Question:

	 Do we unschedule "chip cooked" for time 300?

If we do unschedule it, then as the temperature falls gradually from 800,
"furnace switched on" may occur before the temperature has fallen below
700.  So, the unscheduling condition may never actually arise. However,
"chip cooked" would not be recorded as occurring at time 300, as it should
be.

If we do not unschedule it, then the temperature could eventually fall
below 700 and "chip cooked" would be erroneously recorded as occurring at
300.

The problem arises because in unscheduling events we try to do "look
ahead" i.e. determine, given the current state, whether a scheduled event
will occur.  The implicit assumption is that state changes occur only at
event boundaries.  However, when continuous parameters are involved, state
changes may also occur between event boundaries and give rise to
unscheduling conditions.  Monitoring for such conditions can be quite
hard.

This problem does not arise in DMOD as it is based upon "looking back".
If an event E is predicted to occur at time T then a condition C is
associated with it which is defined over the entire past of T.  When
simulation time reaches T the entire past of T is known, so C can be
evaluated.  If true, E is recorded as occurring, otherwise it is
discarded.  As DMOD focusses on computing histories, not states, a
convenient handle on the past of T is the history up to T.

Comments are greatly appreciated.

Sanjai Narain
RAND

------------------------------


Date:  Thu, 14 Dec 89 14:05:11 EST
From: mike@bucasb.BU.EDU (Michael Cohen)
To: ai-chi@LLL-LCC.LLNL.GOV, ailist@AI.AI.MIT.EDU, vision-list@ADS.COM,
        epsynet@uhupvm1.bitnet, neuron@hplabs.hp.com, self-org@mc.lcs.mit.edu,
        arpanet-bboards@mc.lcs.mit.edu, parsym@sumex-aim.stanford.edu,
        physics@mc.lcs.mit.edu, soft-eng@xx.lcs.mit.edu, TheoryNet@ibm.com,
        connectionists@RI.CMU.EDU, info-futures@CS.BU.EDU,
        dynsys-l@uncvm1.bitnet, biotech@umdc.bitnet, mcmi!denny,
        human-nets@aramis.rutgers.edu, optics-l@taunivm.bitnet,
        simulation@ufl.edu
Subject: WANG INSTITUTE CONFERENCE

BOSTON UNIVERSITY, A WORLD LEADER IN NEURAL NETWORK RESEARCH AND TECHNOLOGY,
PRESENTS TWO MAJOR SCIENTIFIC EVENTS:


MAY 6--11, 1990
NEURAL NETWORKS: FROM FOUNDATIONS TO APPLICATIONS
A self-contained systematic course by leading neural architects. 


MAY 11--13, 1990
NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION
An international research conference presenting INVITED and CONTRIBUTED 
papers, herewith solicited, on one of the most active research topics in 
science and technology today.


SPONSORED BY
THE CENTER FOR ADAPTIVE SYSTEMS,
THE GRADUATE PROGRAM IN COGNITIVE AND NEURAL SYSTEMS,
AND
THE WANG INSTITUTE
OF
BOSTON UNIVERSITY
WITH PARTIAL SUPPORT FROM
THE AIR FORCE OFFICE OF SCIENTIFIC RESEARCH

-----------------------------------------------------------------------------


                             CALL FOR PAPERS
                             ---------------

              NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION
                             MAY 11--13, 1990

This research conference at the cutting edge of neural network science and
technology will bring together leading experts in academe, government, and
industry to present their latest results on automatic target recognition in
invited lectures and contributed posters. Automatic target recognition is a
key process in systems designed for vision and image processing, speech and
time series prediction, adaptive pattern recognition, and adaptive 
sensory-motor control and robotics. It is one of the areas emphasized by the
DARPA Neural Networks Program, and has attracted intense research activity
around the world. Invited lecturers include:

JOE BROWN, Martin Marietta, "Multi-Sensor ATR using Neural Nets"

GAIL CARPENTER, Boston University, "Target Recognition by Adaptive Resonance: 
ART for ATR"

NABIL FARHAT, University of Pennsylvania, "Bifurcating Networks for Target 
Recognition"

STEPHEN GROSSBERG, Boston University, "Recent Results on Self-Organizing 
ATR Networks"

ROBERT HECHT-NIELSEN, HNC, "Spatiotemporal Attention Focusing by Expectation 
Feedback"

KEN JOHNSON, Hughes Aircraft, "The Application of Neural Networks to the 
Acquisition and Tracking of Maneuvering Tactical Targets in High Clutter 
IR Imagery"

PAUL KOLODZY, MIT Lincoln Laboratory, "A Multi-Dimensional ATR System"

MICHAEL KUPERSTEIN, Neurogen, "Adaptive Sensory-Motor Coordination using 
the INFANT Controller"

YANN LECUN, AT&T Bell Labs, "Structured Back Propagation Networks for
Handwriting Recognition"

CHRISTOPHER SCOFIELD, Nestor, "Neural Network Automatic Target Recognition 
by Active and Passive Sonar Signals" 

STEVEN SIMMES, Science Applications International Co., "Massively Parallel 
Approaches to Automatic Target Recognition"

ALEX WAIBEL, Carnegie Mellon University, "Patterns, Sequences and Variability:
Advances in Connectionist Speech Recognition"

ALLEN WAXMAN, MIT Lincoln Laboratory, "Invariant Learning and Recognition of 
3D Objects from Temporal View Sequences"

FRED WEINGARD, Booz-Allen and Hamilton, "Current Status and Results of Two 
Major Government Programs in Neural Network-Based ATR" 

BARBARA YOON, DARPA, "DARPA Artificial Neural Networks Technology Program: 
Automatic Target Recognition"

          ------------------------------------------------------

CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR
neural network research will be held on May 12, 1990. Attendees who wish to
present a poster should submit 3 copies of an extended abstract 
(1 single-spaced page), postmarked by March 1, 1990, for refereeing. Include
with the abstract the name, address, and telephone number of the corresponding
author. Mail to: ATR Poster Session, Neural Networks Conference, Wang
Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. Authors
will be informed of abstract acceptance by March 31, 1990.

SITE: The Wang Institute possesses excellent conference facilities on a
beautiful 220-acre rustic setting. It is easily reached from Boston's Logan
Airport and Route 128. 

REGISTRATION FEE: Regular attendee--$90; full-time student--$70. Registration 
fee includes admission to all lectures and poster session, one reception, two 
continental breakfasts, one lunch, one dinner, daily morning and afternoon 
coffee service. STUDENTS: Read below about FELLOWSHIP support.

REGISTRATION: To register by telephone with VISA or MasterCard call 
(508) 649-9731 between 9:00AM--5:00PM (EST). To register by FAX, fill out 
the registration form and FAX back to (508) 649-6926. To register by mail, 
complete the registration form and mail with your full form of payment as 
directed. Make check payable in U.S. dollars to "Boston University". See below 
for Registration Form. To register by electronic mail, use the address 
"rosenber@bu-tyng.bu.edu". On-site registration on a space-available basis will 
take place from 1:00--5:00PM on Friday, May 11. A RECEPTION will be held from 
3:00--5:00PM on Friday, May 11. LECTURES begin at 5:00PM on Friday, May 11 
and conclude at 1:00PM on Sunday, May 13. 

------------------------------------------------------------------------------


             NEURAL NETWORKS: FROM FOUNDATIONS TO APPLICATIONS
                               MAY 6--11, 1990

This in-depth, systematic, 5-day course is based upon the world's leading
graduate curriculum in the technology, computation, mathematics, and biology
of neural networks. Developed at the Center for Adaptive Systems (CAS) and
the Graduate Program in Cognitive and Neural Systems (CNS) of Boston 
University, twenty-eight hours of the course will be taught by six CAS/CNS 
faculty. Three distinguished guest lecturers will present eight hours of 
the course. 


COURSE OUTLINE
--------------

          MAY 7, 1990
          -----------

          MORNING SESSION (PROFESSOR GROSSBERG)

HISTORICAL OVERVIEW: 
Introduction to the binary, linear, and continuous-nonlinear streams of 
neural network research: McCulloch-Pitts, Rosenblatt, von Neumann; Anderson, 
Kohonen, Widrow; Hodgkin-Huxley, Hartline-Ratliff, Grossberg.

CONTENT ADDRESSABLE MEMORY: 
Classification and analysis of neural network models for absolutely stable CAM.
Models include: Cohen-Grossberg, additive, shunting, Brain-State-In-A-Box,
Hopfield, Boltzmann Machine, McCulloch-Pitts, masking field, bidirectional
associative memory.

COMPETITIVE DECISION MAKING:
Analysis of asynchronous variable-load parallel processing by shunting
competitive networks; solution of noise-saturation dilemma; classification of 
feedforward networks: automatic gain control, ratio processing, Weber law, total
activity normalization, noise suppression, pattern matching, edge detection,
brightness constancy and contrast, automatic compensation for variable
illumination or other background energy distortions; classification of feedback
networks: influence  of nonlinear feedback signals, notably sigmoid signals, 
on pattern transformation and memory storage, winner-take-all choices, partial 
memory compression, tunable filtering, quantization and normalization of total
activity, emergent boundary segmentation; method of jumps for classifying
globally consistent and inconsistent competitive decision schemes. 

ASSOCIATIVE LEARNING:
Derivation of associative equations for short-term memory and long-term memory.
Overview and analysis of associative outstars, instars, computational maps,
avalanches, counterpropagation nets, adaptive bidirectional associative memories.
Analysis of unbiased associative pattern learning by asynchronous parallel
sampling channels; classification of associative learning laws.


          AFTERNOON SESSION (PROFESSORS JORDAN AND MINGOLLA)

COMBINATORIAL OPTIMIZATION

PERCEPTRONS: 
Adaline, Madaline, delta rule, gradient descent, adaptive statistical predictor,
nonlinear separability.

INTRODUCTION TO BACK PROPAGATION:
Supervised learning of multidimensional nonlinear maps, NETtalk, image
compression, robotic control.

RECENT DEVELOPMENTS OF BACK PROPAGATION:
This two-hour guest tutorial lecture will provide a systematic review of recent
developments of the back propagation learning network, especially focussing on
recurrent back propagation variations and applications to outstanding
technological problems.


          EVENING SESSION: DISCUSSIONS WITH TUTORS
  


          MAY 8, 1990
          -----------

          MORNING SESSION (PROFESSORS CARPENTER AND GROSSBERG)

ADAPTIVE PATTERN RECOGNITION:
Adaptive filtering; contrast enhancement; competitive learning of recognition
categories; adaptive vector quantization; self-organizing computational maps;
statistical properties of adaptive weights; learning stability and causes of
instability. 

INTRODUCTION TO ADAPTIVE RESONANCE THEORY:
Absolutely stable recognition learning, role of learned top-down expectations;
attentional priming; matching by 2/3 Rule; adaptive search; self-controlled
hypothesis testing; direct access to globally optimal recognition code; control
of categorical coarseness by attentional vigilance; comparison with relevant
behavioral and brain data to emphasize biological basis of ART computations.

ANALYSIS OF ART 1:
Computational analysis of ART 1 architecture for self-organized real-time
hypothesis testing, learning, and recognition of arbitrary sequences of 
binary input patterns.


          AFTERNOON SESSION (PROFESSOR CARPENTER)

ANALYSIS OF ART 2:
Computational analysis of ART 2 architecture for self-organized real-time
hypothesis testing, learning, and recognition for arbitrary sequences of analog
or binary input patterns.

ANALYSIS OF ART 3:
Computational analysis of ART 3 architecture for self-organized real-time
hypothesis testing, learning, and recognition within distributed network
hierarchies; role of chemical transmitter dynamics in forming a memory
representation distinct from short-term memory and long-term memory;
relationships to brain data concerning neuromodulators and synergetic ionic and
transmitter interactions. 

SELF-ORGANIZATION OF INVARIANT PATTERN RECOGNITION CODES:
Computational analysis of self-organizing ART architectures for recognizing
noisy imagery undergoing changes in position, rotation, and size.

NEOCOGNITRON:
Recognition and completion of images by hierarchical bottom-up filtering and
top-down attentive feedback.


          EVENING SESSION: DISCUSSIONS WITH TUTORS



          MAY 9, 1990
          -----------

          MORNING SESSION (PROFESSORS GROSSBERG & MINGOLLA)

VISION AND IMAGE PROCESSING:
Introduction to Boundary Contour System for emergent segmentation and Feature
Contour System for filling-in after compensation for variable illumination;
image compression, orthogonalization, and reconstruction; multidimensional
filtering, multiplexing, and fusion; coherent boundary detection, 
regularization, self-scaling, and completion; compensation for variable 
illumination sources, including artificial sensors (infrared sensors, 
laser radars); filling-in of surface color and form; 3-D form from
shading, texture, stereo, and motion; parallel processing of static form and
moving form; motion capture and induced motion; synthesis of static form and 
motion form representations.

          AFTERNOON SESSION (PROFESSORS BULLOCK, COHEN, & GROSSBERG)

ADAPTIVE SENSORY-MOTOR CONTROL AND ROBOTICS:
Overview of recent progress in adaptive sensory-motor control and related
robotics research. Reaching to, grasping, and transporting objects of variable
mass and form under visual guidance in a cluttered environment will be used as a
target behavioral competence to clarify subproblems of real-time adaptive
sensory-motor control. The balance of the tutorial will be spent detailing
neural network modules that solve various subproblems. Topics include:
Self-organizing networks for real-time control of eye movements, arm movements,
and eye-arm coordination; learning of invariant body-centered target position
maps; learning of intermodal associative maps; real-time trajectory formation;
adaptive vector encoders; circular reactions between action and sensory 
feedback; adaptive control of variable speed movements; varieties of error 
signals; supportive behavioral and neural data; inverse kinematics; automatic 
compensation for unexpected perturbations; independent adaptive control of 
force and position; adaptive gain control by cerebellar learning; 
position-dependent sampling from spatial maps; predictive motor planning 
and execution.

SPEECH PERCEPTION AND PRODUCTION:
Hidden Markov models; self-organization of speech perception and production
codes; eighth nerve Average Localized Synchrony Response; phoneme recognition by
back propagation, time delay networks, and vector quantization.



          MAY 10, 1990
          ------------

          MORNING SESSION (PROFESSORS COHEN, GROSSBERG, & MERRILL)

SPEECH PERCEPTION AND PRODUCTION:
Disambiguation of coarticulated vowels and consonants; dynamics of working
memory; multiple-scale adaptive coding by masking fields; categorical
perception; phonemic restoration; contextual disambiguation of speech tokens;
resonant completion and grouping of noisy variable-rate speech streams.

REINFORCEMENT LEARNING AND PREDICTION:
Recognition learning, reinforcement learning, and recall learning are the 3 R's
of neural network learning. Reinforcement learning clarifies how external
events interact with internal organismic requirements to trigger learning
processes capable of focussing attention upon and generating appropriate actions
towards motivationally desired goals. A neural network model will be derived to
show how reinforcement learning and recall learning can self-organize in
response to asynchronous series of significant and irrelevant events. These
mechanisms also control selective forgetting of memories that are no longer
predictive, adaptive timing of behavioral responses, and self-organization of
goal directed problem solvers.


          AFTERNOON SESSION 
          (PROFESSORS GROSSBERG & MERRILL AND DR. HECHT-NIELSEN)

REINFORCEMENT LEARNING AND PREDICTION:
Analysis of drive representations, adaptive critics, conditioned reinforcers,
role of motivational feedback in focusing attention on predictive data;
attentional blocking and unblocking; adaptively timed problem solving; synthesis
of perception, recognition, reinforcement, recall, and robotics mechanisms into
a total neural architecture; relationship to data about hypothalamus,
hippocampus, neocortex, and related brain regions.

RECENT DEVELOPMENTS IN THE NEUROCOMPUTER INDUSTRY:
This two-hour guest tutorial will provide an overview of the growth and 
prospects of the burgeoning neurocomputer industry by one of its most 
important leaders.


          EVENING SESSION: DISCUSSIONS WITH TUTORS



          MAY 11, 1990
          ------------

          MORNING SESSION (DR. FAGGIN)

VLSI IMPLEMENTATION OF NEURAL NETWORKS: 
This is a four-hour self-contained tutorial on the application and development 
of VLSI techniques for creating compact real-time chips embodying neural 
network designs for applications in technology. Review of neural networks 
from a hardware implementation perspective; hardware requirements and 
alternatives; dedicated digital implementation of neural networks; 
neuromorphic design methodology using VLSI CMOS technology; applications and
performance of neuromorphic implementations; comparison of 
neuromorphic and digital hardware; future prospects.

----------------------------------------------------------------------------


                    COURSE FACULTY FROM BOSTON UNIVERSITY
                    -------------------------------------

STEPHEN GROSSBERG, Wang Professor of CNS, as well as Professor of Mathematics,
Psychology, and Biomedical Engineering, is one of the world's leading neural 
network pioneers and most versatile neural architects; Founder and 1988 
President of the International Neural Network Society (INNS); Founder
and Co-Editor-in-Chief of the INNS journal "Neural Networks"; an editor of
the journals "Neural Computation", "Cognitive Science", and "IEEE Expert";
Founder and Director of the Center for Adaptive Systems; General 
Chairman of the 1987 IEEE First International Conference on Neural Networks
(ICNN); Chief Scientist of Hecht-Nielsen Neurocomputer Company (HNC); and one
of the four technical consultants to the national DARPA Neural Network Study.
He is author of 200 articles and books about neural networks, 
including "Neural Networks and Natural Intelligence" (MIT Press, 1988),
"Neural Dynamics of Adaptive Sensory-Motor Control" (with Michael Kuperstein, 
Pergamon Press, 1989), "The Adaptive Brain, Volumes I and II" 
(Elsevier/North-Holland, 1987), "Studies of Mind and Brain" (Reidel Press,
1982), and the forthcoming "Pattern Recognition by Self-Organizing Neural
Networks" (with Gail Carpenter).

GAIL CARPENTER is Professor of Mathematics and CNS; Co-Director of the
CNS Graduate Program; 1989 Vice President of the International Neural Network
Society (INNS); Organization Chairman of the 1988 INNS annual meeting; 
Session Chairman at the 1989 and 1990 IEEE/INNS International Joint Conference
on Neural Networks (IJCNN); one of four technical consultants to the national
DARPA Neural Network Study; editor of the journals "Neural Networks", 
"Neural Computation", and "Neural Network Review"; and a member of the
scientific advisory board of HNC. A leading neural architect, Carpenter is
especially well-known for her seminal work on developing the adaptive
resonance theory architectures (ART 1, ART 2, ART 3) for adaptive pattern
recognition. 

MICHAEL COHEN, Associate Professor of Computer Science and CNS, is a
leading architect of neural networks for content addressable memory
(Cohen-Grossberg model), vision (Feature Contour System), and speech (Masking
Fields); editor of "Neural Networks"; Session Chairman at the 1987 ICNN, 
and the 1989 IJCNN; and member of the DARPA Neural Network Study panel on 
Simulation/Emulation Tools and Techniques.

ENNIO MINGOLLA, Assistant Professor of Psychology and CNS, is holder of
one of the first patented neural network architectures for vision and image
processing (Boundary Contour System); Co-Organizer of the 3rd Workshop on
Human and Machine Vision in 1985; editor of the journals "Neural Networks"
and "Ecological Psychology"; member of the DARPA Neural Network Study
panel on Adaptive Knowledge Processing; consultant to E.I. duPont de Nemours,
Inc.; Session Chairman for vision and image processing at the 1987 ICNN, 
and the 1988 INNS meetings. 

DANIEL BULLOCK, Assistant Professor of Psychology and CNS, is developer
of neural network models for real-time adaptive sensory-motor control of arm
movements and eye-arm coordination, notably the VITE and FLETE models for
adaptive control of multi-joint trajectories; editor of "Neural Networks";
Session Chairman for adaptive sensory-motor control and robotics at the 1987
ICNN and the 1988 INNS meetings; invited speaker at the 1990 IJCNN.

JOHN MERRILL, Assistant Professor of Mathematics and CNS, is developing
neural network models for adaptive pattern recognition, speech recognition,
reinforcement learning, and adaptive timing in problem solving behavior, after
having received his Ph.D. in mathematics from the University of Wisconsin at
Madison, and completing postdoctoral research in computer science and
linguistics at Indiana University. 


                              GUEST LECTURERS
                              ---------------

FEDERICO FAGGIN is co-founder and president of Synaptics, Inc. Dr. Faggin 
developed the Silicon Gate Technology at Fairchild Semiconductor. He also 
designed the first commercial circuit using Silicon Gate Technology: the 3708,
an 8-bit analog multiplexer. At Intel Corporation he was responsible for 
designing what was to become the first microprocessor---the 4000 family, 
also called MCS-4. He and Hal Feeney designed the 8008, the first 8-bit 
microprocessor introduced in 1972, and later Faggin conceived the 8080 and 
with M. Shima designed it. The 8080 was the first high-performance 8-bit 
microprocessor. At Zilog Inc., Faggin conceived the Z80 microprocessor 
family and directed the design of the Z80 CPU. Faggin also started Cygnet 
Technologies, which developed a voice and data communication
peripheral for the personal computer. In 1986 Faggin co-founded Synaptics
Inc., a company dedicated to the creation of a new type of VLSI hardware for
artificial neural networks and other machine intelligence applications. 
Faggin is the recipient of the 1988 Marconi Fellowship Award for his 
contributions to the birth of the microprocessor. 

ROBERT HECHT-NIELSEN is co-founder and chairman of the Board of Directors 
of Hecht-Nielsen Neurocomputer Corporation (HNC), a
pioneer in neurocomputer technology and the application of neural networks,
and a recognized leader in the field. Prior to the formation of HNC, he
founded and managed the neurocomputer development and neural network
applications at TRW (1983--1986) and Motorola (1979--1983). He has been active
in neural network technology and neurocomputers since 1961 and earned his 
Ph.D. in mathematics in 1974. He is currently a visiting lecturer in
the Electrical Engineering Department at the University of California at San
Diego, and is the author of influential technical reports and papers on
neurocomputers, neural networks, pattern recognition, signal processing
algorithms, and artificial intelligence.  

MICHAEL JORDAN is an Assistant Professor of Brain and Cognitive Sciences at MIT.
One of the key developers of the recurrent back propagation algorithms,
Professor Jordan's research is concerned with learning in recurrent networks
and with the use of networks as forward models in planning and control. His
interest in interdisciplinary research on neural networks is founded in his
training for a Bachelors degree in Psychology, a Masters degree in Mathematics,
and a Ph.D. in Cognitive Science from the University of California at San 
Diego. He was a postdoctoral researcher in Computer Science at the University of
Massachusetts at Amherst before assuming his present position at MIT. 

             ----------------------------------------------------------

REGISTRATION FEE: Regular attendee--$950; full-time student--$250.
Registration fee includes five days of tutorials, course notebooks, one
reception, five continental breakfasts, five lunches, four dinners, daily
morning and afternoon coffee service, evening discussion sessions with
leading neural architects. 

REGISTRATION: To register by telephone with VISA or MasterCard call 
(508) 649-9731 between 9:00AM--5:00PM (EST). To register by FAX, fill out 
the registration form and FAX back to (508) 649-6926. To register by mail, 
complete the registration form and mail with your full form of payment as 
directed. Make check payable in U.S. dollars to "Boston University". See below 
for Registration Form. To register by electronic mail, use the address 
"rosenber@bu-tyng.bu.edu". On-site registration on a space-available basis will 
take place from 2:00--7:00PM on Sunday, May 6 and from 7:00--8:00AM on Monday,
May 7, 1990. A RECEPTION will be held from 4:00--7:00PM on Sunday, May 6. 
LECTURES begin at 8:00AM on Monday, May 7 and conclude at 12:30PM on Friday,
May 11. 

STUDENT FELLOWSHIPS supporting travel, registration, and lodging for the 
Course and the Research Conference are available to full-time graduate 
students in a PhD program. Applications must be postmarked by March 1, 1990. 
Send curriculum vitae, a one-page essay describing your interest in neural 
networks, and a letter from a faculty advisor to: Student Fellowships, Neural 
Networks Course, Wang Institute of Boston University, 72 Tyng Road, 
Tyngsboro, MA 01879.

CNS FELLOWSHIP FUND: Net revenues from the course will endow fellowships
for Ph.D. candidates in the CNS Graduate Program. Corporate and individual gifts
to endow CNS Fellowships are also welcome. Please write: Cognitive and Neural
Systems Fellowship Fund, Center for Adaptive Systems, Boston University, 111
Cummington Street, Boston, MA 02215. 

------------------------------------------------------------------------------


                  REGISTRATION FOR COURSE AND RESEARCH CONFERENCE

Course: Neural Network Foundations and Applications, May 6--11, 1990

Research Conference: Neural Networks for Automatic Target Recognition, 
May 11--13, 1990 


NAME: _________________________________________________________________

ORGANIZATION (for badge): _____________________________________________

MAILING ADDRESS: ______________________________________________________

                 ______________________________________________________

CITY/STATE/COUNTRY: ___________________________________________________

POSTAL/ZIP CODE: ______________________________________________________

TELEPHONE(S): _________________________________________________________




     COURSE                           RESEARCH CONFERENCE
     ------                           -------------------

     [    ] regular attendee $950     [   ] regular attendee $90 
     [    ] full-time student $250    [   ] full-time student $70
     (limited number of spaces)       (limited number of spaces)

                [   ] Gift to CNS Fellowship Fund 


TOTAL PAYMENT: $________

FORM OF PAYMENT:
     [    ] check or money order (payable in U.S. dollars to Boston University) 
     [    ] VISA  [   ] MasterCard 

Card Number:      ______________________________________________

Expiration Date:  ______________________________________________

Signature:        ______________________________________________



Please complete and mail to: 
Neural Networks 
Wang Institute of Boston University 
72 Tyng Road 
Tyngsboro, MA 01879 USA 

To register by telephone, call: (508) 649-9731. 


HOTEL RESERVATIONS:
Room blocks have been reserved at 3 hotels near the Wang Institute. Hotel
names, rates, and telephone numbers are listed below. A shuttle bus will take
attendees to and from the hotels for the Course and Research Conference.
Attendees should make their own reservations by calling the hotel. The special
conference rate applies only if you mention the name and dates of the meeting
when making the reservations. 

Sheraton Tara       Red Roof Inn          Stonehedge Inn 
Nashua, NH          Nashua, NH            Tyngsboro, MA 
(603) 888-9970      (603) 888-1893        (508) 649-4342 
$70/night+tax       $39.95/night+tax      $89/night+tax 

The hotels in Nashua are located approximately 5 miles from the Wang
Institute. A shuttle bus will be provided.

-------------------------------------------------------------------------------





------------------------------




END OF SIMULATION DIGEST
************************