[comp.ai.neural-nets] Neuron Digest V7 #34

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (06/12/91)

Neuron Digest   Tuesday, 11 Jun 1991
                Volume 7 : Issue 34

Today's Topics:
          ANN and GA application to chaotic dynamical systems?
                       Transportation Applications
            Research positions in speech and image processing
                           Optimization Methods
      Attending IJCNN and would like to visit schools and companies
                         ANNA91 Proceedings Info
                       Re: Rigorous Results on the
                  Backprop Issues List and ... (part 2)
                  fast recognition of noisy characters
                  Looking for optimization applications
                          RFD: comp.org.issnnet


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

----------------------------------------------------------------------

Subject: ANN and GA application to chaotic dynamical systems? 
From:    MSANDRI%IVRUNIV.BITNET@ICINECA.CINECA.IT
Date:    Fri, 31 May 91 12:45:47 -0100

[[ Editor's Note: This request is quite broad and a little vague.  I
usually refer submissions like this to relevant past issues of the
Digest, general books or papers, and then ask the author to resubmit with
more specific questions and a demonstration that he or she has done a
little research on the topic before asking for general help.  For
example, how would you respond to the question "Give me detailed
information about government?"  However, I leave this one alone and hope
someone can provide a useful, cogent answer. -PM ]]


Dear network user, 

Do you know of applications of neural networks, genetic algorithms, and
related techniques to chaotic dynamical systems?  I am very interested in
this area.

Thank you for your kindness.
Your Marco.


------------------------------

Subject: Transportation Applications
From:    Yu Shen <shen@IRO.UMontreal.CA>
Date:    Sun, 02 Jun 91 10:54:08 -0400

Together with my supervisor Guy Lapalme and Jean-Yves Potvin, I am working
on a neural network model of vehicle dispatching.

Backpropagation is used to learn the choice heuristics from examples of
preference predicates extracted from experts' previous dispatching
decisions.  A correct rate of nearly 80% is achieved on untrained cases.
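
As a purely illustrative sketch (this is not the model described above;
the features, network size, and "expert rule" are invented for the
example), preference predicates can be cast as backprop training pairs
roughly as follows:

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented features: [distance_to_customer, idle_time, load] for
    # candidate vehicles A and B; the target says which one the expert chose.
    X = rng.random((200, 6))
    y = (X[:, 0] < X[:, 3]).astype(float)   # synthetic rule: prefer the closer vehicle

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer trained by plain gradient descent on cross-entropy.
    W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
    lr = 0.5
    for _ in range(2000):
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        d_out = (p - y) / len(y)                 # backpropagated output error
        gW2 = h.T @ d_out; gb2 = d_out.sum()
        d_hid = np.outer(d_out, W2) * h * (1 - h)
        gW1 = X.T @ d_hid; gb1 = d_hid.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("training accuracy:", np.mean((p > 0.5) == y))

The real system would of course use preference predicates extracted from
the experts' dispatching records rather than the synthetic rule above.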

The most recent report of the work will appear in IJCNN-91-Seattle
(abstract only).

I'd like to hear about your findings on transportation applications of
neural networks.

Yu Shen

PhD Student
Dept. d'Informatique et Recherche Operationnelle
University de Montreal
C.P. 6128 Succ. A.
Montreal, Que. 
Canada
H3C 3J7

(514) 342-7089 (H)
shen@iro.umontreal.ca


------------------------------

Subject: Research positions in speech and image processing
From:    Kari Torkkola <karit@spine.hut.fi>
Date:    Mon, 03 Jun 91 14:07:02 +0700


                  RESEARCH POSITIONS AVAILABLE

     The newly created "Institut Dalle Molle d'Intelligence Arti-
ficielle  Perceptive"  (IDIAP)  in  Martigny Switzerland seeks to
hire qualified researchers in the areas of speech recognition and
image  manipulation.   Candidates  should  be able to conduct in-
dependent research in a UNIX environment on the  basis  of  solid
theoretical and applied knowledge.  Salaries will be aligned with
those offered by the Swiss government for  equivalent  positions.
Laboratories  are  now  being  established in the newly renovated
building that houses the  Institute,  and  international  network
connections  will  soon be in place.  Researchers are expected to
begin activity during the academic year 1991-1992.

     IDIAP is the third institute of artificial intelligence sup-
ported by the Dalle Molle Foundation, the others being ISSCO (at-
tached to the University of Geneva) and IDSIA  (situated  in  Lu-
gano).   The new institute will maintain close contact with these
latter centers as  well  as  with  the  Polytechnical  School  of
Lausanne and the University of Geneva.

     To apply for a research position at  IDIAP,  please  send  a
curriculum vita and technical reports to:

                   Daniel Osherson, Directeur
                              IDIAP
                        Case Postale 609
                        CH-1920 Martigny
                           Switzerland

     For further information by e-mail, contact:

                    osherson@disuns2.epfl.ch



------------------------------

Subject: Optimization Methods
From:    noyesjl%avlab.dnet@wrdc.af.mil
Date:    Mon, 03 Jun 91 05:56:51 -0400

 
                 NEURAL NETWORK OPTIMIZATION METHODS

Here is some information for anyone interested in using standard
optimization techniques to train multi-layer feed-forward neural
networks.  Standard superlinearly convergent methods for solving
unconstrained optimization problems include Conjugate Gradient (CG) and
Quasi-Newton (QN) methods.  Most neural net researchers interested in
optimization methods seem to favor CG methods over QN methods, because CG
methods use O(n) memory locations while QN methods typically require
O(n^2) memory locations.  (Here n is the number of weights and biases.)
On the other hand, QN methods are usually acknowledged to be faster.
However, there is an alternative.

I have found that the (relatively) new Low Storage QN methods usually
produce very satisfactory results.  These methods can approximate the
standard BFGS (Broyden-Fletcher-Goldfarb-Shanno) update matrix by using m
of the most recent improvement vectors, where m << n for large problems
(e.g., m is around 5 or 10).  In addition, a line search is not usually
needed at each step.  The method that I have been using was developed by
Jorge Nocedal (see [1], [2]), but there are other low storage algorithms
as well.  So far I have solved problems with up to 3608 weights and
biases (including the 2-2-1 XOR, Fahlman's 10-5-10 Encoder and Complement
Encoder, along with some 25-10-8 and 81-40-8 alphabet problems).  More
testing needs to be done to see how low storage QN methods compare to
some of the newer neural net methods in terms of overall efficiency,
including floating-point operations as well as training epochs and
memory.  (QN methods are "numerically intensive").  More details may be
found in [3] which should be available this summer.  (If someone needs a
copy sooner, contact me at the address below with your surface-mail
address and I will try to provide you with a hardcopy as soon as
possible.)
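
For readers who want to experiment, here is a small illustrative sketch
(not the code referenced above) that trains the 2-2-1 XOR network with a
limited-memory BFGS routine.  It uses SciPy's L-BFGS-B implementation of
Nocedal's low-storage update with m = 5 correction pairs; all names are
mine, and the gradient is approximated by finite differences for brevity.

    import numpy as np
    from scipy.optimize import minimize

    # The four XOR patterns and their targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([0.0, 1.0, 1.0, 0.0])

    def unpack(w):
        # 2-2-1 network: 2x2 hidden weights, 2 hidden biases,
        # 2 output weights, 1 output bias (9 parameters in all).
        W1 = w[0:4].reshape(2, 2)
        b1 = w[4:6]
        W2 = w[6:8]
        b2 = w[8]
        return W1, b1, W2, b2

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sse(w):
        # Sum-of-squared-errors over the four patterns.
        W1, b1, W2, b2 = unpack(w)
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        return np.sum((y - T) ** 2)

    rng = np.random.default_rng(0)
    w0 = rng.uniform(-0.5, 0.5, size=9)

    # maxcor is the number m of stored correction pairs (around 5-10, as above).
    res = minimize(sse, w0, method="L-BFGS-B",
                   options={"maxcor": 5, "maxiter": 500})
    print("final error:", res.fun)   # may land in a local minimum for some seeds

In practice one would supply the analytic backpropagation gradient rather
than finite differences, which cost extra function evaluations per step.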

References:

1. Jorge Nocedal, "Updating Quasi-Newton Matrices with Limited Storage,"
   Mathematics of Computation, Vol. 35, No. 151, July 1980, pp. 773-782.

2. Dong C. Liu and Jorge Nocedal, "On the Limited Memory BFGS Method for
   Large Scale Optimization," Mathematical Programming, Series B, Vol.
   45, No. 3, December 1989, pp. 503-528.

3. James L. Noyes, "Neural Network Optimization Methods," Proceedings of
   the Fourth Conference on Neural Networks and Parallel Distributed
   Processing, Indiana-Purdue University, Fort Wayne, Indiana, April
   11-13, 1991.  (To appear.)

Jim Noyes
Department of Mathematics
 and Computer Science
Wittenberg University
Box 720
Springfield, OH 45501
noyes@wittenberg.edu



------------------------------

Subject: Attending IJCNN and would like to visit schools and companies
From:    kddlab!as1003.meken.fuchueis.toshiba.junet!simokawa@uunet.UU.NET
Date:    Mon, 03 Jun 91 19:01:58 +0200

[[ Editor's Note: It is my firm policy (accidentally violated only once)
that the mailing list for Neuron Digest is *not* available to anyone. I
will be glad to publish requests such as this one and hope that readers
will respond. Please contact Mr. Shimokawa directly, or send me a message
which I will then publish here if you would be willing to meet with
visiting researchers. -PM ]]

Dear Mr. Peter Marvit:

I am planning to attend IJCNN-91-SEATTLE this July 9-12 and would like to
visit laboratories, universities, or private companies afterwards.  To
arrange my schedule, I intend to use e-mail, so I would like to know the
e-mail addresses of people involved in neural nets.  Would you e-mail me
the mailing list of neuron-request and/or any other list you may have?
Please note that we cannot use ftp.  The purpose of my visits is to
survey and exchange opinions on the following:

 1. The recent application status of NNs; in particular, I would like to
    see demonstrations.
 2. Hardware implementations of NNs (we are making ASIC NN chips for
    parallel processing).
 3. NN applications for pattern recognition (image data).

Looking forward to your answer,
                                        Sincerely,
                                                Y.Shimokawa

[[ Apparent email address: Shimokawa@as1003.meken.fuchueis.toshiba.june ]]


------------------------------

Subject: ANNA91 Proceedings Info
From:    enorris@gmuvax2.gmu.edu (Gene Norris)
Date:    Thu, 06 Jun 91 09:17:21 -0400

[[ Editor's Note: I assume the cost mentioned below is 25 dollars (US),
since I'm not aware of "%" as a currency abbreviation. This announcement
doesn't mention shipping costs. Is it post-paid? International readers
may wish to enquire ahead of time about possible overseas charges. -PM ]]

The ANNA 91 Conference Proceedings contain the full text and illustrations
of 18 papers presented at the ANNA 91 Conference on the Analysis of
Neural Network Applications, held in May 1991 at George Mason University.
The proceedings are 212 pages, soft-bound. Copies may be ordered from:

Toni Shetler, ANNA 91 Chair
TRW FVA6/3444
PO Box 10400
Fairfax, VA 22031
(703) 876-4103

Cost is %25.00 per copy.

  Prof. Eugene M. Norris
  CS Dept George Mason University Fairfax, VA 22032 (703)323-2713
  enorris@gmuvax2.gmu.edu                  FAX: 703 323 2630


------------------------------

Subject: Re: Rigorous Results on the
From:    Peter Monsen <ptm3115@draper.com>
Date:    07 Jun 91 10:07:57 -0400


    At Draper Laboratory, we have been investigating dependable and
validatable neural network architectures.  Recently, a survey of
published results on dependable NN architectures was conducted in the
hope of finding quantitative results in this research area.
    The following is a reference listing produced by the survey.  If you
are interested in further details, you can contact me and I will send you
a copy of a memo containing the abstract and a brief review of each of
the papers.
    Only some of the papers [3, 6, 9, 10, 11, 17, and 22] provide
analytical work towards a quantitative measure of the dependability of
NNs.  The majority of the papers, on the other hand, contain simulation
results describing the performance of specific networks solving
particular problems under certain assumptions.  The clearest conclusion
from this survey is the need for more quantitative results in this
research area.

Peter Monsen
E-mail address: <ptm3115@draper.com>
Surface address: CS Draper Lab, MS  6F, Cambridge MA, 02139 (617) 258-3115

 ------------------------------------------------------------------------

SELECTED PAPERS ON DEPENDABLE AND VALIDATABLE NEURAL NETWORK ARCHITECTURES

[1]     
Title:  Neural Networks for Computing?
Author: Abu-Mostafa, Y.S.
Source: in J.S. Denker, ed., AIP Conference Proceedings 151: Neural Networks
for Computing, American Institute of Physics: New York,  1986, pp.1-6.
        
[2]
Title:  Modeling of Fault-Tolerance in Neural Networks.
Authors: Belfore, L.A., II, B.W. Johnson, and J.H. Aylor.
Source: Proceedings of the IJCNN-90-WASH, 1990, pp. I: 325-328. 

[3]
Title:  The Design of Inherently Fault-Tolerant Systems.        
Author: Belfore, L.A., B.W. Johnson, and J.H. Aylor.
Source: Proc. 1987 Workshop on Algorithm, Architecture and Tech. Issues   in
Models of Concurrent Computations, pp. 565-583.

[4]     
Title:  The 'Illusion' of Fault-Tolerance in Neural Networks for Pattern 
Recognition and Signal Processing.
Author: Carter, M.J.
Source: Technical Session on Fault-Tolerant Integrated Systems, University of
New Hampshire, Durham, NH, March 1988.

[5]
Title:  Operational Fault Tolerance of CMAC Networks.
Author: Carter, M.J., F.J. Rudolph, and A.J. Nucci.
Source: in D.S. Touretzky, ed., Advances in Neural Information Processing
Systems 2, Morgan Kaufmann: San Mateo, CA, 1990, pp. 340-347.

[6]     
Title:  Slow Learning in CMAC Networks and Implications for Fault-Tolerance.
Author: Carter, M.J., A.J. Nucci, E. An, W.T. Miller, III, and F.J. Rudolph.
Source: University of New Hampshire, Intelligent Structures Group, Technical
Report ECE.ICG.90.03, July 1990.

[7]     
Title:  Fault Tolerant Neural Networks with Hybrid Redundancy.
Author: Chu, Lon-Chan, and B. W. Wah.
Source: Proceedings of the IJCNN 1990, Vol. II, pp. 639-649.

[8]     
Title:  Reliability Measures for Hebbian-type Associative Memories with Faulty
Interconnections.
Author: Chung, Pau-Choo, and Thomas F. Krile.
Source: Proceedings of the IJCNN 1990, Vol. I, pp. 847-852.

[9]     
Title:  Reliability Analysis of Artificial Neural Networks.
Author: Dugan, J.B. and J.W. Watterson.
Source: 1991 Proceedings Annual Reliability and Maintainability Symposium, pp.
598-603.

[10]    
Title:  Quantitative Failure Models of Feed-Forward Neural Networks.
Author: Dzwonczyk, M.J.
Source: Masters of Science Thesis, MIT, 1991. (CSDL-T-1068)

[11]    
Title:  Using Associated Random Variables to Determine the Reliability of
Neural Networks.
Author: Faris, W.G. and R.S. Maier.
Source: Journal of Neural Network Computing, vol. 2 #2 (Fall 1990), pp. 49-52.

[12]    
Title:  Neural networks and physical systems with emergent collective   
computational abilities.
Author: Hopfield, J.J.
Source: Proc. Natl. Acad. Sci. USA, April 1982, pp. 2554-2558.

[13]
Title:  Incipient Fault Detection and Diagnosis Using Artificial Neural
Networks.
Author: Hoskins, J.C., K.M. Kaliyur, and D.M. Himmelblau.
Source: Proceedings of the IJCNN 1990, Vol. I, pp. 81-86.

[14]    
Title:  Reliability and Speed of Recall in an Associative Network.
Author: Lansner, A. and O. Ekeberg.
Source: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.
PAMI-7, No. 4, July 1985, pp. 490-498.

[15]    
Title:  In Search of the Engram.
Author: Lashley, K.S.
Source: Society of Experimental Biology Symposium #4: Psychological Mechanisms
in Animal Behavior, London: Cambridge University Press, 1950, pp.478-505.
Partially reprinted in J.A. Anderson and E. Rosenfeld, eds., Neurocomputing:
Foundations of Research, Cambridge, MA: The MIT Press,1988, pp.59-63.

[16]    
Title:  Optimal Brain Damage.
Author: Le Cun, Y., J.S. Denker, and S.A. Solla.
Source: in D.S. Touretzky, ed., Advances in Neural Information Processing
Systems 2, Morgan Kaufmann: San Mateo, CA, 1990, pp. 598-605.

[17]    
Title:  Maximally fault-tolerant neural networks and nonlinear programming.
Author: Neti, C., M.H. Schneider, and E.D. Young.
Source: Proceedings of the IJCNN 1990, Vol. II, pp. 483-496.

[18]    
Title:  Limits to the Fault-Tolerance of a Feedforward Neural Network with
Learning.
Author: Nijhuis, J., B. Hofflinger, A. van Schaik, and L. Spaanenburg.
Source: Digest of Papers, Fault-Tolerant Computing: The Twentieth International
Symposium (FTCS-20), June 1990, pp. 228-235.

[19]    
Title:  Trellis Codes, Receptive Fields, and Fault Tolerant, Self-Repairing 
Neural Networks.
Author: Petsche, T. and B.W. Dickinson.
Source: IEEE Transactions on Neural Networks, vol. 1 (1990) pp.154-166.

[20]
Title:  Fault-Tolerance of a Neural Network Solving the Travelling Salesman
Problem.
Author: Protzel, P., Palumbo, D., and M. Arras.
Source: NASA Contractor Report No. 181798, February 1989.

[21]    
Title:  Fault Tolerance in Artificial Neural Networks.
Author: Sequin, C.H. and R.D. Clay.
Source: Proceedings of the IJCNN 1990, Vol. I, pp. 703-708.

[22]    
Title:  Sensitivity of Feedforward Neural Networks to Weight Errors.
Author: Stevenson, M., R. Winter, and B. Widrow.
Source: IEEE Transactions on Neural Networks, vol. 1 (1990), pp.71-80.

[23]    
Title:  Fault Tolerance in Neural Networks.
Author: Swaminathan, G., S. Srinivasan, S. Mitra, J. Minnix, B. Johnson, and 
R.M. Inigo.
Source: Proceedings of the IJCNN-90-WASH, 1990, pp. II: 699-702.





------------------------------

Subject: Backprop Issues List and ... (part 2)
From:    mgj@cup.portal.com
Date:    Mon, 10 Jun 91 00:16:20 -0700

[[ Editor's Note: Many thanks to Mark Jurik for his work.  I look forward
both to his summary and the announcement of his book.  I have found his
talks in the past to be illuminating and thought-provoking, even if I
don't always agree with what he says. -PM ]]

            REQUEST FOR REFERENCES TO BACKPROP UPGRADES, PART 2

One month ago, I announced that I am collecting references and
suggestions (for eventual posting and inclusion in a book) on all means
of improving and testing the performance of BackPropagation.  A list of
suggestions was posted to get things started.  Many new suggestions have
been offered since then, expanding the original list.  Here is the latest
list (an illustrative sketch of the momentum / learning-rate / weight-decay
update appears after the lists):

TRAINING
   1. Low bit quantization                  (how low can you go?)
   2. Batching                              (optimal batch size?)
   3. Momentum                              (fixed and adaptive)
   4. Learn rate                            (fixed and adaptive)
   5. Weight decay                          (fixed and adaptive)
   6. Added noise to weight adjustments     (fixed and adaptive)
   7. Conjugate gradient searching          (too much overhead?)
   8. Fastprop/Quickprop                    (are they the same?)
   9. Uniprop                               (does this exist?)
  10. Whateverprop                          (anything else?)

ARCHITECTURE
   1. Multiple hidden layers                (too much of a good thing?)
   2. Sigmoidal vs. Gaussian thresholding   (any others?)
   3. Recurrent connectivity                (instability issues?)
   4. Network size                          (is smaller better?)
   5. Complex (real and imaginary) weights  (when is it useful?)

PREPROCESSING
   1. Kohonen layer quantization            (useful for classification?)
   2. Fuzzy membership representation       (thermometers, etc. ...)
   3. Added noise                           (how much is safe?)
   4. Principal component decomposition     (when does it help?)
   5. Remove linear transformations         (& add back later. Is it wise?)

LABORATORY BENCHMARKS
   1. N-Bit parity
   2. N-M-N encoder/decoder
   3. N-N-N linear channel
   4. N-2-1 symmetry detection
   5. 3-N-1 "two out of three" detection
   6. 2-N-1 Intertwined spiral classification
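
As promised above, here is a generic illustrative sketch (not drawn from
any submitted reference; the names are mine) of the weight update behind
TRAINING items 3-5: a fixed learning rate, a momentum term, and weight
decay.

    import numpy as np

    def update_weights(w, grad, velocity, lr=0.1, momentum=0.9, decay=1e-4):
        """One gradient step with momentum and weight decay.

        w        -- current weight vector
        grad     -- gradient of the error with respect to w
        velocity -- running momentum term (same shape as w)
        """
        velocity = momentum * velocity - lr * (grad + decay * w)
        return w + velocity, velocity

    # Toy usage: minimize the quadratic error E(w) = ||w - 3||^2.
    w = np.zeros(2)
    v = np.zeros_like(w)
    for _ in range(200):
        g = 2.0 * (w - 3.0)          # analytic gradient of the toy error
        w, v = update_weights(w, g, v)
    print(w)                          # approaches [3, 3]

Adaptive variants adjust the learning rate and momentum coefficients
during training rather than holding them fixed.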

If you have more topics for the list or references to suggest, please
e-mail your suggestions to mgj@cup.portal.com.  The odds are that you know
of at least one good paper that most others are not aware of.

If you have material you would like me to read and consider for posting
and referencing in an upcoming book, please mail it to JURIK RESEARCH,
PO Box 2379, Aptos, CA 95001.

After sufficient information has been collected, a brief synopsis of all
*that has been submitted* will be posted.


  -- Mark Jurik, mgj@cup.portal.com


------------------------------

Subject: fast recognition of noisy characters
From:    PVR%AUTOCTRL.RUG.AC.BE@CUNYVM.CUNY.EDU
Date:    Mon, 10 Jun 91 10:24:00 +0100


I have a couple of problems where it is necessary to recognize
alphanumeric characters (e.g., number-plate inspection, parcel number
inspection, etc.).  In most of these applications a fixed character type
is used, but this can change from application to application, even from
batch to batch.  One of our students needs to develop a neural network
that is as general as possible (which means it should be able to tackle
different problems) and as fast as possible (some of these parcels need
to be inspected at a rate of 25 per second).  Furthermore, it should be
as accurate as possible.  Characters can be covered with noise or be
incomplete.

Does anyone have ideas about the direction we should take, hints for
building this network, examples of applications in this field, etc.?  Is
backprop the best and only candidate for this network?  If people have
developed applications like this, would it be possible to look at the
implementation details?

All responses will be appreciated.  I will summarize to the net.

Patrick Van Renterghem
State University of GHENT, Belgium
pvr@autoctrl.rug.ac.be


------------------------------

Subject: Looking for optimization applications
From:    "Guillermo Alfonso Parra R." <RYP%ANDESCOL@CUNYVM.CUNY.EDU>
Date:    Tue, 11 Jun 91 14:44:07 -1100

[[ Editor's Note: Check Jim Noyes' posting earlier in this issue of the
Digest regarding algorithms.  However, it would be interesting to note the
*application* of these algorithms, as the appeal below requests.  By the
way, I think the reply address here could also be ryp@andescol.BITNET -PM ]]

Dear Sirs:

I would appreciate any help finding information about optimization
applications using neural networks, specifically about an article called
"Optimization Algorithms, Simulated Annealing and Neural Networks
Processing", which appeared in Vol. 310 (November 1986) of the
Astrophysical Journal.  Please send me any information you have about
this.  Thanks a lot,
                                         Guillermo Alfonso Parra R.


------------------------------

Subject: RFD: comp.org.issnnet
From:    issnnet@park.bu.edu
Date:    Mon, 03 Jun 91 11:27:36 -0400


                        REQUEST FOR DISCUSSION
                        ----------------------

GROUP NAME:     comp.org.issnnet

STATUS:         unmoderated

CHARTER:        The newsgroup shall serve as a medium for discussions
                pertaining to the International Student Society for
                Neural Networks (ISSNNet), Inc., and to its activities
                and programs as they pertain to the role of students
                in the field of neural networks. See details below.

TARGET VOTING DATE:     JUNE 20 - JULY 20, 1991
                                   
******************************************************************************
                             PLEASE NOTE

        In agreement with USENET newsgroup guidelines for the creation
        of new newsgroups, this discussion period will continue until
        June 21, at which time voting will begin if deemed
        appropriate. ALL DISCUSSION SHOULD TAKE PLACE ON THE NEWSGROUP

                             "news.groups"
                             
        If you do not have access to USENET newsgroups but wish to
        contribute to the discussion, send your comments to:
                         issnnet@park.bu.edu
        specifying whether you would like your message relayed to
        news.groups. A call for votes will be made to the same
        newsgroups and mailing lists that originally received this
        message.
                                   
    PLEASE DO NOT SEND REPLIES TO THIS MAILING LIST OR NEWSGROUP DIRECTLY!

        A call for votes will be broadcast in a timely fashion. Please
        do not send votes until then.

******************************************************************************

BACKGROUND AND INFORMATION:

   The purpose of the International Student Society for Neural
Networks (ISSNNet) is to (1) provide a means of exchanging information
among students and young professionals within the area of Neural
Networks; (2) create an opportunity for interaction between students and
professionals from academia and industry; (3) encourage support from
academia and industry for the advancement of students in the area of
Neural Networks; (4) ensure that the interests of all students in the
area of Neural Networks are taken into consideration by other societies
and institutions involved with Neural Networks; and (5) foster a spirit
of international and interdisciplinary kinship among students as the
study of Neural Networks develops into a self-contained discipline.

   Since its creation one year ago, ISSNNet has grown to over 300
members in more than 20 countries around the world.  One of the biggest
problems we have faced thus far has been communicating efficiently with
all the members.  To this end, a network of "governors" has been created.
Each governor is in charge of distributing information (such as our
newsletter) to all local members, collecting dues, notifying local
members of relevant activities, and so on.

   However, even this system has problems.  Communication to a possibly
very large number of members relies entirely on one individual, and given
the typically erratic schedule of a student, it is often difficult to
ensure prompt and timely distribution to all members.

   More to the point, up until this time all governors have been
contacting a single person (yours truly), and that has been a problem.
Regular discussions on the society and related matters become very
difficult when routed through individuals in this fashion.

   The newsgroup would be primarily dedicated to discussion of items
pertaining to the society. We are about to launch a massive call for
nominations, in the hope that more students will step forward and take a
leading role in the continued success of the society.

   In addition, ISSNNet is involved with a number of projects, many of
which require extensive electronic mail discussions. For example, we are
developing a sponsorship program for students presenting papers at NNet
conferences. This alone has generated at least 100 mail messages to the
ISSNNet account, most of which could have been answered by two or three
"generic" postings.

   We have refrained from using some of the existing mailing lists and
USENET newsgroups that deal with NNets because of the non-technical
nature of our issues. In addition to messages that are strictly
society-related, we feel that there are many messages posted to these
existing bulletin boards for which our newsgroup would be a better forum.
Here is a list of topics that frequently come up, which would be handled
in comp.org.issnnet as part of our "sponsored" programs:
                                   
                "What graduate school should I go to?"

Last year, ISSNNet compiled a list of graduate programs around the world.
The list will be updated later this year to include a large number of new
degree programs around the world.

                                   
                      "What jobs are available?"

We asked companies that attended last year's IJCNN-San Diego and
INNC-Paris conferences to fill out a questionnaire on employment
opportunities for NNet students.

                                   
           "Does anyone have such-and-such NNet simulator?"

Many students have put together computer simulations of NNet paradigms
and these could be shared by people on this group.

                                   
                 "When is the next IJCNN conference?"

We have had a booth at past NNet conferences, and hope to continue doing
this for more and more international and local meetings. We often have
informal get-togethers at these conferences, where students and others
have the opportunity to meet.


 -----------------------------------------------------------------------

For more information, please send e-mail to issnnet@park.bu.edu (ARPANET)
or write to:

        ISSNNet, Inc.
        PO Box 557, New Town Br.
        Boston, MA 02258   USA

ISSNNet, Inc. is a non-profit corporation in the Commonwealth of
Massachusetts. 



------------------------------

End of Neuron Digest [Volume 7 Issue 34]
****************************************