[comp.ai.neural-nets] Neuron Digest V7 #10

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (02/21/91)

Neuron Digest   Wednesday, 20 Feb 1991
                Volume 7 : Issue 10

Today's Topics:
          Speech Recognition & NNs preprints/reprints available
              Adaptive Range Coding - Tech Report Available
                TR available: Yet another ANN/HMM hybrid.
              header for TR on ANN/HMM hybrid in neuroprose
                   Tech Report Available in Neuroprose
                            New TR Available
                 Manuscript available on BackPercolation
                          AI*IA Call for papers


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Speech Recognition & NNs preprints/reprints available
From:    Vince Weatherill <vincew@cse.ogi.edu>
Date:    Wed, 13 Feb 91 15:30:36 -0800


Reprints and preprints are now available for the following publications
of the OGI Speech Group.  Please respond directly to me by e-mail or
surface mail.  Don't forget to include your address with your request.
Unless you indicate otherwise, I will send all 6 reports.

Vince Weatherill
Dept. of Computer Science and Engineering
Oregon Graduate Institute
19600 NW von Neumann Drive
Beaverton, OR  97006-1999


Barnard, E., Cole, R.A., Vea, M.P., and Alleva, F. "Pitch 
     detection with a neural-net classifier," IEEE Transactions
     on Acoustics, Speech & Signal Processing, (February, 1991).

Cole, R.A., M. Fanty, M. Gopalakrishnan, and R.D.T. Janssen,
     "Speaker-independent name retrieval from spellings using a
     database of 50,000 names," Proceedings of the IEEE Interna-
     tional Conference on Acoustics, Speech and Signal Process-
     ing, Toronto, Canada, May 14-17, (1991).

Muthusamy, Y. K., R.A. Cole, and M. Gopalakrishnan, "A segment-
     based approach to automatic language identification,"
     Proceedings of the 1991 IEEE International Conference on
     Acoustics, Speech and Signal Processing, Toronto, Canada,
     May 14-17, (1991).

Fanty, M., and R. A. Cole, "Spoken Letter Recognition,"
     Proceedings of the Neural Information Processing Systems
     Conference, Denver, CO, (Nov. 1990).

Janssen, R.D.T, M. Fanty, and R.A. Cole, "Speaker-independent
     phonetic classification in continuous English letters,"
     Proceedings of the International Joint Conference on Neural
     Networks, Seattle, WA, Jul 8-12, (1991), submitted
     for publication.

Fanty, M., and R. A. Cole, "Speaker-independent English alphabet
     recognition: Experiments with the E-Set," Proceedings of
     the 1990 International Conference on Spoken Language Pro-
     cessing, Kobe, Japan, (Nov. 1990).


****************************************************************

        PITCH DETECTION WITH A NEURAL-NET CLASSIFIER

   Etienne Barnard, Ronald Cole, M. P. Vea and Fil Alleva


                        ABSTRACT
Pitch detection based on neural-net classifiers is investigated.  To this
end, the extent of generalization attainable with neural nets is first
examined, and it is shown that a suitable choice of features is required
to utilize this property.  Specifically, invariant features should be
used whenever possible.  For pitch detection, two feature sets, one based
on waveform samples and the other based on properties of waveform peaks,
are introduced.  Experiments with neural classifiers demonstrate that the
latter feature set -- which has better invariance properties -- performs
more successfully. It is found that the best neural-net pitch tracker
approaches the level of agreement of human labelers on the same data set,
and performs competitively in comparison to a sophisticated feature-based
tracker. An analysis of the errors committed by the neural net (relative
to the hand labels used for training) reveals that they are mostly due to
inconsistent hand labeling of ambiguous waveform peaks.
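
As a rough sketch of the general idea only (this is not the authors' code;
the feature set, network size, and toy labels below are invented for
illustration), a small feedforward classifier that labels waveform peaks as
pitch pulses, trained by plain gradient descent, could look like this in
Python:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 200 peaks, 3 peak-based features each (e.g. height, width,
# spacing), with a binary pitch/no-pitch label.  All of this is synthetic.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)   # output unit
lr = 0.5

for epoch in range(500):
    h = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # P(peak is a pitch pulse)
    d2 = (p - y) * p * (1 - p)            # output delta (MSE loss)
    d1 = (d2[:, None] @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2[:, None] / len(X); b2 -= lr * d2.mean()
    W1 -= lr * X.T @ d1 / len(X);          b1 -= lr * d1.mean(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())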

*************************************************************


 SPEAKER-INDEPENDENT NAME RETRIEVAL FROM SPELLINGS USING A
                  DATABASE OF 50,000 NAMES

Ronald Cole, Mark Fanty, Murali Gopalakrishnan, Rik Janssen

                        ABSTRACT
We describe a system  that  recognizes  names  spelled  with
pauses  between letters using high quality speech.  The sys-
tem uses neural network classifiers to locate  and  classify
letters,  then searches a database of names to find the best
match to the letter scores.  The  directory  name  retrieval
system  was  evaluated on 1020 names provided by 34 speakers
who were not used to train the system.  Using a database  of
50,000  names,  972,  or 95.3%, were correctly identified as
the first choice.  Of the remaining 48  names,  all  but  10
were in the top 3 choices.  Ninety-nine percent of letters
were correctly located, although speakers  failed  to  pause
completely about 10% of the time.  Classification accuracy on
individual spoken letters that were correctly located was 93%.
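
To make the retrieval step concrete, here is a minimal sketch (an assumed
scoring rule, not the OGI search procedure): given one score vector per
spoken letter, rank database names by the summed log probability of their
spellings.

import math

def best_names(letter_scores, names, top=3):
    # letter_scores: one dict per spoken letter, mapping 'a'..'z' to
    # classifier probabilities for that position.
    ranked = []
    for name in names:
        spelling = name.lower()
        if len(spelling) != len(letter_scores):
            continue                      # simplest case: require equal length
        score = sum(math.log(scores.get(ch, 1e-6))
                    for ch, scores in zip(spelling, letter_scores))
        ranked.append((score, name))
    ranked.sort(reverse=True)
    return [n for _, n in ranked[:top]]

# Toy usage: three spoken letters, tiny name database.
scores = [{'c': 0.7, 'k': 0.2}, {'o': 0.9}, {'l': 0.6, 'e': 0.3}]
print(best_names(scores, ["Col", "Kol", "Coe"]))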

*************************************************************


                A SEGMENT-BASED APPROACH TO
             AUTOMATIC LANGUAGE IDENTIFICATION

Yeshwant K. Muthusamy, Ronald A. Cole and Murali Gopalakrishnan

                          ABSTRACT
A segment-based approach to automatic  language  identifica-
tion  is  based  on  the idea that the acoustic structure of
languages can be estimated by segmenting speech  into  broad
phonetic  categories.  Automatic language identification can
then be achieved by computing  features  that  describe  the
phonetic  and  prosodic characteristics of the language, and
using these feature measurements to train  a  classifier  to
distinguish  between  languages.   As  a  first step in this
approach, we have built a  multi-language,  neural  network-
based  segmentation and broad classification algorithm using
seven broad phonetic categories.  The algorithm was  trained
and tested on separate sets of speakers of American English,
Japanese, Mandarin Chinese and Tamil.  It currently performs
with an accuracy of 82.3% on the utterances of the test set.
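
A minimal sketch of the feature-extraction step (the category names and
feature choices here are assumptions for illustration, not the published
system): turn a per-frame sequence of broad phonetic labels into a
fixed-length vector of category frequencies and mean segment durations,
which a language classifier could then be trained on.

from itertools import groupby

CATEGORIES = ["vowel", "fricative", "stop", "nasal", "glide",
              "closure", "silence"]          # assumed broad categories

def segment_features(frame_labels):
    segs = [(cat, len(list(run))) for cat, run in groupby(frame_labels)]
    feats = []
    for cat in CATEGORIES:
        durs = [d for c, d in segs if c == cat]
        feats.append(len(durs) / max(len(segs), 1))           # relative frequency
        feats.append(sum(durs) / len(durs) if durs else 0.0)  # mean duration
    return feats

# Toy usage on a short label sequence.
labels = ["silence"]*5 + ["stop"]*2 + ["vowel"]*8 + ["nasal"]*3 + ["vowel"]*6
print(segment_features(labels))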

*************************************************************

               SPOKEN LETTER RECOGNITION

                  Mark Fanty and Ron Cole

                        ABSTRACT
Through the use of neural network  classifiers  and  careful
feature  selection,  we have achieved high-accuracy speaker-
independent  spoken  letter   recognition.    For   isolated
letters, a broad-category segmentation is performed.  Location
of segment boundaries  allows  us  to  measure  features  at

------------------------------

Subject: Adaptive Range Coding - Tech Report Available
From:    Bruce E Rosen <rosen@CS.UCLA.EDU>
Date:    Thu, 14 Feb 91 11:11:46 -0800


    REPORT AVAILABLE ON ADAPTIVE RANGE CODING

At the request of a few people at NIPS, I placed in the connectionists archive
the postscript version of my report describing adaptive range coding.
Below are the abstract and instructions on ftp retrieval.

I would very much welcome any discussion of this subject.  If you want, send 
email to me and I can summarize later for the net.

Thanks
Bruce

 ---------------------------------------------------------------------------

Report DMI-90-4, UCLA Distributed Machine Intelligence Laboratory, January 1991

                Adaptive Range Coding

                        Abstract

This paper examines a class of neuron based learning systems for dynamic
control that rely on adaptive range coding of sensor inputs.  Sensors are
assumed to provide binary coded range vectors that coarsely describe the
system state.  These vectors are input to neuron-like processing
elements.  Output decisions generated by these "neurons" in turn affect
the system state, subsequently producing new inputs.  Reinforcement
signals from the environment are received at various intervals and
evaluated.  The neural weights as well as the range boundaries
determining the output decisions are then altered with the goal of
maximizing future reinforcement from the environment.  Preliminary
experiments show the promise of adapting "neural receptive fields" when
learning dynamical control.  The observed performance with this method
exceeds that of earlier approaches.
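
The following toy sketch shows the flavor of the approach only (the update
rules are invented for illustration and are not the report's algorithm): a
scalar sensor reading is coded by which adaptive range it falls into, a
single "neuron" makes a binary decision from that code, and a reinforcement
signal adjusts both the decision weights and the range boundaries.

import random

boundaries = [0.25, 0.5, 0.75]       # interior boundaries of 4 ranges on [0, 1]
weights = [0.0, 0.0, 0.0, 0.0]       # one weight per range input

def code(x):
    """One-hot range vector for sensor value x."""
    idx = sum(b <= x for b in boundaries)
    return [1.0 if i == idx else 0.0 for i in range(len(boundaries) + 1)]

def decide(x):
    return sum(w * v for w, v in zip(weights, code(x))) > 0.0

def reinforce(x, action, reward, lr=0.1, br=0.01):
    idx = sum(b <= x for b in boundaries)
    sign = 1.0 if action else -1.0
    weights[idx] += lr * reward * sign       # reward strengthens the taken action
    if reward < 0:                           # assumed heuristic: on punishment,
        if idx > 0:                          # pull the active range's boundaries
            boundaries[idx - 1] += br * (x - boundaries[idx - 1])
        if idx < len(boundaries):            # toward the offending state
            boundaries[idx] += br * (x - boundaries[idx])

# Toy environment: acting is rewarded only when x > 0.6.
for _ in range(2000):
    x = random.random()
    a = decide(x)
    reinforce(x, a, 1.0 if a == (x > 0.6) else -1.0)
print([round(b, 2) for b in boundaries], [round(w, 2) for w in weights])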


 -----------------------------------------------------------------------

To obtain copies of the postscript file, please use Jordan Pollack's service:

Example:
unix> ftp cheops.cis.ohio-state.edu          # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): <ret>
ftp> cd pub/neuroprose
ftp> binary
ftp> get
(remote-file) rosen.adaptrange.ps.Z
(local-file) rosen.adaptrange.ps.Z
ftp> quit
unix> uncompress rosen.adaptrange.ps.Z
unix> lpr -P(your_local_postscript_printer) rosen.adaptrange.ps

 ----------------------------------------------------------------------------
If you have any difficulties with the above, please send e-mail to
rosen@cs.ucla.edu.   DO NOT "reply" to this message, please.

------------------------------

Subject: TR available: Yet another ANN/HMM hybrid.
From:    Yoshua BENGIO <yoshua@homer.cs.mcgill.ca>
Date:    Sun, 17 Feb 91 21:20:42 -0500


The following technical report is now available by ftp from neuroprose:


Yoshua Bengio, Renato De Mori, Giovanni Flammia, and Ralf Kompe (1990),
"Global Optimization of a Neural Network - Hidden Markov Model Hybrid",
Technical Report TR-SOCS-90.22, December 1990, School of Computer
Science, McGill University.

  
Abstract:


Global Optimization of a Neural Network - Hidden Markov Model Hybrid

   Yoshua Bengio, Renato De Mori, Giovanni Flammia, Ralf Kompe

                    TR-SOCS-90.22, December 1990



In this paper a method for integrating Artificial Neural Networks (ANN)
with Hidden Markov Models (HMM) is proposed and evaluated. ANNs are
well suited to phonetic classification, whereas HMMs have been
proven successful at modeling the temporal structure of the speech
signal. In the approach described here, the ANN outputs constitute the
sequence of observation vectors for the HMM. An algorithm is proposed for
global optimization of all the parameters.  An incremental design method
is described in which specialized networks are integrated into the
recognition system in order to improve its performance. Results on
speaker-independent recognition experiments using this integrated ANN-HMM
system on the TIMIT continuous speech database are reported.
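
A minimal sketch of the pipeline shape only (toy sizes and random weights;
this is not the report's system and omits the global-optimization step): a
small ANN maps each speech frame to an output vector, and those vectors are
scored as the HMM's observation sequence by the forward algorithm.

import numpy as np

rng = np.random.default_rng(1)

# Toy ANN: one hidden layer, random weights standing in for a trained net.
W1, W2 = rng.normal(size=(10, 6)), rng.normal(size=(6, 3))
def ann_outputs(frames):                       # frames: (T, 10)
    return np.tanh(np.tanh(frames @ W1) @ W2)  # (T, 3) observation vectors

# Toy 2-state HMM with Gaussian emissions over the ANN output space.
logA = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))       # log transition matrix
means = rng.normal(size=(2, 3))                          # per-state means
logpi = np.log(np.array([0.6, 0.4]))                     # log initial probs

def log_emission(obs):                         # unit-variance Gaussian log-densities
    return -0.5 * ((obs[:, None, :] - means) ** 2).sum(-1)   # (T, 2)

def log_likelihood(frames):                    # HMM forward algorithm in log space
    logb = log_emission(ann_outputs(frames))
    alpha = logpi + logb[0]
    for t in range(1, len(logb)):
        alpha = logb[t] + np.logaddexp.reduce(alpha[:, None] + logA, axis=0)
    return np.logaddexp.reduce(alpha)

print(log_likelihood(rng.normal(size=(20, 10))))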
 

 ---------------------------------------------------------------------------
Copies of the postscript file bengio.hybrid.ps.Z may be obtained from the
pub/neuroprose directory in cheops.cis.ohio-state.edu. Either use the Getps
script or do this:

unix-1> ftp cheops.cis.ohio-state.edu          # (or ftp 128.146.8.62)
Connected to cheops.cis.ohio-state.edu.
Name (cheops.cis.ohio-state.edu:): anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
ftp> binary
ftp> get bengio.hybrid.ps.Z
ftp> quit
unix-2> uncompress bengio.hybrid.ps.Z
unix-3> lpr -P(your_local_postscript_printer) bengio.hybrid.ps

Or, order a hardcopy by sending your physical mail address to
yoshua@cs.mcgill.ca, mentioning Technical Report TR-SOCS-90.22. 
PLEASE do this only if you cannot use the ftp method described above.



------------------------------

Subject: header for TR on ANN/HMM hybrid in neuroprose
From:    Yoshua BENGIO <yoshua@homer.cs.mcgill.ca>
Date:    Tue, 19 Feb 91 14:33:08 -0500


The following technical report available by ftp from neuroprose was
recently advertised:


Yoshua Bengio, Renato De Mori, Giovanni Flammia, and Ralf Kompe (1990),
"Global Optimization of a Neural Network - Hidden Markov Model Hybrid",
Technical Report TR-SOCS-90.22, December 1990, School of Computer Science,
McGill University.

However, it was not mentioned that the front pages of the TR are in

bengio.hybrid_header.ps.Z

whereas the paper itself is in:

bengio.hybrid.ps.Z


Sorry for the inconvenience,

Yoshua Bengio
School of Computer Science, McGill University



------------------------------

Subject: Tech Report Available in Neuroprose
From:    Mark Plutowski <pluto@cs.UCSD.EDU>
Date:    Tue, 19 Feb 91 12:21:17 -0800


[[ Editor's Note: Readers, if you want a copy of this paper mailed to
you, BE SURE to include the US$5.00 in your request.  It costs real money
to make photocopies and send papers by surface mail -- especially
overseas.  -PM ]]


The following report has been placed in the neuroprose archives at 
Ohio State University:

                     UCSD CSE Technical Report No. CS91-180

                     Active selection of training examples 
                for network learning in noiseless environments.  

                                Mark Plutowski 
          Department of Computer Science and Engineering, UCSD,  and

                                Halbert White
      Institute for Neural Computation and Department of Economics, UCSD.


                                   Abstract:

        We derive a method for {\sl actively selecting} examples to be
        used in estimating an unknown mapping with a multilayer feedforward 
        network architecture.  Active selection chooses from among a set of 
        available examples an example which, when added to the previous set 
        of training examples and learned, maximizes the decrement of network 
        error over the input space.  New examples are chosen according to 
        network performance on previous training examples.  In practice, this 
        amounts to incrementally growing the training set as necessary to 
        achieve the desired level of accuracy.

        The objective is to minimize the data requirement of learning.
        Towards this end, we choose a general criterion for selecting training 
        examples that works well in conjunction with the criterion used for 
        learning, here, least squares.  Examples are chosen to minimize 
        Integrated Mean Square Error (IMSE). IMSE embodies the effects of bias
        (misspecification of the network model) and variance (sampling variation
        due to noise).  We consider a special case of IMSE, Integrated Squared 
        Bias (ISB), to derive a selection criterion ($\Delta ISB$) which we
        maximize to select new training examples.  $\Delta ISB$ is applicable 
        whenever sampling variation due to noise can be ignored. We conclude 
        with graphical illustrations of the method, and demonstrate its use 
        during network training. 
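
As a crude illustration of the selection loop only (a brute-force stand-in
using a toy polynomial learner, not the paper's $\Delta ISB$ criterion):
grow the training set by repeatedly adding the candidate input whose
inclusion most reduces the fitted model's squared error integrated over the
input space.

import numpy as np

target = lambda x: np.sin(3 * x)                 # the unknown mapping to learn

def integrated_error(train_x):
    # Stand-in learner: least-squares cubic fit; error averaged over a grid.
    coeffs = np.polyfit(train_x, target(train_x), deg=3)
    grid = np.linspace(-1, 1, 200)
    return np.mean((np.polyval(coeffs, grid) - target(grid)) ** 2)

grid_pts = list(np.linspace(-1, 1, 21))
train = [grid_pts[0], grid_pts[7], grid_pts[14], grid_pts[20]]   # initial set
candidates = [x for x in grid_pts if x not in train]

for _ in range(6):                               # incrementally grow the set
    errs = [integrated_error(np.array(train + [c])) for c in candidates]
    train.append(candidates.pop(int(np.argmin(errs))))

print("selected inputs:", [round(float(x), 2) for x in sorted(train)])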


=-=-=-=-=-=-=-=-=-=-=-=-= How to obtain a copy -=-=-=-=-=-=-=-=-=-=-=-=-=-=

Copies may be obtained by 

a) FTP directly from the Neuroprose directory, or
b) by land mail from the CSE dept. at UCSD.


a) via FTP:

To obtain a copy from Neuroprose, either use the "getps" program, or
ftp the file as follows:

% ftp cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu.
220 cheops.cis.ohio-state.edu FTP server (Version 5.49 Tue May 9 14:01:04 EDT 1989) ready.
Name (cheops.cis.ohio-state.edu:your-ident): anonymous
331 Guest login ok, send ident as password.
Password: your-ident
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> get plutowski.active.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for plutowski.active.ps.Z (348443 bytes).
226 Transfer complete.
local: plutowski.active.ps.Z remote: plutowski.active.ps.Z
348443 bytes received in 44 seconds (7.2 Kbytes/s)
ftp> quit
% uncompress plutowski.active.ps.Z
% lpr -P<printer-name> plutowski.active.ps


b) via postal mail:

Requests for hardcopies may be sent to:

        Kay Hutcheson
        CSE Department, 0114
        UCSD
        La Jolla, CA 92093-0114

and enclose a check for $5.00 payable to "UC Regents."
The report number is:  Technical Report No. CS91-180

------------------------------

Subject: New TR Available
From:    Bill Hart <whart@cs.UCSD.EDU>
Date:    Tue, 19 Feb 91 22:26:06 -0800

[[ Editor's Note: Again, please note the request of US$5.00 for "hard
copy" of this paper. -PM ]]

The following TR has been placed in the neuroprose archives at 
Ohio State University.

 --Bill

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


                     UCSD CSE Technical Report No. CS91-180

                     Active selection of training examples 
                for network learning in noiseless environments.  

                                Mark Plutowski 
                Department of Computer Science and Engineering, UCSD,  and

                                Halbert White
           Institute for Neural Computation and Department of Economics, UCSD.


                                   Abstract:

We derive a method for {\sl actively selecting} examples to be used in
estimating an unknown mapping with a multilayer feedforward network
architecture.  Active selection chooses from among a set of available
examples an example which, when added to the previous set of training
examples and learned, maximizes the decrement of network error over the
input space.  New examples are chosen according to network performance on
previous training examples.  In practice, this amounts to incrementally
growing the training set as necessary to achieve the desired level of
accuracy.

The objective is to minimize the data requirement of learning.  Towards
this end, we choose a general criterion for selecting training examples
that works well in conjunction with the criterion used for learning,
here, least squares.  Examples are chosen to minimize Integrated Mean
Square Error (IMSE). IMSE embodies the effects of bias (misspecification
of the network model) and variance (sampling variation due to noise).  We
consider a special case of IMSE, Integrated Squared Bias (ISB), to derive
a selection criterion ($\Delta ISB$) which we maximize to select new
training examples.  $\Delta ISB$ is applicable whenever sampling
variation due to noise can be ignored. We conclude with graphical
illustrations of the method, and demonstrate its use during network
training.



=-=-=-=-=-=-=-=-=-=-=-=-= How to obtain a copy -=-=-=-=-=-=-=-=-=-=-=-=-=-=

a) via FTP:

To obtain a copy from Neuroprose, either use the "getps" program, or
ftp the file as follows:

% ftp cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu.
220 cheops.cis.ohio-state.edu FTP server (Version 5.49 Tue May 9 14:01:04 EDT 1989) ready.
Name (cheops.cis.ohio-state.edu:your-ident): anonymous
331 Guest login ok, send ident as password.
Password: your-ident
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> get plutowski.active.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for plutowski.active.ps.Z (325222 bytes).
226 Transfer complete.
local: plutowski.active.ps.Z remote: plutowski.active.ps.Z
325222 bytes received in 44 seconds (7.2 Kbytes/s)
ftp> quit
% uncompress plutowski.active.ps.Z
% lpr -P<printer-name> plutowski.active.ps


b) via postal mail:

Requests for hardcopies may be sent to:

        Kay Hutcheson
        CSE Department, 0114
        UCSD
        La Jolla, CA 92093-0114

and enclose a check for $5.00 payable to "UC Regents."
The report number is:  Technical Report No. CS91-180



------------------------------

Subject: Manuscript available on BackPercolation
From:    mgj@cup.portal.com
Date:    Wed, 20 Feb 91 00:36:28 -0800

[[ Editor's Note: Perhaps some reader interested in the formal aspects of
this paper will help the author in constructing the proper mathematical
analyses of the algorithm.  Any takers?  Please note the request for
US$10.00 to cover printing and mailing cost. -PM ]]


                        New Manuscript Available 


                              BACKPERCOLATION : 
  Assigning Localized Activation Error in Feedforward Perceptron Networks

                               Mark  Jurik
                       Jurik Research & Consulting
                       PO 2379, Aptos, Calif.  USA


                                Abstract

This work introduces a new algorithm, BackPercolation, for training
multilayered feedforward perceptron networks.  It assigns each processing
element (PE) its own activation error, thereby giving each PE its own
error surface.  In contrast, Backpropagation decreases global error by
descending along gradients of the global error surface.  Experimental
results reveal that weight adjustments which reduce local activation
errors permit the system to "tunnel through" the global error surface,
increasing the convergence rate and the likelihood of attaining an optimal
minimum.

****************

In early 1990, over 30 researchers had experimented with a preliminary
version of Backpercolation (Perc).  Their feedback motivated the
development of a more sophisticated version that includes the following
options: learn rate feedback control and a "kick-starter" for weight
initialization.

Although Perc uses the gradient information that is backpropagated
through a multilayered network, Perc does not adjust weights in
proportion to -dE/dw, where E is the global output error.  This enables
Perc to train networks with numerous hidden layers and still avoid the
instability associated with the large dynamic variance in the gradients.
Note the performance in the 6-6-6-1 (two hidden layer) configuration in
the table below.

Perc uses local computations that are slightly more complex than
Backpropagation, and does NOT employ any of the following techniques:

  - momentum,
  - matrix inversion or pseudo-inversion (computationally non-local), 
  - nested subroutines (such as LineSearch),
  - architecture reconfiguration (troublesome with non-stationary modeling).

Yet despite these simplifying constraints, Perc trains very quickly.  For
example, it solves the 12-2-12 encoder problem in less than 600 epochs
and the 6-6-6-1 parity problem in only 93 epochs.  The table below
compares Perc's speed against other high-performance nets on some popular
simple tests.  For each test, the opposing paradigm listed is the one
that the literature (listed below) revealed to be the fastest one for
that test. This way, Perc is being compared to the best of the best (or
something like that).

The "D" symbol in the configuration column below denotes that both Perc
and the compared paradigm included direct connections between the input
and output nodes.  The "G" symbol denotes that the hidden layer cells in
both Perc and the other paradigm used Gaussian rather than sigmoidal
thresholding.  The Epoch Count is the number of training cycles required
to get the output of every cell, for every pattern, to be within the
error tolerance as specified in the respective published sources.  Perc's
epoch count was averaged over 50 trials per problem task.  Only
successfully converging episodes were included in the average, but as
shown below, Perc's probability of success was frequently 100%.  Also
keep in mind that many published articles do not mention the probability
of success of convergence.  (I wonder why?)

*****************************************************************************
PROBLEM                 ERROR    COMPARING    PARADIGM'S     PERC'S  PERC'S
 TYPE       CONFIG      TOLER.   PARADIGM         EPOCHS     EPOCHS  % SUCC
 -------------------------------------------------------------------------
Parity      2-2-1        0.1    CONJUGATE GRADIENT     73       8    100
            3-3-1        0.1    GRAM-SCHMIDT          247      18    100
            2-1-1 (D)    0.1    CASCADE-CORR (D,G)     24       8    100
            3-1-1 (D)    0.1    CASCADE-CORR (D,G)     32       6    100
            4-4-1 (D,G)  0.1    CASCADE-CORR (D,G)     66       5    100
            8-8-1 (D,G)  0.1    CASCADE-CORR (D,G)    357      28    100
            6-6-6-1       *     SOLIS/WETS OPTIMIZE  5713      93     80

Encoder/    8-2-8        0.4    QUICKPROP             103      88    100
            8-2-8        0.1    **                            194    100
decoder     8-3-8        0.4    QUICKPROP              22      22    100
            8-3-8        0.1    CONJUGATE GRADIENT     68      28    100
            10-5-10      0.4    QUICKPROP              14      15    100
            10-5-10      0.1    CONJUGATE GRADIENT     71      19    100
            12-2-12      0.4    **                            581    100
Linear            
Channel     10-10-10     0.1    SUPER-SAB             125       8    100

Multiplex   6-6-1        0.1    DELTA-BAR-DELTA       137      24    100

Symmetry    6-2-1        0.3    GAIN BACK-PROP        198     124     84 

* In the 6-6-6-1 task, training stopped when the squared error, summed across 
all 64 training patterns, totaled less than 0.0002.
** I am unaware of any published result for this test.

Data for the above comparisons were obtained from the following publications:

Gain Backprop           --- J. Complex Systems, Feb 1990, p 51
Delta_bar_delta         --- J. Neural Networks, #4, 1988, p 295
Quickprop               --- Proceedings 1988 Connectionist Models, p 38
Super-SAB               --- J. Neural Networks, #5, 1990, p 561
Conjugate Gradient      --- Advances in Neural Info Proc Sys 1, 1988, p 40
Cascade Correlation     --- Advances in Neural Info Proc Sys 2, 1989, p 524
Gram_Schmidt            --- Neural Computation, #1, 1990, p 116
Solis/Wets Optimization --- J. Neural Networks, #5, 1989, p 367

As an option, the learn-rate parameter can be initially set to zero and
it will automatically adjust as learning proceeds.  This option frees the
user from needing to find the optimal learn-rate; however, it has one
major drawback: the algorithm needs to access information on all the
cells, which is a non-local process.

With Perc, the user is still required to determine these things: 
   1. number of hidden layers, 
   2. number of cells per hidden layer, 
   3. the scale of the bounds on the initialized weights.

Journal reviewers of an earlier manuscript on Perc have asked for a
mathematical analysis of the algorithm's stability and convergence
properties.  Unfortunately, I am not aware of any proper set of
analytical tools for this kind of nonlinear behavior.  As I see it, the
literature all too frequently applies linear analysis techniques that
require too many simplifying assumptions about the network.  As a result,
many conclusions thought to be gospel one year get discarded the next.
Thus for the sake of keeping things clean, this manuscript will simply
present the Perc algorithm, some derivation and lots of experimental
results.  You will need to verify Perc's utility for your own particular
task(s).

Speaking of validation, Perc has recently been licensed for inclusion
into BrainCel, a commercially available set of macros which give the
Excel spreadsheet program on the PC the capability to train a neural net
on spreadsheet data.  I have been informed that Perc has trained a neural
net to predict the commodities price of leather, as well as estimate from
proprietary demographic data the likelihood that a prospective customer
will agree to buy a luxury cruise from a salesman over the phone.  Now a
computer can prepare a list of likely prospects for teleoperators and
thereby cut down on useless calls that only end up irritating many
people.  The spreadsheet/neural-net process of BrainCel is so automatic
that the user does not even need to know what a neural network is.  Call
203-562-7335 for details.

A revised manuscript on BACKPERCOLATION is now available.  It includes
a hardcopy of the source code used to solve the 12-2-12
encoder/decoder problem.  You will have enough documentation to code your
own version.  For one copy, please enclose US$10 (or foreign currency
equivalent) to cover printing, shipping and handling expenses.  Sorry,
this will not be available via ftp.

PS. All researchers who have tested Perc and helped develop its upgrade
during this past year will automatically be receiving a free copy of the
manuscript and source code.

 ---------------------------------------------------------------
           JURIK RESEARCH & CONSULTING
               PO BOX 2379 
          APTOS, CALIFORNIA 95001
 ---------------------------------------------------------------

------------------------------

Subject: AI*IA Call for papers
From:    CHELLA%IPACRES.BITNET@ICNUCEVM.CNUCE.CNR.IT
Date:    Thu, 07 Feb 91 14:39:00 +0000

*******************************************************************************
*                                                                             *
*                    C A L L   F O R   P A P E R S                            *
*                                                                             *
*                               A I * I A                                     *
*                                                                             *
*                                                                             *
*               S E C O N D  S C I E N T I F I C  C O N G R E S S             *
*                                                                             *
*               A N D  I N D U S T R I A L   E X H I B I T I O N              *
*                                                                             *
*******************************************************************************


Scientific subjects

 - Architectures, languages and environments
 - Knowledge representation and automated reasoning
 - Problem solving and planning
 - Knowledge acquisition and automatic learning
 - Cognitive models
 - Natural language
 - Perception and robotics
 - Industrial applications of artificial intelligence


Call for Papers

        AI*IA (the Italian Association for Artificial Intelligence) was
founded in 1988 to promote the development of study and research in
artificial intelligence and its applications. To this end,
among a variety of activities, AI*IA organizes a National Congress every
other year.
        The first AI*IA Congress took place in Trento in November 1989
and featured the presentation of more than 40 scientific papers, the
exhibition of a number of industrial AI systems and products, and the
attendance of over 350 participants.
        The second Congress, open to international participation, will
be held in Palermo and will focus on high quality scientific and technical
results as well as on innovative industrial applications.
        Special sessions on Industrial Experiences are envisaged.
        During these sessions companies operating in the AI field will
have an opportunity to illustrate their activities and to share their
experiences.


Papers

        Papers (5000 words max) must be in English. Authors must send 4
copies, including a summary (about 200 words) and key words, and should
indicate which scientific subject the paper addresses.
        Papers must present original research and results or innovative
industrial applications and must not have been previously published.
Accepted papers will be published in a special volume of "Lecture Notes
in Artificial Intelligence" (Springer-Verlag).
        Italian and English are the official languages of the Congress.


Deadlines

        Papers must arrive by April 10, 1991.
        Authors will be notified of acceptance by May 31, 1991
and must send final camera-ready versions by June 30, 1991.

        Papers should be sent to the following address:

        Prof. Salvatore Gaglio
        CRES
        Centro per la Ricerca Elettronica in Sicilia
        Viale Regione Siciliana, 49
        90046 MONREALE (Palermo)


Industrial Experiences

        Companies interested in presenting their activities during the
sessions on Industrial Experiences must submit their request by May 15, 1991
to the following address:

        Prof. Filippo Sorbello
        Dipartimento di Ingegneria Elettrica
        Viale delle Scienze - 90128 PALERMO
        Tel.+39-91-595735
        Telefax +39-91-488452

        Requests should include some documentation on the experiences to be
presented (4 pages A4 format max) so that their pertinence to the scientific
subjects of the Congress can be evaluated.


Program Committee

Chairman
S. Gaglio (Universita' di Palermo)

G. Berini (DIGITAL)                  L. Carlucci Aiello (Un. Roma La Sapienza)
S. Cerri (DIDAEL)                    M. Del Canto (ELSAG)
G. Ferrari (Universita' di Pisa)     G. Guida (Universita' di Udine)
F. Lauria (Universita' di Napoli)    L. Lesmo (Universita' di Torino)
E. Pagello (Universita' di Padova)   D. Parisi (CNR)
L. Saitta (Universita' di Torino)    G. Semeraro (CSATA)
R. Serra (DIDAEL)                    L. Spampinato (QUINARY)
L. Stringa (IRST)                    P. Torasso (Universita' di Torino)
R. Zaccaria (Universita' di Genova)


Local Organization

        Lia Giangreco
        Ina Paladino
        Giusi Romano
        CRES
        Centro per la Ricerca Elettronica in Sicilia
        Viale Regione Siciliana, 49
        90046 MONREALE (Palermo)
        Tel.+39-91-6406192/6406197/6404501
        Telefax +39-91-6406200


Logistic Arrangements

        GEA Congressi S.r.l.
        Via C.Colombo, 24
        90142 PALERMO
        Tel. +39-91-6373418
        Telefax +39-91-6371625
        Telex 910070 ADELFI


Scientific Secretariat

        E.Ardizzone, F.Sorbello
        Dipartimento di Ingegneria Elettrica
        Universita' di Palermo
        Viale delle Scienze
        90128 PALERMO
        Tel. +39-91-595735/489856/421639
        Telefax +39-91-488452

------------------------------

End of Neuron Digest [Volume 7 Issue 10]
****************************************