[comp.ai.neural-nets] fault-tolerance of feedforward networks

omlinc@cs.rpi.edu (Christian Omlin) (04/26/91)

Hi!

I am running simulations with backprop networks used as classifiers.
I am interested in the sensitivity of the network to perturbations
in the weights. My experiments indicate that performance degrades
more rapidly when the weights from the input to the hidden layer are
perturbed than when the weights from the hidden to the output layer
are perturbed. This suggests that, for my experiments, the shape
of the decision regions is largely determined by the first hidden
layer. Are there any references (simulations, etc.) confirming this
behavior?
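For concreteness, here is a toy version of the kind of experiment I
mean. The network, the XOR-style task, and the noise levels are all
illustrative choices of mine, not my actual simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked weights for a 2-2-1 net that solves XOR
# (hidden unit 1 ~ OR, hidden unit 2 ~ AND).
W1 = np.array([[6.0, 6.0],
               [6.0, 6.0]])
b1 = np.array([-3.0, -9.0])
W2 = np.array([6.0, -6.0])
b2 = -3.0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def accuracy(W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return np.mean((out > 0.5) == y)

def perturbed_accuracy(layer, sigma, trials=200):
    # Add zero-mean Gaussian noise to one layer's weights and
    # average classification accuracy over many noise draws.
    accs = []
    for _ in range(trials):
        if layer == "input-hidden":
            acc = accuracy(W1 + rng.normal(0, sigma, W1.shape), b1, W2, b2)
        else:
            acc = accuracy(W1, b1, W2 + rng.normal(0, sigma, W2.shape), b2)
        accs.append(acc)
    return np.mean(accs)

for sigma in (0.5, 1.0, 2.0):
    print(sigma,
          perturbed_accuracy("input-hidden", sigma),
          perturbed_accuracy("hidden-output", sigma))
```

Comparing the two columns for each noise level shows how sharply
each layer's weights determine the decision regions on this toy task.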

Thanks.

Christian

----------------------------------------------------------------------------
Christian W. Omlin			

office:                                 home:
Computer Science Department             Foxberry Farm
Amos Eaton 119                          Box 332, Route #3
Rensselaer Polytechnic Institute        Averill Park, NY 12018
Troy, NY 12180 USA                      (518) 766-5790
(518) 276-2930                        

e-mail: omlinc@turing.cs.rpi.edu
----------------------------------------------------------------------------

aam9n@helga2.acc.Virginia.EDU (04/29/91)

In article <j+wg+7.@rpi.edu> omlinc@cs.rpi.edu (Christian Omlin) writes:
>Hi!
>
>I am running simulations with backprop networks used as classifiers.
>I am interested in the sensitivity of the network to perturbations
>in the weights. My experiments indicate that performance degrades
>more rapidly when the weights from the input to the hidden layer are
>perturbed than when the weights from the hidden to the output layer
>are perturbed. This suggests that, for my experiments, the shape
>of the decision regions is largely determined by the first hidden
>layer. Are there any references (simulations, etc.) confirming this
>behavior?

The most directly relevant paper for this would be:

M. Stevenson, R. Winter & B. Widrow, "Sensitivity of Feedforward Neural
Networks to Weight Errors", IEEE Trans. on Neural Networks, vol. 1, no. 1,
pp. 71-80, March 1990.

I am currently working on the robustness of feedforward nets with
real-valued outputs, but I am looking at perturbations in neuron
outputs rather than weights. I have done what I think is a fairly
thorough literature search, but would appreciate any references,
pointers, etc., that might address this issue. If there is interest,
I will summarize to the net. Thanks.
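As a concrete illustration of what I mean by output perturbations:
noise is injected into the hidden-unit activations, not the weights,
and the deviation of the real-valued output is measured. The random
network and noise levels below are arbitrary, not my actual
experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random fixed 3-5-1 network with a linear (real-valued) output.
W1 = rng.normal(0, 1, (3, 5))
W2 = rng.normal(0, 1, 5)
X = rng.normal(0, 1, (100, 3))

def forward(X, hidden_noise=0.0):
    h = sigmoid(X @ W1)
    # Perturb the neuron outputs themselves, leaving weights intact.
    h = h + rng.normal(0, hidden_noise, h.shape)
    return h @ W2

baseline = forward(X)
for sigma in (0.01, 0.05, 0.1):
    rms = np.sqrt(np.mean((forward(X, sigma) - baseline) ** 2))
    print(sigma, rms)
```

The RMS deviation of the output as a function of the activation noise
level is one simple robustness measure for real-valued-output nets.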

Any takers for a discussion of neural net fault-tolerance?

Regards,

Ali Minai
University of Virginia
aam9n@Virginia.EDU