[comp.ai.neural-nets] backprop training with noise

ksr1492@cec1.wustl.edu (Kevin Scott Ruland) (02/14/90)

  I heard that Wasserman had tried training feedforward nets by backprop
with a random (Cauchy, I think) vector added to the weights.  I saw a single-
page report from a proceedings saying Wasserman had tried this with some
success, but it didn't list numerical results.  I tried this kind of training
on a 3-4-1 net on the 3-d XOR problem with good convergence results
(approx. 95% of all nets trained this way converged, compared to <15%
when trained without the added noise).  If anyone has done something like
this, or knows of some references, please drop me a line.
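  Here's a rough sketch of the kind of thing I mean, written in Python/NumPy.
The learning rate, noise scale, and annealing schedule are my own guesses, not
Wasserman's actual procedure:

import numpy as np

rng = np.random.default_rng(0)

# 3-d XOR (3-bit parity): target is 1 when an odd number of inputs is on.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

def sigmoid(z):
    # clipped to avoid overflow when a noise kick makes weights large
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

def train(noise_scale=0.01, lr=0.5, epochs=5000):
    # 3-4-1 architecture: one hidden layer of 4 sigmoid units
    W1 = rng.normal(0, 0.5, (3, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
    for epoch in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y
        # backward pass: squared-error gradient through the sigmoids
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
        # Cauchy noise added to the weights after each update; the scale
        # is annealed toward zero (my guess) so the net can settle
        s = noise_scale / (1 + epoch / 500.0)
        W1 += s * rng.standard_cauchy(W1.shape)
        W2 += s * rng.standard_cauchy(W2.shape)
    # did the net converge to the correct parity function?
    return ((out > 0.5) == y).all()

converged = sum(train() for _ in range(20))
print(f"{converged}/20 runs converged")

  The point of Cauchy rather than Gaussian noise, as I understand it, is the
heavy tails: every so often a weight gets a large kick that can bounce the net
out of a local minimum, which is presumably where the improved convergence
comes from.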

kevin

kevin@rodin.wustl.edu