[comp.ai.neural-nets] Incremental Training in MLP

ilh@sun-of-pooh.mit.edu (I. Lee Hetherington) (12/12/90)

Hello,

Are there any good "incremental" training schemes for multi-layer perceptrons?
By incremental I mean:  once I use a particular training example to perform an
update, I throw it away and never use it again.  I rely on a steady stream of
new training data.  The most obvious thing to me is simply to perform a small
weight update with back-prop after each example (on-line, per-pattern
back-prop), but this is likely to require a huge stream of training data.
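
To make concrete what I mean by the obvious approach, here is a rough sketch
of per-pattern back-prop in which each example is used for exactly one small
gradient step and then thrown away.  This is only an illustration: the layer
sizes, learning rate, and the toy XOR-ish stream are placeholders, and I have
written it in Python/NumPy purely for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 4, 8, 1
    lr = 0.05                              # small step size; placeholder value

    W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
    b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
    b2 = np.zeros(n_out)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def online_update(x, y):
        """One back-prop step on a single (x, y) pair; the pair is then discarded."""
        global W1, b1, W2, b2
        h = sigmoid(W1 @ x + b1)           # hidden activations
        o = sigmoid(W2 @ h + b2)           # output activations
        delta_o = (o - y) * o * (1 - o)    # squared-error delta at the output
        delta_h = (W2.T @ delta_o) * h * (1 - h)
        W2 -= lr * np.outer(delta_o, h)
        b2 -= lr * delta_o
        W1 -= lr * np.outer(delta_h, x)
        b1 -= lr * delta_h

    def stream_of_examples(n=5000):
        """Toy stand-in for the real data stream: XOR of the first two inputs."""
        for _ in range(n):
            x = rng.integers(0, 2, size=n_in).astype(float)
            y = np.array([float(int(x[0]) != int(x[1]))])
            yield x, y

    for x, y in stream_of_examples():      # each example is seen exactly once
        online_update(x, y)

The step size is the whole problem here: it trades off how quickly the net
tracks the stream against how noisy the weights end up, which is exactly why
I suspect this needs so much data.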

Are there any techniques for updating weights using probabilistic methods akin
to Bayesian Learning?  For example, I assume some distribution for each weight.
When I see a new training example, I use this example to update the
distribution for each weight...
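
To sketch the kind of thing I am picturing: keep an independent Gaussian over
each weight and, for every new example, shift its mean and shrink its variance
according to how sensitive the output is to that weight.  The sketch below
does this with a decoupled, extended-Kalman-filter-style update; the noise
terms, network shape, and data stream are all invented for illustration, and
I make no claim that this is the right way to do it.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid = 4, 8
    r = 0.1    # assumed observation-noise variance
    q = 1e-6   # small process noise so variances never collapse to zero

    # Pack all weights into one vector: W1 (n_hid x n_in), b1, W2, b2.
    n_w = n_hid * n_in + n_hid + n_hid + 1
    mu = rng.normal(scale=0.1, size=n_w)   # mean of each weight
    p = np.full(n_w, 1.0)                  # variance of each weight

    def unpack(w):
        i = 0
        W1 = w[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
        b1 = w[i:i + n_hid]; i += n_hid
        W2 = w[i:i + n_hid]; i += n_hid
        b2 = w[i]
        return W1, b1, W2, b2

    def forward_and_jacobian(w, x):
        """Scalar network output and its gradient w.r.t. every weight."""
        W1, b1, W2, b2 = unpack(w)
        h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # hidden sigmoids
        o = W2 @ h + b2                            # linear output unit
        dh = h * (1 - h)
        dW1 = np.outer(W2 * dh, x)                 # d(output)/d(W1)
        jac = np.concatenate([dW1.ravel(), W2 * dh, h, [1.0]])
        return o, jac

    def bayes_like_update(x, y):
        """Update (mu, p) from one example; the example is then discarded."""
        global mu, p
        o, H = forward_and_jacobian(mu, x)
        s = r + np.sum(H * H * p)     # innovation variance
        k = p * H / s                 # per-weight gain
        mu = mu + k * (y - o)         # shift the means toward the data
        p = p * (1.0 - k * H) + q     # shrink the variances

    # Usage: one pass over a toy stream, each example seen exactly once.
    for _ in range(1000):
        x = rng.normal(size=n_in)
        y = np.sin(x[0])              # placeholder target
        bayes_like_update(x, y)

Keeping only a variance per weight (rather than a full covariance) makes the
update cheap, at the cost of ignoring correlations between weights; whether
that approximation is good enough is part of what I am asking about.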

Any references would be greatly appreciated!  (Please email them to me, as I
am an infrequent reader of this group, but also post them if you think they
are of general interest.)

Thanks in advance!

-------------------------------------------------------------------------------
Lee Hetherington
MIT Spoken Language Systems Group
ilh@goldilocks.lcs.mit.edu
-------------------------------------------------------------------------------