[comp.ai.neural-nets] learning principal components

kortge@elaine2.Stanford.EDU (Chris Kortge) (04/14/91)

  I've been working on a neural net learning algorithm which extracts
principal components from a sequence of input patterns (as backprop
and other NN algorithms can do).  It requires, say, a third as many
pattern presentations as optimized versions of other NN algorithms, but
this is at a cost of requiring an order of magnitude more computation per
pattern.  (It learns faster by reducing interference with old knowledge
when a new pattern is learned; I can say more if anyone's interested.)
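
[The post doesn't give the algorithm itself, so as context here is a minimal sketch of the *standard* kind of NN principal-component learner it's being compared against — Sanger's generalized Hebbian algorithm, which reduces to Oja's rule for one unit. The learning rate, dimensions, and demo data below are illustrative choices, not anything from the post.]

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One online update of Sanger's generalized Hebbian algorithm.

    W: (k, d) weight matrix; its rows converge to the top-k principal
       components of the (zero-mean) input stream.
    x: (d,) a single input pattern.
    """
    y = W @ x  # outputs of the k linear units
    # Hebbian term y*x^T, minus a deflation term so unit i only
    # sees variance not already captured by units 1..i
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Demo: recover the dominant direction of a 2-D input stream,
# one pattern presentation at a time.
rng = np.random.default_rng(0)
W = rng.normal(size=(1, 2)) * 0.1
for _ in range(5000):
    x = rng.normal(size=2) * np.array([3.0, 0.3])  # variance mostly on axis 0
    W = gha_step(W, x, lr=0.005)

w = W[0] / np.linalg.norm(W[0])
print(w)  # direction aligns with the high-variance axis (up to sign)
```

Note how each presentation costs only O(kd) work per unit — the point of the post is an algorithm that spends far more computation per pattern in exchange for needing fewer presentations than learners of this type.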

  I've been working under the assumption that there will be situations
where it's good to trade within-pattern processing time for a reduction
in the number of patterns required; one might guess, for example, that a
creature running around in the world can do a lot of processing before
the current "pattern" changes to a significant degree.  But I don't
know of any *existing applications* that require this sort of algorithm.
Does anyone else know of any?  It basically just needs to be a situation
where there is some significant limitation on the rate at which inputs
can be observed (e.g. the inputs aren't all stored in RAM ahead of time).
A possibility might be devices which must adapt to each new user, but I
don't know of any specific instances of this that would fit the bill.
It would really help in writing this up if I had something in the real
world to relate it to.

Thanks for the help,
Chris Kortge
kortge@psych.stanford.edu