[comp.ai.neural-nets] abstract learning

wuly@vax5.cit.cornell.edu (12/17/90)

Ok, quick question:
       I recently (in fact, not quite yet) implemented the NN algorithm from
The C Users Journal, April 1989, on a VLSI chip as part of a VLSI design class.
It is the standard (I think) feedforward, back-propagating, generalized delta
rule type algorithm with momentum, etc.  It is all integer (fixed point) for
speed (a rough sketch of the kind of arithmetic I mean is below).  When the
chip is fabbed (assuming I didn't screw up) I should have a fairly speedy
engine for NNs.
       I know the basics of how they work.  What I don't know is whether, or
when, any kind of "general" or "abstract" learning is possible - in the sense
that if I teach a net to identify the edges in a large set of pictures and
then give it a completely different picture, will it find edges?  How big an
NN is needed?  How much training time?  How does one "train" for "abstract"
learning?  Are these ridiculous questions?
       THANX                           JESSE
wuly@vax5.cit.cornell.edu