[comp.ai.neural-nets] Commentary Feedback

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (09/06/89)

Previously, I wrote:

>>  "Commentary" feedback can apply to the entire hypothesis formed 
>>  by the device, not just its performance on a particular input.   


In article <11384@boulder.Colorado.EDU> bill@synapse.Colorado.EDU () 
writes:
>
>  In principle, if you can come up with the training information,
>you can use back-prop (or something very much like it) to apply it.
>
>  Back-prop is essentially gradient descent for an error function.
>Usually the error function is either the total squared error in
>the output layer for a randomly chosen input (this is "online" back-
>prop), or the average total squared error for a set of inputs (which
>is "batch-mode" back-prop).  But as far as the mathematics is concerned,
>the error function can be anything you please.
>

You make a good point: the commentary feedback can be something
easily derived from the architecture, rather than complex
information which is itself difficult to obtain, much less to
use effectively.
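
To make the point concrete, here is a rough sketch of gradient
descent on a pluggable error function (my own toy example: a single
linear unit, a made-up two-pattern training set, and a crude
finite-difference gradient, so take the details loosely).  As long
as E(w) is differentiable, the same descent loop applies unchanged;
back-prop just computes the same gradient analytically.

    # Minimal sketch (hypothetical example, not from the article above):
    # gradient descent on an arbitrary differentiable error function E(w).
    # Any "commentary" that can be folded into a differentiable E(w)
    # can, in principle, be trained this way.

    def numeric_gradient(E, w, eps=1e-6):
        # central-difference estimate of dE/dw_i for each weight
        grad = []
        for i in range(len(w)):
            w_plus  = list(w); w_plus[i]  += eps
            w_minus = list(w); w_minus[i] -= eps
            grad.append((E(w_plus) - E(w_minus)) / (2.0 * eps))
        return grad

    def gradient_descent(E, w, rate=0.1, steps=500):
        for _ in range(steps):
            g = numeric_gradient(E, w)
            w = [wi - rate * gi for wi, gi in zip(w, g)]
        return w

    # Example: sum-squared error of a single linear unit on two patterns.
    patterns = [((1.0, 0.0), 1.0), ((0.0, 1.0), -1.0)]

    def sse(w):
        return sum((t - sum(wi * xi for wi, xi in zip(w, x))) ** 2
                   for (x, t) in patterns)

    print(gradient_descent(sse, [0.0, 0.0]))   # approaches [1.0, -1.0]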

However, applying gradient descent to some forms of commentary
may be difficult, since backpropagation makes "smoothness"
(differentiability) assumptions about the error surface.  I believe
this might impose restrictions on the type of error feedback which
can be effectively utilized by backpropagation-style credit
apportionment.

Consider a network being trained according to two criteria:
1) the correctness of its hypothesis, and 2) the number of weights
it uses.

Problem: if the network can go from being wrong to being correct
by eliminating a single weight, this corresponds to a discontinuous
jump in the error surface, violating the smoothness criterion.
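
To see the problem concretely, here is one hypothetical form of such
a combined error (the toy data-fit term, the tolerance, and the
penalty strength LAMBDA are all my own inventions, chosen only for
illustration):

    # E(w) = data_error(w) + LAMBDA * (number of nonzero weights)
    # The weight-count term is piecewise constant: its derivative is
    # zero wherever it exists, and the error jumps by LAMBDA whenever
    # a weight reaches exactly zero, so gradient descent gets no
    # useful signal from the complexity commentary.

    LAMBDA = 0.5

    def data_error(w):
        # stand-in for the usual sum-squared error on some training set
        return (1.0 - w[0]) ** 2 + (-1.0 - w[1]) ** 2

    def nonzero_count(w, tol=1e-8):
        return sum(1 for wi in w if abs(wi) > tol)

    def combined_error(w):
        return data_error(w) + LAMBDA * nonzero_count(w)

    print(combined_error([1.0, -1e-6]))  # ~2.0: both weights still counted
    print(combined_error([1.0,  0.0]))   # 1.5: drops by LAMBDA at w[1] == 0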

This could be remedied by making it possible for the network to be
"a little bit wrong" (approximately correct), which amounts to
constraining the type of commentary that can be allowed to that
which does not violate the smoothness criterion.
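
Here is one way such a remedy might look (again my own construction;
the particular soft penalty below is just one differentiable
stand-in for "number of weights", chosen for illustration, and W0
and LAMBDA are made-up constants):

    # Smooth surrogate for the weight count: each weight contributes
    # w^2 / (W0^2 + w^2), which is near 0 for weights much smaller
    # than W0 and near 1 for weights much larger, and is
    # differentiable everywhere.  The combined error now changes
    # smoothly as a weight is driven toward zero, so gradient descent
    # (or ordinary back-prop, with the analytic gradient) can use the
    # complexity commentary directly.

    W0 = 0.1        # scale below which a weight counts as "almost gone"
    LAMBDA = 0.5    # strength of the complexity commentary

    def data_error(w):
        # stand-in for the usual sum-squared error on some training set
        return (1.0 - w[0]) ** 2 + (-1.0 - w[1]) ** 2

    def soft_count(w):
        return sum(wi * wi / (W0 * W0 + wi * wi) for wi in w)

    def smooth_error(w):
        return data_error(w) + LAMBDA * soft_count(w)

    # smooth_error can be handed to the descent routine sketched
    # earlier in this post, since its gradient exists everywhere.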