[comp.ai.neural-nets] Reasoning Paradigms

petkau@herald.UUCP (Jeff Petkau) (10/14/90)

From article <21128@well.sf.ca.us>, by nagle@well.sf.ca.us (John Nagle):
>      It's also worth bearing in mind that nothing like the backward-
> propagation learning of the NN world has yet been discovered in biology.
> The mechanisms found so far look much more like collections of
> adaptive controllers operating control loops.  However, it should
> be noted that most of the neural structures actually understood are
> either in very simple animals (like slugs) or very close to sensors
> and actuators (as in tactile control), where one would expect
> structures that work like adaptive feedback control systems.
> 					John Nagle

Not entirely true.  In "Memory Storage and Neural Systems" (Scientific
American, July 1989) Daniel Alkon describes how rabbit neurons change in
response to Pavlovian conditioning.  The basic mechanism is: if neuron
A fires and nearby neuron B happens to fire half a second later, a link
will gradually form such that the firing of B is triggered by the firing
of A, even in the absence of whatever originally triggered B.  Although
this isn't quite the same as back-propagation, in simulated neural nets
it actually seems to work far better (learning times are greatly reduced),
and it has the added advantage that no knowledge of the final "goal" is
required.  It also corresponds (in my mind at least) very closely to the
observed behaviour of living things (mostly my cat).
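For the curious, here's a rough sketch in C of the sort of timing-based
rule I mean (my own toy formulation, not Alkon's actual model; the
learning rate and threshold are numbers I made up):

/* Toy sketch of the timing-based association rule described above:
 * whenever unit B fires shortly after unit A, the A -> B weight is
 * strengthened, until A alone is enough to make B fire.            */

#include <stdio.h>

#define RATE      0.2     /* learning rate (arbitrary choice)        */
#define THRESHOLD 0.5     /* B fires when its input exceeds this     */

int main(void)
{
    double w = 0.0;       /* strength of the learned A -> B link     */
    int trial;

    for (trial = 1; trial <= 10; trial++) {
        /* On each trial A fires, and half a second later something  */
        /* else (the unconditioned stimulus) makes B fire anyway.    */
        int a_fired = 1;
        int b_fired = 1;

        if (a_fired && b_fired)    /* B followed A: strengthen link  */
            w += RATE * (1.0 - w);

        printf("trial %2d: w = %.3f, A alone %s B\n",
               trial, w,
               (w > THRESHOLD) ? "now triggers" : "does not yet trigger");
    }
    return 0;
}

Run it and you'll see the A -> B weight creep up each trial until A by
itself is enough to fire B, with no error signal propagated from anywhere.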

As a basic example of how such a net can be trained, I'll use a character
recognizer.  Start with a net with a grid of inputs for the pixel data and
a second set of inputs for, say, the ASCII code of the characters (obviously
ASCII isn't the best way to do it, but it keeps this post shorter).  You
also have a set of outputs for the network's guess.  You start by hardwiring
the network so that the ASCII inputs are wired directly to the ASCII outputs:
input hex 4C and you'll see hex 4C on the output.  Now, all you have to do
is continually present pictures of characters at the pixel input along with
their correct ASCII representations.  Thus, when the network sees a capital
L, it is forced to output hex 4C.  It soon learns to produce the correct
outputs without the benefit of the guiding ASCII input, and without the use
of artificial devices like back-propagation.
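Here's a hypothetical sketch of that training scheme, again in C, with all
the sizes, rates and thresholds invented purely for illustration: the ASCII
"teaching" inputs are hardwired to the output bits, and the pixel-to-output
links grow whenever a pixel is on at the same moment an output bit is being
forced on.

#include <stdio.h>

#define PIXELS    64      /* 8x8 input grid                           */
#define OUTPUTS    8      /* one output per bit of the ASCII code     */
#define RATE     0.1
#define THRESHOLD 0.5

static double w[PIXELS][OUTPUTS];   /* learned pixel -> output links  */

/* One training presentation: pixels plus the forced ASCII code.      */
static void train(const int pixels[PIXELS], int ascii)
{
    int i, j;
    for (j = 0; j < OUTPUTS; j++) {
        int forced = (ascii >> j) & 1;     /* hardwired teaching input */
        if (!forced)
            continue;
        for (i = 0; i < PIXELS; i++)
            if (pixels[i])                 /* pixel and output both on */
                w[i][j] += RATE * (1.0 - w[i][j]);
    }
}

/* Recall: compute the output bits from the pixels alone.             */
static int recall(const int pixels[PIXELS])
{
    int i, j, code = 0, on = 0;
    for (i = 0; i < PIXELS; i++)
        on += pixels[i];                   /* number of active pixels  */
    for (j = 0; j < OUTPUTS; j++) {
        double sum = 0.0;
        for (i = 0; i < PIXELS; i++)
            if (pixels[i])
                sum += w[i][j];
        if (on && sum / on > THRESHOLD)    /* average link strength    */
            code |= 1 << j;
    }
    return code;
}

int main(void)
{
    int capital_L[PIXELS] = {0};  /* crude 8x8 picture of an 'L'       */
    int r, c, t;
    for (r = 0; r < 7; r++) capital_L[r * 8] = 1;     /* vertical bar  */
    for (c = 0; c < 5; c++) capital_L[7 * 8 + c] = 1; /* bottom bar    */

    for (t = 0; t < 10; t++)
        train(capital_L, 0x4C);   /* show the picture with its code    */

    printf("recalled code: %02X\n", recall(capital_L)); /* prints 4C   */
    return 0;
}

After a handful of presentations the pixels alone reproduce the 4C, even
though nothing ever told the network how wrong its guesses were.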

[Sorry if this is old news in c.a.n-n, but it's getting a bit off topic
for c.a.p].

Jeff Petkau: petkau@skdad.USask.ca
Asterisks: ***********************