[net.ai] Hopfield Networks?

ehj@mordor.UUCP (Eric H Jensen) (02/07/86)

In article <1960@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
>In a recent issue (Issue 367) of EE Times, there is an article titled
>"Neural Research Yields Computer that can Learn".  This describes a
>simulation of a machine that uses a "Hopfield Network"; from the ...

I got the impression that this work is just perceptrons revisited.
All this business about threshold logic with weighting functions on
the inputs adjusted by feedback (i.e. the child reading) ...
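
[For reference, the scheme being described here is essentially the
classic perceptron unit: weighted inputs summed against a threshold,
with the weights adjusted by an error signal when the output is wrong.
A minimal sketch in Python -- the OR target, learning rate, and number
of passes are all illustrative:

    # Minimal perceptron: a threshold unit whose input weights are
    # adjusted by feedback (the perceptron learning rule).
    # All names and the learning rate here are illustrative.

    def predict(w, b, x):
        # Fire iff the weighted sum of inputs exceeds the threshold -b.
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    # Learn OR (a linearly separable function) from labeled examples.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                      # a few passes over the data
        for x, target in data:
            err = target - predict(w, b, x)  # feedback: -1, 0, or +1
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err

    print([predict(w, b, x) for x, _ in data])   # -> [0, 1, 1, 1]

The follow-up below discusses where this simple rule breaks down.]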

Anybody in the know have a comment?


-- 
eric h. jensen        (S1 Project @ Lawrence Livermore National Laboratory)
Phone: (415) 423-0229 USMail: LLNL, P.O. Box 5503, L-276, Livermore, Ca., 94550
ARPA:  ehj@angband    UUCP:   ...!decvax!decwrl!mordor!angband!ehj

elman@sdcsvax.UUCP (Jeff Elman) (02/15/86)

In article <5413@mordor.UUCP>, ehj@mordor.UUCP (Eric H Jensen) writes:
> In article <1960@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
> >In a recent issue (Issue 367) of EE Times, there is an article titled
> >"Neural Research Yields Computer that can Learn".  This describes a
> >simulation of a machine that uses a "Hopfield Network"; from the ...
> 
> I got the impression that this work is just perceptrons revisited.
> All this business about threshold logic with weighting functions on
> the inputs adjusted by feedback (i.e. the child reading) ...
> 
> Anybody in the know have a comment?
> 

This refers to some work by Terry Sejnowski, in which he uses a method
developed by Dave Rumelhart (U.C. San Diego), Geoff Hinton (CMU), and Ron
Williams (UCSD) for automatic adjustment of weights on connections between
perceptron-like elements.  Sejnowski applied the technique to
a system which automatically learned text-to-phoneme correspondences
and could then take text as input and drive a speech synthesizer.
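
[One common way to frame such a task -- and roughly the framing
Sejnowski's system used -- is a sliding window of letters, with the
network trained to emit the phoneme for the letter at the center of the
window.  A toy sketch of that framing; the window size and the tiny
letter-to-phoneme alignment here are purely illustrative:

    # Toy framing of text-to-phoneme learning: slide a window over the
    # text and pair each window with the phoneme of its center letter.
    # Window size and the tiny example alignment are illustrative only.

    def windows(text, size=7):
        pad = "_" * (size // 2)          # pad so every letter gets a window
        padded = pad + text + pad
        return [padded[i:i + size] for i in range(len(text))]

    text     = "cat"
    phonemes = ["k", "ae", "t"]          # one phoneme per letter (aligned)

    for window, phoneme in zip(windows(text), phonemes):
        print(window, "->", phoneme)
    # ___cat_ -> k
    # __cat__ -> ae
    # _cat___ -> t

Pairs like these are the training data; the network's job is to learn
the mapping from window to phoneme.]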

The current work being done by Rumelhart and his colleagues certainly
builds on the early perceptron work.  However, they have managed to
overcome one of the basic deficiencies of the perceptron.  Perceptron
systems have a simple learning procedure, but it works only for simple
two-layer networks, and such networks have limited power (they cannot
compute XOR, for instance).  More complex multi-layer networks are more
powerful, but -- until recently -- there was no simple way for these
systems to learn automatically how to adjust the weights on connections
between elements.
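
[The XOR limitation is easy to verify: a single threshold unit would
need w1*0 + w2*0 <= t, w1*1 + w2*0 > t, w1*0 + w2*1 > t, and
w1*1 + w2*1 <= t to hold at once.  Adding the two middle inequalities
gives w1 + w2 > 2t, while the first gives t >= 0 and the last gives
w1 + w2 <= t, hence w1 + w2 <= 2t -- a contradiction.  A brute-force
sanity check, over a purely illustrative grid of weights:

    # Sanity check: no single threshold unit (w1, w2, threshold t)
    # computes XOR.  The grid of candidate values is illustrative;
    # the algebraic argument above rules out *all* real values.
    import itertools

    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    grid = [x / 10.0 for x in range(-20, 21)]        # -2.0 .. 2.0

    hits = [
        (w1, w2, t)
        for w1, w2, t in itertools.product(grid, repeat=3)
        if all((w1 * a + w2 * b > t) == bool(y) for (a, b), y in cases)
    ]
    print("separating units found:", len(hits))      # -> 0
]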

Rumelhart has solved this problem, and has discovered a generalized
form of the perceptron convergence procedure which applies to networks
of arbitrary depth.  He and his colleagues have explored this technique in 
a number of interesting simulations, and it appears to have a tremendous 
amount of power.  More information is available from Rumelhart 
(der@ics.ucsd.edu or der@nprdc.arpa), or in a technical report "Learning 
Internal Representations by Error Propagation" (Rumelhart, Hinton, Williams),
available from the Institute for Cognitive Science, U.C. San Diego,
La Jolla, CA 92093.
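
[For readers who want the flavor of the technique: the procedure in
that report is what is now usually called error back-propagation --
run the network forward, compare output with target, and propagate the
error backward through the layers to get a weight adjustment at every
depth.  A minimal sketch solving XOR; the layer sizes, learning rate,
and iteration count are illustrative, not the ones from the report:

    # Minimal error back-propagation on XOR (two weight layers -- the
    # case the simple perceptron rule could not handle).
    # Layer sizes, learning rate, and epoch count are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input  -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rate = 0.5
    for _ in range(10000):
        # Forward pass.
        h   = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: error deltas, propagated through each layer.
        d_out = (out - y) * out * (1 - out)
        d_h   = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent weight adjustments at both depths.
        W2 -= rate * (h.T @ d_out);  b2 -= rate * d_out.sum(axis=0)
        W1 -= rate * (X.T @ d_h);    b1 -= rate * d_h.sum(axis=0)

    print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]

A different random seed may need more iterations, but with a handful
of hidden units the net reliably learns XOR -- exactly the function
the two-layer perceptron provably cannot compute.]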

Jeff Elman
Phonetics Lab, UCSD
elman@amos.ling.ucsd.edu / ...ucbvax!sdcsvax!sdamos!elman