[net.arch] Hopfield Networks?

jer@peora.UUCP (J. Eric Roskos) (02/07/86)

In a recent issue (Issue 367) of EE Times, there is an article titled
"Neural Research Yields Computer that can Learn".  This describes a
simulation of a machine that uses a "Hopfield Network"; from the
description, it appears that the Hopfield Network is some sort of network
using gates whose inputs and outputs use "true" or "false" values,
but in which each input is weighted, with the gate's output yielding
a "true" only if the sum of the weights for all the "true" inputs exceed
some threshold value.  However, the article doesn't give any further
details.  (Also, it said that the inputs to the gates were "analog",
but didn't go on to explain how this related to their description of
the gates, which they say "only transmit when the total input reaches
an assigned threshold value", unless they transmit the sum of the
inputs if the sum is above some value, and a zero-value otherwise,
or something of that sort.)
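
If I'm reading the description right, each "gate" is what I'd call a
threshold unit.  Roughly, in C (the weights and threshold below are made-up
values, just to show what I think the article means):

/* A guess at the kind of "gate" the article describes: each input has a
 * weight, and the gate outputs "true" only when the summed weights of its
 * true inputs reach an assigned threshold.  Weights and threshold here are
 * arbitrary illustrative values, not anything from the article. */
#include <stdio.h>

int threshold_gate(int inputs[], double weights[], int n, double threshold)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++)
        if (inputs[i])              /* only "true" inputs contribute */
            sum += weights[i];
    return sum >= threshold;        /* fire iff weighted sum reaches threshold */
}

int main(void)
{
    int x[3] = { 1, 0, 1 };
    double w[3] = { 1.0, 1.0, 1.0 };
    printf("%d\n", threshold_gate(x, w, 3, 2.0));   /* prints 1: a 2-of-3 majority gate */
    return 0;
}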

Does anybody know anything more about these Hopfield Networks?  The
article describes them in the context of a text-to-speech algorithm,
and suggests that the network is "programmed" by adjusting, in some
algorithmic manner, the weights on the inputs of the various gates.
Apparently the interconnections are fixed, but neither the topology nor
the algorithm for adjusting the weights is given.
-- 
UUCP: Ofc:  jer@peora.UUCP  Home: jer@jerpc.CCUR.UUCP  CCUR DNS: peora, pesnta
  US Mail:  MS 795; CONCURRENT Computer Corp. SDC; (A Perkin-Elmer Company)
	    2486 Sand Lake Road, Orlando, FL 32809-7642     xxxxx4xxx

	"There are other places that are also the world's end ...
	 But this is the nearest ... here and in England." -TSE

ehj@mordor.UUCP (Eric H Jensen) (02/07/86)

In article <1960@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
>In a recent issue (Issue 367) of EE Times, there is an article titled
>"Neural Research Yields Computer that can Learn".  This describes a
>simulation of a machine that uses a "Hopfield Network"; from the ...

I got the impression that this work is just perceptrons revisited.
All this business about threshold logic with weighting functions on
the inputs adjusted by feedback (i.e. the child reading) ...

Anybody in the know have a comment?


-- 
eric h. jensen        (S1 Project @ Lawrence Livermore National Laboratory)
Phone: (415) 423-0229 USMail: LLNL, P.O. Box 5503, L-276, Livermore, Ca., 94550
ARPA:  ehj@angband    UUCP:   ...!decvax!decwrl!mordor!angband!ehj

peters@cubsvax.UUCP (02/08/86)

In article <peora.1960> jer@peora.UUCP (J. Eric Roskos) writes:
>In a recent issue (Issue 367) of EE Times, there is an article titled
>"Neural Research Yields Computer that can Learn".  This describes a
>simulation of a machine that uses a "Hopfield Network"; 
...
>Does anybody know anything more about these Hopfield Networks?
...

Probably refers to the work of John Hopfield, a solid-state physicist, formerly
of Princeton, now of Cal Tech, whose recent interests are in biophysics.  
In the 70's he did a number of influential studies on hemoglobin and on 
error-correction in DNA transcription ("kinetic proofreading");  in the
80's he's been interested in modelling nerve networks.  I don't know what a
Hopfield network is, but he publishes in places like J. Mol. Biol., Nature,
Proc. Nat'l. Acad. Sci., and probably Biophysical Journal.  He's eminent.
If you find out, tell us!

Peter Shenkin;  {philabs,rna}!cubsvax!peters or cubsvax!peters@columbia.ARPA

lindahl@ti-csl (02/11/86)

>Does anybody know anything more about these Hopfield Networks?  The
>article describes them in the context of a text-to-speech algorithm,
>and suggests that the network is "programmed" (in some algorithmic manner)
>by adjusting the weights on the inputs of the various gates somehow.
>Apparently the interconnections are fixed, but neither the topology nor
>the algorithm for adjusting the weights is given.

The Hopfield neural networks were first mentioned, as far as I know, in a
paper in a biological periodical in '83.  I just recently moved into a new
house and haven't unpacked my things yet; if anyone else on the net gets you
the reference first, just let me know.  Otherwise I'll probably find it in
a week or so.

Charlie Lindahl
Texas Instruments (CRL/CSL)

ARPA:  lindahl%TI-CSL@CSNet-Relay
UUCP:  {convex!smu, texsun, ut-sally, rice} ! tilde ! lindahl

DISCLAIMER: The opinions/statements made in this note are mine, not
	    those of my employer.

dickey@ssc-vax.UUCP (Frederick J Dickey) (02/14/86)

> In article <peora.1960> jer@peora.UUCP (J. Eric Roskos) writes:
> >In a recent issue (Issue 367) of EE Times, there is an article titled
> >"Neural Research Yields Computer that can Learn".  This describes a
> >simulation of a machine that uses a "Hopfield Network"; 
> ...
> >Does anybody know anything more about these Hopfield Networks?
> ...
> 
> Probably refers to the work of John Hopfield, a solid-state physicist, formerly
> of Princeton, now of Cal Tech, whose recent interests are in biophysics.  
> In the 70's he did a number of influential studies on hemoglobin and on 
> error-correction in DNA transcription ("kinetic proofreading");  in the
> 80's he's been interested in modelling nerve networks;  I don't know what a
> Hopfield network is, but he publishes in places like J. Mol. Bio., Nature,
> Pro. Nat'l. Acad. Sci. and probably Biophysical Journal.  He's eminent.
> If you find out, tell us!
> 
> Peter Shenkin;  {philabs,rna}!cubsvax!peters or cubsvax!peters@columbia.ARPA

****************************************************

A reference is the following:

J.J. Hopfield "Neural networks and physical systems with emergent collective
computational abilities." Proc. Nat. Acad. of Sciences USA, 1982, 79, pp. 2554-
2558.

A reference on similar work is the following:

G. Hinton, "Boltzmann Machines" Carnegie-Mellon Tech Rpt CMU-CS-84-119, May,
1984.

The following reference is helpful in understanding the previous one.

S. Kirkpatrick et al., "Optimization by Simulated Annealing," Science, vol. 220,
no. 4598, pp. 671-680, 13 May 1983.

If you read all this stuff, you will see that Hopfield networks are not
"perceptrons revisited."



						F.J. Dickey
						Boeing Aerospace Co.
						Seattle, WA

elman@sdcsvax.UUCP (Jeff Elman) (02/15/86)

In article <5413@mordor.UUCP>, ehj@mordor.UUCP (Eric H Jensen) writes:
> In article <1960@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
> >In a recent issue (Issue 367) of EE Times, there is an article titled
> >"Neural Research Yields Computer that can Learn".  This describes a
> >simulation of a machine that uses a "Hopfield Network"; from the ...
> 
> I got the impression that this work is just perceptrons revisited.
> All this business about threshold logic with weighting functions on
> the inputs adjusted by feedback (i.e. the child reading) ...
> 
> Anybody in the know have a comment?
> 

This refers to some work by Terry Sejnowski, in which he uses a method
developed by Dave Rumelhart (U.C. San Diego), Geoff Hinton (CMU), and Ron
Williams (UCSD) for automatic adjustment of weights on connections between
perceptron-like elements.  Sejnowski applied the technique to a system which
automatically learned text-to-phoneme correspondences and was then able to
take text input and drive a speech synthesizer.

The current work being done by Rumelhart and his colleagues certainly
builds on the early perceptron work.  However, they have managed to
overcome one of the basic deficiencies of the perceptron.  While perceptrons
have a simple learning procedure, that procedure only works for simple
2-layer networks, and such networks have limited power (they cannot learn
the XOR pattern, for instance).  More complex multi-layer networks are more
powerful, but -- until recently -- there was no simple way for these
systems to automatically learn how to adjust the weights on the connections
between elements.
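
For concreteness, the "simple learning procedure" is the perceptron rule:
when the thresholded output is wrong, move each weight toward or away from
its input.  A toy sketch in C follows (the learning rate and epoch count are
arbitrary); it settles quickly for AND, but if you substitute XOR targets it
never settles, which is the limitation described above.

/* The classic perceptron rule on a single threshold unit. */
#include <stdio.h>

int main(void)
{
    int x[4][3] = { {1,0,0}, {1,0,1}, {1,1,0}, {1,1,1} };  /* leading 1 is a bias input */
    int target[4] = { 0, 0, 0, 1 };          /* AND; try {0,1,1,0} for XOR and watch it thrash */
    double w[3] = { 0.0, 0.0, 0.0 };
    int epoch, k, j;

    for (epoch = 0; epoch < 25; epoch++)
        for (k = 0; k < 4; k++) {
            double sum = 0.0;
            int out;
            for (j = 0; j < 3; j++)
                sum += w[j] * x[k][j];
            out = (sum > 0.0);               /* hard threshold */
            for (j = 0; j < 3; j++)          /* perceptron learning rule */
                w[j] += 0.1 * (target[k] - out) * x[k][j];
        }

    for (k = 0; k < 4; k++) {                /* learned behaviour: 0 0 0 1 */
        double sum = w[0]*x[k][0] + w[1]*x[k][1] + w[2]*x[k][2];
        printf("%d AND %d -> %d\n", x[k][1], x[k][2], sum > 0.0);
    }
    return 0;
}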

Rumelhart has solved this problem, and has discovered a generalized
form of the perceptron convergence procedure which applies to networks
of arbitrary depth.  He and his colleagues have explored this technique in 
a number of interesting simulations, and it appears to have a tremendous 
amount of power.  More information is available from Rumelhart 
(der@ics.ucsd.edu or der@nprdc.arpa), or in a technical report "Learning 
Internal Representations by Error Propagation" (Rumelhart, Hinton, Williams),
available from the Institute for Cognitive Science, U.C. San Diego,
La Jolla, CA 92093.
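
The gist of the procedure is to replace the hard thresholds with smooth
(sigmoid) units, measure the error at the output, and pass that error
backward through the weights to obtain a gradient for every layer, hidden
ones included.  Here is a toy sketch in C on the XOR problem mentioned above;
the network size, starting weights, learning rate, and epoch count are
arbitrary choices of mine, not anything from the tech report.

/* Toy error-propagation (generalized delta rule) on XOR with one hidden
 * layer of sigmoid units.  The output error is propagated back through the
 * hidden layer to give a gradient for every weight. */
#include <stdio.h>
#include <math.h>

#define NH 3                                    /* hidden units (arbitrary) */

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

int main(void)
{
    double x[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
    double t[4]    = { 0, 1, 1, 0 };            /* XOR targets */
    double wh[NH][2] = { {0.4, 0.7}, {-0.6, 0.2}, {0.1, -0.8} };
    double bh[NH]    = { 0.1, -0.3, 0.2 };
    double wo[NH]    = { 0.5, -0.4, 0.3 };
    double bo = -0.1, eta = 0.5;
    int epoch, k, i, j;

    for (epoch = 0; epoch < 20000; epoch++)
        for (k = 0; k < 4; k++) {
            double h[NH], o = 0.0, delta_o, delta_h[NH];
            for (j = 0; j < NH; j++) {          /* forward pass */
                h[j] = sigmoid(wh[j][0]*x[k][0] + wh[j][1]*x[k][1] + bh[j]);
                o += wo[j] * h[j];
            }
            o = sigmoid(o + bo);
            delta_o = (t[k] - o) * o * (1.0 - o);       /* output error signal */
            for (j = 0; j < NH; j++)                    /* propagate it backward */
                delta_h[j] = delta_o * wo[j] * h[j] * (1.0 - h[j]);
            bo += eta * delta_o;                        /* gradient steps */
            for (j = 0; j < NH; j++) {
                wo[j] += eta * delta_o * h[j];
                bh[j] += eta * delta_h[j];
                for (i = 0; i < 2; i++)
                    wh[j][i] += eta * delta_h[j] * x[k][i];
            }
        }

    for (k = 0; k < 4; k++) {                   /* learned outputs: should end up near 0,1,1,0 */
        double o = bo;
        for (j = 0; j < NH; j++)
            o += wo[j] * sigmoid(wh[j][0]*x[k][0] + wh[j][1]*x[k][1] + bh[j]);
        printf("%g xor %g -> %.2f\n", x[k][0], x[k][1], sigmoid(o));
    }
    return 0;
}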

Jeff Elman
Phonetics Lab, UCSD
elman@amos.ling.ucsd.edu / ...ucbvax!sdcsvax!sdamos!elman