smagt@fwi.uva.nl (Patrick van der Smagt) (09/06/90)
I am not quite sure about this: learn vs. store. I am inclined
to classify algorithms such as back-propagation (i.e., optimum-
SEEKING algorithms) as learning algorithms. On the other hand,
when one considers the methods used to "teach" a relaxation model
such as the one commonly called the Hopfield network (typically,
the Hebb rule), the term "learning rule" seems too strong to me.
But what about an iterative (though not really optimum-seeking)
algorithm such as Bruce et al.'s? Is it generally agreed upon
that this is STORE as opposed to LEARN?
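To make the distinction concrete, here is a minimal sketch (in Python
with NumPy; all names are mine, and the iterative routine is only a
generic perceptron-style scheme in the spirit of such algorithms, NOT
Bruce et al.'s exact rule). The Hebb rule computes the weight matrix
in one shot from the patterns, with no iteration or error feedback --
arguably "store" -- while the second routine repeatedly corrects the
weights until every pattern is a fixed point:

```python
import numpy as np

def hebb_store(patterns):
    """One-shot Hebbian storage. patterns: shape (P, N), entries +1/-1."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N   # outer-product (Hebb) rule
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def iterative_train(patterns, eta=0.1, max_sweeps=100):
    """Illustrative perceptron-style iterative scheme (not Bruce et
    al.'s exact rule): nudge weights until each stored pattern is a
    fixed point of the network dynamics."""
    P, N = patterns.shape
    W = np.zeros((N, N))
    for _ in range(max_sweeps):
        stable = True
        for p in patterns:
            h = W @ p
            for i in range(N):
                if h[i] * p[i] <= 0:        # unit i misaligned with pattern
                    W[i] += eta * p[i] * p  # Hebbian-style correction
                    W[i, i] = 0.0           # keep diagonal zero
                    stable = False
        if stable:
            break
    return W

patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)
W_store = hebb_store(patterns)
W_learn = iterative_train(patterns)

# Either way, stored patterns are fixed points of sign(W @ s)
for p in patterns:
    assert np.array_equal(np.sign(W_store @ p), p)
    assert np.array_equal(np.sign(W_learn @ p), p)
```

The computational contrast is clear in the code: the first routine is
a closed-form assignment, the second a feedback loop -- which is
exactly what makes the terminological question above non-trivial.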
Reference:
A. D. Bruce, A. Canning, B. Forrest, E. Gardner, D. J. Wallace,
"Learning and memory properties in fully connected networks",
AIP Conference Proceedings 151, Neural Networks for Computing,
J. S. Denker (ed.), Snowbird, Utah, 1986, pp. 65--70.
Patrick van der Smagt /\/\
\ /
Organization: Faculty of Mathematics & Computer Science / \
University of Amsterdam, Kruislaan 409, _ \/\/ _
NL-1098 SJ Amsterdam, The Netherlands | | | |
Phone: +31 20 525 7466 | | /\/\ | |
Telex: 10262 hef nl | | \ / | |
Fax: +31 20 592 5155 | | / \ | |
email: smagt@fwi.uva.nl | | \/\/ | |
| \______/ |
\________/
``The opinions expressed herein are the author's only and do
not necessarily reflect those of the University of Amsterdam.''