[comp.archives] [neural-nets] Anybody's Experience with Fahlman's Quickprop?

loren@tristan.llnl.gov (Loren Petrich) (03/16/91)

Archive-name: ai/neural-nets/fahlman-quickprop/1991-03-11
Archive: cheops.cis.ohio-state.edu:/pub/neuroprose/fahlman.quickprop-tr.ps.Z [128.146.8.62]
Original-posting-by: loren@tristan.llnl.gov (Loren Petrich)
Original-subject: Anybody's Experience with Fahlman's Quickprop? (was Re: Are Conjugate Gradient algorithms any good?)
Reposted-by: emv@msen.com (Edward Vielmetti, MSEN)


	Having reviewed some Conjugate Gradient methods, I find
them rather complicated.

	An alternative, due to Fahlman, is the Quickprop algorithm. It
is described in some papers of his that can be found in the
/pub/neuroprose directory of cheops.cis.ohio-state.edu, available by
anonymous ftp.

	Basically, it works by remembering the previous gradient and
the step taken from there, then finding the new weight values by
fitting a straight line through the previous and current gradients
(viewed as a function of each weight) and stepping to where that line
predicts the gradient crosses zero. This is done on each weight
component separately, so in effect the Hessian is approximated by a
diagonal matrix whose elements are estimated independently from the
gradient change along each weight. There are some fudge factors that
have to be added here and there, such as a gradient-descent "starter"
term and a limit that keeps the step sizes from growing too rapidly,
but the algorithm is remarkably simple.
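
	For concreteness, here is a rough Python sketch of that
per-weight update as I understand it from Fahlman's tech report. The
parameter names epsilon (learning rate for the gradient-descent
"starter") and mu (the cap on how fast a step may grow) are my own,
and his code handles the corner cases (zero denominators, steps that
point uphill, etc.) more carefully, so treat this as an illustration
of the idea rather than his exact algorithm.

import numpy as np

def quickprop_step(grad, prev_grad, prev_step, epsilon=0.35, mu=1.75):
    """One Quickprop update, applied to each weight independently.

    Sketch only: epsilon and mu are assumed names and defaults, not
    Fahlman's.  Returns the change to be added to the weights.
    """
    grad = np.asarray(grad, dtype=float)
    prev_grad = np.asarray(prev_grad, dtype=float)
    prev_step = np.asarray(prev_step, dtype=float)
    step = np.empty_like(grad)

    for i in range(grad.size):
        s = grad.flat[i]           # current dE/dw for this weight
        s_old = prev_grad.flat[i]  # previous dE/dw
        d_old = prev_step.flat[i]  # previous weight change

        if d_old == 0.0:
            # No previous step to extrapolate from: plain gradient
            # descent gets things moving.
            step.flat[i] = -epsilon * s
            continue

        # Fit a straight line through the previous and current
        # gradient (as a function of this one weight) and step to
        # where that line predicts the gradient crosses zero.
        denom = s_old - s
        if denom != 0.0 and abs(s / denom) < mu:
            d_new = (s / denom) * d_old
        else:
            # Extrapolated step is too large or undefined: cap it at
            # mu times the previous step.
            d_new = mu * d_old

        # Gradient-descent "starter" term: keep a small downhill push
        # while the gradient still points along the previous step
        # (my reading of the rule; Fahlman states it differently).
        if s * d_old < 0.0:
            d_new += -epsilon * s

        step.flat[i] = d_new

    return step

	In use one would accumulate the gradient over the whole
training set (Quickprop is a batch method), call this once per epoch,
and carry the returned step and the current gradient over as
prev_step and prev_grad for the next epoch.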

	I have found it to be a stable and fast algorithm for problems
I would otherwise solve by gradient descent.

	Has anyone else had experience with Quickprop, and how does it
compare with Conjugate Gradients and other such methods?


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu