[comp.ai.neural-nets] Basins of Attraction vs Error-Detection

jagota@sybil.cs.Buffalo.EDU (Arun Jagota) (05/25/91)

For associative memories, the sizes of basins of attraction have been
an object of theoretical study, motivated by the fact that large
basins are ``better'' for error-correction ability. While not disputing
this, I offer one (error-detection) application for which basin
sizes are not an issue, whereas the ``stability'' of the stored patterns is.

In one experiment, we ``stably'' stored ~240,000 English words in a 
~580 unit fully connected Hopfield-type network. The test set presented
to the network was a collection of random non-word strings. The task was
to identify the strings as errors. The network identified 80-90%
of the test set strings as errors, taking ONE network cycle per string.
A plausible application of this idea is to take a large document containing 
words from a LARGE dictionary, as well as ``garbage'' strings (far from any 
dictionary word), and rapidly remove (many of) the latter.
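The one-cycle test above amounts to checking whether a string's pattern is a fixed point of the network. Here is a minimal sketch of that mechanism, under illustrative assumptions of my own: toy bipolar patterns, Hebbian outer-product weights, and a tiny 16-unit net, none of which reflect the actual ~580-unit network or its word encoding described in the paper.

```python
import numpy as np

def hebbian_weights(patterns):
    # outer-product (Hebbian) storage with zeroed diagonal
    # (an illustrative assumption, not the paper's storage rule)
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def one_cycle(W, p):
    # one synchronous network cycle (ties broken toward +1)
    return np.where(W @ p >= 0, 1.0, -1.0)

def is_error(W, p):
    # a string is flagged as an error iff its pattern is NOT stable,
    # i.e. one network cycle changes it
    return not np.array_equal(one_cycle(W, p), p)

# store one toy "word" pattern; probe with it and an orthogonal non-word
word = np.array([1.0] * 8 + [-1.0] * 8)
nonword = np.array([1.0, -1.0] * 8)   # orthogonal to `word`

W = hebbian_weights([word])
print(is_error(W, word))     # False: the stored pattern is a fixed point
print(is_error(W, nonword))  # True: one cycle changes it
```

Since the test is a single cycle, its cost is one matrix-vector product per string regardless of how many words are stored.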

A shar file of the LaTeX sources of the paper describing the details is 
available via ftp as follows.

ftp ftp.cs.buffalo.edu
-- if this doesn't work try the following number, which will change
-- in a few weeks. Or send me e-mail.
ftp 128.205.32.3
Name : anonymous
> cd users/jagota
> get ijcnn_Jul91b.shar
> quit

There are several related papers in the same directory, accessible
by ftp as above. Get the file `README' for details. The simulator
is also available there. Get the file `hsn.README' for details.
Finally, as a last resort, but please, only if all else fails, or
if you do not have access to ftp, send me e-mail (jagota@cs.buffalo.edu)
and I can send you the same files (papers or simulator) by e-mail.
-- 
------------------------------------------------------------------------
      Arun Jagota               Internet: jagota@cs.buffalo.edu 
Computer Science Department     BITNET  : jagota%cs.buffalo.edu@ubvm.bitnet
            State University of New York at Buffalo