[comp.ai.neural-nets] NN Benchmarks

loren@dweasel.llnl.gov (Loren Petrich) (06/01/91)

	Here is a list of benchmarks for NNs that I have learned of.

	XOR: a classic. It demonstrates the need for nonlinearity.

	    0 1
	  +----
	0 | 0 1
	1 | 1 0
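
	For concreteness, here's a little Python sketch (my own
illustration, not from any paper) of a hand-wired 2-2-1 threshold
network computing XOR; no single threshold unit can do it, but one
hidden layer is enough:

    def step(x):                        # hard-threshold unit
        return 1 if x > 0 else 0

    def xor_net(a, b):
        h1 = step(a + b - 0.5)          # OR-like hidden unit
        h2 = step(a + b - 1.5)          # AND-like hidden unit
        return step(h1 - h2 - 0.5)      # OR-but-not-AND, i.e. XOR

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', xor_net(a, b))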

	Parity: does a binary string contain an odd or an even number
of 1's? It has the difficulty that flipping any single input bit flips
the answer, so the two outcomes are intimately intertwined.
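
	The training set is easy to enumerate; a quick sketch of mine
in Python:

    from itertools import product

    N = 4                               # length of the bit strings
    # target is 1 if the string holds an odd number of 1's, else 0
    data = [(bits, sum(bits) % 2) for bits in product((0, 1), repeat=N)]
    for bits, target in data:
        print(bits, '->', target)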

	Encoder: an auto-associative problem (the output is intended
to equal the input). Solve for the one-of-N patterns 100..0, 010..0,
..., 000..1, or some equivalent, typically squeezing them through a
hidden layer narrower than the input.
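
	A sketch of the data (my own rendering of the usual setup; the
value N = 8 is just an example):

    N = 8                               # e.g. the classic 8-3-8 encoder
    patterns = [[1 if j == i else 0 for j in range(N)] for i in range(N)]
    for p in patterns:
        print(p, '->', p)               # target equals the input itself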

	Multiplexor: essentially the same thing?

	2 out of 3: if correct, return true; otherwise, return false.
This seems like a linear problem.

	Fahlman presents the following ones:

	Zig-zag and Two Spirals.

	Both of them take points in 2-space as inputs and produce the
associated values as outputs. Fahlman, in some papers of his (in
/pub/neuroprose at cheops.cis.ohio-state.edu, available by anonymous
FTP), uses a plane filled with points in the 2-space, with the
appropriate values, as a test set; he graphs the result.

	His training set for Two Spirals consists of two intertwined
spirals of points of opposite colors. His test set is all the area in
between the points; as the training proceeds with his algorithm, one
can see the filled-in spirals take shape in the graphs in his paper.
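
	Here is a sketch of how such a training set can be generated
(my reconstruction in Python; the particular radius, point count, and
number of turns are my guesses, not necessarily Fahlman's). The second
spiral is the first rotated by 180 degrees:

    from math import pi, sin, cos

    points = []
    for i in range(97):
        angle = i * pi / 16.0               # about three full turns
        radius = 6.5 * (104 - i) / 104.0    # the spiral winds inward
        x, y = radius * sin(angle), radius * cos(angle)
        points.append((x, y, 1))            # first spiral, class 1
        points.append((-x, -y, 0))          # second spiral, class 0
    print(len(points), 'training points')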

	I think that this type of display may be very useful for
showing the generalization capabilities of various NN algorithms. One
can represent more than one output with colors or textures on the
graph. Any comments?
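
	To make the idea concrete, here is a toy sketch of such a
display in Python: sample a grid over the plane and print one character
per output class. The classifier net() below is a dummy stand-in of
mine; real code would call the trained network instead:

    def net(x, y):                      # dummy classifier, stands in
        return 1 if x * y > 0 else 0    # for a real trained network

    for row in range(20):
        y = 6.5 - row * 0.65
        line = ''
        for col in range(40):
            x = -6.5 + col * 0.325
            line = line + ('#' if net(x, y) else '.')
        print(line)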


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

shers@masala.lcs.mit.edu (Alex Sherstinsky) (06/01/91)

In article <98464@lll-winken.LLNL.GOV> loren@dweasel.llnl.gov (Loren Petrich) writes:
#
#	Here is a list of benchmarks for NN's that I have learned of.
#
This was really interesting.  By encoder, did you really mean shifter?

Also, I have a request: I am taking an oral exam this coming week whose
topic is supervised and unsupervised learning (backprop, radial basis
functions, and Kohonen's self-organizing feature maps). If anyone has
the time, could you e-mail me some questions on the topic that I can
use for practice?

Thanks a lot!
--
+-------------------------------+------+---------------------------------------+
|Alexander The Great Sherstinsky|me    |shers@masala.lcs.mit.edu|To become as  |
|Alexander Semyon Sherstinsky   |myself|shers@masala.lcs.mit.edu|refined person|
|Alex Sherstinsky               |I     |shers@masala.lcs.mit.edu|as possible.  |

styri@cs.hw.ac.uk (Yu No Hoo) (06/03/91)

Sorry for wasting bandwidth on a triviality, but...

In article <98464@lll-winken.LLNL.GOV> loren@dweasel.llnl.gov (Loren Petrich) writes:
>
>	Here is a list of benchmarks for NN's that I have learned of.
>
>  [stuff deleted]
>
>	2 out of 3: if correct, return true; otherwise, return false.
>This seems like a linear problem.

Nope, the 2 out of 3 problem should return true *iff* exactly 2 of the
3 inputs are true. Try it: it's not a linear problem - the best set of
perceptron weights gives only 7 correct classifications of the 8
possible cases.
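
A quick brute-force check in Python backs this up. For 3 boolean inputs,
small integer weights should be enough to realize any threshold unit
(that range is my assumption, though a generous one), so searching them
shows the best a single unit can do on the "exactly 2 of 3" target:

    from itertools import product

    # (input triple, target) pairs: true iff exactly 2 inputs are 1
    cases = [(x, 1 if sum(x) == 2 else 0)
             for x in product((0, 1), repeat=3)]

    best = 0
    for w1, w2, w3, t in product(range(-4, 5), repeat=4):
        score = sum((w1*x1 + w2*x2 + w3*x3 > t) == bool(y)
                    for (x1, x2, x3), y in cases)
        best = max(best, score)
    print('best single unit:', best, 'of', len(cases))   # prints 7 of 8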

----------------------
Haakon Styri
Dept. of Comp. Sci.              ARPA: styri@cs.hw.ac.uk
Heriot-Watt University          X-400: C=gb;PRMD=uk.ac;O=hw;OU=cs;S=styri
Edinburgh, Scotland