styri@cs.hw.ac.uk (Yu No Hoo) (06/27/91)
I'm playing around with an algorithm for finding optimal linear discriminants: the Pocket Algorithm developed by Stephen Gallant; see [1], [2] and [3]. It's a nice variation of perceptron learning and very useful when your data set is not linearly separable.

There are probably other simple methods for finding optimal linear discriminants. I would like to ask whether any of you have tried the kind of parametric training method described by Nils Nilsson in [4]. (At least my data set is large enough to apply statistical methods.) It may be interesting to convert Nilsson's weights so they can be used as initial weights for the Pocket Algorithm.

Of course, any other pointers/ideas are received with thanks. Replies received by mail will be reposted in a summary.

[1] Gallant, S.: "Optimal Linear Discriminants", Proc. Eighth Int. Conf. Patt. Rec., Paris, France, pp 849-852, Oct. 1986.
[2] Gallant, S.: "Connectionist Expert Systems", CACM, vol 31 (2), pp 152-169, Feb. 1988.
[3] Gallant, S.: "Perceptron-Based Learning Algorithms", IEEE Trans. Neural Nets., vol 1 (2), pp 179-191, June 1990.
[4] Nilsson, N.J.: "Learning Machines" (Chapter 3 - Parametric Training Methods), McGraw-Hill, 1965.

----------------------
Haakon Styri
Dept. of Comp. Sci.        ARPA:  styri@cs.hw.ac.uk
Heriot-Watt University     X-400: C=gb;PRMD=uk.ac;O=hw;OU=cs;S=styri
Edinburgh, Scotland
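P.S. For anyone who hasn't seen it, here is a minimal sketch of the Pocket Algorithm as I understand it from [3]: run ordinary perceptron updates on randomly drawn examples, but keep ("pocket") the weight vector that has survived the longest run of consecutive correct classifications, since on a non-separable set the current perceptron weights never settle. Function and parameter names are my own, and I assume +1/-1 labels with the bias folded in as an extra weight.

```python
import random

def pocket_train(samples, labels, epochs=100, seed=0):
    """samples: list of feature tuples; labels: +1/-1 class labels.
    Returns the pocketed weight vector (bias as the last component)."""
    rng = random.Random(seed)
    n = len(samples[0]) + 1           # +1 for the bias term
    w = [0.0] * n                     # current perceptron weights
    pocket = list(w)                  # best ("pocketed") weights so far
    run = best_run = 0                # consecutive-correct counters
    # Append a constant 1 input so the bias is trained like any weight.
    data = [(list(x) + [1.0], y) for x, y in zip(samples, labels)]
    for _ in range(epochs):
        x, y = rng.choice(data)       # draw a random training example
        s = sum(wi * xi for wi, xi in zip(w, x))
        if (1 if s > 0 else -1) == y:
            run += 1
            if run > best_run:        # new longest correct run:
                best_run = run        # pocket the current weights
                pocket = list(w)
        else:
            run = 0                   # misclassified: perceptron update
            w = [wi + y * xi for wi, xi in zip(w, x)]
    return pocket
```

Initializing `w` (and `pocket`) from Nilsson-style parametrically trained weights, instead of zeros, is exactly the kind of warm start I had in mind above.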