sietsma@latcs1.oz.au (Jocelyn Sietsma Penington) (07/02/90)
In article <1085@carol.fwi.uva.nl> smagt@fwi.uva.nl (Patrick van der Smagt) writes:
>In article <11441@rasp.eng.cam.ac.uk> mww@uk.ac.cam.eng writes:
>>A few weeks ago there was a summary of papers and books on neural
>>networks.  Several people referred to Richard Lippmann's well known
>>paper "An Introduction to Computing with Neural Nets", IEEE ASSP
>>Magazine, April 1987.
>>
>>More than one person said that this paper contained errors.  As this
>>is such an influential and often read paper, particularly by those who
>>are new to the field, I wonder if anyone would care to say exactly
>>what these errors are.
>
>Here they are (or, at least, some of them):

	[2 criticisms deleted]

>  p. 14 figure 14 (serious)
>	rather misleading.  A ff-network with --**ONE**-- layer of
>	hidden units suffices.  For references, see

I have to disagree with you here a little.  This geometric analysis
is valid if the outputs of the units are approximated by step
functions (i.e. hard limiters).  Using continuous output functions
gives more power and allows a single hidden layer to suffice, but
this analysis was useful in giving an upper limit of 2 hidden layers
3 years before the 1-hidden-layer proof arrived.

	[2 more criticisms deleted]

>  p. 16, second last paragraph
>	"There should thus typically be more than three times as many
>	nodes in the second as in the first layer."

The 'first' and 'second' should be exchanged: three times as many
nodes in the first layer as in the second.  This is clearly what is
intended from the analysis preceding it.

>Patrick van der Smagt

Jocelyn Sietsma
USD, Materials Research Laboratory, DSTO
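[The geometric analysis with hard limiters can be made concrete with a small hand-built example.  This is a sketch only (weights and names are mine, not from Lippmann's paper): a first hidden layer of step units detects half-planes, a second layer ANDs them into convex cells, and the output unit ORs the cells, so two hidden layers of hard limiters suffice for unions of convex regions, here the XOR regions.]

```python
def step(z):
    # hard limiter: 1 if z > 0, else 0
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # layer 1: four half-plane detectors
    h1 = step(x1 - 0.5)   # x1 > 1/2
    h2 = step(0.5 - x1)   # x1 < 1/2
    h3 = step(x2 - 0.5)   # x2 > 1/2
    h4 = step(0.5 - x2)   # x2 < 1/2
    # layer 2: AND pairs of half-planes -> two convex cells
    a = step(h1 + h4 - 1.5)   # x1 > 1/2 AND x2 < 1/2
    b = step(h2 + h3 - 1.5)   # x1 < 1/2 AND x2 > 1/2
    # output: OR of the convex cells -> a non-convex decision region
    return step(a + b - 0.5)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, '->', xor_net(x1, x2))
```

[With continuous output functions a single hidden layer would do; the point above is only that the two-hidden-layer bound already follows from this step-function construction.]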
smagt@fwi.uva.nl (Patrick van der Smagt) (07/03/90)
In article <1085@carol.fwi.uva.nl> smagt@fwi.uva.nl (Patrick van der Smagt) writes:
>In article <11441@rasp.eng.cam.ac.uk> mww@uk.ac.cam.eng writes:
>>I am posting this for someone else so please reply directly to him
>>(mww@uk.ac.cam.eng) or to the network.
>>
>>A few weeks ago there was a summary of papers and books on neural
>>networks.  Several people referred to Richard Lippmann's well known
>>paper "An Introduction to Computing with Neural Nets", IEEE ASSP
>>Magazine, April 1987.
>>
>>More than one person said that this paper contained errors.  As this
>>is such an influential and often read paper, particularly by those who
>>are new to the field, I wonder if anyone would care to say exactly
>>what these errors are.
>
>Here they are (or, at least, some of them):

	[4 criticisms deleted]

>  p. 19 box 7 (not major)
>	the issue of NORMALIZATION of the weights and inputs
>	is left out here.

This one is correct after all, of course.  My previous confusion was
due to the fact that in his 1982 paper, Kohonen uses

	              m_i(t) + c.x(t)
	m_i(t+1) = -------------------
	           ||m_i(t) + c.x(t)||

whereas a better formula is (as he describes in his 1984 book)

	m_i(t+1) = m_i(t) + c(t).[x(t) - m_i(t)]

This one moves the weight vector more directly in the desired
direction, and removes the need for normalization.  Lippmann uses
the latter method (as does everyone else).

					Patrick van der Smagt

Organization:	Faculty of Mathematics & Computer Science,
		University of Amsterdam, Kruislaan 409,
		NL-1098 SJ Amsterdam, The Netherlands
Phone:		+31 20 525 7466
Telex:		10262 hef nl
Fax:		+31 20 592 5155
email:		smagt@fwi.uva.nl
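[The two update rules are easy to compare side by side.  A minimal sketch, assuming vectors as plain lists; the function names and the gain `c` as a constant argument are mine:]

```python
import math

def kohonen_1982(m, x, c):
    # 1982 rule: add c.x to the weight vector, then renormalize,
    # so m must be kept on the unit sphere
    v = [mi + c * xi for mi, xi in zip(m, x)]
    norm = math.sqrt(sum(vi * vi for vi in v))
    return [vi / norm for vi in v]

def kohonen_1984(m, x, c):
    # 1984 rule: move m a fraction c straight toward x;
    # no normalization needed
    return [mi + c * (xi - mi) for mi, xi in zip(m, x)]
```

[With c(t) = 1 the 1984 rule lands exactly on x, and with c(t) = 0 it leaves m unchanged, which is why the gain schedule alone controls convergence; the 1982 rule always returns a unit vector, hence the normalization issue raised above.]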
news@nprdc.arpa (news) (07/04/90)
From: gollub@nprdc.navy.mil (Lewis Gollub)

I am looking for the paper "An Introduction to Computing with Neural
Nets", IEEE ASSP Magazine, April 1987, that has been discussed here
recently.  Has it been reprinted?  Any other suggestions for finding
it?  (Our local library has a number of IEEE publications, but not
this one.)  Thanks.