[comp.ai.neural-nets] Stephen Hanson on backpropagation algorithm for neural nets

pratt@zztop.rutgers.edu (Lorien Y. Pratt) (09/21/88)

				 Fall, 1988  
		     Neural Networks Colloquium Series 
				 at Rutgers  

	     Some comments and variations on back propagation
	     ------------------------------------------------

			    Stephen Jose Hanson
				Bellcore
		Cognitive Science Lab, Princeton University

		    Room 705, Hill Center, Busch Campus
		  Friday September 30, 1988 at 11:10 am 
		    Refreshments served before the talk


                                   Abstract   

      Backpropagation is presently one of the most widely used learning
      techniques in connectionist modeling.  Its popularity, however, is
      accompanied by many criticisms and concerns about its use and potential
      misuse.  There are four sorts of criticisms that one hears:

	      (1) it is a well-known statistical technique
		  (least squares)

	      (2) it is ignorant about the world in which it is
		  learning--thus the design of I/O is critical

	      (3) it is slow--(local minima, it's NP-complete)

	      (4) it is ad hoc--hidden units as "fairy dust"

      I believe these four types of criticism are based on fundamental
      misunderstandings about the use and relation of learning methods to the
      world, the relation of ontogeny to phylogeny, the relation of simple
      neural models to neuroscience, and the nature of "weak" learning
      theories.  I will discuss these issues in the context of some
      variations on backpropagation.
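
      Criticism (1) amounts to saying that backpropagation is just
      least-squares fitting carried out by gradient descent.  A minimal
      numpy sketch of that view follows; the XOR toy data, network size,
      and learning rate are illustrative assumptions, not material from
      the talk.

      # Illustrative sketch (not from the talk): backpropagation on a
      # one-hidden-layer network, written as plain gradient descent on a
      # least-squares error.  Data, sizes, and names are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy data: XOR, a classic case where hidden units are needed.
      X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
      T = np.array([[0.], [1.], [1.], [0.]])

      n_hidden, lr = 4, 0.5
      W1 = rng.normal(scale=0.5, size=(2, n_hidden))  # input -> hidden
      b1 = np.zeros(n_hidden)
      W2 = rng.normal(scale=0.5, size=(n_hidden, 1))  # hidden -> output
      b2 = np.zeros(1)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for epoch in range(5000):
          # Forward pass.
          H = sigmoid(X @ W1 + b1)          # hidden activations
          Y = sigmoid(H @ W2 + b2)          # network outputs
          E = 0.5 * np.sum((Y - T) ** 2)    # the least-squares criterion of (1)

          # Backward pass: chain rule gives the error gradients.
          dY = (Y - T) * Y * (1 - Y)        # delta at the output layer
          dH = (dY @ W2.T) * H * (1 - H)    # delta propagated to hidden layer

          # Gradient-descent update on every weight and bias.
          W2 -= lr * H.T @ dY
          b2 -= lr * dY.sum(axis=0)
          W1 -= lr * X.T @ dH
          b1 -= lr * dH.sum(axis=0)

      print("final sum-squared error:", E)

      With an unlucky choice of starting weights the same loop can stall
      near a poor local minimum or crawl toward the solution, which is the
      practical content of criticism (3).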


P.S. Don't forget the talk this Friday (the 23rd) by Dave Touretzky
-- 
-------------------------------------------------------------------
Lorien Y. Pratt                            Computer Science Department
pratt@paul.rutgers.edu                     Rutgers University
                                           Busch Campus
(201) 932-4634                             Piscataway, NJ  08854