vu2jok@cs.tamu.edu (Jogen K Pathak) (07/31/90)
We are encountering problems while training the different paradigms, especially the Back-Propagation paradigm. The training is very time consuming and tedious. Can anyone help us choose training parameter values that can reduce the number of training sessions? We are working on pattern classification of moderate size, e.g. 100 input attributes. Any literature references would also be greatly appreciated.

Jogen and Rajan
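[For concreteness, the parameters usually in question are the learning rate, the momentum term, the hidden-layer size, and the epoch count. A minimal, hypothetical sketch — a toy XOR problem, not the posters' 100-attribute task, and all names here are illustrative — of batch back-propagation with momentum:]

```python
# Hedged sketch of plain batch back-propagation with momentum.
# eta (learning rate), alpha (momentum) and n_hidden are the knobs
# most often tuned to cut down training time; values here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in problem: XOR with 2 inputs, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden, eta, alpha, epochs = 4, 0.5, 0.9, 2000

W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(W1, b1, W2, b2):
    p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(((p - y) ** 2).mean())

loss0 = mse(W1, b1, W2, b2)
for epoch in range(epochs):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss; sigmoid' = s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Momentum update: new step = alpha * previous step - eta * gradient.
    vW2 = alpha * vW2 - eta * (h.T @ d_out)
    vb2 = alpha * vb2 - eta * d_out.sum(axis=0)
    vW1 = alpha * vW1 - eta * (X.T @ d_h)
    vb1 = alpha * vb1 - eta * d_h.sum(axis=0)
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1
loss1 = mse(W1, b1, W2, b2)
print(f"MSE before {loss0:.3f}, after {loss1:.3f}")
```

[With momentum (alpha near 0.9) the effective step size grows along consistently-signed gradient directions, which is the usual first remedy for slow batch training.]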
ajayshah@aludra.usc.edu (Ajay Shah) (07/31/90)
In article <6985@helios.TAMU.EDU> vu2jok@cs.tamu.edu (Jogen K Pathak) writes:
>We are encountering problems while training the different paradigms,
>especially the Back-Propagation paradigm. The training is very time
>consuming and tedious. Can anyone help to choose the training parameters'
>values that can reduce the training sessions. We are working in pattern
>classification of moderate size, e.g. 100 input attributes.
>Any literature references also will be greatly appreciated.
>Jogen and Rajan.

I'd like to describe my experience. I worked with a small sample of 60 observations, with a discrete endogenous variable (taking the values 0/1/2) and 6 exogenous variables. I spent one evening trying to get a backprop network to produce sensible predictions on a separate set of 15 observations and failed miserably.

I used the BPS simulator program (a very nice program, in case you haven't tried it, except for the lack of offline operation) on a 386/387. Each estimation took something like 15 minutes. I tried a diverse set of topologies and couldn't get anything which performed well.

I don't know what I could be doing wrong. Does anyone have ideas on how to effectively converge upon backprop networks which work?

--
_______________________________________________________________________________
Ajay Shah, (213)747-9991, ajayshah@usc.edu
The more things change, the more they stay insane.
_______________________________________________________________________________
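[With only 60 observations and 6 inputs, the usual suspects for poor out-of-sample predictions are unscaled inputs and overfitting. A hedged sketch of two standard countermeasures — standardizing with training-set statistics only, and keeping the weights that score best on a held-out set. The data is synthetic, the target is simplified to binary, and none of this reflects how BPS itself works:]

```python
# Hypothetical sketch: input standardization plus validation-based
# model selection ("early stopping") on a 60-observation problem.
# A single logistic unit is used to keep the example short; the same
# idea applies to a full backprop network.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 60 observations, 6 exogenous variables,
# binary endogenous variable (simplified from Ajay's 0/1/2 case).
X = rng.normal(size=(60, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=60) > 0).astype(float)

X_train, X_val = X[:45], X[45:]
y_train, y_val = y[:45], y[45:]

# Standardize with *training* statistics only, then apply to both sets.
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sd
X_val = (X_val - mu) / sd

def log_loss(X, y, w, b):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return float(-(y * np.log(p + 1e-12)
                   + (1 - y) * np.log(1 - p + 1e-12)).mean())

w = np.zeros(6); b = 0.0
best = (np.inf, w.copy(), b)

for epoch in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    g = p - y_train                       # gradient of the log-loss
    w -= 0.1 * (X_train.T @ g) / len(g)
    b -= 0.1 * g.mean()
    v = log_loss(X_val, y_val, w, b)
    if v < best[0]:                       # remember the best-generalizing weights
        best = (v, w.copy(), b)

val_loss, w, b = best
p_val = 1.0 / (1.0 + np.exp(-(X_val @ w + b)))
acc = float(((p_val > 0.5) == y_val).mean())
print(f"best validation log-loss {val_loss:.3f}, accuracy {acc:.2f}")
```

[The point is that with 15 validation observations the training error alone says almost nothing; the weights you keep should be the ones chosen by the held-out score.]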