[comp.ai.neural-nets] Increase the speed of back prop.

LAL102@PSUVM.BITNET (11/22/89)

A few weeks ago, I posted a problem about my neural net simulation.
Since then I have received many helpful responses. Thank you guys.
My simulation is now working fine, but sometimes I feel that the training
process takes too long, or may not stop at all. I have the following questions:

1. Is there any way to speed up the training process (back-prop)?
2. Does it help to present the training pairs in a different sequence
   each time, and how? (See the sketch below for what I mean.)
3. What is the relationship between the no. of layers and the reliability
   of the network, or the "forgetfulness" of the network?
4. What is the significance of the no. of neurons in the input, hidden and
   output layers relative to the no. of inputs?
5. Does the network stop converging after the error is reduced to a certain
   value? And how does this value depend on the no. of training pairs and
   the no. of layers (and/or neurons/layer)?

   I am doing this simulation for an independent study course, and responses
   from you guys are actually my main source of information. Therefore, I'd
   be most grateful if you could kindly send me a note if you know the
   answer to any of the above questions. Thank you.
                                                Sincerely,
                                                 Lik Alaric Lau
                                         Computer Engineering, Penn State U.

ravula@neuron.uucp (Ramesh Ravula) (11/22/89)

>LAL102@PSUVM.BITNET writes: How do you increase the computation speed of the
>backpropagation algorithm?

In his paper "Increased Rates of Convergence Through Learning Rate Adaptation"
(Neural Networks, Vol. 1, pp. 295-307), Robert A. Jacobs gives some
heuristics which might help to achieve faster rates of convergence. I haven't
tried them yet, but you might. I will also be interested to know what kind of
results you get.
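
For flavor, here is a rough sketch of the kind of rule Jacobs describes:
each weight gets its own learning rate, which grows additively while the
gradient keeps its sign and shrinks multiplicatively when the sign flips.
This is my own sketch, not code from the paper, and the constants kappa,
phi and theta are illustrative values only:

    import numpy as np

    kappa, phi, theta = 0.01, 0.1, 0.7   # illustrative constants

    def adaptive_rate_step(w, grad, eta, dbar):
        # grad is dE/dw per weight; eta is a per-weight learning rate;
        # dbar is an exponential trace of past gradients.
        agree = dbar * grad
        eta = np.where(agree > 0, eta + kappa, eta)        # same sign: speed up
        eta = np.where(agree < 0, eta * (1.0 - phi), eta)  # sign flip: slow down
        w = w - eta * grad                                 # gradient descent step
        dbar = (1.0 - theta) * grad + theta * dbar         # update the trace
        return w, eta, dbar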

Thanks


Ramesh Ravula
GE Medical Systems 
Waukesha, WI 53151.

email:    {att|mailrus|uunet|phillabs}!steinmetz!gemed!ravula

			 or

          {att|uwvax|mailrus}!uwmcsd1!mrsvr!gemed!ravula