[comp.ai.neural-nets] Scaling I/O in Backpropagation networks

pvk90@pine.circa.ufl.edu (KALLE) (06/26/91)

	I am quite a novice in the field of neural nets and am having a
little trouble with a backpropagation network (BPN).
	I am using the network to map error parameters in machine tools,
which is essentially a curve-fitting problem. I use a sigmoidal activation
function and therefore scale all my I/O between 0.1 and 0.9.
	The scaling function that I use is:

				0.8 * (Fu - Fu_min)
		Fs   = 		-------------------    +   0.1
				(Fu_max  -  Fu_min)
	
	where
		Fu      - 	value to be scaled
		Fu_max  -	maximum value in the range of data.
		Fu_min  - 	minimum value in the range of data.
		Fs      -	data scaled between 0.1 and 0.9.
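
	In code, the scaling and its inverse look roughly like this (just a
sketch; the function and variable names are mine, for illustration):

	def scale(fu, fu_min, fu_max):
	    # Map a raw value fu into the interval [0.1, 0.9].
	    return 0.8 * (fu - fu_min) / (fu_max - fu_min) + 0.1

	def unscale(fs, fu_min, fu_max):
	    # Invert scale(): map fs in [0.1, 0.9] back to raw units.
	    return (fs - 0.1) * (fu_max - fu_min) / 0.8 + fu_min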

	I train the network to a desired convergence criterion, say a maximum
error of 0.001. After training, I test the network's performance on inputs
different from those used in the training set.
	To evaluate the performance of the network I compute the percent
error (I know what the correct output should be for the test inputs).
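	Concretely, the two percent errors are computed like this (again just
a sketch, reusing scale() and unscale() from above):

	def percent_error(predicted, actual):
	    # Percent error relative to the known correct value.
	    return abs(predicted - actual) / abs(actual) * 100.0

	# fs_net is the network's (scaled) output for one test input,
	# fu_true is the known correct value in raw units.
	pe_scaled   = percent_error(fs_net, scale(fu_true, fu_min, fu_max))
	pe_unscaled = percent_error(unscale(fs_net, fu_min, fu_max), fu_true)
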
	I observed that the percent error before unscaling the network output
differs from the percent error after unscaling. That much makes sense. But at
certain points in the test set the difference is very large: at one point the
error was 0.008% before unscaling and 124% after. This is quite disturbing. At
such points the network appears to predict fairly well, yet unscaling the
output suggests otherwise.
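	To show the effect, here is a made-up numerical case (the numbers are
invented, not taken from my data) where the true value happens to sit near
zero in raw units:

	fu_min, fu_max = -1.0, 1.0
	fu_true = 0.001               # true value close to zero in raw units
	fs_true = scale(fu_true, fu_min, fu_max)    # -> 0.5004
	fs_net  = 0.5008              # network output, off by only 0.0004

	print(percent_error(fs_net, fs_true))                           # ~0.08
	print(percent_error(unscale(fs_net, fu_min, fu_max), fu_true))  # 100.0

	The absolute error is the same size on both sides; what changes is the
denominator of the percent error, since the scaled targets are bounded away
from zero (never below 0.1) while the unscaled ones need not be.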
	I wonder if I am doing something wrong. If anyone has an answer to
this problem, I'd appreciate it if they could throw some light on the matter.

	THANKS IN ADVANCE.

	Prashant Kalle
	Graduate Student
	Mechanical Engineering
	University of Florida.