[comp.ai.neural-nets] Learning arbitrary transfer functions

ryer@inmet.UUCP (11/21/88)

>    So, how do humans learn non-linear functions ?
>  
>        : you learn that x^2, for instance, is X times X.
>  
>     And how about X times Y ? How do humans learn that ?
>  
>        : you memorize it, for single digits, and
>        : for more than a single digit, you multiply streams
>           of digits together in a carry routine.
>     joe@amos.ling.ucsd.edu

Although my knowledge of neural nets is limited, I won't buy what is
written above.  Most persons can, for example, throw a baseball more
or less at the target in spite of gravity.  This requires a non-linear
calculation.  This is not done via multiplication tables.  Sure it is
done by "experience", but so are neural network calculations.

Mike Ryer
Intermetrics, Inc.

joe@amos.ling.ucsd.edu (Shadow) (11/30/88)

In article <163400002@inmet> ryer@inmet.UUCP writes:
 
>>    So, how do humans learn non-linear functions ?
>>  
>>        : you learn that x^2, for instance, is X times X.
>>  
>>     And how about X times Y ? How do humans learn that ?
>>  
>>        : you memorize it, for single digits, and
>>        : for more than a single digit, you multiply streams
>>           of digits together in a carry routine.
 
>Although my knowledge of neural nets is limited, I won't buy what is
>written above.  Most persons can, for example, throw a baseball more
>or less at the target in spite of gravity.  This requires a non-linear
>calculation.  This is not done via multiplication tables.  Sure it is
>done by "experience", but so are neural network calculations.
 
Hmm. I'm no expert on human learning, but I don't buy what's written above.

When I throw a baseball off the top of a ten-story building, I am very
bad at hitting that at which I aimed (e.g., students). This would lead
me to theorize that I have not learned a non-linear relationship.

All of this aside, I must note that the original article was misinterpreted.
That was unfortunate, as I was theorizing on ways to improve generalized
learning of non-linear mathematical relationships for data outside
of the training domain... results in this area were usually fairly dismal
in the experiments which I conducted.

Ideas:

	1. how about linear units on the output layer ?
	   (Idea care of Jeff Elman, ICS, CRL)
	2. sub-networks trained for sub-tasks.
	   (sub-networks mentioned to me in passing by Jeff Elman, ICS,CRL)
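To make idea 1 concrete, here is a minimal sketch (in modern terms) of a one-hidden-layer net with squashing hidden units but a *linear* output layer, trained by plain gradient descent to fit y = x^2 on [-1, 1]. Everything here — the toy target, the hyperparameters, the network size — is my own illustrative assumption, not a description of the experiments above:

```python
# Sketch of idea 1: tanh hidden layer, linear (unsquashed) output layer,
# full-batch gradient descent on squared error.  Toy problem: y = x^2.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 41).reshape(-1, 1)
Y = X ** 2

H = 8                                     # hidden units (arbitrary choice)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(10000):
    hidden = np.tanh(X @ W1 + b1)         # squashing hidden layer
    out = hidden @ W2 + b2                # linear output: no squashing
    err = out - Y
    # backprop for loss 0.5 * mean(err^2)
    gW2 = hidden.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - hidden ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
print(mse)  # small on the training interval; extrapolation beyond it would still fail
```

The point of the linear output units is that the target (0 to 1 here, but in general unbounded) need not be crammed into the saturating range of a sigmoid, so the output layer itself never limits the function values the net can represent.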

I welcome comments,
and actually, I would really like to hear from people who are experts on
human learning. This topic is obviously too hot for me to handle.

(feel free to send mail)

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
=					-				=
= "But why not play god ? "		-   joe@amos.ling.ucsd.edu	=
=		- un-named geneticist	-				=
=					-				=
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

aluko@Portia.Stanford.EDU (Stephen Goldschmidt) (12/01/88)

In article <5572@sdcsvax.UCSD.EDU> you write:

>All of this aside, I must note that the original article was misinterpreted.
>That was unfortunate, as I was theorizing on ways to improve generalized
>learning of non-linear mathematical relationships for data outside
>of the training domain... results in this area were usually fairly dismal
>in the experiments which I conducted.

I have done considerable work in modeling non-linear functions with
a program called ASPN (Algorithm for Synthesis of Polynomial Networks)
which I helped to develop at Barron Associates Inc. during 1986.
My experience was that polynomial functions (which is what ASPN
ultimately produces, though in the form of a network) are excellent 
for interpolations under certain conditions, but fail miserably
on extrapolation.  Part of the art is to configure your problem
so that the network is never asked to extrapolate.

An example:
  Suppose you want to predict the output of an unforced linear system
  of the form y'(t) = y(t) - b

  If you train your network to model the function y(t, b, y(0)) for t < 2
  and then evaluate the network on t = 3, you are asking it to extrapolate
  to values of t that it has never seen before.  This is too much to 
  ask of an economist, let alone a computer! :-)

  If, instead, you model the function y( y(t-1), y(t-2) )
  the network should discover that 
        y(t) = (1+e)*y(t-1) - e*y(t-2)
  which is not only an easier function to model, but also does not
  require explicit knowledge of b.  

  When you evaluate it on t=3, the network is not going to try to
  extrapolate (assuming that your input values of y(t-1) and y(t-2) 
  are in the range of the values used in training the network).

  Thus, it is often possible to turn an extrapolation problem into
  an interpolation problem.
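The reformulation above can be checked directly: for y'(t) = y(t) - b the exact solution is y(t) = b + (y(0) - b)*exp(t), and with unit time steps the two-lag recurrence y(t) = (1+e)*y(t-1) - e*y(t-2), with e = exp(1), holds exactly and independently of b. A small sketch (the particular values of b and y(0) are arbitrary):

```python
# Verify that the two-lag recurrence reproduces the exact solution of
# y'(t) = y(t) - b, for any b and y(0), with unit time steps.
import math

def y_exact(t, b, y0):
    # closed-form solution of y' = y - b
    return b + (y0 - b) * math.exp(t)

e = math.exp(1.0)
for b, y0 in [(2.0, 5.0), (-1.0, 0.5)]:
    for t in range(2, 6):
        lhs = y_exact(t, b, y0)
        rhs = (1 + e) * y_exact(t - 1, b, y0) - e * y_exact(t - 2, b, y0)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print("recurrence holds for all tested b, y(0)")
```

Since b cancels out of the recurrence, a network trained on pairs (y(t-1), y(t-2)) only ever needs to learn one fixed linear map, and evaluation at any later t stays inside the training range of its inputs.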


       Stephen R. Goldschmidt
        aluko@portia.stanford.edu
  The opinions herein are my own.