[comp.ai.neural-nets] Approximate Realisation of Piecewise Linear Functions

eeoglesb@cybaswan.UUCP (j.oglesby eleceng pgrad) (08/05/90)

------------------------------------------------------------------------------

I have recently had reason to consider the types of mappings that multilayer
feed-forward neural nets can perform when using weighted-summation nodes
with hard-limiting activation functions. All is not as clear as it might
be in the literature, so I'd like to get a consensus of opinion.

I have got as far as:

1 Layer (no hidden nodes)   -  gives a hyperplane that divides the input space
                               into two parts.
3 Layer (two hidden layers) -  gives ANY piecewise linear division of the input
                               space.
OK, that's the easy part. Now:

2 Layer (one hidden layer)  -  ANY single piecewise linear convex region
                               (see the sketch below).
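To make the convex case concrete: each hidden unit cuts the input space with
a half-plane, and the output unit ANDs the half-planes by thresholding their
sum.  A hand-built sketch (in Python/NumPy notation as pseudocode; the
triangle and all the weights are just my illustrative choice):

import numpy as np

def step(z):
    # Hard-limiting activation: 1 if z >= 0, else 0.
    return (np.asarray(z) >= 0).astype(float)

# Hidden layer: three half-planes whose intersection is a triangle.
W = np.array([[ 1.0,  0.0],     # x >= 0
              [ 0.0,  1.0],     # y >= 0
              [-1.0, -1.0]])    # x + y <= 1
b = np.array([0.0, 0.0, 1.0])

def net(x):
    h = step(W @ x + b)             # which half-planes contain x
    return step(np.sum(h) - 2.5)    # fire only if all three do (AND)

print(net(np.array([0.2, 0.2])))    # 1.0 : inside the triangle
print(net(np.array([0.8, 0.8])))    # 0.0 : outside

Any convex polytope works the same way: one hidden unit per face, with an
output threshold of (number of faces) - 0.5.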

Now, I can make some DISCONNECTED CONVEX regions and some DISCONNECTED
CONCAVE regions (one such construction is sketched below); however, I don't
think I can make ALL disconnected concave types of decision region with only
one hidden layer.
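For the record, here is one disconnected region that does come out of a
single hidden layer: two separate intervals on the line, using four threshold
units and alternating +/-1 output weights (again a hand-built example I chose
for illustration, not a general recipe):

import numpy as np

def step(z):
    return (np.asarray(z) >= 0).astype(float)

# Four 1-D hidden units with thresholds 0, 1, 2, 3.
thresholds = np.array([0.0, 1.0, 2.0, 3.0])
v = np.array([1.0, -1.0, 1.0, -1.0])    # alternating output weights

def net(x):
    h = step(x - thresholds)    # which thresholds x has passed
    return step(v @ h - 0.5)    # fires on [0,1] and [2,3] only

for x in (-0.5, 0.5, 1.5, 2.5, 3.5):
    print(x, net(x))    # 0, 1, 0, 1, 0 : two disconnected "on" intervals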

If I can't, then (changing the subject slightly) how can a single hidden
layer of perceptrons with sigmoidal activation functions approximate
arbitrary decision regions? Or are the approximations just very bad? Come to
think of it, a single perceptron can __APPROXIMATE__ any function; it's just
not a very good approximation!
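The standard answer for the sigmoid case, as I understand the
universal-approximation argument (Cybenko-style), is that two shifted
sigmoids in a hidden layer make an approximate "bump", and bumps can be piled
up into almost anything.  Roughly (the gain k here is my own knob):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, a, b, k=50.0):
    # Two hidden sigmoid units approximating the indicator of [a, b];
    # larger gain k gives sharper edges (exactly 0.5 right at the edges).
    return sigmoid(k * (x - a)) - sigmoid(k * (x - b))

x = np.linspace(-1.0, 2.0, 7)
print(np.round(bump(x, 0.0, 1.0), 3))
# [0.  0.  0.5 1.  0.5 0.  0. ] : ~1 inside [0,1], ~0 well outside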

Can anybody rationalise the decision boundaries for one-hidden-layer nets
with hard-limiting activation functions?

Can anybody tell me what good it is knowing that you can approximate
functions if the approximation is very bad? (Is it one of those "in the limit
the error goes to zero" cases?)
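My own reading is that it is exactly an in-the-limit statement: the
worst-case error can be driven as small as you like by adding hidden units,
but nothing says a small net does well.  A crude numerical illustration,
piling up sigmoid bumps (as above) to fit sin on [0, pi]; all the particular
choices here (k, the grid, the midpoint targets) are mine:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def approx_sin(x, n, k=200.0):
    # Approximate sin on [0, pi] with n equal-width sigmoid "bumps"
    # (2*n hidden units), each weighted by sin at the bump's midpoint.
    edges = np.linspace(0.0, np.pi, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    bumps = (sigmoid(k * (x[:, None] - edges[:-1]))
             - sigmoid(k * (x[:, None] - edges[1:])))
    return bumps @ np.sin(mids)

x = np.linspace(0.1, np.pi - 0.1, 200)
for n in (5, 20, 80):
    err = np.max(np.abs(approx_sin(x, n) - np.sin(x)))
    print(n, float(err))    # worst-case error shrinks as n grows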

John.


------------------------------------------------------------------------------
 John Oglesby.                        UUCP  : ...!ukc!pyr.swan.ac.uk!eeoglesb 
 Digital Signal Processing Group,     JANET : eeoglesb@uk.ac.swan.pyr         
 Electrical Engineering Dept.,        Phone : +44 792 205678  Ex 4564         
 University of Wales,                 Fax   : +44 792 295686                  
 Swansea, SA2 8PP, U.K.               Telex : 48358                           
------------------------------------------------------------------------------

bill@hooey.unm.edu (william horne) (08/08/90)

In article <1938@cybaswan.UUCP> eeoglesb@cybaswan.UUCP (j.oglesby eleceng pgrad) writes:
>------------------------------------------------------------------------------
>
>1 Layer (no hidden nodes)   -  gives a hyperplane that divides the input space
>                               into two parts.
>3 Layer (two hidden layers) -  gives ANY piecewise linear division of the input
>                               space.
>OK, that's the easy part. Now:
>
>2 Layer (one hidden layer)  -  ANY single piecewise linear convex region.
>
>Now, I can make some DISCONNECTED CONVEX regions and some DISCONNECTED
>CONCAVE regions; however, I don't think I can make ALL disconnected concave
>types of decision region with only one hidden layer.
>
>Can anybody rationalise the decision boundaries for one-hidden-layer nets
>with hard-limiting activation functions?
>

I had thought about this question a lot myself.  I realized what Lippmann
says isn't right, because I figured out how to make a donut-shaped region
with only a single hidden layer, but I couldn't figure out how to generalize
it (a sketch of the sort of construction I mean appears below).  I found
some answers in:

J. Makhoul, A. El-Jaroudi, and R. Schwartz, "Formation of Disconnected
Decision Regions with a Single Hidden Layer", in IJCNN89, Vol I,
pp. 445-460.

I forget the details, but I thought it was a good paper at the time.
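For what it's worth, here is the kind of donut I had in mind, written out as
a square ring (the region between [0,3]^2 and [1,2]^2).  The trick is that
the output unit need not simply AND the half-planes: counting the four outer
half-planes double and subtracting the four inner ones makes a non-convex
region fall out of one hidden layer.  The specific weights below are my own
choice, not the construction from the paper:

import numpy as np

def step(z):
    return (np.asarray(z) >= 0).astype(float)

# Eight hidden half-plane units: four for the outer square [0,3]^2,
# four for the inner square [1,2]^2.
W = np.array([[ 1,  0], [-1,  0], [ 0,  1], [ 0, -1],     # outer
              [ 1,  0], [-1,  0], [ 0,  1], [ 0, -1]],    # inner
             dtype=float)
b = np.array([0, 3, 0, 3, -1, 2, -1, 2], dtype=float)
v = np.array([2, 2, 2, 2, -1, -1, -1, -1], dtype=float)   # outer counted double

def net(p):
    h = step(W @ p + b)
    return step(v @ h - 4.5)    # fires only inside the ring

for p in [(1.5, 1.5), (0.5, 1.5), (2.5, 0.5), (3.5, 0.5)]:
    print(p, net(np.array(p)))  # hole, ring, ring, outside -> 0, 1, 1, 0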
Hope this helps...

-Bill