[comp.ai.neural-nets] Lippmann paper - Retraction

sietsma@latcs1.oz.au (Jocelyn Sietsma Penington) (07/04/90)

In article <8245@latcs1.oz.au> I (sietsma@latcs1.oz.au) wrote:
>
>I have to disagree with you here a little.  This geometric analysis is 
>valid if the outputs of the units are approximated by step-functions 
>(= hard-limiters).  Using continuous output functions gives more power
>and allows a single hidden layer to suffice, but this analysis was useful
>in giving an upper limit of 2 hidden layers 3 years before the 1-hidden-layer
>proof arrived.

I was wrong - Hornik, Stinchcombe & White proved that a network using a 
single hidden layer with an arbitrary squashing function can approximate
any Borel measurable function to any desired degree of accuracy.  This 
includes using a step-function.  ('Multilayer Feedforward Networks are 
Universal Approximators', _Neural Networks_, Vol 2, No 5, 1989).
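
To see concretely why step-functions are enough, here is a toy Python
sketch of the one-dimensional case (my own construction, not the HSW
proof itself): a single hidden layer of hard-limiting units builds a
staircase whose worst-case error shrinks as you add units.

import math

def step(z):
    # hard-limiter: 0 below the threshold, 1 at or above it
    return 1.0 if z >= 0.0 else 0.0

def staircase_net(f, lo, hi, n):
    # Hidden unit j switches on at t_j; its output weight is the jump
    # f(t_j) - f(t_{j-1}), so the net outputs f(t_k) on [t_k, t_{k+1}).
    # Worst-case error is roughly max|f'| * (hi - lo) / n.
    ts = [lo + (hi - lo) * j / n for j in range(n + 1)]
    jumps = [f(ts[j]) - f(ts[j - 1]) for j in range(1, n + 1)]
    return lambda x: f(lo) + sum(w * step(x - t)
                                 for w, t in zip(jumps, ts[1:]))

net = staircase_net(math.sin, 0.0, math.pi, 100)
worst = max(abs(math.sin(t) - net(t))
            for t in (math.pi * i / 1000 for i in range(1001)))
print("max error, 100 step units: %.4f" % worst)   # about 0.03

That is only the easy one-dimensional picture, of course; the theorem
itself covers Borel measurable functions on R^n.
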
I still think the geometric analysis is useful, particularly because, although
any problem _can_ be done with two layers of weights (a single hidden layer),
many would probably be solved "better" (fewer hidden units, maybe a better
error surface) with three or more.

(Thanks to Don Wunsch for correcting me)

Jocelyn

DSTO Materials Research Laboratory
Melbourne, Australia