[comp.ai.neural-nets] Neuron Digest V6 #1

neuron-request@HPLABS.HP.COM ("Neuron-Digest Moderator Peter Marvit") (01/10/90)

Neuron Digest   Tuesday,  9 Jan 1990
                Volume 6 : Issue 1

Today's Topics:
                            Lyapunov measures
                          RE: Lyapunov Measures
                          Re: Lyapunov measures
                          Re: Lyapunov measures
                          Re: Lyapunov measures
                 Re: Minimum # of internal nodes to form
                 Re: Minimum # of internal nodes to form
                 Re: Minimum # of internal nodes to form
          Minimum # of internal nodes to form boolean function
        Re: Minimum # of internal nodes to form boolean function
        Re: Minimum # of internal nodes to form boolean function
        Re: Minimum # of internal nodes to form boolean function
                        Re: MUSIC and Neural Nets
                      Re: Re: MUSIC and Neural Nets


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Lyapunov measures
From:    andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head )
Organization: National Semiconductor, Santa Clara
Date:    10 Oct 89 00:14:27 +0000 

Can anyone suggest texts which discuss Lyapunov functions/measures?  I know
very little about them, except that they seem to be popular in describing
global stability criteria (e.g. Cohen & Grossberg, Hopfield) and so would
be a useful addition to the NN analyst's toolkit.  Unfortunately, I've come
across no books which deal with this on a level accessible to undergrad
maths.

Thanks in advance,

...........................................................................
Andrew Palfreyman               and the 'q' is silent,
andrew@dtg.nsc.com              as in banana                     time sucks

------------------------------

Subject: RE: Lyapunov Measures
From:    worden@ut-emx.UUCP (worden)
Organization: The University of Texas at Austin, Austin, Texas
Date:    21 Oct 89 08:06:41 +0000 


Lyapunov functions and stability criteria are one of the mainstays of
control theory (aka, linear systems theory).

There are many, many control theory texts ... at both the undergraduate and
graduate levels ... and across the spectrum from purely applied to pure
theorem-proof.

An hour or two spent browsing in your local engineering library would be an
excellent starting place.

  Sue Worden
  Electrical & Computer Engineering
  University of Texas at Austin

------------------------------

Subject: Re: Lyapunov measures
From:    ted@nmsu.edu (Ted Dunning)
Organization: NMSU Computer Science
Date:    22 Oct 89 02:16:27 +0000 



   Can anyone suggest texts which discuss Lyapunov functions/measures?

look in texts on chaos.  guckenheimer and holmes is the most complete, but
it is hardly accessible or obvious.  devaney's book on chaos must mention
lyapunov exponents and it is certainly accessible.

   Unfortunately, I've come across no books which deal with this on
   a level accessible to undergrad maths.

and you may not, depending on what you think of as undergraduate math.  but
here is a quick summary anyway...

the idea behind a lyapunov exponent (not measure) is that any smooth flow
defined by differential equations or an iterated map will transform some
small neighborhood around a fixed point to some other small neighborhood
around that same point.  if the original neighborhood is small enough, then
the transformation will be approximately linear (remember, a smooth flow or a
continuous and differentiable map).  this transformation can be described by
the directions in which stretching occurs and the amount of stretching.
the directions are called the eigenvectors and the amounts are called the
eigenvalues of the transformation.

the natural log of the largest of these eigenvalues (in magnitude) is called
the lyapunov exponent.  if it is negative then the fixed point is stable
since the neighborhood around it must contract toward the fixed point.  if
it is positive, then the fixed point is at best a hyperbolic fixed point
with some flow toward it and some away (i.e. it is a saddle point of the
flow).

in the case of rotating flow, we use what are called generalized
eigenvalues.  they still capture the essence of growth or shrinkage, but
not along a single axis.
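
[A minimal sketch of the recipe above, in Python with NumPy.  The Henon map
and its parameters are my own choice of example, not anything from the
original post; the point is simply: linearize the map at a fixed point, take
the eigenvalues of the Jacobian, and look at the log of the largest
magnitude.]

    import numpy as np

    a, b = 1.4, 0.3                       # classical Henon parameters

    def henon(p):
        x, y = p
        return np.array([1.0 - a * x**2 + b * y, x])

    def jacobian(p):
        x, _ = p
        return np.array([[-2.0 * a * x, b],
                         [1.0,          0.0]])

    # one of the two fixed points, found analytically
    x_star = (-(1 - b) + np.sqrt((1 - b)**2 + 4 * a)) / (2 * a)
    fixed = np.array([x_star, x_star])
    assert np.allclose(henon(fixed), fixed)

    # eigenvalues of the local linearization; the log of the largest
    # magnitude is the (local) lyapunov exponent at the fixed point
    eigvals = np.linalg.eigvals(jacobian(fixed))
    lyap = np.log(np.max(np.abs(eigvals)))
    print("eigenvalues:", eigvals)
    print("lyapunov exponent at the fixed point:", lyap)
    # lyap > 0 here, so this fixed point is a saddle: contraction along
    # one eigenvector, expansion along the other.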

ted@nmsu.edu
                        Dem Dichter war so wohl daheime
                        In Schildas teurem Eichenhain!
                        Dort wob ich meine zarten Reime
                        Aus Veilchenduft und Mondenschein

------------------------------

Subject: Re: Lyapunov measures
From:    demers@beowulf.ucsd.edu (David E Demers)
Organization: EE/CS Dept. U.C. San Diego
Date:    23 Oct 89 02:59:46 +0000 

[discussion of lyapunov exponents deleted]

Actually, I suspect the poster DID mean Lyapunov functions, NOT Lyapunov
exponent...  both stem from Lyapunov's :-) work around the turn of the
century.

A Lyapunov function is some function which obeys a number of properties.  I
have not seen any general introduction to Lyapunov functions, but haven't
looked hard either.

The key properties of a Lyapunov function for a physical system are that:

1. the function is bounded from below (or, with a change of sign, from
   above...),
2. it satisfies some smoothness criteria, and
3. it can be shown that any state change the system undergoes results in
   the value of the Lyapunov function not increasing.

Thus the system will eventually reach a stable state (which could be a limit
cycle; there may be more than one point at which the function reaches a
minimum).
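
[A standard textbook illustration of those three properties, not from the
original post.  For the one-dimensional system dx/dt = -x^3, take
V(x) = x^2; in LaTeX notation:]

    V(x) = x^{2} \ge 0, \qquad
    \dot V(x) = 2x\,\dot x = -2x^{4} \le 0

[V is bounded below, smooth, and never increases along a trajectory, so the
state settles at the minimum of V, namely x = 0.]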

In the context of neural nets, Hopfield showed that his "Energy" function
is a Lyapunov function for a Hopfield net, thus proving that such a net
will eventually reach an equilibrium.  Kosko proved essentially the same
result for the BAM.
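
[A minimal sketch of that Energy argument in Python/NumPy, my own
illustration rather than code from the post: with symmetric weights, zero
self-connections, and asynchronous threshold updates, E(x) = -x'Wx/2 never
increases, so the state must settle.]

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2.0                  # symmetric weights
    np.fill_diagonal(W, 0.0)             # no self-connections

    def energy(x):
        return -0.5 * x @ W @ x          # Hopfield "Energy"

    x = rng.choice([-1.0, 1.0], size=n)  # random initial state
    prev = energy(x)
    for _ in range(500):                 # asynchronous updates
        i = rng.integers(n)
        x[i] = 1.0 if W[i] @ x >= 0 else -1.0
        e = energy(x)
        assert e <= prev + 1e-12         # the Lyapunov property
        prev = e
    print("settled at energy", prev)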

The trick, of course, for any system is FINDING a Lyapunov function.  If
you can show isomorphism with some physical system for your abstract
system, then perhaps you can use known properties of the physical system by
analogy.

Dave

------------------------------

Subject: Re: Lyapunov measures
From:    enorris@gmuvax2.gmu.edu (Gene Norris)
Organization: George Mason Univ. Fairfax, Va.
Date:    24 Oct 89 14:21:24 +0000 

(stuff deleted)
>>>   Can anyone suggest texts which discuss Lyapunov functions/measures?

A classic textbook is LaSalle and Lefschetz, Stability Theory of Ordinary
Differential Equations. A more current reference is Morris Hirsch's lovely
paper in Neural Networks, vol. 2, no. 5 (1989).

Eugene Norris
CS Dept George Mason University Fairfax, VA 22032 (703)323-2713
enorris@gmuvax2.gmu.edu

------------------------------

Subject: Re: Minimum # of internal nodes to form
From:    mehra@s.cs.uiuc.edu
Date:    13 Oct 89 16:40:09 +0000 


>
>/* ---------- "Minimum # of internal nodes to form" ---------- */
>
>  I've been told that Robert Hecht-Nielsen recently proved that any
>boolean function of n inputs to n outputs can be realized with a neural
>net having n input nodes, n output nodes and 2n-1 intermediate nodes
>(a total of 3 layers). Is there any truth to this statement? Please
>forgive me if this has been discussed here before.

See Hecht-Nielsen's paper in the ICNN 87 proceedings.  Briefly, the argument
uses Kolmogorov's theorem (1957) and Sprecher's theorem (1972).
Kolmogorov's paper settled the notorious thirteenth problem of Hilbert:
whether a function of several variables can be represented as a sum of
functions (possibly composite) of only one variable. The trouble is that if
we use Kolmogorov's theorem, then there is no restriction on the form of the
function in each node.

Perhaps more relevant are the papers of George Cybenko. He proves that a
two-layer network (having only one hidden layer) of sigmoidal units, with
only a summation at the output, is sufficient to approximate any continuous
function arbitrarily well. However, there is no limit on the size of the
hidden layer. An important distinction between Hecht-Nielsen's and Cybenko's
results is that whereas the former talks about exact representation, the
latter talks about arbitrarily accurate approximation.
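
[A small sketch of the kind of network Cybenko's theorem covers, i.e. a
finite sum of sigmoids with a plain summation at the output:
g(x) = sum_j a_j * sigmoid(w_j x + b_j).  The target function, the random
hidden weights, and the least-squares fit of the output weights are my own
choices for the demo; the theorem itself only asserts that good coefficients
exist.]

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 400)
    target = np.sin(2 * np.pi * x)            # any continuous target on [0,1]

    hidden = 50                               # size of the single hidden layer
    w = rng.normal(scale=10.0, size=hidden)   # random hidden weights
    b = rng.uniform(-10.0, 10.0, size=hidden) # random hidden biases

    H = sigmoid(np.outer(x, w) + b)           # hidden activations, (400, 50)
    a, *_ = np.linalg.lstsq(H, target, rcond=None)   # output summation weights

    print("max error:", np.max(np.abs(H @ a - target)))
    # with more hidden units the fit typically improves, illustrating
    # the arbitrarily-accurate-approximation result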

Abu-Mostafa and Eric Baum also have interesting results in this direction
but I won't get into that here.

Pankaj {Mehra@cs.uiuc.edu}

------------------------------

Subject: Re: Minimum # of internal nodes to form
From:    aam9n@uvaee.ee.virginia.EDU (Ali Minai)
Organization: EE Dept, U of Virginia, Charlottesville
Date:    13 Oct 89 18:55:34 +0000 


Also of interest in this regard is the work of Halbert White at UCSD. He
and his co-workers have shown that any Borel measurable function from R^n
to R^m can be approximated with arbitrary accuracy using a single layer
of hidden nodes, finite in number and using one of a wide range of squashing
functions. In fact, from what I remember of Hal White's presentation at
IJCNN, he has now extended that to include any neuron transfer function
as long as:

1: It is not identically zero!
2: Its integral from minus infinity to plus infinity is greater than 0.

Within those limits, neurons can have all kinds of squiggly transfer
characteristics.

Hecht-Nielsen too had a similar result to present at the IJCNN. His
argument was actually quite simple to understand: overlapping sigmoids can
form an approximation to sinusoids (since steep sigmoids approximate steps,
and a large number of them can be superposed to get an arbitrarily good
"digitized" approximation of a sinusoid), and we know that superposed
sinusoids can approximate any other function, so sigmoids can too.  Of
course, we are talking about LARGE NUMBERS of sigmoids. Unfortunately, no
one knows how many.
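
[A small numerical illustration of that staircase argument, my own and not
from the post: steep sigmoids behave like shifted steps, and summing many
small steps gives a "digitized" approximation of a sinusoid.]

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.linspace(0.0, 1.0, 1000)
    target = np.sin(2 * np.pi * x)

    K = 200                                   # number of steps (= sigmoids)
    steepness = 2000.0                        # large slope -> nearly a step
    edges = np.linspace(0.0, 1.0, K + 1)
    approx = np.full_like(x, np.sin(2 * np.pi * edges[0]))
    for k in range(K):
        # each sigmoid contributes the jump between consecutive levels
        jump = np.sin(2 * np.pi * edges[k + 1]) - np.sin(2 * np.pi * edges[k])
        approx += jump * sigmoid(steepness * (x - edges[k + 1]))

    print("max error:", np.max(np.abs(approx - target)))  # falls as K grows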

One *extremely* important point to remember here: THAT AN APPROXIMATION
EXISTS DOES NOT MEAN THAT BACKPROPAGATION (or some such procedure) WILL
NECESSARILY FIND IT.

For the references to the work I mentioned above, see the proceedings of
IJCNN 89. Hal White's paper has references to his older ones. Also, there
was a paper with a similar proof in Neural Networks a few months ago. I
cannot recall the author's name, but I think he was Japanese.

# Abu-Mostafa and Eric Baum also have interesting results in this direction
# but I won't get into that here.

Can you please post the references, or send them by e-mail. Thanks.

Ali Minai

------------------------------

Subject: Re: Minimum # of internal nodes to form
From:    mehra@s.cs.uiuc.edu
Date:    17 Oct 89 01:19:00 +0000 


># Abu-Mostafa and Eric Baum also have interesting results in this direction
># but I won't get into that here.
>
>Can you please post the references

Abu-Mostafa's paper on "Random problems" suggests using the difference
between Kolmogorov Complexity and Shannon Entropy as a measure of "hidden
complexity".

Baum and Haussler's paper is called "What size net gives valid
generalization?"  They bring in several ideas from computational learning
theory, including the relationship between the VC dimension of a
1-hidden-layer network of linear threshold units (LTUs) and the number of
examples it can "shatter".
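
[From memory, so treat the constants as approximate: the headline bound in
Baum and Haussler's paper says that a one-hidden-layer threshold net with W
weights and N units will generalize to error about epsilon if it is trained
on roughly

    m \;=\; O\!\left(\frac{W}{\epsilon}\,\log\frac{N}{\epsilon}\right)

examples and classifies at least a fraction 1 - epsilon/2 of them
correctly.]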

You will probably find both papers in NIPS proceedings. I am not prepared
to answer any further questions on this matter.

Pankaj

------------------------------

Subject: Minimum # of internal nodes to form boolean function
From:    dave@boingo.med.jhu.edu (David Heath)
Organization: The Johns Hopkins Hospital-Body CT Imaging Lab, Baltimore
Date:    12 Oct 89 05:04:02 +0000 


  I've been told that Robert Hecht-Nielsen recently proved that any
boolean function of n inputs to n outputs can be realized with a neural net
having n input nodes, n output nodes and 2n-1 intermediate nodes (a total
of 3 layers). Is there any truth to this statement? Please forgive me if
this has been discussed here before.


Dave Heath      heath@cs.jhu.edu

------------------------------

Subject: Re: Minimum # of internal nodes to form boolean function
From:    ahmad@icsib6.Berkeley.EDU (Subutai Ahmad)
Organization: International Computer Science Institute, Berkeley
Date:    12 Oct 89 16:27:13 +0000 

In article <1989Oct12.050402.9790@boingo.med.jhu.edu>,
dave@boingo.med.jhu.edu (David Heath) writes:
> 
>   I've been told that Robert Hecht-Nielsen recently proved that any
> boolean function of n inputs to n outputs can be realized with a neural
> net having n input nodes, n output nodes and 2n-1 intermediate nodes
> (a total of 3 layers). Is there any truth to this statement? 


What transfer functions does he allow? Although it is true that any boolean
function can be realized with 3 layers, it was shown [1] that you need _at
least_ on the order of 2^n/n^2 linear threshold units to implement an
arbitrary boolean function.  (Note that this result is for n-input, 1-output
functions, but the n-input to n-output case is at least as difficult, so the
bound still applies.)  It is quite possible that with higher-order units you
can bring this down by a polynomial factor, but I doubt that you can make
much of a difference to the exponential.
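
[A minimal sketch of the "3 layers suffice" half of this, my construction
rather than anything from the post: give the net one hidden linear threshold
unit per satisfying input pattern (a minterm), plus an OR unit at the
output.  The hidden-unit count is exactly what blows up exponentially in n.]

    import itertools
    import numpy as np

    def make_threshold_net(f, n):
        """Build threshold-unit weights realizing f: {0,1}^n -> {0,1}."""
        minterms = [p for p in itertools.product([0, 1], repeat=n) if f(p)]
        # hidden unit k fires iff the input matches minterm k exactly
        W = np.array([[2.0 * bit - 1.0 for bit in p] for p in minterms])
        b = np.array([0.5 - sum(p) for p in minterms])
        return W, b, np.ones(len(minterms)), -0.5     # output unit is an OR

    def run(net, x):
        W, b, v, c = net
        h = (W @ x + b > 0).astype(float)    # hidden threshold layer
        return float(v @ h + c > 0)          # output threshold unit

    # example: 3-input parity, which needs all four of its minterms
    parity = lambda p: sum(p) % 2
    net = make_threshold_net(parity, 3)
    for p in itertools.product([0, 1], repeat=3):
        assert run(net, np.array(p, dtype=float)) == parity(p)
    print("parity of 3 inputs realized with", len(net[1]), "hidden units")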

Subutai Ahmad
International Computer Science Institute
ahmad@icsi.berkeley.edu

[1] Volper, D. J. and Hampson, S. E.  Connectionist Models of Boolean
Category Representation.  Biological Cybernetics, vol. 54, pp. 393-406, 1986.

------------------------------

Subject: Re: Minimum # of internal nodes to form boolean function
From:    honavar@goat.cs.wisc.edu (A Buggy AI Program)
Organization: U of Wisconsin CS Dept
Date:    12 Oct 89 18:03:12 +0000 

[[ Regarding Hecht-Nielsen proof ]]

I don't know anything about the proof. However, I have made runs with
backprop on a net with 2 input units, 3 hidden units, and 16 output units
(one output per possible boolean function of 2 variables) and have gotten it
to learn all of them successfully to a preset criterion. I used real-valued
activations between 0 and 1, a sigmoid output function, and real-valued
weights.
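
[A rough reconstruction of that experiment in Python/NumPy.  The learning
rate, the number of epochs, and the random seed are my guesses, not the
poster's settings; the point is only to show the 2-3-16 architecture being
trained on all 16 truth tables at once with plain backprop.]

    import itertools
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array(list(itertools.product([0.0, 1.0], repeat=2)))    # 4 patterns
    # column j of T is the truth table of the j-th boolean function of 2 inputs
    T = np.array(list(itertools.product([0.0, 1.0], repeat=4))).T  # (4, 16)

    rng = np.random.default_rng(42)
    W1 = rng.normal(scale=1.0, size=(2, 3));  b1 = np.zeros(3)
    W2 = rng.normal(scale=1.0, size=(3, 16)); b2 = np.zeros(16)

    lr = 0.5
    for epoch in range(50000):
        H = sigmoid(X @ W1 + b1)              # hidden activations, (4, 3)
        Y = sigmoid(H @ W2 + b2)              # outputs, (4, 16)
        dY = (Y - T) * Y * (1 - Y)            # backprop through output layer
        dH = (dY @ W2.T) * H * (1 - H)        # backprop through hidden layer
        W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

    # a "preset criterion" might be every output within 0.1 of its target
    print("worst output error after training:", np.abs(Y - T).max())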

Vasant Honavar
honavar@cs.wisc.edu

------------------------------

Subject: Re: Minimum # of internal nodes to form boolean function
From:    demers@beowulf.ucsd.edu (David E Demers)
Organization: EE/CS Dept. U.C. San Diego
Date:    16 Oct 89 04:40:27 +0000 

In a paper presented at the first ICNN, Hecht-Nielsen applied Kolmogorov's
superposition theorem to show that, as a consequence, any function from
{0,1}^n to R^m can be represented by m combinations of 2n+1 functions of
the n inputs.  Unfortunately it is NOT a constructive proof.
In network terms, you have 2n+1 hidden units, EACH WITH ITS OWN TRANSFER
FUNCTION, and no means for finding these.  And each of the m output units
has a unique function of the hidden unit activations.  The only
restrictions are that the hidden unit transfer functions must be monotonic
and continuous. [see paper for more details, Hecht-Nielsen, Kolmogorov's
Mapping Neural Network Existence Theorem, Proc. First IEEE Conf. on Neural
Networks, III-11, 1987]
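
[For reference, the superposition theorem in its usual single-output form;
the notation here is mine, in LaTeX:

    f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(
        \sum_{p=1}^{n} \psi_{pq}(x_p) \right)

with continuous one-variable functions \Phi_q and \psi_{pq}, the latter
independent of f.  In network terms, the 2n+1 inner sums are the hidden
units, each \Phi_q is that unit's private transfer function, and for m
outputs one keeps the same inner sums but uses a different set of outer
functions per output.]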

You might also look at Kolmogorov's paper [but you might not :-)]
Kolmogorov, On the Representation of Continuous Functions of Many Variables
by Superposition of Continuous Functions of One Variable and Addition,
Dokl. Akad. Nauk SSSR, 114, 953 (1957).  There has been some other work
also claiming similar results, and I believe a better proof exists, but
unfortunately I don't have a cite handy.

Dave DeMers                     demers@cs.ucsd.edu
Dept. of Computer Science
UCSD
La Jolla, CA  92093

------------------------------

Subject: Re: MUSIC and Neural Nets
From:    andrew@computing-maths.cardiff.ac.uk (Andrew Jones)
Organization: University of Wales College of Cardiff, Cardiff, WALES, UK.
Date:    27 Nov 89 18:12:42 +0000 

I would like to thank all those who have e-mailed responses to my original
request concerning musical applications of Neural Networks. Each message
was very welcome. I have replied to each message I have received, so
hopefully the mailers are already aware of my gratitude.

Some of the mailers asked if I could summarize responses to the net. A
number of people pointed me towards the work of Teuvo Kohonen. He has
published the following paper:

A Self-Learning Musical Grammar, or 'Associative Memory of the Second Kind'
International Joint Conference on Neural Networks, 1989, Vol 1, p. I-1 to
I-5.

Apparently the music generated has something of the flavour of J.S. Bach.

I understand that Teuvo Kohonen will present a "Neural Concert" on Monday
Jan. 15th, 1990 at the 1990 IJCNN. I wish I could be there!

Apart from this, the Computer Music Journal has devoted Vol. 13 nos. 3 & 4
to this topic.

Rick Romero (rr2p+@andrew.cmu.edu) stated that he is considering getting
involved in this area; Andrew McCallum (andrewm@eleazer.dartmouth.edu)
informed me that Jamshed Bharucha, Prof. of Psychology at Dartmouth
College, has been doing research into NN's and the perception of music for
quite a while. He can be reached at bharucha@dartmouth.edu.

Finally, I have not yet received a posting by Eliot Handelman, except that
Darrell Schiebel included at least part of it in his posting (things are V
E R Y S L O W getting through to us at the moment). Eliot Handelman is
saying, I think, that non-musicians should avoid this subject area.  One of
the people who e-mailed me stated that he disagreed with this opinion. I
suppose it depends what you mean by "musician".

Anyway, thanks again to all those who expressed an interest!

ANDREW
======

------------------------------

Subject: Re: Re: MUSIC and Neural Nets
From:    eliot@phoenix.Princeton.EDU (Eliot Handelman)
Organization: Princeton University, NJ
Date:    02 Dec 89 07:43:15 +0000 

;Finally, I have not yet received a posting by Eliot Handelman, except
;that Darrell Schiebel included at least part of it in his posting (things
;are V E R Y  S L O W getting through to us at the moment). Eliot Handelman
;is saying, I think, that non-musicians should avoid this subject area.
;One of the people who e-mailed me stated that he disagreed with this
;opinion. I suppose it depends what you mean by "musician".


Everyone can do, of course, exactly what they like.

My definition of a musician is "a person who tries to make a living from
his musical activities." This person was probably in training for 10 or 20
years, has to teach piano or theory to get by, etc.

I sometimes read papers written by people in AI that argue for the
relevance of their research because of its potential compositional utility.
This miffs me.  If the justification of the research is that it generates
music, then anyone who can do the same ought to be equally useful, which
includes countless hard-working composers who don't have the same access to
National Science Foundation grants.

On a music-theoretical, music-cognitive, music-perceptual front, there is a
ton of "research" floating around that is bloody amateurish from a musical
point of view, and I'm just wondering how it is that we musicians are being
usurped in our own professional capacities by non-professionals.

------------------------------

End of Neurons Digest
*********************