[comp.theory.cell-automata] Missing the anthill for the ants?

marek@iuvax.cs.indiana.edu (Marek W Lugowski) (02/17/91)

All this talk of ants and my recent experiments with Computational Metabolism
(C. Langton's _Artificial Life_, pp. 343-368), a tiling in motion, make me
wonder about the utility of looking at ants through the prism of their final,
or evolved, states.  After all, these states are only fixed attractors with
respect to the local conditions and the chaotic space.  How much is there
of interest, computationally, in such attractors?  What can you learn from
reaching them about any computation of interest, other than discrimination a
la pattern recognition/categorization, in the manner of Widrow-Hoff or the
steepest-descent algorithms found in the simplest neural nets?  That's boring...

What I think is of interest is the path taken through the chaotic
space.  You may think of it as an informationally sensitive carrier
wave, a la FM-modulation.  There is enough richness specified in this
path to encode a lot more than simple partition of the input space in
a static recognition problem.  For this you don't even need evolution
to get started: I found out that for my tiles, which are no ants
believe you me :), for different initial arrangements, under a
particularly nasty set of rules that has the tiles leaping out of
their little selves to wrap each species (color) around each other
(you have to see the pictures...), I get stunningly different and
beautifully chaotic paths to ...you guessed it, an attractor, one
that, of course, I suspected anyway, since my rules don't change and I
designed them for a square gworld and ran on a torus one.  Now, if I
could only learn how to modulate the path to effect an effective
computation = controlled chaos...
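Since my tile rules are too much to spell out here, here is a toy sketch
(Python; the flip rule and grid size are invented for illustration, they are
not my tiles) of the distinction I'm after: iterate a deterministic CA on a
torus, record the whole path through state space, and detect the moment it
falls onto its attractor:

```python
# Toy CA on a torus: the point is to record the *path* through state
# space, not just the attractor it ends up on.  The flip rule below is
# made up for illustration.
import numpy as np

def step(grid):
    """Synchronous update with periodic (torus) boundaries: a cell
    flips when exactly 2 of its 4 von Neumann neighbors are on."""
    n = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
         np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return np.where(n == 2, 1 - grid, grid)

def trajectory(grid, max_steps=10000):
    """Iterate until a state repeats.  Returns the path (all states
    visited before the repeat) and the index where the cycle begins,
    or None if no repeat occurred within max_steps."""
    seen, path = {}, []
    for t in range(max_steps):
        key = grid.tobytes()
        if key in seen:
            return path, seen[key]
        seen[key] = t
        path.append(grid.copy())
        grid = step(grid)
    return path, None

g = np.random.default_rng(0).integers(0, 2, size=(8, 8))
path, start = trajectory(g)
if start is None:
    print("no repeat within", len(path), "steps")
else:
    print("transient length:", start, "cycle length:", len(path) - start)
```

The interesting object is `path` itself and how it varies with the initial
arrangement, not the cycle it ends on.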

In summary, worry about paths in chaotic space and modulating them (and
representing the process as a grammar, unfolding) instead of computing
arrivals.

Any thoughts, disagreements?

				-- Marek

cc'ed to alife@iuvax.cs.indiana.edu, *the* mailing list :)
(alife-request@iuvax for additions)

aboulang@bbn.com (Albert Boulanger) (02/19/91)

In article <1991Feb17.101444.17544@news.cs.indiana.edu> marek@iuvax.cs.indiana.edu
(Marek W Lugowski) writes:

   What I think is of interest is the path taken through the chaotic
   space.  You may think of it as an informationally sensitive carrier
   wave, a la FM-modulation.  There is enough richness specified in this
   path to encode a lot more than simple partition of the input space in
   a static recognition problem.  For this you don't even need evolution
   to get started: I found out that for my tiles, which are no ants
   believe you me :), for different initial arrangements, under a
   particularly nasty set of rules that has the tiles leaping out of
   their little selves to wrap each species (color) around each other
   (you have to see the pictures...), I get stunningly different and
   beautifully chaotic paths to ...you guessed it, an attractor, one
   that, of course, I suspected anyway, since my rules don't change and I
   designed them for a square gworld and ran on a torus one.  Now, if I
   could only learn how to modulate the path to effect an effective
   computation = controlled chaos...

   Any thoughts, disagreements?


Yes, two thoughts:

*****************************1************************************
Trajectories instead of basins for computation:

Asymmetric recurrent (Hopfield) neural nets have interesting (and
trainable) trajectories.

"Temporal Associations in Asymmetric Neural Networks", H. Sompolinsky
and I. Kanter, Phys. Rev. Lett., Vol 57, No 22, 2861-2864, 1986

"Statistical Mechanics of Neural Networks", H. Sompolinsky, Physics
Today, Dec. 1988, 70-80

"Hebbian Learning Reconsidered: Representation of Static and Dynamic
Objects in Associative Neural Nets", A. Herz, B. Sulzer, R. Kuhn,
and J.L. van Hemmen, Biol. Cybern., Vol 60, 1989, 457-467
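As a concrete (if cartoon) version of this, here is a sketch of the simplest
synchronous variant -- not Sompolinsky & Kanter's exact delayed-update model,
and the network size and pattern count are my own choices -- in which an
asymmetric Hebbian rule stores a *cycle* of patterns that the dynamics replay:

```python
# Asymmetric Hebbian rule: pattern mu "points at" pattern mu+1, so W is
# not symmetric and the net has a limit cycle rather than fixed points.
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 4                            # neurons, patterns in the cycle
xi = rng.choice([-1, 1], size=(p, N))    # random +/-1 patterns

# Store the sequence xi[0] -> xi[1] -> ... -> xi[p-1] -> xi[0]
W = sum(np.outer(xi[(mu + 1) % p], xi[mu]) for mu in range(p)) / N

s = xi[0].copy()
for t in range(1, 9):                    # two trips around the cycle
    s = np.sign(W @ s)
    overlap = (s @ xi[t % p]) / N        # stays near 1 if recall works
    print(f"step {t}: overlap with pattern {t % p} = {overlap:+.3f}")
```

Each synchronous update pushes the state to the *next* stored pattern, so the
trajectory itself, not a basin bottom, carries the memory.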

Freeman and others have been proposing the use of periodic
attractors for associative memory:

"Associative Memory in a Simple Model of Oscillating Cortex", Bill
Baird, NIPS 2 Proceedings, D. Touretzky, Ed., Morgan Kaufmann.

Backprop has been modified to work with networks that have feedback
connections among hidden-layer units, giving them the ability to learn
phase-space trajectories. Many people have worked on this one:

"Learning State Space Trajectories in Recurrent Neural Networks", Barak
Pearlmutter, CMU Computer Science Report CMU-CS-88-191, Dec. 31, 1988

"Generalization of Back-Propagation to Recurrent Neural Networks",
Fernando Pineda, Phys. Rev. Lett., Vol 59, No 19, 2229-2232, 1987
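A cartoon of the idea (teacher forcing plus plain gradient descent, much
simpler than Pearlmutter's or Pineda's actual algorithms; the circular target
trajectory and the tiny network are made up for illustration):

```python
# Train a tiny tanh net so that each state y_t on a target phase-space
# trajectory (a circle) is mapped to the next state y_{t+1}.
import numpy as np

theta = 2 * np.pi / 20
ts = np.arange(21)
Y = 0.5 * np.stack([np.cos(theta * ts), np.sin(theta * ts)])  # 2 x 21

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((2, 2))

def loss_and_grad(W):
    H = np.tanh(W @ Y[:, :-1])       # predicted next states
    E = H - Y[:, 1:]                 # error against the target path
    grad = (E * (1.0 - H * H)) @ Y[:, :-1].T
    return 0.5 * np.sum(E * E), grad

losses = []
for epoch in range(500):
    loss, grad = loss_and_grad(W)
    losses.append(loss)
    W -= 0.1 * grad                  # plain gradient descent
print(f"loss {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The weights end up encoding the whole loop through state space, which is the
trajectory-as-computation point Marek is making.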

Optical feedback systems using 4-wave mixing in photorefractive
materials also have interesting associative cycling behavior.

*******************************2*********************************
Chaos and computation

Actually, there are times when one wishes to use the ergodic properties
of chaos in computation. This is a way of doing search. There is an
annealing-like algorithm that makes use of this:


"Chaotic Optimization and the Construction of Fractals: Solution of an
Inverse Problem"
Giorgio Mantica & Alan Sloan
Complex Systems 3(1989), 37-62
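To give the flavor (this is my own toy, not Mantica & Sloan's algorithm, and
the test function and cooling schedule are invented): drive an annealing-style
minimizer with the ergodic iterates of the logistic map instead of a
random-number generator:

```python
# Use the logistic map at r = 4 (fully chaotic, ergodic on (0,1)) as the
# proposal source of a greedy, annealing-style minimizer.
import math

def logistic(z):
    return 4.0 * z * (1.0 - z)

def f(x):
    # multimodal test function on [0, 1]
    return (x - 0.1) ** 2 + 0.05 * math.sin(40.0 * x)

def chaotic_search(steps=5000):
    z = 0.3141                       # seed of the chaotic driver
    best_x, best_f = z, f(z)
    width = 0.5
    for _ in range(steps):
        z = logistic(z)              # next chaotic iterate
        x = min(max(best_x + width * (z - 0.5), 0.0), 1.0)
        if f(x) < best_f:            # greedy acceptance
            best_x, best_f = x, f(x)
        width *= 0.999               # "cooling": shrink the search radius
    return best_x, best_f

x, fx = chaotic_search()
print(f"found x = {x:.3f} with f(x) = {fx:.4f}")
```

The chaotic orbit visits the proposal interval densely, which is exactly the
ergodic property being exploited for search.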


Finally, here is some recent work by Crutchfield and Young analyzing the
pattern-generation properties (using grammars) of systems on the verge
of chaos:

"Computation at the Onset of Chaos", James Crutchfield and Karl
Young, appearing in "Complexity, Entropy, and the Physics of
Information", W. Zurek, ed., Addison-Wesley, 1989.
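In that spirit, a crude sketch (mine, not Crutchfield & Young's machinery):
compare the subword richness of the logistic map's binary itinerary at the
onset of chaos versus deep in the chaotic regime:

```python
# Binary itinerary of the logistic map (0 if x < 1/2, else 1), and the
# number of distinct length-L subwords -- a crude stand-in for the
# richness of the "grammar" the system generates.

def symbols(r, n=5000, x=0.4, skip=1000):
    out = []
    for i in range(skip + n):        # discard a transient, then record
        x = r * x * (1.0 - x)
        if i >= skip:
            out.append(0 if x < 0.5 else 1)
    return out

def distinct_words(s, L):
    return len({tuple(s[i:i + L]) for i in range(len(s) - L + 1)})

R_ONSET = 3.5699456718    # approx. period-doubling accumulation point
for r in (R_ONSET, 4.0):
    s = symbols(r)
    print(f"r = {r}:", [distinct_words(s, L) for L in (2, 4, 8)])
```

At the onset the subword count should grow slowly with L (a low-complexity
grammar), while at r = 4 it heads toward 2^L; Crutchfield & Young make this
precise with grammatical machine reconstructions.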



Harnessing chaos,
Albert Boulanger
aboulanger@bbn.com