[comp.ai.neural-nets] Neuron Digest V5 #11

neuron-request@HPLABS.HP.COM (Neuron-Digest Moderator Peter Marvit) (03/05/89)

Neuron Digest	Saturday,  4 Mar 1989
		Volume 5 : Issue 11

Today's Topics:
   implementation of "traveling salesman algorithm" on connection machine
 Re: implementation of "traveling salesman algorithm" on connection machine
		 Re: implementation of "traveling salesman algorithm" on CM
 Re: implementation of "traveling salesman algorithm" on connection machine
 Re: implementation of "traveling salesman algorithm" on connection machine
	   Re: Re: implementation of "traveling salesman algorithm" on CM
					  Re: I've read this somewhere...
					 reference for intro to neural nets
					  principal components references?
					  Neural nets, TSP and Assignment
					  Probabilistic Logic Nodes (PLN)
					Re: Probabilistic Logic Nodes (PLN)
				 Inquiry about paper by Jeffrey and Rosner
				   Liapunov exponents & neural networks.
					  Query: ANN's in medical imaging?


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: implementation of "traveling salesman algorithm" on connection machine
From:    artzi@cpsvax.cps.msu.edu (Ytshak Artzi - CPS)
Organization: Michigan State University, Computer Science Department
Date:    Fri, 27 Jan 89 03:10:47 +0000 

[[ Editor's Note: While much of the (considerably edited) discussion below
about the traveling salesman problem is not strictly about neural nets, it is
an important problem, and the one which first provoked Hopfield and Tank's
now-famous paper.  This discussion also points out the weakness of seeing a
single technology as a panacea for all problems, as well as the dangers of
accepting everything one hears uncritically. -PM ]]

   Does anyone know of any implementation of efficient algorithms for the
Traveling Salesman Problem (TSP) on the Connection Machine?
  I would appreciate any reference to literature, papers, ideas, etc.
    Thanks,
            Izik Artzi
            CPS, Michigan State U.

------------------------------

Subject: Re: impl. of "traveling salesman algorithm" on connection machine
From:    root@yale.UUCP (Celray Stalk)
Organization: Yale University Computer Science Dept, New Haven CT 06520-2158
Date:    Fri, 03 Feb 89 23:44:20 +0000 


Anyone who answers that question "Yes" is not telling the truth.  Exact
solution of the TSP is believed to take at least exponential time, CM or no
CM.  It is also not clear to me that the CM is a good machine for this
problem.  Using a massively parallel SIMD machine for a branch-and-bound
problem usually means leaving a great many of the processors idle.  Any
simulated annealing or neural-net method would require the use of the
general router on the CM (to deal with an arbitrary weighted graph), and
that would be very slow.  The CM is awful at communication that cannot be
mapped to an N-dimensional grid.

                  H. Scott Berryman
                  Yale U. Computer Science Dept.
                  51 Prospect St.
                  New Haven, CT 06520
                  

------------------------------

Subject: Re: implementation of "traveling salesman algorithm" on CM
From:    songw@csri.toronto.edu (Wenyi Song)
Organization: University of Toronto, CSRI
Date:    Tue, 07 Feb 89 04:40:53 +0000 

In article <1233@usfvax2.EDU> pollock@usfvax2.UUCP (Wayne Pollock) writes:
>In article <1637@cps3xx.UUCP> artzi@cpsvax.cps.msu.edu (Ytshak Artzi) writes:
>
>Funny you should ask; we spent an entire lecture today discussing this
>very problem in my Parallel Processing and Distributed Computing class.
>The short answer is for M cities, an M X M neural network can find very
>close to optimal solutions in a very short time.  In AT&T Bell Labs, I've
>heard that they have designed hardware, had it manufactured, and used it
>to find the solution for a large problem.  The whole time used (including
						^^^^^^^^^^^^^^^^^^^^^^^^^^
>manufaturing the chip) was less than the time it took using traditional
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>methods on a Cray-1!  (I am assuming that if the story is true, the
^^^^^^^^^^^^^^^^^^^^
>Cray-1 method found an optimal solution; even so, this is impressive!)
>Hopfield and Tank[1985] describe a case involving over 100 cities in which
							^^^^^^^^^^
>a neural network computes a solution which is only a fraction of 1% longer
>than the optimal solution, in only a few milliseconds of computation.  They
>remark that it would have taken weeks to have found an optimal solution
>even using a Cray.


I gather what you referred to is

J. J. Hopfield and D. W. Tank, "Neural" Computation of Decisions in
		Optimization Problems, Biological Cybernetics, 52, 141-152, 1985

If that's the case, then ...

They showed simulation results for the 10-city problem, which needs 100
neurons.  They also showed some simulation results for the 30-city problem,
which requires 900 neurons.  But there is nothing about a 100-city problem.

Not only is the simulation of the 100-city problem difficult in terms of
computing time, but so is the hardware implementation of their original
model, because it needs 10,000 neurons.  If they are fully interconnected,
then you have to put another (100,000,000 - 10,000) synapses into the chip.
For comparison, DRAM is probably just reaching a density of 64 Mbits per
chip.


So what about the comparison in terms of speed? Let's look at the well-known
Lin-Kernighan algorithm.

S. Lin and B. W. Kernighan, An Effective Heuristic Algorithm for the Traveling-
		Salesman Problem, Operations Research, 21, 498-516, 1973

As reported, a typical 100-city problem requires about three minutes on their
GE635 computer to obtain the optimum with above 95% confidence. Note they
didn't use a Cray-1.

Note that we should conduct a fair comparison. If one insists on using a Cray
to enumerate all possible paths for the TSP, then ...

------------------------------

Subject: Re: impl. of "traveling salesman algorithm" on connection machine
From:    jbn@glacier.STANFORD.EDU (John B. Nagle)
Organization: Stanford University
Date:    Tue, 07 Feb 89 17:56:42 +0000 

In article <1233@usfvax2.EDU> pollock@usfvax2.UUCP (Wayne Pollock) writes:
>(Quoted without permission from my class notes (by Dr. Stark:)
>Hopfield and Tank[1985] describe a case involving over 100 cities in which
>a neural network computes a solution which is only a fraction of 1% longer
>than the optimal solution, in only a few milliseconds of computation.  They
>remark that it would have taken weeks to have found an optimal solution
>even using a Cray.

       The same hype appeared in an article in Business Week.  Comparing the
time required to find a near-optimum solution with a hardware neural net to
that required to find an optimal solution on a Cray is misleading, if not
fraudulent.  While finding an optimal solution is expensive, finding a
near-optimal solution on a digital computer is quite efficient.  I can get
solutions for a 100-node network on an IBM PC/AT (6 MHz!) in well under two
minutes, using the Bell Labs algorithm.  This is the right result to compare
with the neural net, not the brute-force optimal solution.

       The algorithm is quite simple.  Construct an arbitrary path
connecting all the nodes.  Compute its length.  Attempt to shorten the path
by picking two random edges on the path, and cut the path into three
sections by removing those edges.  Try the paths that can be constructed
from the three sections and pick the one with the shortest length.  That
path becomes the current working path.  Iterate the shortening process until
no improvement is observed for some length of time.  Output the working
path.

       You don't need a Connection Machine for this.
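
[[ Editor's note: a minimal Python sketch of the shortening loop described
above, assuming a closed tour and Euclidean distances; the particular set of
candidate reconnections and the fixed iteration count are illustrative
choices, not the Bell Labs code. ]]

import math, random

def tour_length(pts, tour):
    # total length of the closed tour visiting pts in the given order
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def shorten(pts, iterations=100000):
    tour = list(range(len(pts)))                 # arbitrary starting tour
    random.shuffle(tour)
    best = tour_length(pts, tour)
    for _ in range(iterations):
        i, j = sorted(random.sample(range(1, len(pts)), 2))
        a, b, c = tour[:i], tour[i:j], tour[j:]  # cut into three sections
        for cand in (a + b[::-1] + c,            # reverse the middle section
                     a + c + b,                  # swap the last two sections
                     a + c[::-1] + b):           # swap and reverse
            d = tour_length(pts, cand)
            if d < best:                         # keep the shortest so far
                tour, best = cand, d
    return tour, best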

					John Nagle

------------------------------

Subject: Re: implementation of "traveling salesman algorithm" on connection machine
From:    sdo@linus.UUCP (Sean D. O'Neil)
Organization: The MITRE Corporation, Bedford MA
Date:    Wed, 08 Feb 89 01:52:33 +0000 


Bravo!  Finally someone brings some sense into this discussion.  John is
absolutely right about this.  In point of fact, there are very good reasons
NOT to use Hopfield and Tank's neural-net solution to the TSP (for example,
the coefficients don't stay the same for different problem sizes), aside from
the fact that Lin and Kernighan's algorithm already gives an adequate answer
using relatively cheap computing equipment.

The real point of Hopfield and Tank's work is that they took a 'hard'
problem and were able to get good solutions to it using the constrained
quadratic minimization performed by the Hopfield network.  This suggests
that one might be able to handle other similar 'hard' problems this way,
problems for which no good approximate algorithm like Lin and Kernighan's
exists.  The trick, of course, is finding a way to map whatever problem you
might have into a quadratic minimization suitable for a Hopfield net.  More
likely than not, this is a *heuristic* mapping, so good performance is not
guaranteed.
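
[[ Editor's note: a small Python sketch of the idea above, my own
illustration rather than Hopfield and Tank's formulation: encode the problem
as a quadratic energy E(s) = -1/2 s.W.s - b.s over 0/1 units and let
asynchronous updates settle into a local minimum.  The weight matrix W, bias
b and the update schedule are assumptions of the sketch. ]]

import numpy as np

def hopfield_minimize(W, b, sweeps=100, seed=0):
    # W: symmetric weights with zero diagonal; b: bias vector.
    # Asynchronous 0/1 updates never increase E(s) = -0.5*s.W.s - b.s
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=len(b)).astype(float)
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(len(b)):
            new = 1.0 if W[i] @ s + b[i] > 0 else 0.0
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                  # reached a fixed point
            break
    return s

def energy(W, b, s):
    return -0.5 * s @ W @ s - b @ s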

Good work I have seen in this area includes a Hopfield net for doing
multiple target data association for tracking, by Sengupta and Iltis at UCSB
(see ICASSP '88 proceedings) and a neural net for superresolution of
ultrasonic images by Jack Winters at Bell Labs (see ICNN '88).  An
interesting fact is that the outputs of both these networks are interpreted
as analog values.  This contrasts sharply with the 0-1 interpretation
usually forced on Hopfield nets.


Sean O'Neil

------------------------------

Subject: Re: Re: implementation of "traveling salesman algorithm" on CM
From:    andrew@nsc.nsc.com (andrew)
Organization: National Semiconductor, Santa Clara
Date:    Sat, 11 Feb 89 03:12:05 +0000 

A recently reported "solution" by supercomputer to a long-standing maths
conjecture - that of the finite projective plane - now exists for planes up
to order 10 (Science News, Dec 24 & 31, 1988); thousands of hours of Cray
time were needed! This looks like a nice place for a net and a Cray to do
battle... the constraints are simply expressed (a small checker sketch
follows the list):

	- for order k, construct a square matrix with k**2 + k + 1 rows/columns
	- fill the matrix with 0's and 1's, such that:
	- every row contains exactly k + 1  1's
	- every possible pair of rows has exactly one column in which
	  both have a '1'
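
[[ Editor's note: a quick Python check of those constraints on a candidate
0/1 matrix, written as an illustration; the function name and the use of
numpy are my own choices. ]]

import numpy as np

def satisfies_constraints(A, k):
    n = k * k + k + 1
    if A.shape != (n, n) or not np.isin(A, (0, 1)).all():
        return False
    if not (A.sum(axis=1) == k + 1).all():      # exactly k+1 ones per row
        return False
    common = A @ A.T                            # common '1' columns per row pair
    off_diag = common[~np.eye(n, dtype=bool)]
    return bool((off_diag == 1).all())          # every pair shares exactly one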

I have successfully solved this for low orders using the linear "schema" cs
(constraint satisfaction) program from "Explorations...PDP".  Boltzmann is
lousy, but I haven't tried Harmony yet.

The net scales as O(k**4) in the number of nodes and somewhere between
O(k**4) and O(k**6) (to be resolved) in the number of weights.  For order 10,
one needs about 12K neurons and about 2M weights.  Hardware exists (which I
don't have) that can do over 1M connections per second (cps), so the problem
is clearly within range of current virtual net simulators.

Interested parties contact me at:

	Andrew Palfreyman, MS D3969		PHONE:  408-721-4788 work
	National Semiconductor				408-247-0145 home
	2900 Semiconductor Dr.			there's many a slip
	P.O. Box 58090				'twixt cup and lip
	Santa Clara, CA  95052-8090

	DOMAIN: andrew@logic.sc.nsc.com  
	ARPA:   nsc!logic!andrew@sun.com
	USENET: ...{amdahl,decwrl,hplabs,pyramid,sun}!nsc!logic!andrew

------------------------------

Subject: Re: I've read this somewhere...
From:    aboulang@bbn.com (Albert Boulanger)
Date:    Tue, 31 Jan 89 21:08:12 +0000 


In article <GRIFF.89Jan30135458@intelob.biin.com> griff@intelob.biin.com 
(Richard Griffith) writes:
>
>I seem to recall reading something like the following....
>
>   some researchers in neural-networking have created a [net|language|
>system|...] which was fed the equivalent of the basis for mathematical
>set theory; within a short time (read: < 1 day) the system completed the
>formation of all algebraic theorems, and was going on to calculus....

[[ Editor's note: In all probability, this person was thinking of Doug
Lenat's "AM" project, a "traditional" AI system done in the 70's. -PM ]]

I know of one neural-net reference that did number-theoretic discovery stuff:

"An Associative Hierarchal Self-Organizing System"
Barry R. Davis, IEEE Trans. Systems, Man, & Cybernetics, SMC-15, No.  4,
July/August 1985, pp 570-579.

He uses some ideas developed by Stuart Geman.

Albert Boulanger
BBN Systems & Technologies Corp.
aboulanger@bbn.com

------------------------------

Subject: reference for intro to neural nets
From:    artzi@cpsvax.cps.msu.edu (Ytshak Artzi - CPS)
Organization: Michigan State University, Computer Science Department
Date:    Fri, 10 Feb 89 01:27:16 +0000 


  For everyone interested in a comprehensive introduction to the neural
network field, I can recommend a very interesting book:

   "Neural Computers",  

     edited by Rolf Eckmiller,  Christoph v. d. Malsburg
     NATO ASI series
     Series F: Computer and Systems Sciences, Vol 41  (1988)


   -- Itzik Artzi --- Michigan State University ---

------------------------------

Subject: principal components references?
From:    Timothy J Van <pitt!cisunx!vangelde@CADRE.DSL.PITTSBURGH.EDU>
Organization: Univ. of Pittsburgh, Comp & Info Sys
Date:    11 Feb 89 16:02:09 +0000 

It seems that increasingly, people are using principal components analysis
(not to mention cluster analysis) as a way of interpreting, for example,
sets of patterns over hidden units.

For those of us who managed to miss that part of the requisite undergraduate
training in mathematics, can anyone recommend references that give a clear
and accessible introduction to the area?

Thanks,
Tim van Gelder

------------------------------

Subject: Neural nets, TSP and Assignment
From:    jal@pan (Jason Leigh)
Organization: Computer Science Department, Wayne State University
Date:    Sun, 12 Feb 89 03:14:38 +0000 

I am trying to solve the Assignment Problem (a maximum matching of minimal
weight in a bipartite graph) using neural nets (more specifically, the
Boltzmann Machine).  Has anyone done this already?  Is there a good way to
approach this besides converting the problem to the TSP and then solving that?

If you have any ideas on a good implementation or energy function for the
assignment problem, please let me know.

Thanks.
jal@zeus.cs.wayne.edu
Jason Leigh

------------------------------

Subject: Probabilistic Logic Nodes (PLN)
From:    artzi@cpsvax.cps.msu.edu (Ytshak Artzi - CPS)
Organization: Michigan State University, Computer Science Department
Date:    Sun, 12 Feb 89 20:54:42 +0000 


   I read a very interesting paper by I. Aleksander, Department of
Computing, Imperial College of Science and Tech., London:

     "Logical Connectionist Systems".

    He describes a neuron model called the PLN (Probabilistic Logic Node).
If, by chance, anyone else has read this paper (or knows the subject), maybe
they can answer some questions (or offer comments):

    1. Is there any research being done on the relationship between
        Neural Nets and Probabilistic Automata?

    2. Is anyone else doing research on PLNs?

    3. Has anyone tried to simulate a PLN (in software)?
       (I am.)

 The above-mentioned article can be found in "Neural Computers", NATO ASI
Series, Series F: Computer and Systems Sciences, Vol. 41

 Itzik Artzi, Comp. Sci., Michigan State University

 artzi@cpsvax.cps.msu.edu

------------------------------

Subject: Re: Probabilistic Logic Nodes (PLN)
From:    ian@shire (Ian Parberry)
Organization: Penn State University
Date:    Mon, 13 Feb 89 16:37:26 +0000 

As far as computation with probabilistic neural networks goes (as opposed to
learning), some elementary observations have been made in:

Parberry and Schnitger, "Relating Boltzmann Machines to Conventional Models
of Computation'', Neural Networks, Vol. 2., pp. 59-79, 1989.

I don't know whether the details carry over to your model, but you might
want to check it out.  For some background on the relationship between
complexity theory (for those unfamiliar with the term, it can be viewed as
an outgrowth of automata theory concerned with resource-bounded
computation) and neural networks you might want to consult:

Parberry and Schnitger, "Parallel Computation with Threshold Functions",
Journal of Computer and System Sciences, Vol. 36, No. 3, 1988.

Parberry, "A Primer on the Complexity Theory of Neural Networks", in "A
Sourcebook on Formal Techniques in Artificial Intelligence", R. B. Banerji
(Ed.), Elsevier, To Appear in 1989 (hopefully).

I apologize for appearing to have written a commercial for myself.  All of
the references on the subject known to me are mentioned in my articles
above.  Call by reference is more efficient than call by value.  :-) 

			Ian Parberry
  "The bureaucracy is expanding to meet the needs of an expanding bureaucracy"
  ian@psuvax1.cs.psu.edu  ian@psuvax1.BITNET  ian@psuvax1.UUCP  (814) 863-3600
 Dept of Comp Sci, 333 Whitmore Lab, Penn State Univ, University Park, Pa 16802

------------------------------

Subject: Inquiry about paper by Jeffrey and Rosner
From:    dmocsny@uceng.UC.EDU (daniel mocsny)
Organization: Univ. of Cincinnati, College of Engg.
Date:    Tue, 14 Feb 89 20:50:37 +0000 

W. Jeffrey and R. Rosner published a paper in the proceedings of the AIP
1986 neural network conference called "Neural Network Processing as a Tool
for Function Optimization." In this brief paper, they compared several
methods for minimizing several functions. They also promised to follow this
paper with a larger article in a reviewed journal. Does anyone know whether
these authors have done any more work in this area?

I am also interested in obtaining other references to recent work in
function optimization with neural nets. My main interest in this area is
with mixed-integer nonlinear programming.

Thanks.

Dan Mocsny
dmocsny@uceng.uc.edu

------------------------------

Subject: Liapunov exponents & neural networks.
From:    "Matthew B. Kennel" <phoenix!mbkennel@PRINCETON.EDU>
Organization: Princeton University, NJ
Date:    15 Feb 89 06:05:19 +0000 

Hello all.  I am writing a senior thesis on the application of various types
of artificial "neural networks" to predicting chaotic time series.

According to a theorem of Takens (I believe) one can reduce the time
evolution of a continuous system of dissipative differential equations to an
iterated map acting on a sampled time series of the phase space.

Specifically, calling "x" the phase space vector,

x(t+d) = f( x(t), x(t-d), x(t-2*d), ..., x(t-m*d) )

where "d" is the sampling interval and "m" depends on the dimension of the
attractor.

Now, I use some form of neural network or other functional basis (I'm
actually using a hybrid of a neural network and radial basis functions) to
approximate f, given some input time series of data.
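
[[ Editor's note: a short sketch of preparing training pairs for the delay
map above from a sampled scalar series; the variable names and the use of
numpy are illustrative, not Kennel's code. ]]

import numpy as np

def delay_pairs(series, m, d=1):
    # inputs:  ( s(t), s(t-d), ..., s(t-m*d) )      targets:  s(t+d)
    s = np.asarray(series, dtype=float)
    lags = np.arange(0, m * d + 1, d)               # 0, d, 2d, ..., m*d
    X, y = [], []
    for t in range(m * d, len(s) - d):
        X.append(s[t - lags])
        y.append(s[t + d])
    return np.array(X), np.array(y)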

Suppose the weights of my network have converged and I have approximated f as
best I can.  I want to compute Liapunov exponents from this map.  As f is in
some continuous form, I can calculate derivatives with respect to its inputs,
and hence the Jacobian matrix of this map.

Question #1: Suppose, for simplicity, that the phase space x is
1-dimensional.  If f only depended on one previous input (i.e. was a map
of R to R), then the Jacobian is obvious: a 1x1 "matrix" which is df/dx.

What if f depends on more than one past input, for example,
    x(t+1) = f( x(t), x(t-1), x(t-2) ).

Now, the Jacobian isn't even square... or do I just use df/dx_0, i.e. the
derivative w.r.t. the first input?  This seems a likely choice.


Question #2: Assuming that I can get the Jacobian (i.e. derivative) for a
1-d phase space, one can calculate the liapunov exponent by (from
_Deterministic Chaos_, Schuster 1988) (p. 25)

lambda = limit(N->infinity) {  1/N  Sum(i=1..N) { log | df/dx at x_i | } }

This is fairly easy to do, just the average over the attractor of the log of
the absolute value of the derivative.  This also has the property of being
numerically easy, i.e. nothing blows up as N -> infinity.
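
[[ Editor's note: a direct Python transcription of the 1-d formula above,
checked here against the logistic map x -> 4x(1-x), whose exponent is known
to be log 2; the example map and the transient-discard length are my own
choices, not from the post. ]]

import math

def liapunov_1d(f, dfdx, x0, n=100000, discard=1000):
    x = x0
    for _ in range(discard):            # let transients die out
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(dfdx(x)))
        x = f(x)
    return total / n                    # average of log|df/dx| over the orbit

# logistic map at r = 4: should come out near log(2) = 0.693...
print(liapunov_1d(lambda x: 4 * x * (1 - x), lambda x: 4 - 8 * x, 0.3))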

In the case of more than 1-d phase space the analogous expression is (p. 113)

(exp(lambda1), exp(lambda2), ..., exp(lambdam)) =
          limit(N->infinity) { magnitudes of the eigenvalues of
                               [ Product(i=1..N) J_i(x) ] } ^ (1/N)

For m=1, you easily regain the earlier formula if you take the logarithm of
both sides.

The problem I have with the second formula is: if J_i (the Jacobian matrix
at time i) has eigenvalues with magnitude greater than one (i.e. the system
has exponential divergence of trajectories), then the product of many such
matrices will blow up exponentially with N, i.e. the elements of the running
product matrix will explode.  This is a problem computer-wise.

All I care about are the eigenvalues.  Is there any way to compute the
eigenvalues of a product of matrices if this product blows up with
increasing N?  The eigenvalues of a product aren't the product of the
eigenvalues, right?

I know I can compare the divergence of 2 slightly separated trajectories,
but that only gives the largest Liapunov exponent; I'd like all of them so I
can use the Kaplan-Yorke formula. Also, considering I have derivatives
available it seems like a waste not to use them.
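
[[ Editor's note: one standard numerical workaround found in the dynamics
literature (e.g. Benettin et al.; Eckmann & Ruelle 1985), not suggested in
the post itself, is never to form the full matrix product: push an
orthonormal frame through the tangent dynamics and re-factor it with a QR
decomposition at every step, accumulating the logs of R's diagonal entries.
A Python sketch, with my own function names and a per-step re-factoring
schedule: ]]

import numpy as np

def liapunov_spectrum(step, jacobian, x0, n=10000):
    # step(x): next state of the map;  jacobian(x): its m x m Jacobian at x
    x = np.asarray(x0, dtype=float)
    m = x.size
    Q = np.eye(m)
    sums = np.zeros(m)
    for _ in range(n):
        Q, R = np.linalg.qr(jacobian(x) @ Q)   # re-orthonormalize the frame
        sums += np.log(np.abs(np.diag(R)))     # growth along each direction
        x = step(x)
    return sums / n                            # exponents per map iteration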


Question #3: Suppose I do get the Liapunov exponents for this iterated map
"f".  What relates them to the exponents of the original dynamical system
(i.e. the differential equation)?  It seems like there must be an easy
formula.


I'd appreciate any and all comments; either post or send e-mail to

mbkennel@phoenix.princeton.edu

6 x 10^23 thanks,
  Matt Kennel

ps: I am aware of Lapedes & Farber's work on this from 1987 but would like
to know about anything later or results from other people working on similar
problems.

------------------------------

Subject: Query: ANN's in medical imaging?
From:    lpress@venera.isi.edu (Laurence I. Press)
Organization: USC-Information Sciences Institute
Date:    Sat, 18 Feb 89 19:25:55 +0000 

I have been asked to investigate the possibility of using ANNs to analyze
and classify mammograms.  Can anyone refer me to projects analyzing
mammograms or related images?  Can anyone supply pointers to relevant texts
or articles on generic pattern recognition using ANNs?  I am more interested
in practical engineering applications of ANNs than in theory.

Thanks,
Larry Press
Internet: lpress@venera.isi.edu
or
10726 Esther Avenue, Los Angeles, CA 90064, (213)475-6515 or 516-3579

------------------------------

End of Neurons Digest
*********************