[comp.ai.neural-nets] implementation of "traveling salesman algorithm" on connection machine

artzi@cpsvax.cps.msu.edu (Ytshak Artzi - CPS) (01/27/89)

   Does anyone know about any implementation of efficient
algorithms for the Traveling Salesman Problem (TSP) on the
Connection Machine?
  I will appreciate any reference to literature, papers, ideas,
etc.
    Thanks,
            Izik Artzi
            CPS, Michigan State U.

jbn@glacier.STANFORD.EDU (John B. Nagle) (02/04/89)

      There's a lot of hype about this.  The problem isn't that hard if you
are willing to settle for a result that's almost optimal, rather than going
for the true optimum, which is what the "neural" approaches do anyway.
The business about it taking hours on a mainframe to solve a 30-node net
is pure bunk.  It takes less than 5 seconds on a PC/AT, using the 1970-vintage
Bell Labs algorithm.

					John Nagle

root@yale.UUCP (Celray Stalk) (02/04/89)

In article <1637@cps3xx.UUCP> artzi@cpsvax.cps.msu.edu (Ytshak Artzi - CPS) writes:
>
>   Does anyone know about any implementation of efficient
>algorithms for the Traveling salesman Problem (TSP) on the
>Connection Machine ?

Anyone who answers that question "Yes" is not telling the truth. The
TSP is known to take at least exponential time, CM or no CM. It is also
not clear to me that the CM is a good machine for this problem.
Using a massively parallel SIMD machine for a branch-and-bound
problem usually means leaving a great many of the processors idle.
Any simulated annealing or neural-net method would require the use
of the general router on the CM (to deal with an arbitrary weighted graph),
and that would be very slow. The CM is awful at communication
that cannot be mapped to an N-dimensional grid.

                  H. Scott Berryman
                  Yale U. Computer Science Dept.
                  51 Prospect St.
                  New Haven, CT 06520
                  

jwl@ernie.Berkeley.EDU (James Wilbur Lewis) (02/04/89)

In article <49632@yale-celray.yale.UUCP> berryman-harry@CS.YALE.EDU (Harry Berryman) writes:
-In article <1637@cps3xx.UUCP> artzi@cpsvax.cps.msu.edu (Ytshak Artzi - CPS) writes:
->
->   Does anyone know about any implementation of efficient
->algorithms for the Traveling salesman Problem (TSP) on the
->Connection Machine ?
-
-Anyone who answers that question "Yes" is not telling the truth. The
-TSP is known to take at least exponential time, CM or no CM. 
                      ^^^^^^^^^^^^^^^^^^^^

Excuse me?!! Has someone proven P <> NP and I just haven't heard about
it yet?

-- Jim Lewis
   U.C. Berkeley

Krulwich-Bruce@cs.yale.edu (Bruce Krulwich) (02/06/89)

>>   Does anyone know about any implementation of efficient
>>algorithms for the Traveling salesman Problem (TSP) on the
>>Connection Machine ?
>
>Anyone who answers that question "Yes" is not telling the truth. The
>TSP is known to take at least exponential time, CM or no CM. 

Is that true if there are O(2^n) processors available for solving an n-city
TSP??  This may very well be the case if using a CM.

Bruce Krulwich

root@yale.UUCP (Celray Stalk) (02/06/89)

>Is that true if there are O(2^n) processors available for solving an n-city
>TSP??  This may very well be the case if using a CM.
>
>Bruce Krulwich

2^16 = 64k = the biggest CM. Are we going to restrict ourselves to 16-city
problems?
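The arithmetic behind that objection can be made concrete. On the naive "one processor per candidate" reading of the O(2^n) suggestion, a full-size 65,536-processor CM tops out at n = 16 (a back-of-the-envelope sketch, not CM code):

```python
# Naive "one processor per candidate" reading of the O(2**n) proposal:
# the biggest Connection Machine has 2**16 = 65,536 processors, so find
# the largest n for which 2**n processors are available.
cm_processors = 2 ** 16

n = 0
while 2 ** (n + 1) <= cm_processors:
    n += 1

print(cm_processors, n)  # 65536 16
```

Past n = 16 the processor count would have to double for every extra city, which is the point of the 16-city quip.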

                     Scott Berryman
                     Yale University Computer Science Dept
                     51 Prospect 
                     New Haven CT  06520

pollock@usfvax2.EDU (Wayne Pollock) (02/07/89)

In article <1637@cps3xx.UUCP> artzi@cpsvax.cps.msu.edu (Ytshak Artzi) writes:
>   Does anyone know about any implementation of efficient
>algorithms for the Traveling salesman Problem (TSP) on the
>Connection Machine ?
>  I will appreciate any reference to literature, papers, ideas, etc.
>    Thanks,
>            Izik Artzi
>            CPS, Michigan State U.

Funny you should ask; we spent an entire lecture today discussing this
very problem in my Parallel Processing and Distributed Computing class.
The short answer is that for M cities, an M x M neural network can find
very close to optimal solutions in a very short time.  I've heard that
at AT&T Bell Labs they designed hardware, had it manufactured, and used
it to find the solution for a large problem.  The whole time used
(including manufacturing the chip) was less than the time it took using
traditional methods on a Cray-1!  (I am assuming that if the story is
true, the Cray-1 method found an optimal solution; even so, this is
impressive!)

(Quoted without permission from my class notes, by Dr. Stark:)
Hopfield and Tank[1985] describe a case involving over 100 cities in which
a neural network computes a solution which is only a fraction of 1% longer
than the optimal solution, in only a few milliseconds of computation.  They
remark that it would have taken weeks to have found an optimal solution
even using a Cray.

Wayne Pollock (The MAD Scientist)	pollock@usfvax2.usf.edu
Usenet:		...!{uflorida, codas}!usfvax2!pollock
GEnie:		W.POLLOCK

mbkennel@bogey.Princeton.EDU (Matthew B. Kennel) (02/07/89)

In article <1233@usfvax2.EDU> pollock@usfvax2.UUCP (Wayne Pollock) writes:
>Funny you should ask; we spent an entire lecture today discussing this
>very problem in my Parallel Processing and Distributed Computing class.
>The short answer is for M cities, an M X M neural network can find very
>close to optimal solutions in a very short time.  In AT&T Bell Labs, I've
>heard that they have designed hardware, had it manufactured, and used it
>to find the solution for a large problem.  The whole time used (including
>manufaturing the chip) was less than the time it took using traditional
>methods on a Cray-1!  (I am assuming that if the story is true, the
>Cray-1 method found an optimal solution; even so, this is impressive!)
>
>(Quoted without permission from my class notes (by Dr. Stark:)
>Hopfield and Tank[1985] describe a case involving over 100 cities in which
>a neural network computes a solution which is only a fraction of 1% longer
>than the optimal solution, in only a few milliseconds of computation.  They
>remark that it would have taken weeks to have found an optimal solution
>even using a Cray.
>
>Wayne Pollock (The MAD Scientist)	pollock@usfvax2.usf.edu
>Usenet:		...!{uflorida, codas}!usfvax2!pollock
>GEnie:		W.POLLOCK


I took Professor Eric Baum's seminar in neural computation this past
semester, and I recall reading a paper in which the authors were unable
to reproduce Hopfield & Tank's results in the TSP case; they even used a
scanner to analyze the published graph of H & T's cities and used their
exact coordinates (up to some limits, of course), but couldn't find as
good a solution as Hopfield and Tank claimed.  In other cases, they
claimed that they could not find as good tours either.  Unfortunately, I
don't remember the authors, but I believe the paper was titled something
like "On the Stability of Hopfield and Tank's Solution to the TSP
Problem".

My judgement is totally neutral---I know Prof. Hopfield is an excellent
and honest scientist---but I thought I should add that there may be a
second side to the story.

Also, I believe there are some indications that the TSP is, for one
reason or another, "easier" than some other kinds of interesting
problems, and perhaps the proposed solutions do not translate as well.
(This is hearsay.)


Matt Kennel

(this is line counter fodder)

manj@brand.usc.edu (B. S. Manjunath) (02/08/89)

In article <6173@phoenix.Princeton.EDU> mbkennel@bogey.UUCP (Matthew B. Kennel) writes:
 >In article <1233@usfvax2.EDU> pollock@usfvax2.UUCP (Wayne Pollock) writes:
 >>Funny you should ask; we spent an entire lecture today discussing this
 ......
 >>
 >>Wayne Pollock (The MAD Scientist)	pollock@usfvax2.usf.edu
 >>Usenet:		...!{uflorida, codas}!usfvax2!pollock
 >>GEnie:		W.POLLOCK
 >
 >
 >I took professor Eric Baum's seminar in neural computation this past
  [deleted]
 >don't remember the authors, but I believe the paper was titled something
 >like "On the Stability of Hopfield and Tank's Solution to the TSP
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 >Problem".  
 ^^^^^^^^
 >
 I believe the reference is:
  G.V. Wilson and G.S. Pawley (Dept. of Physics, Univ. of Edinburgh,
  UK), "On the Stability of the TSP Problem Algorithm of Hopfield and
  Tank," Biological Cybernetics, 58, 63-70, 1988.

 >My judgement is totally neutral---I know Prof. Hopfield is an excellent
 >and honest scientist---but I thought I should add that there may be a
 >second side to the story.

 >
 >Matt Kennel

 >(this is line counter fodder)

 bs manjunath

jbn@glacier.STANFORD.EDU (John B. Nagle) (02/08/89)

In article <1233@usfvax2.EDU> pollock@usfvax2.UUCP (Wayne Pollock) writes:
>(Quoted without permission from my class notes (by Dr. Stark:)
>Hopfield and Tank[1985] describe a case involving over 100 cities in which
>a neural network computes a solution which is only a fraction of 1% longer
>than the optimal solution, in only a few milliseconds of computation.  They
>remark that it would have taken weeks to have found an optimal solution
>even using a Cray.

       The same hype appeared in an article in Business Week.  Comparing
the time required to find a near-optimum solution with a hardware neural
net to that required to find an optimal solution on a Cray is misleading,
if not fraudulent.  While finding an optimal solution is expensive,
finding a near-optimal solution on a digital computer is quite efficient.
I can get solutions for a 100-node network on an IBM PC/AT (6 MHz!) in well
under two minutes, using the Bell Labs algorithm.  This is the right result
to compare with the neural net, not the brute-force optimal solution.

       The algorithm is quite simple.  Construct an arbitrary path connecting
all the nodes.  Compute its length.  Attempt to shorten the path by picking
two random edges on the path and cutting the path into three sections
by removing those edges.  Try the paths that can be constructed from the
three sections and pick the one with the shortest length.  That path
becomes the current working path.  Iterate the shortening process until
no improvement is observed for some length of time.  Output the working path.
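The shortening loop can be sketched in Python. This is an illustrative reconstruction of the procedure as described, not the actual Bell Labs code; for simplicity the candidate step just tries every ordering of the three sections, each optionally reversed:

```python
import itertools
import random

def path_length(path, dist):
    # total length of an open path visiting every node once
    return sum(dist[path[k]][path[k + 1]] for k in range(len(path) - 1))

def shorten_path(path, dist, max_stale=2000, seed=1):
    """Cut the path into three sections at two random edges, try every
    path that can be rebuilt from those sections (all orderings, each
    section optionally reversed), and keep the shortest.  Stop after
    max_stale cuts in a row yield no improvement."""
    rng = random.Random(seed)
    best, best_len = list(path), path_length(path, dist)
    stale = 0
    while stale < max_stale:
        # two distinct cut points split the path into three sections
        i, j = sorted(rng.sample(range(1, len(best)), 2))
        sections = (best[:i], best[i:j], best[j:])
        improved = False
        for order in itertools.permutations(sections):
            for flips in itertools.product((False, True), repeat=3):
                cand = []
                for sec, flip in zip(order, flips):
                    cand.extend(reversed(sec) if flip else sec)
                cand_len = path_length(cand, dist)
                if cand_len < best_len - 1e-12:
                    best, best_len = cand, cand_len
                    improved = True
        stale = 0 if improved else stale + 1
    return best, best_len

if __name__ == "__main__":
    # six points on a line, visited out of order (node k sits at x = k)
    start = [0, 2, 4, 1, 3, 5]
    dist = [[abs(a - b) for b in range(6)] for a in range(6)]
    tour, length = shorten_path(start, dist)
    print(length <= path_length(start, dist))  # True
```

This is the flavor of randomized local search (a restricted 3-opt) that Lin and Kernighan systematized, and it indeed needs nothing more exotic than a PC.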

       You don't need a Connection Machine for this.

					John Nagle

sdo@linus.UUCP (Sean D. O'Neil) (02/08/89)

In article <18090@glacier.STANFORD.EDU> jbn@glacier.STANFORD.EDU (John B. Nagle) writes:
>In article <1233@usfvax2.EDU> pollock@usfvax2.UUCP (Wayne Pollock) writes:
>>(Quoted without permission from my class notes (by Dr. Stark:)
>>Hopfield and Tank[1985] describe a case involving over 100 cities in which
>>a neural network computes a solution which is only a fraction of 1% longer
>>than the optimal solution, in only a few milliseconds of computation.  They
>>remark that it would have taken weeks to have found an optimal solution
>>even using a Cray.
>
>...  Comparing
>the time required to find a near-optimum solution with a hardware neural
>net to that required to find an optimal solution on a Cray is misleading,
>if not fraudulent.  While finding an optimal solution is expensive,
>finding a near-optimal solution on a digital computer is quite efficient.  
>I can get solutions for a 100-node network on an IBM PC/AT (6MhZ!) in well 
>under two minutes, using the Bell Labs algorithm.  This is the right result 
>to compare with the neural net, not the brute-force optimal solution.


Bravo!  Finally someone brings some sense into this discussion.  John
is absolutely right about this.  In point of fact, there are very good
reasons NOT to use Hopfield and Tank's neural net solution to the TSP
(for one, the coefficients don't stay the same for different-size
problems), aside from the fact that Lin and Kernighan's algorithm
already gives an adequate answer using relatively cheap computing
equipment.

The real point of Hopfield and Tank's work is that they took a 'hard'
problem and were able to get good solutions to it using the constrained
quadratic minimization performed by the Hopfield network.  This suggests
that one might be able to handle other similar 'hard' problems this way,
problems for which no good approximate algorithm like Lin and Kernighan's
exists.  The trick, of course, is finding a way to map whatever problem
you might have into a quadratic minimization suitable for a Hopfield net.
More likely than not, this is a *heuristic* mapping, so good performance
is not guaranteed.
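To make "constrained quadratic minimization" concrete, here is a sketch of the kind of energy function Hopfield and Tank minimize for the TSP. It is a simplified reading of their formulation, not their exact published one (their paper tunes specific coefficient values; A, B, C, Dcoef below are placeholders):

```python
def tsp_energy(V, D, A=1.0, B=1.0, C=1.0, Dcoef=1.0):
    """Quadratic 'energy' of a candidate assignment V for an n-city TSP.
    V[x][i] ~ 1 if city x occupies tour position i, else 0; D[x][y] is
    the distance between cities x and y.  The A, B, C terms penalize
    invalid assignments; the last term counts each tour edge twice when
    V is a valid permutation matrix."""
    n = len(V)
    # A: a city must not appear in two positions
    a = sum(V[x][i] * V[x][j]
            for x in range(n) for i in range(n) for j in range(n) if j != i)
    # B: a position must not hold two cities
    b = sum(V[x][i] * V[y][i]
            for i in range(n) for x in range(n) for y in range(n) if y != x)
    # C: exactly n entries should be switched on
    c = (sum(map(sum, V)) - n) ** 2
    # D: distance between cities adjacent in the (cyclic) tour order
    d = sum(D[x][y] * V[x][i] * (V[y][(i + 1) % n] + V[y][(i - 1) % n])
            for x in range(n) for y in range(n) if y != x for i in range(n))
    return (A * a + B * b + C * c + Dcoef * d) / 2.0
```

For a valid tour the three constraint terms vanish and the energy reduces to the tour length (e.g. 4.0 for four cities at the corners of a unit square, visited in order). The "heuristic mapping" point is visible here: nothing forces the network to settle on a V that satisfies the constraints, which is one reason good performance is not guaranteed.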

Good work I have seen in this area includes a Hopfield net for doing 
multiple target data association for tracking, by Sengupta and Iltis 
at UCSB (see ICASSP '88 proceedings) and a neural net for 
superresolution of ultrasonic images by Jack Winters at Bell Labs 
(see ICNN '88).  An interesting fact is that the outputs of both these 
networks are interpreted as analog values.  This contrasts sharply
with the 0-1 interpretation usually forced on Hopfield nets.


Sean O'Neil

pollock@usfvax2.EDU (Wayne Pollock) (02/09/89)

In article <18090@glacier.STANFORD.EDU> jbn@glacier.STANFORD.EDU (John B. Nagle) writes:
>
>       The same hype appeared in an article in Business Week.  Comparing
>the time required to find a near-optimum solution with a hardware neural
>net to that required to find an optimal solution on a Cray is misleading,
>if not fraudulent.  ...
> [description of simple algorithm which does the same job quickly on
>  traditional hardware]
>
>       [therefore] You don't need a Connection Machine for this.
>
>					John Nagle

I think you're missing the point.  You're quite right, you don't need
a connection (neural network) machine for this.  The point is, it can
be done on a connection machine.  It is interesting to see what other
problems that have generally been thought to be hard can be solved on
a connection machine.  Perhaps some insight into these problems or
into computing in general might result.

Wayne Pollock (The MAD Scientist)	pollock@usfvax2.usf.edu
Usenet:		...!{uflorida, codas}!usfvax2!pollock
GEnie:		W.POLLOCK