[comp.ai.neural-nets] Help with TSP

jal@wsu-cs.uucp (Jason Leigh) (03/15/89)

I am having trouble understanding how to correctly implement a
solution to the Travelling Salesman Problem (TSP) on a Boltzmann machine.

I have read the pertinent parts of the PDP book and compared numerous
papers from the 1987 IEEE conference, but something still eludes me.

From what I understand, the objective is to minimize an energy
function whose terms both enforce a valid permutation matrix and
penalize tour length.
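For concreteness, here is a sketch of one such energy function in the
Hopfield-Tank style.  The unit layout V[x, j] = "city x at tour
position j" and the penalty coefficients A, B, C, D are my own
illustrative assumptions, not anything from this thread:

```python
import numpy as np

def tsp_energy(V, dist, A=500.0, B=500.0, C=200.0, D=1.0):
    """E' for an n x n unit matrix V, where V[x, j] = 1 means
    city x occupies tour position j."""
    n = V.shape[0]
    # Penalty: each city appears in at most one position (row constraint).
    row_term = sum(V[x, i] * V[x, j]
                   for x in range(n)
                   for i in range(n) for j in range(n) if i != j)
    # Penalty: each position holds at most one city (column constraint).
    col_term = sum(V[x, j] * V[y, j]
                   for j in range(n)
                   for x in range(n) for y in range(n) if x != y)
    # Penalty: exactly n units on in total.
    count_term = (V.sum() - n) ** 2
    # Tour length: distance between cities in adjacent tour positions.
    length_term = sum(dist[x, y] * V[x, j] *
                      (V[y, (j + 1) % n] + V[y, (j - 1) % n])
                      for x in range(n) for y in range(n) if x != y
                      for j in range(n))
    return (A / 2) * row_term + (B / 2) * col_term \
        + (C / 2) * count_term + (D / 2) * length_term
```

For a valid permutation matrix the three constraint terms vanish and
only the tour-length term remains; any constraint violation adds a
large penalty.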

This energy function E' must be recast into the form of the Boltzmann
machine energy E.  This means the weights must be designed to represent
the hypotheses described by E'.  What I have read seems to suggest that
the coefficients of E' are to be used as the weights for E.  But the
question is: when we run the Boltzmann algorithm (flipping some state
Si, etc.), do the weights need to be readjusted?  The Si that appears
in E also appears in E', so flipping it changes E', and hence, it would
seem, the weights.
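My understanding of the recasting step, sketched under the assumption
that E' is quadratic in the unit states s (which a TSP energy of the
above kind is), so E'(s) = s^T A s + b.s can be matched term by term
against the Boltzmann energy E(s) = -1/2 s^T W s + theta.s:

```python
import numpy as np

def recast(A, b):
    """Read off fixed Boltzmann weights W and thresholds theta
    from the coefficients of a quadratic E'(s) = s^T A s + b.s."""
    W = -(A + A.T)              # symmetrized quadratic coefficients
    # For 0/1 units s_i^2 = s_i, so the diagonal of A folds into
    # theta, leaving W with no self-connections, as is usual.
    theta = b + np.diag(A)
    np.fill_diagonal(W, 0.0)
    return W, theta

# Check that E and E' agree on a random binary state.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
b = rng.normal(size=5)
W, theta = recast(A, b)
s = rng.integers(0, 2, size=5).astype(float)
E_prime = s @ A @ s + b @ s
E = -0.5 * s @ W @ s + theta @ s
```

The point is that W and theta depend only on the coefficients of E',
not on the current state s, so they are computed once, up front.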

Am I missing something fundamentally simple?

If any one can give me some assistance on this, perhaps elaborate
on how this is correctly done/interpreted, I would appreciate it.
Any additional references would also be helpful.


Thanks in anticipation.

Jason Leigh

marcoz@MARCOZ.BOLTZ.CS.CMU.EDU (Marco Zagha) (03/16/89)

See J.J. Hopfield and D.W. Tank, "Neural Computation of Decisions in
Optimization Problems," Biological Cybernetics 52 (1985), pp. 141-152.


>  [...] But the question is
>  when we use the Boltzmann algorithm, (flipping some state Si etc..)
>  do the weights need to be readjusted since the Si that pertains to E
>  should also pertain to E' and that would cause a change in E' and
>  hence a change in the weight.

The weights are fixed.  Only activations are adjusted.  The Hopfield
& Tank paper explains their TSP energy function.
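To make that concrete, here is a toy sketch of my own (not from the
paper): the weights W and thresholds theta are given once and held
fixed, and the annealing loop only flips unit states.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(s, W, theta):
    # Boltzmann machine energy for binary states s in {0, 1}.
    return -0.5 * s @ W @ s + theta @ s

def anneal(W, theta, steps=3000, T0=5.0, cool=0.995):
    """Simulated annealing with FIXED weights; only the states change."""
    n = len(theta)
    s = rng.integers(0, 2, size=n).astype(float)
    for t in range(steps):
        T = T0 * cool ** t                   # geometric cooling schedule
        i = rng.integers(n)
        # Energy gap of unit i (assumes zero diagonal, i.e. no
        # self-connections): gap = E(s_i = 0) - E(s_i = 1).
        gap = W[i] @ s - theta[i]
        p_on = 1.0 / (1.0 + np.exp(-np.clip(gap / T, -50, 50)))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# Toy energy: two mutually inhibiting units, each biased toward on,
# so the minimum-energy states have exactly one unit on.
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
theta = np.array([-1.0, -1.0])
s = anneal(W, theta)
```

Note that the flip probability is computed from W each time, but W
itself never changes; flipping Si changes the *value* of E (and E'),
not the coefficients they are built from.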

== Marco (marcoz@cs.cmu.edu)