[comp.ai.neural-nets] simulated annealing vs genetic algs.

androula@cb.ecn.purdue.edu (Ioannis Androulakis) (11/01/90)

   I have the following questions:

   1) How would you compare genetic algorithms with
   simulated annealing? Some people argue that, for lack
   of a sound theoretical foundation that could guarantee
   asymptotic convergence, GAs have nothing new to offer,
   and that GAs should therefore be classified as a
   "heuristic" rather than a "stochastic" search
   technique. Do you have any comments on this, or on any
   other issue concerning the comparison between GAs and
   SA?

   2) A lot of people are using simulated annealing
   techniques to search continuous domains. As far as I
   know, the proof of asymptotic convergence of SA, under
   suitable annealing schedules etc., was established
   under the assumption that the state transitions are
   performed in a discrete domain. If this is so, how
   legitimate is it to apply a convergence result derived
   for discrete state transitions to problems where the
   state transitions are performed in a continuous space?
   In other words, how can we prove convergence of
   continuous SA?

   3) Are you aware of any applications of simulated annealing
   to constrained non-linear optimization?
      -----------

   Thank you for your attention,

   ioannis

   androula@lips.ecn.purdue.edu

kingsley@hpwrce.HP.COM (Kingsley Morse) (11/02/90)

Comparing GAs to SA is a worthwhile exercise. John Holland showed that GAs
sample their search space "optimally", in a sense (his argument about
allocating trials to competing schemata). Darrell Whitley at Colorado State (?)
has solved a "traveling salesman" problem with more cities (47?) than any
other algorithm, SA included. The main question I have is: which is more
computationally efficient for large problems? I.e., which scales better for
applications with many variables?
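
To make the comparison concrete, here is a minimal sketch of the two loops
on a toy bit-counting problem (Python; the cost function, schedule,
population size, and mutation rate are arbitrary illustrative choices, not
anyone's published settings):

    import math, random

    N = 20                                   # toy problem: drive all bits to 1
    def cost(x):                             # lower is better; 0 is optimal
        return N - sum(x)

    def anneal(steps=5000, T0=2.0, alpha=0.999):
        # SA: a single state, local moves, and a cooling schedule.
        x = [random.randint(0, 1) for _ in range(N)]
        T = T0
        for _ in range(steps):
            y = x[:]
            y[random.randrange(N)] ^= 1      # flip one bit (local move)
            d = cost(y) - cost(x)
            if d <= 0 or random.random() < math.exp(-d / T):
                x = y                        # downhill always, uphill sometimes
            T *= alpha
        return x

    def ga(gens=200, pop=30, pmut=0.02):
        # GA: a population plus selection, crossover, and mutation.
        P = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop)]
        for _ in range(gens):
            P = sorted(P, key=cost)[:pop // 2]   # keep the better half
            while len(P) < pop:
                a, b = random.sample(P[:pop // 2], 2)
                c = random.randrange(1, N)       # one-point crossover
                child = [g ^ 1 if random.random() < pmut else g
                         for g in a[:c] + b[c:]] # bitwise mutation
                P.append(child)
        return min(P, key=cost)

    print(cost(anneal()), cost(ga()))        # both should reach 0 here

The structural difference is the whole story: SA perturbs one point under a
temperature, the GA recombines a population. Which wins on big problems is
exactly the scaling question above.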

landman@hanami.Eng.Sun.COM (Howard A. Landman) (11/03/90)

In article <1990Oct31.172833.18197@ecn.purdue.edu> androula@cb.ecn.purdue.edu (Ioannis Androulakis) writes:
>   If this is so, how legitimate is it to apply a
>   convergence result derived for discrete state
>   transitions to problems where the state transitions
>   are performed in a continuous space? In other words,
>   how can we prove convergence of continuous SA?

As long as you're using a digital computer, it has only finitely
many (discrete) values for floating point numbers, so your "continuous"
space is really a very fine discrete one.  QED.  (Unless you're using
arbitrary precision arithmetic.)
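
A quick Python check of that claim (my illustration, not part of the
argument above): between any double and the next representable one there
is a measurable gap, so the "continuous" state space SA walks on is
already a discrete grid.

    import math, struct

    # Doubles form a finite set: the gap to the next representable
    # number after 1.0 is exactly 2**-52.
    print(math.nextafter(1.0, 2.0) - 1.0)      # 2.220446049250313e-16

    # Reinterpreting the 64 bits as an integer makes this explicit:
    # adding 1 to the bit pattern yields the very next double.
    bits = struct.unpack("<q", struct.pack("<d", 1.0))[0]
    print(struct.unpack("<d", struct.pack("<q", bits + 1))[0] - 1.0)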

>   3) Are you aware of any applications of simulated annealing
>   to constrained non-linear optimization?

Standard cell (or gate array) placement.  E.g., Timberwolf.
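
Placement is exactly such a problem: a non-linear cost (wire length) with
hard constraints (no cell overlap), and annealers in the Timberwolf
tradition handle the constraints by charging violations to the cost
function. A generic sketch of that penalty idea on a made-up toy problem
(objective, constraint, and penalty weight are illustrative assumptions,
not Timberwolf's actual cost):

    import math, random

    # Toy constrained problem (made up for illustration):
    #   minimize   f(x, y) = (x - 3)^2 + (y - 2)^2
    #   subject to x + y <= 4
    def f(p):
        x, y = p
        return (x - 3.0) ** 2 + (y - 2.0) ** 2

    def energy(p, w=100.0):
        x, y = p
        # Constraint violations are charged to the cost as a quadratic
        # penalty, so the annealer may wander through infeasible states
        # at high temperature but is pushed back as it cools.
        return f(p) + w * max(0.0, x + y - 4.0) ** 2

    def anneal(steps=20000, T0=1.0, alpha=0.9995, step=0.1):
        p = [0.0, 0.0]
        T = T0
        for _ in range(steps):
            q = [p[0] + random.gauss(0, step),   # Gaussian move, continuous
                 p[1] + random.gauss(0, step)]
            d = energy(q) - energy(p)
            if d <= 0 or random.random() < math.exp(-d / T):
                p = q
            T *= alpha
        return p

    print(anneal())   # should land near the constrained optimum (2.5, 1.5)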

--
	Howard A. Landman
	landman@eng.sun.com -or- sun!landman

reczko@is6.ifistg.uucp (Martin Reczko) (11/07/90)

In article <1990Oct31.172833.18197@ecn.purdue.edu> androula@cb.ecn.purdue.edu (Ioannis Androulakis) writes:
>   2) A lot of people are using simulated annealing
>   techniques to search continuous domains. As far as I
>   know, the proof of asymptotic convergence of SA, under
>   suitable annealing schedules etc., was established
>   under the assumption that the state transitions are
>   performed in a discrete domain. If this is so, how
>   legitimate is it to apply a convergence result derived
>   for discrete state transitions to problems where the
>   state transitions are performed in a continuous space?
>   In other words, how can we prove convergence of
>   continuous SA?

I compared SA vs. GA for small feedforward nets. The reference below
describes a modified SA algorithm for continuous optimization. Its
continuous state-generation method is interesting, but too space-consuming
for NN applications: it needs O(n^2) storage, where n = problem dimension
= number of weights. I believe the paper also contains a convergence
argument for the continuous case.

@article{Vanderbilt:84,
  author  = {Vanderbilt, D. and Louie, S. G.},
  title   = {A Monte Carlo Simulated Annealing Approach
             to Optimization over Continuous Variables},
  journal = {Journal of Computational Physics},
  volume  = {56},
  year    = {1984},
  pages   = {259--271}
}
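
For flavor, the O(n^2) space is the n-by-n step-generating matrix: trial
moves are x' = x + Qu, and Q is adapted as the walk proceeds so that step
shapes track the local cost surface. A rough sketch of the idea (my
paraphrase; the grow/shrink rule below is a crude stand-in for the paper's
covariance-based update of Q):

    import math, random

    def rosenbrock(x):                       # standard continuous test function
        return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
                   for i in range(len(x) - 1))

    def anneal_matrix_steps(n=4, segments=100, moves=200, T0=10.0, alpha=0.9):
        # Trial moves are x' = x + Q u with u uniform in [-1, 1]^n; Q is
        # the n-by-n matrix (the O(n^2) storage) adapted between segments.
        x = [-1.0] * n
        Q = [[0.2 if i == j else 0.0 for j in range(n)] for i in range(n)]
        T = T0
        for _ in range(segments):
            accepted = 0
            for _ in range(moves):
                u = [random.uniform(-1.0, 1.0) for _ in range(n)]
                y = [x[i] + sum(Q[i][j] * u[j] for j in range(n))
                     for i in range(n)]
                d = rosenbrock(y) - rosenbrock(x)
                if d <= 0 or random.random() < math.exp(-d / T):
                    x, accepted = y, accepted + 1
            # Crude stand-in for the paper's covariance-based update:
            # grow steps when most moves are accepted, shrink otherwise.
            s = 1.1 if accepted > moves // 2 else 0.9
            Q = [[s * Q[i][j] for j in range(n)] for i in range(n)]
            T *= alpha
        return x

    x = anneal_matrix_steps()
    print(rosenbrock(x))   # should end far below the ~1212 at the start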

-- Martin
-- reczko@is.informatik.uni-stuttgart.de