[comp.ai] Genetic algorithms don't seem to work

nagle@well.UUCP (John Nagle) (07/05/89)

      David Goldberg's "Genetic Algorithms" offers a good overview of the
field, along with some sample programs.  The interesting thing is that
the sample programs, which work fine on trivial functions, don't do well
on complex ones.  In particular, it is claimed that genetic algorithms
do well in searching for maxima in hill-climbing problems dominated by
local maxima.  (Goldberg, p. 4, esp. figure 1.3).  But this does not
seem to be the case in practice.  What seems to happen is that as soon
as any individual in the population finds a good spot with a high fitness,
it reproduces rapidly and takes over the population.  The resulting
population is homogeneous, so crossover does nothing, and only mutation
can bring further progress.  Since mutation progresses by single-bit 
inversions, the space that can be explored by mutation has only as many
positions as there are bits in the chromosome.  Unless one of these few
positions is better than the good spot already found, which is unlikely
for many interesting functions, no further exploration of the space will
ever take place, and the algorithm remains stuck in a local maximum.
In any case, exploration by mutation is extremely inefficient.
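(To make the neighborhood argument concrete, here is a small Python
sketch -- mine, not from Goldberg's book: once the population is
homogeneous, single-bit inversion can reach only as many strings as
there are bits in the chromosome, a vanishing fraction of the space.)

```python
def one_bit_neighbors(chrom):
    """All strings reachable from chrom by inverting a single bit."""
    return [chrom[:i] + ("1" if chrom[i] == "0" else "0") + chrom[i + 1:]
            for i in range(len(chrom))]

chrom = "10110010"
neighbors = one_bit_neighbors(chrom)
# An 8-bit chromosome has exactly 8 one-bit neighbors, while the full
# search space holds 2**8 = 256 strings; a homogeneous population
# exploring by mutation alone sees only those 8 points per step.
```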

      So where do people get the idea that this works as an optimization
technique for hill-climbing problems dominated by local maxima?

     					John Nagle

giant@lindy.Stanford.EDU (Buc Richards) (07/06/89)

In article <12553@well.UUCP> nagle@well.UUCP (John Nagle) writes:
>
>      David Goldberg's "Genetic Algorithms" offers a good overview of the
>field, along with some sample programs.  The interesting thing is that
>the sample programs, which work fine on trivial functions, don't do well
>on complex ones.  In particular, it is claimed that genetic algorithms
>do well in searching for maxima in hill-climbing problems dominated by
>local maxima.  (Goldberg, p. 4, esp. figure 1.3).  But this does not
>seem to be the case in practice.  What seems to happen is that as soon
>as any individual in the population finds a good spot with a high fitness,
>it reproduces rapidly and takes over the population.  The resulting
>population is homogeneous, so crossover does nothing, and only mutation
>can bring further progress.  Since mutation progresses by single-bit 
>inversions, the space that can be explored by mutation has only as many
>positions as there are bits in the chromosome.  Unless one of these few
>positions is better than the good spot already found, which is unlikely
>for many interesting functions, no further exploration of the space will
>ever take place, and the algorithm remains stuck in a local maximum.
>In any case, exploration by mutation is extremely inefficient.
>
>      So where do people get the idea that this works as an optimization
>technique for hill-climbing problems dominated by local maxima?


Your dilemma is probably that your fitness-proportionate reproduction
is biased too much towards the fit.  A research issue is how best to
balance "exploration and exploitation".  The more the breeding rule is
biased towards the more fit individuals, the more exploitation is going
on at the expense of exploration.  In the limit, if only the best
individuals were mated, this would correspond closely to simple
hill-climbing.
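
(A minimal sketch of roulette-wheel selection with an explicit
selection-pressure knob; the `pressure` parameter is my own addition
for illustration, not a standard part of the algorithm.)

```python
import random

def roulette_select(pop, fitness, pressure=1.0):
    """Fitness-proportionate ("roulette wheel") selection.
    pressure > 1 exaggerates fitness differences (more exploitation);
    pressure = 0 ignores fitness entirely (pure random search)."""
    weights = [fitness(x) ** pressure for x in pop]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for x, w in zip(pop, weights):
        acc += w
        if acc >= r:
            return x
    return pop[-1]  # guard against floating-point round-off
```

With pop = [1, 2, 3, 4] and fitness(x) = x, pressure 1.0 gives the
best individual a 4/10 share of the wheel; pressure 3.0 gives it
64/100, illustrating how the bias toward the fit grows.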

On the other hand, if an individual's fitness was not used at all in
deciding how much mating it was going to be involved in, then the
result would be nearly a random search.  This is a search that is too
exploratory.

So in your experiments it appears that you biased the search too much
towards exploitation.  You may wish to modify how the "roulette
wheel" is divided.  This could also be a symptom of starting with
too small a population, or one not randomly generated.
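
(One common way from the GA literature to re-divide the wheel is
linear fitness scaling; this is my own sketch, and the choice of the
constant c is arbitrary.)

```python
def scaled_weights(raw, c=2.0):
    """Linear fitness scaling: rescale so the best individual gets
    about c times the average share of the roulette wheel, while the
    average share is preserved.  This damps takeover of the population
    by a single highly fit individual."""
    avg, best = sum(raw) / len(raw), max(raw)
    if best == avg:                        # homogeneous population
        return [1.0] * len(raw)
    a = (c - 1.0) * avg / (best - avg)     # slope
    b = avg * (1.0 - a)                    # intercept keeps avg fixed
    return [max(0.0, a * f + b) for f in raw]
```

For raw fitnesses [1, 2, 3, 10] the best individual holds 10/16 of
the unscaled wheel; after scaling with c = 2.0 it holds only twice
the average share, leaving more of the wheel to the rest.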

	Rob Richards
	Stanford University