[comp.parallel] Function Minimization references?

walker@hpl-opus.hp.com (Rick Walker) (11/20/89)

I am taking a graduate-level course in parallel programming at
Cal State Chico.  For the term project I am writing a research paper
on parallel methods of non-linear function minimization that do not
use explicit knowledge of derivatives.

I would like to ask the members of this group for any leads in
this area: papers, textbooks, or conference proceedings.

What follows is a brief description of the problem:

  The application driving this work is the optimization of electronic
  circuits using a simulator program such as SPICE.  Using SPICE
  it is possible to evaluate the performance of a circuit with a
  particular set of element values and calculate a "figure of merit".
  
  By exploring the N-dimensional space formed by the various potential
  combinations of the N circuit element values, it is possible to converge
  to an (at least locally) optimal design.
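
  As a rough sketch of what the inner evaluation might look like (in
  Python, purely for illustration; the NETLIST_TEMPLATE and the
  parse_merit() routine below are hypothetical placeholders, not code
  from my actual setup):

    import os, subprocess, tempfile

    def figure_of_merit(element_values):
        # Hypothetical: fill a netlist template with the N element
        # values, run SPICE on the result, and parse a scalar
        # figure of merit out of the simulator's output.
        netlist = NETLIST_TEMPLATE.format(*element_values)  # assumed
        with tempfile.NamedTemporaryFile("w", suffix=".cir",
                                         delete=False) as f:
            f.write(netlist)
            path = f.name
        try:
            out = subprocess.run(["spice", path], capture_output=True,
                                 text=True, check=True).stdout
            return parse_merit(out)  # assumed output parser
        finally:
            os.unlink(path)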
  
  This application differs from the classical minimization formulation
  in that the derivative of the objective surface is not explicitly
  known.  This rules out many standard techniques such as Gauss-Seidel,
  Newton-Raphson, and Marquardt's method.  What is needed is a
  parallelizable algorithm that uses only function evaluations at
  various points in the space to converge efficiently to a minimum.
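
  One family that fits this constraint is direct search.  A minimal
  serial sketch, a simple "compass" (pattern) search of my own
  illustration rather than any published code:

    def compass_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
        # Derivative-free compass search: probe +/- step along each
        # coordinate axis, move to any improving point, and shrink
        # the step when no probe improves.  Only function values
        # are used; no derivatives are needed.
        x, fx = list(x0), f(x0)
        while step > tol:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    trial = x[:]
                    trial[i] += d
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
                        break
                if improved:
                    break
            if not improved:
                step *= shrink
        return x, fx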

  I have considered the Downhill Simplex method of Nelder and Mead, and
  also Powell's method, but neither of these techniques seems to be
  amenable to parallelization.  One contender would be a variant of the
  "simulated annealing" method, but I suspect it is so inherently slow
  that it would be unlikely to give much of a speedup over more
  efficient uni-processor algorithms.
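
  For concreteness, the serial-chain structure that worries me looks
  roughly like the sketch below (the cooling schedule and move
  generator here are arbitrary choices of mine, not a recommendation):

    import math, random

    def anneal(f, x0, T=1.0, cooling=0.999, scale=0.1, steps=10000):
        # Minimal simulated annealing: perturb the current point and
        # accept an uphill move with probability exp(-delta/T).
        # Note that step k cannot start until step k-1 has finished,
        # which is exactly what makes a coarse-grain speedup hard.
        x, fx = list(x0), f(x0)
        for _ in range(steps):
            trial = [xi + random.gauss(0.0, scale) for xi in x]
            ft = f(trial)
            if ft < fx or random.random() < math.exp((fx - ft) / T):
                x, fx = trial, ft
            T *= cooling
        return x, fx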

  I am mainly interested in algorithms suitable for coarse grain
  parallelism, i.e., multiple jobs running on a cluster of workstations.
  The typical time for one "function" evaluation (a SPICE job) is
  1-10 minutes, so communication overhead is minimal.  The number of
  processors available is in the range of 2-30.
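
  The obvious structure at this granularity is a synchronous poll:
  generate a batch of trial points, farm one SPICE job per point out
  to the cluster, and collect the results.  A sketch of one such step
  follows; multiprocessing.Pool is just a stand-in for dispatching
  jobs to remote workstations:

    from multiprocessing import Pool

    def parallel_poll_step(f, x, step, workers=16):
        # Evaluate all 2N axial trial points concurrently.  With
        # evaluations taking 1-10 minutes each, dispatch cost is
        # negligible at this grain.  f must be a module-level
        # (picklable) function for Pool.map to ship it to workers.
        trials = []
        for i in range(len(x)):
            for d in (step, -step):
                t = x[:]
                t[i] += d
                trials.append(t)
        with Pool(workers) as pool:
            values = pool.map(f, trials)
        best = min(range(len(trials)), key=values.__getitem__)
        return trials[best], values[best]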

If you have any information on any work that sounds related, I would
greatly appreciate hearing from you.  I can be reached at hplabs!walker
by email or 415-857-3754 by phone.  I will be happy to forward a summary
of any information I receive to all interested parties.

Thanks for your time and consideration,

Rick Walker

landman@hanami.Eng.Sun.COM (Howard A. Landman x61391) (12/04/89)

In article <7133@hubcap.clemson.edu> hplabs!walker@hpl-opus.hp.com (Rick Walker) writes:
>  ...
>  By exploring the N-dimensional space formed by the various potential
>  combinations of the N circuit element values, it is possible to converge
>  to an (at least locally) optimal design. ....

You should look into Bill Nye's (sp?) work at Berkeley a few years ago.
I believe the program was called "Delight" or something like that.  His
top level optimization algorithm was quite general and was applied to
several different problem domains (SPICE-based circuit optimization was
one of them).  I don't know how parallelizable it is.

	Howard A. Landman
	landman%hanami@eng.sun.com