[comp.ai.neural-nets] optimization using neural networks

schultz@halley.est.3m.com (John C. Schultz) (02/10/91)

Background: Given a set of experimental data where the researcher varied
	the set of input parameters and measured the quality of the output.
 
Problem: How to recommend new input control parameters which would
	result in "BETTER" output(s).

My (non-optimal) Solution:

I trained a back-prop network on the existing data, attempting to accurately
model the "N-dimensional response surface" of the experimental data.  I can
then twiddle the input parameters to the trained network about the
experimental optimum(s).  Using these simulated experiments I can then look for
improved output(s) and recommend new experimental settings.
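
For concreteness, the "twiddle" step amounts to something like the sketch
below (minimal Python; `net', `x_best' and the search parameters are names
I made up for illustration, with `net' standing for the trained network):

    import numpy as np

    def local_search(net, x_best, scale=0.1, n_trials=1000, seed=0):
        # net:     trained surrogate, maps an input vector to predicted quality
        # x_best:  input settings of the best experimental run so far
        # scale:   size of the random perturbations around x_best
        rng = np.random.default_rng(seed)
        best_x, best_y = x_best, net(x_best)
        for _ in range(n_trials):
            x = x_best + scale * rng.standard_normal(x_best.shape)
            y = net(x)
            if y > best_y:          # looking for "BETTER" (larger) output
                best_x, best_y = x, y
        return best_x, best_y

The weakness shows up directly in the code: the number of trials needed to
cover a neighborhood grows quickly with the number of input variables.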

This approach is very crude, particularly for large numbers of input
variables. Does anyone have suggestions on ways to improve the search
efficiency?

The literature I have found on optimization with neural networks seems to deal
exclusively with the traveling salesman problem.  However, I don't think the
TSP is a good model for my situation, since I do not have a cost function for
moving from one data point to the next.

Thank you for any suggestions.
--
John C. Schultz                    EMAIL: schultz@halley.serc.3m.com
3M Company,  Building 518-01-1     WRK: +1 (612) 733-4047
1865 Woodlane Drive, Dock 4,       Woodbury, MN  55125
   How to include the taste of Glendronach in a multi-media system?

ajr@eng.cam.ac.uk (Tony Robinson) (02/11/91)

Newsgroups: comp.ai.neural-nets
Subject: Re: optimization using neural networks
References: <SCHULTZ.91Feb9161614@halley.est.3m.com>
Distribution: comp.ai.neural-nets
Organization: Cambridge University Engineering Department, UK

In article <SCHULTZ.91Feb9161614@halley.est.3m.com> schultz@halley.est.3m.com (John C. Schultz) writes:
#
#Background: Given a set of experimental data where the researcher varied
#	the set of input parameters and measured the quality of the output.
# 
#Problem: How to recommend new input control parameters which would
#	result in "BETTER" output(s).
#
#My (non-optimal) Solution:
#
#I trained a back-prop network on the existing data, attempting to accurately
#model the "N-dimensional response surface" of the experimental data.  I can
#then twiddle the input parameters to the trained network about the
#experimental optimum(s).  Using these simulated experiments I can then look for
#improved output(s) and recommend new experimental settings.
#
#This approach is very crude, particularly for large numbers of input
#variables. Does anyone have suggestions on ways to improve the search
#efficiency?

Since you know you can start near a good solution, why not back-propagate
the errors all the way to the inputs of your network?  Keeping the weights
fixed, these error derivatives can be used to do gradient descent in the
input space, which is a reasonable search strategy, as locally the surface
should be fairly simple.
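
As a rough illustration of what I mean (my own minimal sketch, assuming a
one-hidden-layer net with tanh hidden units and a linear output; none of
the names below come from [1]):

    import numpy as np

    def forward(x, W1, b1, w2, b2):
        # y = w2 . tanh(W1 x + b1) + b2
        h = np.tanh(W1 @ x + b1)
        return w2 @ h + b2

    def input_gradient(x, W1, b1, w2, b2):
        # dy/dx = W1^T (w2 * (1 - h^2)) by the chain rule through tanh:
        # the usual back-propagation pass, continued through to x itself.
        h = np.tanh(W1 @ x + b1)
        return W1.T @ (w2 * (1.0 - h ** 2))

    def climb(x0, W1, b1, w2, b2, lr=0.01, steps=200):
        # Gradient ASCENT in input space with the weights held fixed,
        # starting from the best experimental setting.
        x = x0.copy()
        for _ in range(steps):
            x = x + lr * input_gradient(x, W1, b1, w2, b2)
        return x

To drive the net towards a particular target output t, one would instead
descend on the squared error (t - y)^2 with respect to x, which is the
inversion problem treated in [1]; climbing the output directly is the
variant relevant here.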

Several people have suggested this technique; one good reference is [1].
What is the experience of the net in using it in practice?


Tony Robinson

[1] A. Linden and J. Kindermann, "Inversion of Multilayer Nets", pp. II-425
    to II-430, Proceedings of the International Joint Conference on Neural
    Networks, Washington DC, June 18-22, 1989.