[comp.ai.neural-nets] B.P. nets as associative memory nets.

nadi@janus.berkeley.edu (Fariborz Nadi) (12/15/88)

Fellow neural-netters:

I am presently modeling microfabrication processes with neural nets,
specifically back-propagation nets, to learn the nonlinear relation between
the input and output nodes. That is the first part of the story. Now, having
a model, I would like to use an associative-memory type of network to learn
the groups of input-output pairs chosen by an expert. The point is to divide
the space of the model into subspaces that an expert finds interesting in
terms of some optimal choices he/she has in mind. The second net would then
help a novice make close-to-optimal choices for a given partial-input,
partial-output pair.
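
To make the first part concrete, below is a minimal sketch of the kind of
back-propagation net I mean (written in Python/numpy purely for illustration;
the toy nonlinear relation, layer sizes, and learning rate are my own
assumptions and are not the actual process model):

# Minimal back-propagation sketch, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))      # two made-up process inputs
Y = np.sin(np.pi * X[:, :1]) * X[:, 1:2]      # made-up nonlinear output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)                  # forward pass
    O = sigmoid(H @ W2 + b2)
    dO = (O - Y) * O * (1 - O)                # back-propagate squared error
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)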

I am trying to again use a back-propagation net, this time as the
associative-type net. The way it works, the second net takes as its input the
input and output of the first net, and produces that same pattern as its
output, thereby creating a mapping between similar patterns, kind of like an
8-3-8 network (coding decimal to binary and back to decimal). After the
network has learned this mapping, given a partial input and partial output
(the same), we can lock the values of those nodes and, holding the weights
and thresholds constant, change the values of the unknown input and output
nodes, given that the corresponding input and output nodes should change
together. This can be done using an optimization technique, not necessarily
a locally computable one.
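
Here is a rough sketch of that lock-and-relax step (Python/numpy,
illustrative only). Assume net(v) is the trained auto-associative forward
pass with weights and thresholds frozen, v is the concatenated input+output
vector, known is a boolean mask of the clamped nodes, and v_known holds the
values supplied by the expert or novice. The reconstruction-error objective,
step size, and numerical gradient are my own assumptions; note the gradient
here is a plain non-local one, which is exactly the kind of computation
question 2 below asks about avoiding.

import numpy as np

def complete(net, v_known, known, steps=500, lr=0.1, eps=1e-4):
    # Start the unknown nodes at 0.5; the known ones stay clamped.
    v = np.where(known, v_known, 0.5)
    for _ in range(steps):
        err0 = np.sum((net(v) - v) ** 2)      # reconstruction error
        g = np.zeros_like(v)
        for i in np.flatnonzero(~known):      # numerical gradient, free nodes only
            v2 = v.copy(); v2[i] += eps
            g[i] = (np.sum((net(v2) - v2) ** 2) - err0) / eps
        # gradient step on the free nodes, then re-clamp the known ones
        v = np.where(known, v_known, np.clip(v - lr * g, 0.0, 1.0))
    return v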

Now I have two questions:
1) What do you think about using back-propagation nets as an associative-type
   net, with the method described above or some other?
2) Has this work been done before, and if so, where is it published? What
   would be very interesting to me is an optimization technique for the
   second part that is locally computable.

I will publish all the replies together.