[comp.ai.neural-nets] Interpreting a Neural Net.

ssen@rnd.GBA.NYU.EDU (Sahana Sen) (10/15/90)

I'm exploring the use of neural networks for modeling hierarchical decision
processes - specifically, whether a network can uncover on its own the sequence
of attributes or decision variables being used, without a prespecified
structure being built in.  Also, is it possible to interpret the connection
weights in an unstructured net to understand any relationship between the
inputs and the outputs?  Any pointers would be appreciated.  Please send mail to:
ssen@rnd.gba.nyu.edu

Thanks a lot.
S.Sen
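
[A minimal sketch, not from the original post, of one way the second question
is sometimes approached for a single-hidden-layer net: estimate each input's
relative influence on an output by summing products of absolute connection
weights over all paths through the hidden layer - a simple weight-product
heuristic, similar in spirit to Garson's weight-partitioning idea.  The layer
sizes and the random weights below are hypothetical stand-ins for a trained
network's weights.]

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical layer sizes and stand-in weights; in practice these would
    # come from a trained input -> hidden -> output network.
    n_in, n_hidden, n_out = 4, 3, 1
    W_ih = rng.normal(size=(n_hidden, n_in))   # input-to-hidden weights
    W_ho = rng.normal(size=(n_out, n_hidden))  # hidden-to-output weights

    # Relevance of input i to output o: sum over hidden units h of
    # |W_ho[o, h]| * |W_ih[h, i]|, normalised so each output's relevances
    # sum to 1.
    relevance = np.abs(W_ho) @ np.abs(W_ih)            # shape (n_out, n_in)
    relevance /= relevance.sum(axis=1, keepdims=True)

    for o in range(n_out):
        for i in range(n_in):
            print("input %d -> output %d: relative influence %.2f"
                  % (i, o, relevance[o, i]))

[Note that such weight-based measures only suggest which inputs carry
influence; they say nothing about the sequence in which a trained net uses
them, which is the harder, hierarchical part of the question.]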