rjf@ukc.ac.uk (Robin Faichney) (09/12/89)
I know next to nothing about NNs, so if you can't stand stupid questions, skip the rest of this. I want an application -- maybe an NN, maybe not -- to simulate a physical system. I suspect that either the simplest sort of NN or a statistical approach would do. It would be great if someone could explain the solution in terms an averagely competent programmer could implement, because I have to keep the time spent on this to an absolute minimum.

The input and output patterns of this system are analogous to large bitfields with a sparse distribution -- probably > 95% of the bits will be 0 on any given input/output event. The goal is for the NN to recognise sub-patterns within the input pattern and produce the appropriate output.

Training consists of a set of examples, each of which is an input/output pattern pair. It would be difficult to arrange feedback. Instead of an initial training period followed by performance, training and performance are finely interleaved throughout the life of the net (so it will respond to changing requirements), and the consequent very poor early performance is tolerated. Though the input/output relationship may be quite complex, a rough approximation of correct performance is acceptable.

Any takers? I'll leave it to your discretion whether posting or email is more appropriate. Thank you very much!

Robin Faichney
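
P.S. To make the problem concrete, here is a rough, untested sketch in C of the simplest sort of net I can picture for this: a single layer of threshold units whose weights are nudged toward the desired output after each example, so training and performance stay interleaved. I gather this is roughly the delta rule, but as I said I know next to nothing about NNs, so corrections are welcome. All sizes and names below are made up for illustration.

    /* Toy sketch only: a single-layer net of threshold units,
     * trained online by nudging weights toward the desired output
     * after each example (delta rule, as far as I understand it).
     * Sizes and names are arbitrary.
     */
    #include <stdio.h>

    #define N_IN   256          /* input bits (sparse, mostly 0)  */
    #define N_OUT  256          /* output bits (sparse, mostly 0) */
    #define RATE   0.1          /* learning rate                  */

    static double weight[N_OUT][N_IN];   /* connection weights */
    static double bias[N_OUT];

    /* Compute the net's output for one input pattern. */
    static void respond(const int in[N_IN], int out[N_OUT])
    {
        int i, j;
        for (j = 0; j < N_OUT; j++) {
            double sum = bias[j];
            for (i = 0; i < N_IN; i++)
                if (in[i])                 /* sparse: skip zero bits */
                    sum += weight[j][i];
            out[j] = (sum > 0.0) ? 1 : 0;  /* threshold unit */
        }
    }

    /* One interleaved training step: respond, then nudge the weights
     * toward the desired output for this example. */
    static void train(const int in[N_IN], const int want[N_OUT])
    {
        int out[N_OUT];
        int i, j;
        respond(in, out);
        for (j = 0; j < N_OUT; j++) {
            double err = (double)(want[j] - out[j]);
            if (err == 0.0)
                continue;
            bias[j] += RATE * err;
            for (i = 0; i < N_IN; i++)
                if (in[i])
                    weight[j][i] += RATE * err;
        }
    }

    int main(void)
    {
        /* Made-up example pair: sub-pattern {3,10,200} should set bit 7. */
        int in[N_IN] = {0}, want[N_OUT] = {0}, out[N_OUT];
        int k;
        in[3] = in[10] = in[200] = 1;
        want[7] = 1;

        for (k = 0; k < 20; k++)       /* a few interleaved steps */
            train(in, want);

        respond(in, out);
        printf("bit 7 after training: %d\n", out[7]);
        return 0;
    }

Whether something this naive can cope with a genuinely complex input/output relationship is exactly what I'm asking about.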