[comp.ai.neural-nets] Expert systems and neural nets

tshu@cs.warwick.ac.uk (Tim Shuttleworth) (01/10/91)

I am currently working on a project in the area of expert systems and neural
networks. I am expanding on some work that was done previously on replacing
the rule-base and inference engine of an expert system (written in Lisp)
with an artificial neural network (actually implemented using NeuralWare
Inc's Networks II software for the (dare I say it) PC). Since there seems to
be some interest in the subject, I thought I'd tell you about my
"contribution to the field".

The original expert system was written in Lisp (for its sins), and was a
forward-chaining production rule system, with no real frills; it had the
usual "why" (explaining why a question is being asked) and "how" (explaining
how a conclusion was reached) commands. The expert system was actually
designed to aid the user in interpreting features of X-ray topographic
slides, used in the study of (commonly, silicon) crystals. (And that is
where my knowledge of X Topography ends :-)
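
To give a flavour of the sort of thing the inference engine was dealing
with (this is NOT the actual rule syntax or content of our system, just an
illustrative Common Lisp sketch with generic names), a forward-chaining
production rule boils down to a pattern of question responses paired with a
conclusion:

    ;; Illustrative sketch only -- generic names, not our actual rules.
    ;; Each rule pairs a pattern of question responses with a conclusion.
    (defparameter *example-rules*
      '((rule-1 ((question-1 . yes) (question-2 . no))  conclusion-a)
        (rule-2 ((question-1 . yes) (question-2 . yes)) conclusion-b)))

    ;; A bare-bones forward chainer: fire the first rule whose conditions
    ;; are all satisfied by the answers gathered so far.
    (defun fire-rules (answers rules)
      (dolist (rule rules)
        (destructuring-bind (name conditions conclusion) rule
          (declare (ignore name))
          (when (every #'(lambda (c) (member c answers :test #'equal))
                       conditions)
            (return conclusion)))))

    ;; (fire-rules '((question-1 . yes) (question-2 . no)) *example-rules*)
    ;; => CONCLUSION-A

The real rules are rather bigger and hairier than that, of course, but that
is the general shape of the thing.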

A former student at Warwick University, Stephen Pye, developed a network
which would attempt to replace the inference engine of the original system.
At a simple level, this is not such a horribly complex proposition. The
conclusion reached by the original expert system depended on the responses
to the questions which the user gave. Hence a particular output was due to a
pattern of responses to questions, and the network simply had to perform a
mapping from a set of question responses to a set of conclusions. Mr Pye
produced a system which did just this. The network he designed is based on our
old friend, the 3-layer back-propagation network.
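
In case it isn't obvious how you present question responses to a net: the
encoding I assume below (I don't have Mr Pye's exact scheme to hand, so
treat this as a sketch) is simply one input unit per possible
question/answer pair and one output unit per conclusion:

    ;; Assumed encoding, not necessarily the one Mr Pye used: one input
    ;; unit per (question . answer) pair, set to 1 if the user gave that
    ;; answer and 0 otherwise; unanswered questions simply stay at 0.
    (defun responses->input-vector (responses all-question-answer-pairs)
      (mapcar #'(lambda (qa)
                  (if (member qa responses :test #'equal) 1 0))
              all-question-answer-pairs))

    ;; e.g. with the input units ordered (q1.yes) (q1.no) (q2.yes) (q2.no):
    ;; (responses->input-vector '((q1 . yes) (q2 . no))
    ;;                          '((q1 . yes) (q1 . no) (q2 . yes) (q2 . no)))
    ;; => (1 0 0 1)

The back-propagation network then only has to learn a static mapping from
these input vectors to the corresponding conclusion units.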

However, this system had a number of disadvantages (which is where I come
in). Firstly, the training data for the network had to be produced by hand,
so I designed a system which would produce training patterns for the network
given the Lisp source of the rules. This works by producing a graph
(virtually a tree, but technically not) representing the possible
question/answer paths leading from the first question to each conclusion and
then doing a depth-first traversal of the graph. That's the easy bit!
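
In outline, the traversal looks like the sketch below (the node accessors
are invented for the sketch; the real code works from the Lisp source of
the rules). Each complete path from the first question down to a conclusion
becomes one training pattern:

    ;; Sketch only -- leaf-p, node-question, node-arcs and node-conclusion
    ;; are made-up accessors.  Internal nodes hold a question and a list of
    ;; (answer . successor) arcs; leaves hold a conclusion.  A depth-first
    ;; walk yields one (answer-path . conclusion) pattern per leaf reached.
    (defun collect-training-patterns (node &optional path)
      (if (leaf-p node)
          (list (cons (reverse path) (node-conclusion node)))
          (mapcan #'(lambda (arc)
                      (collect-training-patterns
                        (cdr arc)          ; successor node
                        (cons (cons (node-question node) (car arc)) path)))
                  (node-arcs node))))

Because the structure is a graph rather than a true tree, a shared
sub-graph gets walked once for every route into it, which is fine here
since every distinct question/answer path is wanted.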

The biggest problem with the system is the user interface, or rather lack of
it. In the original system, the user answered a question, and based on their
response was asked further, relevant questions. The neural network, however,
would like all the relevant questions to be answered at the start so it can
work in parallel. However, if we give the user the option to input all
responses at the start, the user is not given any sense of "focusing" on the
solution, and will additionally enter responses to questions which are
irrelevant to the case in hand, thus possibly providing noise which will
degrade system performance.

This is the problem that I am currently working on. My current idea is to
build a second network which takes as its input the output of the first
network and has as its output layer one unit for each question. The system
would start by asking the first question and presenting its answer to the
original network as if that were the only question answered. The output of
that network can then be fed to the second network, which will give as its
output the next question to ask. I also plan to use a third network which
reads the output of the first network and indicates whether it has
stabilised (i.e. whether a stable conclusion has been reached). The system
will reach its conclusion by a method of successive approximations.
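
In pseudo-Lisp, the control loop I have in mind looks something like this
(run-net-1, run-net-2, run-net-3, ask-user and decode-conclusion are
placeholders for calls out to the NeuralWare networks and the front end,
not real code, and the answers would of course go through the same sort of
encoding as before):

    ;; Sketch of the intended control loop -- placeholder functions only.
    (defun consult (first-question)
      (let ((answers (list (cons first-question (ask-user first-question)))))
        (loop
          (let ((conclusions (run-net-1 answers)))  ; net 1: answers -> conclusions
            (when (run-net-3 conclusions)           ; net 3: output stable yet?
              (return (decode-conclusion conclusions)))
            (let ((next-q (run-net-2 conclusions))) ; net 2: next question to ask
              (push (cons next-q (ask-user next-q)) answers))))))

Each time round the loop the answer set grows by one question, so the first
network's output should settle towards a single conclusion, which is where
the "successive approximations" come in.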

While my idea sounds simple enough, I do have a problem in deciding how to
train the second network. I am just starting to play with the idea of using
a bottom-up traversal of the graph I used to generate the training patterns
for the first network. Intuitively, I don't think it will work terribly
effectively (I could end up producing excessive amounts of training data and
I can't help feeling "there must be a better way"!). Any suggestions would
be gratefully received! :-)

Well, I'm sure I've taken up enough of your valuable disk space with my
waffle, but if anyone has any questions, has any ideas or has any large sums
of money to put to me, I'd be pleased to hear from you.

		-tshu(the "gosh, what a long waffle I did")
                Tim Shuttleworth,
		tshu@uk.ac.warwick.cs

/-----+----------------------------------------------------------------+-----\
| @ @ | "I think therefore I am" -Decartes                             | + + |
|  V  | "I perceive therefore you are" -tshu()                         |  V  |
| \_/ | "I wish I had a third quote to complete the triplet" -tshu(;-) | \_/ |
\-----+----------------------------------------------------------------+-----/

greenba@gambia.crd.ge.com (ben a green) (01/10/91)

In article <1991Jan9.184813.15560@warwick.ac.uk> tshu@cs.warwick.ac.uk (Tim Shuttleworth) writes:

   A former student at Warwick University, Stephen Pye, developed a network
   which would attempt to replace the inference engine of the original system.
   At a simple level, this is not such a horribly complex proposition. The
   conclusion reached by the original expert system depended on the responses
   to the questions which the user gave. Hence a particular output was due to a
   pattern of responses to questions, and the network simply had to perform a
   mapping from a set of question responses to a set of conclusions. Mr Pye
   produced a system which did just this. The network he designed is based on our
   old friend, the 3-layer back-propagation network.

	.
	.
	.

   The biggest problem with the system is the user interface, or rather lack of
   it. In the original system, the user answered a question, and based on their
   response was asked further, relevant questions. The neural network, however,
   would like all the relevant questions to be answered at the start so it can
   work in parallel. However, if we give the user the option to input all
   responses at the start, the user is not given any sense of "focusing" on the
   solution, and will additionally enter responses to questions which are
   irrelevant to the case in hand, thus possibly providing noise which will
   degrade system performance.

With apologies to the writer, isn't this an example of using a wrench for a
hammer? Surely where rules exist, we should use them. Where they don't,
enter neural networks.

Think of rule-based systems as lecture and neural networks as lab. One relates
to theory; the other to experience. The whole history of science since 1605
is that we need both.

So, let's use expert systems to apply knowledge where we have it and call
on neural networks for what must be perceived directly. The expert system shell
used around here lets you call an external program for data. We let it
call a neural net for pattern recognition and use the result (a pattern name)
in the rule-based system.
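
Schematically (made-up names below, not our shell's actual syntax), the
arrangement is nothing more than this:

    ;; Schematic only -- invented names, not our shell's real interface.
    ;; The rule base treats the net as just another source of facts: the
    ;; external program does the perceiving, the rules do the reasoning.
    (defun pattern-name (raw-data)
      ;; hand the raw measurements to the external neural-net program and
      ;; read back the name of the pattern it recognised
      (call-external-net-classifier raw-data))

    ;; A rule elsewhere in the knowledge base can then test something like
    ;; (eq (pattern-name *current-data*) 'pattern-a) as an ordinary premise.

The net never has to reproduce the rules, and the rules never have to do
perception.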

I guess it's OK to let 1000 flowers bloom, but on the other hand, life is
short.

--
Ben A. Green, Jr.              
greenba@crd.ge.com
  Speaking only for myself, of course.