[comp.ai.digest] Software Development and Expert Systems

sross@CS.UCL.AC.UK ("Simon.Ross") (01/22/88)

Request for information please:

I am looking into Performance Measures for Knowledge-Based Systems
(including Expert Systems). In particular, I am interested in which software
development techniques, measures, etc. for conventional software may be useful
for knowledge-based software. Furthermore, what special problems of
evaluating, testing and measuring the performance of knowledge-based systems
make conventional tools and methods inappropriate?

Any information regarding this subject (even informed anecdotes) will
be gratefully received.

Depending on the response, I may get back to you on this.

Simon Ross
                Department of Computer Science
                University College London
                London WC1E 6BT
                Phone: (+44) 1 387 7050 Ext. 3701

                ARPA :  sross@cs.ucl.ac.uk
...if this does not work, try:
           EAST COAST:  sross%cs.ucl.ac.uk@relay.cs.net
           WEST COAST:  sross%cs.ucl.ac.uk@a.isi.edu
                UUCP :  mcvax!ukc!ucl-cs!sross

jacob@NRL-CSS.ARPA (Rob Jacob) (01/27/88)

To: sross@cs.ucl.ac.uk

Saw your message about software engineering techniques for expert
systems on the AIList.  This may not be quite what you had in mind,
but here at the Naval Research Laboratory, Judy Froscher and I have
been developing a software engineering method for expert
systems.  We are interested in how rule-based systems can be built so
that they will be easier to change.  Our basic solution is to divide
the set of rules up into pieces and limit the connectivity of the
pieces.

I'm attaching a short abstract of our work and some references at the
end of this message.  Hope it's of use to you.

Good luck,
Rob Jacob

ARPA:	jacob@nrl-css.arpa
UUCP:	...!decvax!nrl-css!jacob
SNAIL:	Code 5530, Naval Research Lab, Washington, D.C. 20375


    Developing a Software Engineering Methodology for Rule-based Systems

			    Robert J.K. Jacob
			   Judith N. Froscher

			Naval Research Laboratory
			    Washington, D.C.

Current expert systems are typically difficult to change once they are
built.  The objective of this research is to develop a design
methodology that will make a knowledge-based system easier to change,
particularly by people other than its original developer.  The basic
approach for solving this problem is to divide the information in a
knowledge base and attempt to reduce the amount of information that
each single programmer must understand before he can make a change to
the expert system.  We thus divide the domain knowledge in an expert
system into groups and then attempt to limit carefully and specify
formally the flow of information between these groups, in order to
localize the effects of typical changes within the groups.

By studying the connectivity of rules and facts in several typical
rule-based expert systems, we found that they seem to have a latent
structure, which can be used to support this approach.  We have
developed a methodology based on dividing the rules into groups and
concentrating attention on those facts that carry information between
rules in different groups.  We have also developed algorithms for
grouping the rules automatically and measures for coupling and cohesion
of alternate rule groupings in a knowledge base.  In contrast to the
homogeneous way in which the facts of a rule-based system are usually
viewed, the new method distinguishes certain facts as more important
than others with regard to future modifications of the rules.
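
To give a concrete flavor of such a measure (this is not the NRL
algorithm or code; the data structures and names below are illustrative
assumptions), one could count the facts that a given grouping forces to
cross group boundaries:

    // Illustrative sketch: score a rule grouping by counting "coupling"
    // facts, i.e. facts used by rules in more than one group.
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    struct Rule {
        std::set<std::string> consumes;   // facts tested on the left-hand side
        std::set<std::string> produces;   // facts concluded on the right-hand side
        int group;                        // group assigned by some partitioning
    };

    // Count the facts that carry information between rules in different
    // groups; fewer such facts means less coupling between the groups.
    int countCouplingFacts(const std::vector<Rule>& rules) {
        std::map<std::string, std::set<int>> groupsTouching;  // fact -> groups using it
        for (const Rule& r : rules) {
            for (const std::string& f : r.consumes) groupsTouching[f].insert(r.group);
            for (const std::string& f : r.produces) groupsTouching[f].insert(r.group);
        }
        int coupling = 0;
        for (const auto& entry : groupsTouching)
            if (entry.second.size() > 1) ++coupling;          // fact crosses a boundary
        return coupling;
    }

    int main() {
        std::vector<Rule> rules = {
            { {"a", "b"}, {"c"}, 0 },
            { {"c"},      {"d"}, 0 },
            { {"d"},      {"e"}, 1 },   // "d" couples group 0 and group 1
        };
        std::cout << "coupling facts: " << countCouplingFacts(rules) << "\n";
    }

A cohesion measure could be defined analogously over the facts shared
only by rules within the same group.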

                           REFERENCES

R.J.K. Jacob and J.N. Froscher, "Facilitating Change in Rule-based
Systems," pp. 251-286 in Expert Systems: The User Interface, ed. J.A.
Hendler, Ablex Publishing Co., Norwood, N.J. (1988).

R.J.K. Jacob and J.N. Froscher, "Software Engineering for Rule-based
Systems," Proc. Fall Joint Computer Conference, pp. 185-189, Dallas,
Tex. (1986).

J.N. Froscher and R.J.K. Jacob, "Designing Expert Systems for Ease of
Change," Proc. IEEE Symposium on Expert Systems in Government, pp.
246-251, Washington, D.C. (1985).

R.J.K. Jacob and J.N. Froscher, "Developing a Software Engineering
Methodology for Rule-based Systems," Proc. 1985 Conference on
Intelligent Systems and Machines, pp. 179-183, Oakland University
(1985).

R.J.K. Jacob and J.N. Froscher, "Developing a Software Engineering
Methodology for Knowledge-based Systems," NRL Report 9019, Naval
Research Laboratory, Washington, D.C. (1987).

will@chorus.fr (Will Neuhauser) (01/31/88)

In article <8801270004.AA12634@nrl-rjkj.arpa>, jacob@NRL-CSS.ARPA
(Rob Jacob) writes:
> Saw your message about software engineering techniques for expert
> systems on the AIList.  [...]  Our basic solution is to divide
> the set of rules up into pieces and limit the connectivity of the
> pieces.   [...]

This would appear to be the basic definition of "modularity", and
the usual hints should apply given a little thought.


To achieve greater modularity in a prototype expert system written in
C++, I created classes for "expert systems" (inference engines),
for rule-bases, and for fact-bases.

Inference Engines.
-----------------
Separate engines allow the user to select an appropriate inference
engine for the tasks at hand.  It would have been nice to have a
language construct for defaulting engines; as it was, these had to
be coded explicitly.  The expert systems could be organized hierarchically.
Each system had a pointer to its parent(s) and vice versa.  In truth,
I only ever used one engine because I was really more interested in
modularizing the rule-base, but the potential was there, right?
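
For concreteness, here is a minimal sketch of the kind of class
structure this implies (illustrative only; the names and details are
not the actual prototype code):

    // Illustrative sketch only -- not the actual prototype.
    // Each "expert system" object bundles an inference engine, a
    // rule-base, and a fact-base, and keeps pointers to its parent
    // system(s) and children.
    #include <vector>

    struct Engine   { /* forward- or backward-chaining machinery     */ };
    struct RuleBase { /* a coherent set of rules, read from one file */ };
    struct FactBase { /* working memory for one use of a rule-base   */ };

    class ExpertSystem {
    public:
        ExpertSystem(Engine* e, RuleBase* rb, FactBase* fb)
            : engine(e), rules(rb), facts(fb) {}

        void addParent(ExpertSystem* p) {      // link both directions
            parents.push_back(p);
            p->children.push_back(this);
        }

    private:
        Engine*   engine;                      // selectable per task
        RuleBase* rules;
        FactBase* facts;
        std::vector<ExpertSystem*> parents;    // calling expert system(s)
        std::vector<ExpertSystem*> children;   // sub-expert-systems
    };

    int main() {
        Engine eng;                            // in practice, the only one used
        RuleBase plantRules, pumpRules;
        FactBase plantFacts, pumpFacts;
        ExpertSystem plant(&eng, &plantRules, &plantFacts);
        ExpertSystem pump(&eng, &pumpRules, &pumpFacts);
        pump.addParent(&plant);                // pump is a sub-expert-system
    }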

Fact-bases.
----------
Separate fact-bases allowed the same rule-base to be used in
different situations: when a rule-base appeared more than once,
each appearance was given its own fact-base and the rule-base
itself was re-used.
Hypothetically, the separate fact-bases could have been useful
in "trial and error" situations: one could create new fact-base
instances (objects) and then throw them away when they didn't
pan out.  I never had a chance to try it out.
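
A sketch of how that might have looked (again illustrative, not the
prototype): copy the fact-base, assert a hypothesis into the copy, and
keep the copy only if the rules make something of it.

    // Sketch of the "trial and error" use of separate fact-bases.
    // Names and the toy rule are made up for illustration.
    #include <set>
    #include <string>

    struct FactBase {
        std::set<std::string> facts;
        void assertFact(const std::string& f) { facts.insert(f); }
        bool holds(const std::string& f) const { return facts.count(f) > 0; }
    };

    // Stand-in for running the shared rule-base against one fact-base.
    bool runRules(FactBase& fb) {
        if (fb.holds("valve-stuck")) fb.assertFact("pressure-high");
        return fb.holds("pressure-high");      // did the hypothesis lead anywhere?
    }

    int main() {
        FactBase current;                      // the "real" situation
        FactBase trial = current;              // throw-away copy for a trial
        trial.assertFact("valve-stuck");       // the hypothesis under trial
        if (runRules(trial))
            current = trial;                   // the trial panned out; keep it
        // otherwise `trial` simply goes out of scope and is discarded
    }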

Rule-bases.
----------
Separate rule-bases are the factor most relevant to the current
discussion, and were my main interest.  I used a very simple default
that could obviously be extended to provide finer control.

The separate rule-bases were very useful for modularizing the
total rule-base.  Each "coherent set of rules" was located in a different
file, and when read in, was read into a separate rule-base instance
(it was a prototype so don't give me too much grief!).  The
default rule for connection was that terminal goals, those which
never appeared on the left-hand side of a rule, were automatically
exported to the calling expert system(s) (via the parent-pointers).
This was sort of nice in that when a sub-expert-system had new
goals added, they were automatically made part of the caller's
name space.  (Of course there could be conflicts; in the prototype
I just lived with the problem, and with the new meanings suddenly
given to existing names, since again I was just trying out some
modularization concepts.)
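
The default connection rule itself is easy to sketch (illustrative
code with made-up names, not the prototype): a goal is terminal if no
rule's left-hand side mentions it, and every terminal goal is pushed
into the parents' name spaces.

    // Sketch of the default: goals that never appear on the left-hand
    // side of any rule are "terminal" and are exported to the callers.
    #include <set>
    #include <string>
    #include <vector>

    struct Rule {
        std::set<std::string> lhs;             // condition facts
        std::string           rhs;             // concluded goal
    };

    struct ExpertSystem {
        std::vector<Rule>          rules;
        std::set<std::string>      nameSpace;  // goal names visible here
        std::vector<ExpertSystem*> parents;    // calling expert system(s)
    };

    void exportTerminalGoals(ExpertSystem& sub) {
        for (const Rule& r : sub.rules) {
            bool testedSomewhere = false;
            for (const Rule& other : sub.rules)
                if (other.lhs.count(r.rhs)) { testedSomewhere = true; break; }
            if (!testedSomewhere)              // terminal goal
                for (ExpertSystem* parent : sub.parents)
                    parent->nameSpace.insert(r.rhs);  // possible name clash lives here
        }
    }

    int main() {
        ExpertSystem top, sub;
        sub.parents.push_back(&top);
        sub.rules = { { {"a", "b"}, "c" },     // "c" is tested by the next rule
                      { {"c"},      "diagnosis" } };
        exportTerminalGoals(sub);
        // top.nameSpace now contains "diagnosis" but not "c"
    }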

Aside from the obvious advantages of modularization to reduce the
size of the name space and thus the difficulty of understanding
a single giant set of rules (in practice, even 100 rules seemed hard
for one person to remember for long), I had another
reason for wanting modularization: I wanted to clearly separate the
experts, facts, and rules into different computational tasks (coherent
systems of sub rules) so that one could divide the rule-base up onto
separate processors in a multi-processor computer.  (Again, never tried.)

kohen@ROCKY.STANFORD.EDU (Abe Kohen) (02/03/88)

The query and responses seem to be geared to custom-built systems.
I'd like to ask about s/w development for expert systems using 
commercially available tools.

How do tools like S.1, Art, or Nexpert lend themselves to good s/w
engineering?  Are some tools better for s/w engineering?  Are they
better (whatever that means) at the expense of clear and efficient
data representation?

It seems that S.1 has the potential for providing a good s/w engineering
environment, but it fails on data representation and lacks forward
chaining (vaporware notwithstanding).  Art has good data representation,
but doesn't (yet) integrate well into a workstation (read: Sun) environment.

How does Nexpert perform?

kohen@rocky.stanford.edu
kohen@sushi.stanford.edu