[comp.ai] Explanations in expert systems

arrgh@ihuxv.UUCP (03/05/87)

I have to put my $0.02 into the expert systems discussion.

In real life, an expert system probably will not be used unless it possesses
a sound explanation facility.  For most users, this does not mean merely 
dumping rules or whatever, e.g., "the system is trying to satisfy rule-518", but
rather being able to turn the knowledge encoded in each unit of representation
into meaningful natural language.

An example may make this requirement clearer.  One of the systems I have
built is the Michael Reese-IIT Stroke Consultant.  This program is a large
neurology expert system designed to assist house physicians with the
diagnosis and treatment of stroke. 

One of the treatments this system recommends is to prep the patient for surgery,
take him into the OR, remove the back of his head, and proceed to dig around
in the cerebellum for a hematoma.  

Naturally enough, any reasonable physician will want to ask the machine "WHY?"
it recommends such a radical treatment, and expect an answer in a form that
a physician (not a computer scientist) can understand.  The explanation system
will furnish: an English statement of the problem, e.g., "diagnosis is
hemorrhage into the cerebellum", and justifications for the treatment, e.g.,
"Evacuation of cerebellar hematoma is recommended because it greatly reduces
mortality when the following signs are present... Refer to the following
references [references to the neurology literature are cited]."
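
To make the shape of that answer concrete, here is a minimal sketch in Python (the class, field names, and sample values are my own invention, and the actual signs and citations are elided just as they are above) of a recommendation that carries its own English explanation instead of a rule number:

    class Recommendation:
        """One treatment recommendation together with its own explanation."""

        def __init__(self, diagnosis, treatment, justification, signs, references):
            self.diagnosis = diagnosis          # English statement of the problem
            self.treatment = treatment          # English statement of the action
            self.justification = justification  # why the action follows
            self.signs = signs                  # triggering signs (elided in the post above)
            self.references = references        # literature citations (also elided above)

        def why(self):
            """Answer the physician's WHY? in plain English, not rule numbers."""
            lines = [
                f"Diagnosis: {self.diagnosis}.",
                f"Recommended treatment: {self.treatment}.",
                f"{self.justification} when the following signs are present:",
            ]
            lines += [f"  - {sign}" for sign in self.signs]
            lines.append("Refer to: " + "; ".join(self.references))
            return "\n".join(lines)

    print(Recommendation(
        diagnosis="hemorrhage into the cerebellum",
        treatment="evacuation of the cerebellar hematoma",
        justification="Evacuation of a cerebellar hematoma greatly reduces mortality",
        signs=["<signs elided above>"],
        references=["<citations to the neurology literature>"],
    ).why())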

Let's take a more common case.  Last spring, I built an expert system
designed to diagnose problems in candy wrapping machinery.  In fact, if
you eat candy bars, you have almost certainly eaten candy wrapped on one of
these machines.  The operators of these machines needed additional help
in diagnosing and troubleshooting problems in this new equipment, and we
built an expert system for this specific task.

Machine operators, unlike many of you, have absolutely no understanding of
production rules, and moreover, they are not interested.  This system had to be
able to furnish the following explanations on-line at all times: (1) how to
use the system itself, (2) how the candy wrapping equipment was supposed to
operate (an on-line tutorial on the machine), (3) how to answer the questions
the system was asking, e.g., where IC 9 pin 7 is on the Micro controller "A"
board, and (4) an explanation of the reasoning the system was using at that
time.
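
A minimal sketch of how those four kinds of on-line explanation might hang off every question prompt (the function, the keywords, and all the text it prints are hypothetical; the point is only that each meta-question gets a canned, operator-level answer):

    def ask(question, help_text, tutorial, locate_hint, reasoning):
        """Ask the operator a question; answer meta-questions until a real answer arrives."""
        while True:
            reply = input(question + "  (or HELP / MACHINE / WHERE / WHY) > ").strip().upper()
            if reply == "HELP":
                print(help_text)      # (1) how to use the system itself
            elif reply == "MACHINE":
                print(tutorial)       # (2) how the wrapping equipment is supposed to operate
            elif reply == "WHERE":
                print(locate_hint)    # (3) how to answer this particular question
            elif reply == "WHY":
                print(reasoning)      # (4) the reasoning the system is using right now
            else:
                return reply          # a real answer; the consultation continues

    answer = ask(
        "Is the signal at IC 9 pin 7 on the Micro controller 'A' board high?",
        help_text="<how to use the diagnostic system>",
        tutorial="<on-line tutorial on the wrapping machine>",
        locate_hint="<where to find IC 9 pin 7 on the 'A' board>",
        reasoning="<the current line of reasoning, in operator terms>",
    )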

The moral of this rather long posting is that if you want to build expert
systems that will actually be used by real people, you will need a good
explanation facility.  While this is necessary, it is, of course, not
sufficient.  The knowledge engineer will also need good debugging facilities
(something not provided by most tools today).

Hope this clears up some confusion.
-- 
Howard Hill, Ph.D.

tanner@osu-eddie.UUCP (03/07/87)

In article <1800@ihuxv.ATT.COM> arrgh@ihuxv.ATT.COM (Hill) writes:
>
>In real life, an expert system probably will not be used unless it possesses
>a sound explanation facility.  For most users, this does not mean merely 
>dumping rules or whatever, e.g., "the system is trying to satisfy
>rule-518", but 
>rather being able to turn the knowledge encoded in each unit of representation
>into meaningful natural language.
>

While I agree that spitting out rules is generally inadequate for
explanation, I disagree that explanations *must* be in natural language.
For some kinds of explanation, drawing and pointing are more useful.

"I think the wonkus is broken.  Try replacing it."
"Wonkus!?  What's that?"
"Take a look at the zweeble smasher.  See this gizmo?  That's the wonkus."

I'm not saying natural language is useless.  But the above interaction
would have taken a lot more words without the picture.  (With the
picture it might have needed no words at all.  But I don't know what a
zweeble smasher is, much less how to draw one.)  Sometimes a picture
really is worth a thousand words.
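
A minimal sketch of what such an explanation might look like in software (everything here is hypothetical, including the file name and coordinates): pair the few words that are still needed with a drawing and a region to point at.

    from dataclasses import dataclass

    @dataclass
    class PointingExplanation:
        part: str
        diagram: str       # a drawing of the assembly, e.g. "zweeble_smasher.png"
        highlight: tuple   # (x, y, width, height) region to outline on the drawing
        caption: str       # the few words still needed

    def explain(e: PointingExplanation):
        # A real interface would display the drawing and outline the region;
        # this stub just reports what would be shown.
        print(f"[show {e.diagram}, outline {e.highlight}]  {e.caption}")

    explain(PointingExplanation(
        part="wonkus",
        diagram="zweeble_smasher.png",
        highlight=(120, 80, 40, 40),
        caption="This is the wonkus.  I think it is broken; try replacing it.",
    ))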

Keep in mind that when you talk about explanation as giving back rules,
you're assuming expert systems are simple, flat rule-bases.  This is
not necessarily true.  If all your expert system knows is rules, then:
	(a) the system isn't doing anything interesting 
	OR
	(b) you're actually using the rule language as a general
	    purpose programming language (because rules qua rules
	    don't give you the control features needed to navigate a
	    large knowledge base) 

In case (a), there's no need to worry about real-world usefulness.  In
case (b), there should be no surprise that the rules themselves are not
informative explainers, any more than a listing of code would, in general,
be an explanation of a program.
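
As a small illustration of case (b), consider two made-up rules: dumping the first at least says something about the domain, while the second exists only to sequence the consultation, so echoing it back to a user is as opaque as a code listing.

    domain_rule = {
        "if":   ["film tension is low", "wrappers come out wrinkled"],
        "then": "suspect the tension-arm spring",
    }
    control_rule = {
        # Pure control knowledge: it drives the program, not the diagnosis,
        # so showing it to the user explains nothing about the problem.
        "if":   ["phase is diagnosis", "no hypothesis confirmed yet"],
        "then": "set phase to gather-more-data",
    }
    for rule in (domain_rule, control_rule):
        print("IF " + " AND ".join(rule["if"]) + " THEN " + rule["then"])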

-- mike

ARPA:  tanner@ohio-state.arpa
UUCP:  ...cbosgd!osu-eddie!tanner