[comp.ai] MYCIN, TEIRESIAS and Explanation

cmiller@SRC.Honeywell.COM (Chris Miller) (01/30/91)

Okay, here is my vague understanding:

At various points during the development of the MYCIN medical diagnostic
expert system, attempts were made to provide MYCIN with the ability to
explain/justify its decisions by providing an account of the reasoning
which led to the decision.  This account was based on a trace of the rules
which fired during the construction of the recommendation.  This
explanatory ability either was the product of, or was augmented by the
ancillary system TEIRESIAS.  The general consensus was that this approach
to making "why"-type explanations was less than satisfactory.

Here are my questions:

1.  What's right or wrong about the above paragraph?

2.  Assuming that the last sentence is accurate, what was wrong with the
approach?

3.  What's been done since??


Reading recommendations are thoroughly welcome.  Another vague source told
me that there was a full issue of an AI journal devoted to this topic a
while back, but couldn't remember what journal, how far back, or much else--
does this ring any bells for anyone?

geb@dsl.pitt.edu (Gordon E. Banks) (01/30/91)

In article <1991Jan29.210221.7984@src.honeywell.com> cmiller@SRC.Honeywell.COM (Chris Miller) writes:
>
>Okay, here is my vague understanding:
>
>At various points during the development of the MYCIN medical diagnostic
>expert system, attempts were made to provide MYCIN with the ability to
>explain/justify its decisions by providing an account of the reasoning
>which led to the decision.  This account was based on a trace of the rules
>which fired during the construction of the recommendation.  This
>explanatory ability either was the product of, or was augmented by the
>ancillary system TEIRESIAS.  The general consensus was that this approach
>to making "why"-type explanations was less than satisfactory.
>
>Here are my questions:
>
>2.  Assuming that the last sentence is accurate, what was wrong with the
>approach?
>
A recapitulation of the rules that have fired in a backward-chaining
system doesn't always provide a satisfactory explanation to the
clinician using the system.  It's much better than nothing, but it
makes for a very brittle explanation facility, and it is not robustly
interactive.
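
To make "a recapitulation of the rules that have fired" concrete, here
is a toy sketch in Python (entirely my own invention, not EMYCIN code;
the rule names, findings and drug are made up).  The consultation
backward-chains to the goal, records every rule that fires, and the
"HOW" answer is simply that record read back:

RULES = {
    "RULE037": {"if": ["gram-negative", "rod-shaped", "anaerobic"],
                "then": "the organism is bacteroides"},
    "RULE095": {"if": ["the organism is bacteroides"],
                "then": "recommend clindamycin"},
}

def prove(goal, findings, trace):
    # Backward chaining: a goal holds if it is a known finding, or if
    # some rule concludes it and all of that rule's premises hold.
    if goal in findings:
        return True
    for name, rule in RULES.items():
        if rule["then"] == goal and all(prove(p, findings, trace)
                                        for p in rule["if"]):
            trace.append(name)   # the recorded firing is all we keep
            return True
    return False

trace = []
if prove("recommend clindamycin",
         {"gram-negative", "rod-shaped", "anaerobic"}, trace):
    print("HOW was clindamycin recommended?")
    for name in trace:
        rule = RULES[name]
        print("  %s: IF %s THEN %s"
              % (name, " and ".join(rule["if"]), rule["then"]))

The "explanation" is nothing more than the firing record played back in
the rules' own vocabulary.  Ask anything the trace doesn't contain (why
the rule itself is medically sound, why some other drug wasn't chosen,
what would change if a finding were different) and there is nothing to
draw on, because that knowledge was never represented.  That's the
brittleness.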

>3.  What's been done since??
>

A lot.  Check Johanna Moore's work (her thesis was done under Bill
Swartout at USC).  She has an explanation system for Lisp tutoring.

vansoest@cs.utwente.nl (Dick van Soest) (01/31/91)

A researcher in our group did her PhD thesis on explanation; among
other things, it discusses the issue raised in the last sentence of
your first paragraph.
The reference is:

P.M. Wognum, 1990
Explanation of automated reasoning: How and why?
PhD thesis, University of Twente, Enschede, The Netherlands

Her email address is wognum@cs.utwente.nl

Abstract:

Automated-reasoning systems need large amounts of knowledge to solve
complex problems.  Knowledge engineering focuses on techniques for
acquiring, structuring, and representing knowledge, and on
techniques for reasoning with the knowledge. Users may wish a computer
to explain how the reasoning has been performed and why it made
certain statements.  This thesis addresses the topic of explanation of
the reasoning performed by an automated reasoning system.

We describe how the reasoning performed by a computer may serve as the
basis for explanation. First, we show that a resolution proof which is
not very transparent can be transformed into a natural-deduction proof
that is more suitable for explanation. Through such a transformation,
a resolution-based automated theorem prover can combine the efficiency
of resolution with the transparency of natural deduction. Second, we
describe our model of reasoning, which defines the architecture a
knowledge-based system must have to reason in an understandable way.
We show that this architecture is suitable to produce reasoning traces
which can be used to generate a wide range of explanations.

In the literature, the importance of explanation in knowledge-based
systems has frequently been emphasized but has hardly been assessed in
practice. This thesis contains the results of investigations to
determine the importance of explanation.  First, we describe the
results of a study of how explanation facilities are actually used in a
number of knowledge-based systems deployed in the Netherlands.  Second,
we describe a study of the impact of explanation on users' decisions in
a complex domain.  Third, we describe how we used our model of
reasoning to analyze explanations found in the medical literature.
This analysis has yielded criteria for a knowledge-based system whose
explanations are acceptable to physicians.

The results presented in this thesis offer a knowledge engineer useful
guidelines for acquiring and structuring knowledge for knowledge-based
systems which are transparent to their users.
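
To give a flavour of the resolution versus natural-deduction point in
the abstract, a trivial example of my own (not taken from the thesis).
Suppose we know (1) for all x, P(x) -> Q(x) and (2) P(a), and we want
to establish Q(a).  A resolution prover works by refutation on the
clause form:

  1. ~P(x) v Q(x)    [premise, as a clause]
  2. P(a)            [premise]
  3. ~Q(a)           [negation of the goal]
  4. Q(a)            [resolve 1 and 2 with x := a]
  5. []              [resolve 3 and 4: contradiction, so Q(a) holds]

The corresponding natural-deduction proof argues directly:

  1. for all x, P(x) -> Q(x)    [premise]
  2. P(a) -> Q(a)               [instantiate 1 with x := a]
  3. P(a)                       [premise]
  4. Q(a)                       [modus ponens on 2 and 3]

The second version reads like the argument a person would give; the
first establishes the same conclusion by assuming its opposite and
deriving a contradiction, which is much harder to narrate to a user.
The idea, as I understand it, is to let the prover search with
resolution and then transform the proof into the natural-deduction
form before explaining it.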



--
Dick van Soest
University of Twente
Computer Science Department	Internet: vansoest@cs.utwente.nl
P.O. Box 217			Bitnet: vansoest@utwente.nl
7500 AE Enschede		SURF-net: UTRCV1::VANSOEST
The Netherlands
Tel. +31 53 893736/893690	FAX: +31 53 339605