[comp.ai] dear abby....

roy@arcsun.UUCP (02/28/87)

Dear Abby.  My friends are shunning me because I think that for a
program to be called an "expert system", it must be able to explain its
decisions.  "The system must be able to show its line of reasoning", I
cry.  They say "Forget it, Roy... an expert system need only make
decisions that equal a human expert's.  An explanation facility is
optional".  Who's right?

Signed,

Un*justifiably* Compromised

Roy Masrani, Alberta Research Council
3rd Floor, 6815 8 Street N.E.
Calgary, Alberta CANADA  T2E 7H7
(403) 297-2676

UUCP:  ...!{ubc-vision, alberta}!calgary!arcsun!roy
CSNET: masrani%noah.arc.cdn@ubc.csnet

-- 
Roy Masrani, Alberta Research Council
3rd Floor, 6815 8 Street N.E.
Calgary, Alberta CANADA  T2E 7H7
(403) 297-2676

UUCP:  ...!{ubc-vision, alberta}!calgary!arcsun!roy
CSNET: masrani%noah.arc.cdn@ubc.csnet

yerazuws@rpics.UUCP (03/02/87)

In article <178@arcsun.UUCP>, roy@arcsun.UUCP (Roy Masrani) writes:
> 
> Dear Abby.  My friends are shunning me because I think that for a
> program to be called an "expert system", it must be able to explain its
> decisions.  "The system must be able to show its line of reasoning", I
> cry.  They say "Forget it, Roy... an expert system need only make
> decisions that equal a human expert's.  An explanation facility is
> optional".  Who's right?

While you're developing an expert system, you have to know not just 
that it inferred something incorrectly, but WHY it inferred it incorrectly.
Looking through 4,000 rules trying to find the one with a typo is 
no fun, no fun at all.
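
For instance, a toy sketch (the rule/3 encoding and the rule names are
invented for illustration): tag each rule with an identifier and have
the prover return the Ids of the rules it used, so a bogus conclusion
names the rules to go inspect.

    % Rules encoded as rule(Id, Head, Body) facts.
    rule(r1, mortal(X), [man(X)]).
    rule(r2, man(socrates), []).

    % prove(+Goal, -Why): Why lists the Ids of the rules used, in order.
    prove(Goal, [Id|SubWhy]) :-
        rule(Id, Goal, Body),
        prove_all(Body, SubWhy).

    prove_all([], []).
    prove_all([G|Gs], Why) :-
        prove(G, W1),
        prove_all(Gs, W2),
        append(W1, W2, Why).

    % ?- prove(mortal(socrates), Why).   gives   Why = [r1, r2]

If the conclusion is wrong, the trace points at the handful of rules
to inspect instead of all 4,000.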
	
Secondly, once you and your expert have convinced yourselves that the
system is right, you must now convince your first set of users that the
system is right, too.  These users may not be as expert as *your* expert,
but they have some knowledge of the subject.  Perhaps a few of them are even
more expert than your expert in some narrow subfield.  
	
It behooves you to gain acceptance and knowledge from this group, and
if they perceive the expert system as a "black box", they will have
no incentive to assist in the final tweaking and debugging.  To be
useful, your system must not only be correct.  It must be accepted and
used!  
	
Personal experience: people, including the expert whose knowledge has been
captured, don't like (and maybe don't trust?) a black-box expert system if
they can't ask it why it gave the answer it did.
	
	-Bill  Yerazunis
	"...these guys had "Thugs 'R' Us" stencilled all over them"

saal@sfsup.UUCP (03/02/87)

In article <178@arcsun.UUCP> roy@arcsun.UUCP (Roy Masrani) writes:
>
>Dear Abby.  My friends are shunning me because I think that for a
>program to be called an "expert system", it must be able to explain its
>decisions.  "The system must be able to show its line of reasoning", I
>cry.  They say "Forget it, Roy... an expert system need only make
>decisions that equal a human expert's.  An explanation facility is
>optional".  Who's right?
>Signed,
>Un*justifiably* Compromised
>Roy Masrani, Alberta Research Council

It all depends.  During development it is absolutely necessary
for the system to give its reasoning, if only as a useful
debugging tool.  (Is the system using the correct logic to get to
the decision?)  Once it is "in production" (the field) it may not
be as important to give an explanation every time.  This is
particularly the case when the expert system is used to help do
some of the more mundane tasks on a very frequent basis.  There
are two reasons for this: (1) the user may be able to agree
intuitively after deriving the answer -- the machine has just
helped speed the process -- or (2) if a production ES has been
converted to a compiled language, the code to express the
rationale may be removed to speed up run time.
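
As a toy illustration of point (2), sketched in Prolog rather than a
compiled language (the rule/3 encoding and the rule names are invented):
the production prover is just the development prover with the
explanation bookkeeping deleted.

    rule(r1, ok(widget), [tested(widget)]).
    rule(r2, tested(widget), []).

    % Development version: records which rules fired, as a proof tree.
    prove(Goal, why(Id, Subs)) :-
        rule(Id, Goal, Body),
        prove_each(Body, Subs).
    prove_each([], []).
    prove_each([G|Gs], [W|Ws]) :- prove(G, W), prove_each(Gs, Ws).

    % Production version: identical logic, rationale stripped for speed.
    prove(Goal) :- rule(_, Goal, Body), prove_each(Body).
    prove_each([]).
    prove_each([G|Gs]) :- prove(G), prove_each(Gs).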

Sam Saal

tanner@osu-eddie.UUCP (03/02/87)

Leaving aside the utility of explanations in developing a system and
in convincing users it is behaving properly, there is this:

     Experts are capable of explaining their reasoning, justifying
     conclusions, etc.  Hypothesis:  they are able to do this partly
     because of the way their knowledge is organized and used in
     problem-solving.  

Therefore, if your expert system is incapable of explaining itself you
probably haven't got the knowledge organization and problem solving
strategy right.  (Granted, it's only a hypothesis.  It seems right to
me.  I'm in the process of working on a PhD dissertation on how
knowledge organization and problem-solving strategy can help produce
good explanations.  Doesn't exactly support the hypothesis, but it
should clarify it a bit.)

This assumes you're interested in how knowledge-based problem-solving
works.  If all you want is an expert system, i.e., a system which gets
right answers, then you're back to utility arguments for explanation.
(Though I don't think you'll be successful at getting good
performance without this understanding.)

-- mike

ARPA:  tanner@ohio-state.arpa
UUCP:  ...cbosgd!osu-eddie!tanner

franka@mntgfx.UUCP (03/02/87)

In article <178@arcsun.UUCP> roy@arcsun.UUCP (Roy Masrani) writes:
>"expert system" ... must be able to explain its decisions.
VS.
>... expert system need only make decisions that equal human experts.
> An explanation facility is optional".


Well, given the level of explanation most human experts give (e.g., "Well,
I did it this way because it felt right," or "Gosh, I don't know, it
seemed like a good idea at the time."), I tend to agree with number two.
In fact, has anyone done an expert system which automatically spits out
one of the above phrases (or any number of similar phrases) as an
"explanation"?  Could bring the damn things closer to Turing capability
as perceived by the user...  "What the hell are YOU asking for" might
get the proper amount of arrogance I've seen in most experts (:-).
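
(Half-seriously: the whole facility fits in a few lines.  A sketch in
SWI-Prolog -- random_member/2 comes from its random library, and the
canned phrases are obviously invented:)

    :- use_module(library(random)).

    excuse('Well, it felt right.').
    excuse('It seemed like a good idea at the time.').
    excuse('What the hell are YOU asking for?').

    % explain/0: print a randomly chosen canned "explanation".
    explain :-
        findall(E, excuse(E), Es),
        random_member(E, Es),
        format("~w~n", [E]).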

Frank Adrian
Mentor Graphics, Inc.

rob@arcsun.UUCP (03/02/87)

In article <178@arcsun.UUCP>, roy@arcsun.UUCP (Roy Masrani) writes:
> 
> Dear Abby.  My friends are shunning me because I think that for a
> program to be called an "expert system", it must be able to explain its
> decisions.  "The system must be able to show its line of reasoning", I
> cry.  They say "Forget it, Roy... an expert system need only make
> decisions that equal a human expert's.  An explanation facility is
> optional".  Who's right?
> 
> Signed,
> 
> Un*justifiably* Compromised
> 
Dear Mr. Compromised:

   You should ask yourself whether you want a complete, intelligible
explanation facility, or just the basics (i.e., "The answer is X because
Rule Y says so").  If it is the latter, your friends are wrong and you
should tell them so.  If the former, your friends are probably programmers,
and lazy ones at that.  You should find new friends.

Abby.
> Roy Masrani, Alberta Research Council
> Roy Masrani, Alberta Research Council
P.S. You don't need to specifically include a .signature

coffee@aero.UUCP (03/03/87)

In article <3269@osu-eddie.UUCP> tanner@osu-eddie.UUCP (Mike Tanner) writes:

>If all you want is an expert system, i.e., a system which gets
>right answers, then you're back to utility arguments for explanation.

I agree with everything else Mike said about this issue, but it seems to
me that the label "expert system" _should_ mean something _more_ than "a
system that gets right answers." We've had useful programs, implicitly
applying "expert" knowledge, for a long time: the new label should reflect
new capabilities. Hayes-Roth et alia, in _Building_Expert_Systems_, say the
following:

"...[E]xpert systems differ from the broad class of AI tasks in several 
respects...they employ self-knowledge to reason about their own inference
processes and provide explanations or justifications for conclusions
reached."

This is one of the milestone texts in the field, and definitions are useful
things: it seems to me that disputes over whether explanation is "needed"
before you can call it an expert system are missing the point. We _have_ what
seems to me to be a mainstream definition for the term; if we want
to talk about a system that _doesn't_ do explanation, can't we just call
it a computer program (or a parser, or a pattern recognizer, or whatever)
instead of trying to stretch the popular label to fit it?

Constructively, I hope, Peter C.

rosa@cheviot.newcastle.ac.uk ( U of Dundee) (03/05/87)

Dear Abby,
      My problem is that I think I may be schizophrenic...
When I say "expert system" I mean a program which advises
or searches for solutions in a restricted domain of data.
Since I am British, this program would be written at first
in Prolog.  When others use the phrase "Expert System"
they mean some kind of all-singing, all-dancing REAL WORLD
EXPERT ... a human being, not a program....
I have the same mismatch problem with the words "knowledge based",
"knowledge acquisition", "intelligent", and most
importantly with explanations...
If a friend wants an "expert system" to help diagnose faults
in cooking (say), I write a program to choose oven settings
and help out with sensible advice for drooping souffles.
When they ask for "the reason why", should I have written
a huge explanation database instead of relying on the
programming language's internal logic control???????
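
(For what it's worth, in Prolog the "internal logic control" can be
made to cough up its own reasoning with a tiny meta-interpreter.  A
sketch -- the cooking rules are invented:)

    % Ordinary Prolog clauses (declared dynamic so clause/2 may
    % inspect them).
    :- dynamic droopy/1, opened_oven_early/1.
    droopy(Souffle) :- opened_oven_early(Souffle).
    opened_oven_early(my_souffle).

    % prove(Goal, Proof): solve Goal and return its proof tree.
    prove(true, true) :- !.
    prove((A, B), (PA, PB)) :- !, prove(A, PA), prove(B, PB).
    prove(Goal, Goal-SubProof) :-
        clause(Goal, Body),
        prove(Body, SubProof).

    % ?- prove(droopy(S), Why).
    % S = my_souffle,
    % Why = droopy(my_souffle)-(opened_oven_early(my_souffle)-true)

So no huge explanation database -- the proof tree *is* the reason why.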
Abby, please help me decide: should I use a different,
more technical phrase like "advice-giving database program"
instead of the confusing and misunderstood "expert system",
or join a less demanding profession like brain surgery?
yrs, a sad hacker.

michaelm@bcsaic.UUCP (03/06/87)

In article <1147@sfsup.UUCP> saal@sfsup.UUCP (Sam Saal) writes:
>In article <178@arcsun.UUCP> roy@arcsun.UUCP (Roy Masrani) writes:
>>Dear Abby.  My friends are shunning me because I think that for a
>>program to be called an "expert system", it must be able to explain its
>>decisions.  "The system must be able to show its line of reasoning", I
>>cry.  They say "Forget it, Roy... an expert system need only make
>>decisions that equal a human expert's...
>
>...Once it is "in production" (the field) it may not
>be as important to give an explanation every time.  This is
>particularly the case when the expert system is used to help do
>some of the more mundane tasks on a very frequent basis.  There
>are two reasons for this: (1) the user may be able to agree
>intuitively after deriving the answer -- the machine has just
>helped speed the process -- or (2) if a production ES has been
>converted to a compiled language, the code to express the
>rationale may be removed to speed up run time.

I'm not an ES expert, but when I talk to a human expert in a field, I commonly
ask "why?" or "what alternatives are there?" (which is the same thing for the
user, I think, although perhaps not for the expert).  This is even true in
"mundane" or frequently performed tasks.

An example: I went to the AAA to ask what the best route was to drive
from Seattle to Miami in early spring.  Since I was going to an expert for
the solution, there was a reason, and almost by definition it was not
routine.  I may have asked them how to drive from A to B many times, but
in this case I asked why they routed me the way they did, because I was
unsure of the weather conditions over the passes in Montana and Colorado.

If the ES is not just to "make decisions that equal a human expert's" but
to replace and/or augment a human, I would want to be able to ask it the
same questions.  Hence I think that while point (2)--that by deleting
explanation code we can speed up the run-time system--may be true, it is
beside the point (pun intended).  If anything, it is an argument for
faster hardware.

Or maybe I'm just suspicious...
-- 
Mike Maxwell
Boeing Advanced Technology Center
	arpa: michaelm@boeing.com
	uucp: uw-beaver!uw-june!bcsaic!michaelm

lyang%jennifer@Sun.COM (Larry Yang) (03/07/87)

In article <886@rpics.RPI.EDU> yerazuws@rpics.RPI.EDU (Crah) writes:
>In article <178@arcsun.UUCP>, roy@arcsun.UUCP (Roy Masrani) writes:
>> 
>> Dear Abby.  My friends are shunning me because I think that for a
>> program to be called an "expert system", it must be able to explain its
>> decisions.  "The system must be able to show its line of reasoning", I
>> cry.  They say "Forget it, Roy... an expert system need only make
>> decisions that equal a human expert's.  An explanation facility is
>> optional".  Who's right?

In medical decision systems, the ability to explain the decision
is very important.  I believe that most medical 'expert' systems
(MYCIN and INTERNIST come to mind) have a 'why' or 'explain'
feature.  My understanding is that these systems were to
have applications in teaching, and such a feature would help
medical students understand the medical decision-making process.
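
(A toy sketch of that flavor of feature, not MYCIN's actual code --
the rule, the findings, and the predicate names are all invented.
Answers are read as Prolog terms, so the user types "yes.", "no.",
or "why.":)

    rule(r7, diagnosis(flu), [finding(fever), finding(aches)]).

    diagnose(D) :-
        rule(Id, diagnosis(D), Conds),
        check_all(Conds, Id).

    check_all([], _).
    check_all([C|Cs], Id) :- ask(C, Id), check_all(Cs, Id).

    % Ask about a finding; "why." shows the rule under consideration.
    ask(finding(F), Id) :-
        format("Is ~w present? (yes/no/why) ", [F]),
        read(Answer),
        (   Answer == why
        ->  rule(Id, Concl, Conds),
            format("Trying rule ~w: IF ~w THEN ~w~n", [Id, Conds, Concl]),
            ask(finding(F), Id)
        ;   Answer == yes
        ).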

But beyond the educational application, it seems that an 'expert'
system will gain greater acceptance if it has an 'explain'
feature.  Would you accept a solution that some black-box
electronic oracle offered you, without any why or wherefore?
Imagine two doctors diagnosing a condition, with one asking
the other for advice.  Would the first doctor
accept just a diagnosis from the second, or would he/she also
ask for an explanation?

================================================================================

--Larry Yang [lyang@sun.com,{backbone}!sun!lyang]|   A REAL _|> /\ |
  Sun Microsystems, Inc., Mountain View, CA      | signature |   | | /-\ |-\ /-\
  "Build a system that even a fool can use and   |          <|_/ \_| \_/\| |_\_|
   only a fool will want to use it."             |                _/          _/