[mod.ai] Definition of Expert System

Laws@SRI-STRIPE.ARPA.UUCP (03/01/87)

Why must an expert system explain its reasoning?  1) To aid system
building and debugging; 2) to convince users that the reasoning is
correct; and 3) to force conformance to a particular model of
human reasoning.

Reason 1 is hardly a sine qua non.  It is necessary that the line
of reasoning be debuggable, of course, but that can be done with
checkpoints, execution traces, and other debugging tools.  Forcing
the system to "explain" its own reasoning adds to the complexity of
the system without directly improving performance.  An explanation
capability may reduce the time, effort, and expertise required to
build and maintain or modify the system -- particularly if domain
experts instead of programmers are doing the work -- but the real
issue is what knowledge is encoded and how it is used.  We have been
guilty of defining the field by the things that happened to be easy
to implement in a few early programs, just as we sometimes define AI
as that which is easy to do in LISP.
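
(As a rough sketch of that point, and not anything from the posting
itself: a toy forward-chaining matcher can be instrumented with an
ordinary execution trace for the builder's benefit, with no user-facing
explanation module at all.  The rules and facts below are made up purely
for illustration.)

    # Hypothetical rules: (set of premises, conclusion).
    rules = [
        ({"fever", "rash"}, "measles-suspected"),
        ({"measles-suspected", "unvaccinated"}, "refer-to-specialist"),
    ]

    def forward_chain(facts, trace=False):
        """Apply rules until no new conclusions; optionally log each firing."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    if trace:  # execution trace for debugging, not for end users
                        print("fired:", sorted(premises), "->", conclusion)
                    facts.add(conclusion)
                    changed = True
        return facts

    forward_chain({"fever", "rash", "unvaccinated"}, trace=True)

The trace lets the system builder check the line of reasoning after the
fact; nothing here constitutes an explanation facility for the customer.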

Reason 2, convincing the user, is a worthy goal and perhaps necessary
in consulting applications, but contains some traps.  The real test of
a system is its performance.  If adequate (or exceptional) performance
can be documented, many customers will have no interest in what
goes on in the black box.  If performance is documentably poor, adding
an explanatory mechanism is just a marketing gimmick: an expert con.
Explanations are really needed only if some of the decisions are
faulty and the explanation makes it possible to recognize which ones.

Further, there are different types of explanation that should be
considered.  The traditional form is basically a trace of how a
particular diagnosis was reached.  This is only appropriate when
the reasoning is sequential and depends strongly on a few key facts,
the kind of reasoning that humans are able to follow and "desk check".
Reasoning that is strongly parallel, non-deterministic, or dependent
on subtle data distinctions (without linguistic names) is not amenable
to such explanations.  This sort of problem often arises in pattern
recognition.  In image segmentation, for instance, it is typically
unreasonable (for anyone but a programmer) to ask the system "By what
sequence of operations did you extract this region?".  It is reasonable,
however, to ask how the target region differs from each of its neighbors,
and how it might now be extracted easily given that one knows its
distinguishing characteristics.  In other words, the system should
answer questions in light of its current knowledge instead of trying
to backtrack to its knowledge at the time it was making decisions.
The system's job is to extract structure from chaos, and explanations
in terms of half-analyzed chaos are not helpful.
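
(Again only as an illustrative sketch, with hypothetical feature names:
an explanation "in light of current knowledge" can be as simple as
contrasting the target region's summary statistics with each neighbor's,
rather than replaying the segmentation steps that produced the region.)

    # Made-up summary statistics for a target region and its neighbors.
    target = {"mean_gray": 42.0, "texture": 0.9, "area": 1200}
    neighbors = {
        "region_A": {"mean_gray": 110.0, "texture": 0.8, "area": 900},
        "region_B": {"mean_gray": 45.0, "texture": 0.2, "area": 3000},
    }

    def contrast(region, other, threshold=0.25):
        """List features on which two regions differ by more than a relative threshold."""
        diffs = []
        for feature, value in region.items():
            scale = max(abs(value), abs(other[feature]), 1e-9)
            if abs(value - other[feature]) / scale > threshold:
                diffs.append("%s: %s vs %s" % (feature, value, other[feature]))
        return diffs

    # Answer "how does the target differ from each neighbor?" from current knowledge.
    for name, stats in neighbors.items():
        print(name, "differs in:", contrast(target, stats))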

Reason 3, adherence to a particular knowledge-engineering methodology,
is really the sticking point.  Some would claim that rule-based
reasoning and its attendant explanatory capability are fundamentally
different from other paradigms and perhaps even fundamental to human
reasoning, and that the approach therefore deserves a special name
("expert system").
Others would claim that rule-based systems are only one model of
expert reasoning and that the name should apply to any attempt at
a psychologically based or knowledge-based program.  A third group,
mostly those selling software, claims performance alone as the criterion.

I believe that explanatory capability, as currently feasible, is a
correlate of the rule-based approach and is not central in theory; it
may, however, be the key ingredient to making a particular application
feasible or marketable.  I don't believe that every optimal algorithm
is AI, so I reject the pure performance criterion for expert systems.
As to whether expert systems include only rule-based systems or all
knowledge-based systems, I can't say -- that is a matter of convention
and has to be settled by those in the expert system field.

					-- Ken Laws
-------