[mod.ai] queries about expert systems

LIN@XX.LCS.MIT.EDU (09/19/86)

Maybe some AI guru out there can help with the following questions:

1. Many expert systems are implemented as production systems.
In what other forms are "expert systems" implemented?

[I use the term "expert system" to describe the codification of any
process that people use to reason, plan, or make decisions as a set of
computer rules, involving a detailed description of the precise
thought processes used.  If you have a better description, please
share it.]

2. A production system is in essence a set of rules that state that
"IF X occurs, THEN take action Y."  System designers must anticipate
the set of "X" that can occur.  What if something happens that is not
anticipated in the specified set of "X"?  I assert that the most
common result in such cases is that nothing happens.  Am I right,
wrong, or off the map?

Thanks.

Herb Lin

hamscher@HT.AI.MIT.EDU (Walter Hamscher) (09/20/86)

   Date: Thu, 18 Sep 1986  17:10 EDT
   From: LIN@XX.LCS.MIT.EDU

   1. Many expert systems are implemented as production systems.
   In what other forms are "expert systems" implemented?

   [I use the term "expert system" to describe the codification of any
   process that people use to reason, plan, or make decisions as a set of
   computer rules, involving a detailed description of the precise
   thought processes used.  If you have a better description, please
   share it.]

``Expert System'' denotes a level of performance, not a technology.
The particularly important aspirations are generality and robustness.
Every program strives for some degree of generality and robustness, of
course, but calling a program an expert system means it's supposed to
be able to do the right thing even in situations that haven't been
explicitly anticipated, where ``the right thing'' might just be to
gracefully say ``I dunno'' when, indeed, the program doesn't have the
knowledge needed to solve the problem posed.

Production systems, or, more accurately, programs that work by running
a simple interpreter over a body of knowledge represented as IF-THEN
rules, ease the construction of simple expert systems because it's
possible to encode the knowledge without having to commit to a
particular order or context of using that knowledge.  The interpreter
determines what rule to apply next at runtime.  So long as you don't
include contradictory rules or assume a particular order of
application, such systems are easy to construct and work pretty well:
they can be general (solve a wide variety of problem instances) and
robust (in unusual situations, where no rules or only very general
rules apply, they degrade gracefully by saying ``I dunno'' rather than
trapping out with an error).
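To make the match-act loop concrete, here is a minimal sketch in
present-day Python (obviously not how the shells of the day were
coded, and with invented rule and fact names):

    # A toy forward-chaining production system: a working memory of
    # facts plus IF-THEN rules.  All names are illustrative.
    facts = {"fever", "cough"}

    # Each rule pairs a set of conditions with a fact to add when
    # all of the conditions hold.
    rules = [
        ({"fever", "cough"}, "flu-suspected"),
        ({"flu-suspected"}, "recommend-rest"),
    ]

    # Match-act loop: the interpreter, not the programmer, decides
    # which rule fires next; the rules carry no ordering commitments.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # fever, cough, flu-suspected, recommend-rest

Note that neither rule mentions the other; each triggers on whatever
happens to be in working memory when its conditions hold.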

That may not have seemed like an answer to question #1, so let me
return to it explicitly.  Production systems are not the only
technology for building expert systems, but pattern-directed
invocation is a theme common to all expert systems, whatever
technology is used.  Let me explain.  Another popular technology for
expert systems (in the medical domain, especially) might be called
Frames and Demons.  Facts are organized in a specialization hierarchy,
and attached to each fact may be a bunch of procedures (demons) that
are run when the fact is asserted, or denied, when the program needs
to figure out whether the fact is true or not, etc.  Running a demon
may trigger other demons, or add new facts, or new demons, and so the
system grinds away.  The underlying principle is the same as in
production systems: there is a large body of domain specific
knowledge, plus a simple interpreter that makes no initial commitment
to the order or context in which the facts are going to be used.  The
name of the game is pattern-directed invocation: the next action to
take is selected from among the ``rules'' or ``methods'' or ``demons''
that are relevant to the current situation.  This characteristic is
not unique to expert systems, but (I think) every program that has
ever been called an expert system shares it, and moreover it has been
central to each such program's behavior.
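
A rough sketch of the Frames-and-Demons style, in the same toy Python
idiom (the frame and demon names are invented; real systems of the
period were far richer):

    # A toy frame with "if-added" demons: procedures attached to a
    # slot that run whenever a value is asserted for that slot.
    class Frame:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent        # specialization hierarchy
            self.slots = {}
            self.if_added = {}          # slot name -> list of demons

        def attach_demon(self, slot, demon):
            self.if_added.setdefault(slot, []).append(demon)

        def assert_slot(self, slot, value):
            self.slots[slot] = value
            # Pattern-directed invocation: asserting the fact runs
            # whatever demons are attached, in no fixed order.
            for demon in self.if_added.get(slot, []):
                demon(self, value)

    def infection_demon(frame, value):
        # A demon may assert new facts, which may trigger more demons.
        if value > 38.0:
            frame.assert_slot("possible-infection", True)

    patient = Frame("patient-1")
    patient.attach_demon("temperature", infection_demon)
    patient.assert_slot("temperature", 39.2)
    print(patient.slots)  # temperature and possible-infection both set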

   2. A production system is in essence a set of rules that state that
   "IF X occurs, THEN take action Y."  System designers must anticipate
   the set of "X" that can occur.  What if something happens that is not
   anticipated in the specified set of "X"?  I assert that the most
   common result in such cases is that nothing happens.  Am I right,
   wrong, or off the map?

In most implementations of production systems, if the current
situation is such that no rules match it, nothing happens (maybe the
program prints out the atom 'DONE :-).  If the system is working in a
goal-directed fashion (e.g. it's trying to find out under what
circumstances it can take action Y (action Y might be "conclude that Z
has occurred")) and there aren't any rules that tell it anything about
Y, again, nothing happens: it can't conclude Z.  In practice, there
are always very general rules that apply when nothing else does.
Being general, they're probably not very helpful: "IF () THEN SAY
Take-Two-Aspirin-And-Call-Me-In-The-Morning."  The same applies to any
brand of pattern-directed invocation.
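
A goal-directed version of the same toy (again present-day Python with
invented names), showing the ``nothing happens'' case and a catch-all
answer standing in for those very general rules:

    # Toy backward chaining: to establish a goal, find a rule that
    # concludes it and recursively establish that rule's conditions.
    rules = [
        ({"x-occurred"}, "take-action-y"),
    ]
    known = set()   # nothing relevant has been observed

    def establish(goal):
        if goal in known:
            return True
        for conditions, conclusion in rules:
            if conclusion == goal and all(establish(c)
                                          for c in conditions):
                return True
        return False  # no rule concludes the goal: "I dunno"

    if not establish("take-action-y"):
        print("Take two aspirin and call me in the morning.")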

However, it's getting on the hairy edge of matters to say "System
designers must anticipate the set of X that can occur."  The reason is
that productions (methods, demons) are supposed to be modular:
independent of other productions, and typically written to trigger on
only a handful of the possibly thousands of features of the current
situation.  So in fact I don't need to anticipate all the situations
that occur, but rather ``just'' figure out all the relevant features
of the space of situations, and then write rules that deal with
certain combinations of those features.  It's like a grammar: I don't
have to anticipate every valid sentence, except in the sense that I
need to figure out what all the word categories are and what local
combinations of words are legal.
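
To see how few rules can cover a large space of situations, consider
this toy continuation of the earlier sketch (feature names invented):

    # Rules trigger on a handful of features, so a handful of rules
    # covers a combinatorial space of situations nobody enumerated.
    rules = [
        ({"engine-hot", "coolant-low"}, "check-for-leak"),
        ({"battery-dead"}, "recharge-battery"),
    ]

    # A situation with many features, most irrelevant to any one
    # rule; no rule author anticipated this exact combination.
    situation = {"engine-hot", "coolant-low", "radio-on", "raining",
                 "battery-dead", "tires-worn"}

    for conditions, action in rules:
        if conditions <= situation:
            print(action)   # both rules fire on this novel situation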

Now, to hone your observation a bit, I suggest focusing on the notion
of ``figuring out all the relevant features of the space of
situations.''  That's what's difficult.  Experts (including
carbon-based ones) make mistakes when they ignore (or are unaware of)
features of the situation that modify or overrule the conclusions made
from other features.  The fundamental problem in building an expert
system that deals with the real world is not entirely a matter of
cramming enough of the right rules into it (although that's hard);
it's
encoding all the exceptions, or, more to the point, remembering to
include in the program's model of the world all the features that
might be relevant to producing exceptions.

End of overly long flame.

	Walter Hamscher

P.S. I am not an AI guru, rather, a mere neophyte disciple of the bona
fide gurus on my thesis committee.

ralph@lasso.UUCP (Ralph P. Sobek) (09/25/86)

Herb,

  >1. Many expert systems are implemented as production systems.
  >In what other forms are "expert systems" implemented?

	I recommend the book "A Guide to Expert Systems," by Donald
Waterman.  It describes many expert systems, which fall more or less
under your definition, and the forms in which they are implemented.
Production Systems (PSs) can basically be divided into forward-chaining
(R1/XCON) and backward-chaining (EMYCIN); mixed systems that do both
also exist.  Other representations include frame-based (SRL), semantic
nets (KAS), object-oriented, and logic-based.  The representation used
often depends on what is available in the underlying Expert System
Tool.  Tools now exist which provide an integrated package of
representation structures for the expert system builder to use, e.g.,
KEE and LOOPS.  Expert systems are also written in standard procedural
languages such as Lisp, C, Pascal, and even Fortran.
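
The forward/backward distinction can be pinned down with one rule used
in both directions (a toy present-day Python sketch with invented
names; no resemblance to R1's or EMYCIN's internals is claimed):

    # One rule, "IF fever AND rash THEN measles", run both ways.
    rule = ({"fever", "rash"}, "measles")
    facts = {"fever", "rash"}
    conditions, conclusion = rule

    # Forward chaining (R1/XCON style): from data toward conclusions.
    if conditions <= facts:
        facts.add(conclusion)

    # Backward chaining (EMYCIN style): start from the goal and work
    # back to the data needed to support it.
    goal = "measles"
    supported = (goal == conclusion) and conditions <= facts

    print(facts, supported)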

  >2. A production system is in essence a set of rules that state that
  >"IF X occurs, THEN take action Y."  System designers must anticipate
  >the set of "X" that can occur.  What if something happens that is not
  >anticipated in the specified set of "X"?  I assert that the most
  >common result in such cases is that nothing happens.

	In both forward-chaining and backward-chaining PSs, nothing happens.
If "X" is produced by the PS itself, we can at least verify whether
"X" is ever used.  In the general case, if "X" comes from some
arbitrary external source, there is no guarantee that the PS (or any
other system) will even see the datum.

	Ralph P. Sobek

UUCP:	mcvax!inria!lasso!ralph@SEISMO.CSS.GOV
ARPA:	sobek@ucbernie.Berkeley.EDU	(automatic forwarding)
BITNET:	SOBEK@FRMOP11