[comp.ai.digest] Seminar - A Theory of Prediction and Explanation

olender@MALIBU.AI.SRI.COM (Margaret Olender) (02/26/88)

   WHEN:  FRIDAY, MARCH 4th
   TIME:  10:30am
  WHERE:  EJ228 
SPEAKER:  LEORA MORGENSTERN / BROWN UNIVERSITY.



			 WHY THINGS GO WRONG:
	    A FORMAL THEORY OF PREDICTION AND EXPLANATION

                        Leora Morgenstern
                        Brown University

This talk presents a theory of Generalized Temporal Reasoning.  We focus
on the related problems of:
(1) Temporal projection: figuring out all the facts that are true
    in some chronicle, given a partial description of that chronicle; and
(2) Explanation: figuring out what went wrong if an expected
    outcome didn't occur.

Standard logics can't handle temporal projection because of problems
such as the frame and qualification problems.  Simplistic applications
of non-monotonic logics won't do the trick either, as the Yale
Shooting Problem demonstrates.  During the past several years, a
number of solutions to the Yale Shooting Problem have been proposed,
which either use extensions of default logics (Shoham, Kautz) or
circumscribe over predicates specific to a theory of action
(Lifschitz, Haugh).  We show that these solutions, while perfectly
valid for the Yale Shooting Problem, cannot handle the general
temporal projection problem, because they all handle either forward or
backward projection improperly.
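[The anomaly the abstract refers to can be reproduced in a small sketch.  This is not the formalism of any of the cited papers, just a brute-force illustration of the Hanks-McDermott reading of the scenario: enumerate all candidate models of load(t0), wait(t1), shoot(t2), collect each model's abnormality set (unexplained fluent changes, plus the effect-axiom abnormality that a loaded gun at t2 forces on "alive"), and minimize by set inclusion.  Two incomparable minimal models survive -- the intended one and the anomalous one in which the gun mysteriously unloads during the wait.]

```python
from itertools import product

FLUENTS = ("loaded", "alive")
KEYS = [(f, t) for f in FLUENTS for t in range(4)]  # t0..t3

def consistent(m):
    """m maps (fluent, t) -> bool.  Hard facts of the scenario:
    initially alive and unloaded; load at t0 makes the gun loaded
    at t1; shooting a loaded gun at t2 means not alive at t3."""
    return (m[("alive", 0)] and not m[("loaded", 0)]
            and m[("loaded", 1)]
            and not (m[("loaded", 2)] and m[("alive", 3)]))

def abnormalities(m):
    """Every fluent change between successive states, plus the
    abnormality on 'alive' that the effect axiom forces whenever
    the gun is loaded at the shooting time t2."""
    ab = {(f, t) for f in FLUENTS for t in range(3)
          if m[(f, t)] != m[(f, t + 1)]}
    if m[("loaded", 2)]:
        ab.add(("alive", 2))
    return ab

models = []
for vals in product((True, False), repeat=len(KEYS)):
    m = dict(zip(KEYS, vals))
    if consistent(m):
        models.append(m)

# keep models whose abnormality set is minimal w.r.t. set inclusion
minimal = [m for m in models
           if not any(abnormalities(n) < abnormalities(m) for n in models)]
result = {frozenset(abnormalities(m)) for m in minimal}
print(result)  # two incomparable minimal abnormality sets
```

[Minimization alone cannot choose between {(loaded, 0), (alive, 2)} -- the intended model, where the loading and the death are the only changes -- and {(loaded, 0), (loaded, 1)}, the anomalous model in which the gun unloads during the wait and the victim survives.]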

We present a solution to the generalized temporal projection problem
based on the notion that actions only happen if they are *motivated*.
We handle the non-monotonicity using only preference criteria on
models, and avoid both modal operators and circumscription axioms.  We
show that our theory handles both forward projection and backward
projection properly, and in particular solves the Yale Shooting
Problem and a set of benchmark problems which other theories can't
handle.  An advantage of our approach is that it lends itself to an
intuitive model for the explanation task.  We present such a model,
give several characterizations of explanation within that model, and
show that these characterizations are equivalent.
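[The abstract does not spell out Morgenstern and Stein's explanation model, but the flavor of the explanation task can be illustrated -- loosely, and with the same hypothetical encoding as the sketch above -- by adding the surprising observation to the theory and re-running the minimization: observing that the victim is still alive after the shot, the only minimal abnormality set left pinpoints what went wrong.]

```python
from itertools import product

FLUENTS = ("loaded", "alive")
KEYS = [(f, t) for f in FLUENTS for t in range(4)]  # t0..t3

def consistent(m):
    """Hard facts: initially alive and unloaded; loaded at t1 after
    the load action; shooting a loaded gun at t2 kills by t3."""
    return (m[("alive", 0)] and not m[("loaded", 0)]
            and m[("loaded", 1)]
            and not (m[("loaded", 2)] and m[("alive", 3)]))

def abnormalities(m):
    """Fluent changes plus the forced effect-axiom abnormality."""
    ab = {(f, t) for f in FLUENTS for t in range(3)
          if m[(f, t)] != m[(f, t + 1)]}
    if m[("loaded", 2)]:
        ab.add(("alive", 2))
    return ab

# Explanation task: the expected outcome (death) did NOT occur.
# Keep only models consistent with the observation alive-at-t3.
models = []
for vals in product((True, False), repeat=len(KEYS)):
    m = dict(zip(KEYS, vals))
    if consistent(m) and m[("alive", 3)]:
        models.append(m)

explanations = {frozenset(abnormalities(m)) for m in models
                if not any(abnormalities(n) < abnormalities(m)
                           for n in models)}
print(explanations)
```

[A single minimal set remains: {(loaded, 0), (loaded, 1)}.  The (loaded, 0) change is just the load action's own effect, present in every model; the informative abnormality is (loaded, 1) -- the gun must have become unloaded during the wait.  In this style, "explanation" falls out of the same model-preference machinery as prediction.]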
 
This talk reports on joint work with Lynn Stein of Brown University.

-------