[mod.ai] new Technical Reports

judea@LOCUS.UCLA.EDU (Judea Pearl) (10/07/86)

The following technical reports are now available from the
                 Cognitive Systems Laboratory
                 Room 4712, Boelter Hall
                 University of California
                 Los Angeles, CA 90024
           or: judea@locus.ucla.edu

_______

Pearl, J., ``Bayes and Markov Networks: A Comparison of Two
Graphical Representations of Probabilistic Knowledge,'' Cognitive
Systems Laboratory Technical Report (R-46), September 1986.

			ABSTRACT

This paper deals with the task of configuring effective graphical
representation  for intervariable dependencies which are embedded
in a probabilistic model.  It first uncovers the axiomatic  basis
for the probabilistic relation ``x is independent of y, given
z,'' and offers it as a formal definition for the qualitative
notion of informational dependency.  Given an initial set of such
independence relationships, the axioms established permit  us  to
infer  new  independencies by non-numeric, logical manipulations.
Using this axiomatic basis, the paper determines those properties
of  probabilistic  models  that  can  be  captured  by  graphical
representations and compares  the  characteristics  of  two  such
representations,  Markov  Networks  and  Bayes Networks. A Markov
network  is  an  undirected  graph  where  the  links   represent
symmetrical  probabilistic dependencies, while a Bayes network is
a directed  acyclic  graph  where  the  arrows  represent  causal
influences  or  object-property relationships.  For each of these
two network types, we establish: 1) the formal semantics of the
dependencies   portrayed   by   the  networks,  2)  an  axiomatic
characterization of the class of dependencies capturable  by  the
network, 3) a method of constructing the network from either hard
data or expert judgments and 4) a summary of properties  relevant
to  its  use  as  a  knowledge representation scheme in inference
systems.
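
The independence relation axiomatized in the report can be checked numerically in a toy model. The sketch below (all probability tables invented for illustration, not taken from the report) builds a chain-structured joint distribution x -> z -> y and verifies that x is independent of y given z, i.e. that P(x,y,z)P(z) = P(x,z)P(y,z) everywhere:

```python
from itertools import product

# Toy chain x -> z -> y over binary variables (numbers are invented):
# P(x, y, z) = P(x) * P(z | x) * P(y | z)
px = {0: 0.6, 1: 0.4}
pz_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
py_z = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}

joint = {(x, y, z): px[x] * pz_x[x][z] * py_z[z][y]
         for x, y, z in product((0, 1), repeat=3)}

def marginal(keep):
    """Sum the joint down to the variables named in `keep` ('x','y','z')."""
    out = {}
    for (x, y, z), p in joint.items():
        key = tuple(val for name, val in zip('xyz', (x, y, z)) if name in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def independent_given_z(tol=1e-12):
    """True iff x is independent of y given z in `joint`:
    P(x, y, z) * P(z) == P(x, z) * P(y, z) for every cell."""
    pz, pxz, pyz = marginal('z'), marginal('xz'), marginal('yz')
    return all(abs(p * pz[(z,)] - pxz[(x, z)] * pyz[(y, z)]) <= tol
               for (x, y, z), p in joint.items())

print(independent_given_z())   # chain structure implies x _|_ y | z
```

Because the chain factorization forces the independence by construction, the check succeeds; breaking the factorization (e.g., letting y depend directly on x) would make it fail.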

_______

Zukerman, I. & Pearl, J.,  ``Comprehension-Driven  Generation  of
Meta-Technical  Utterances  in  Math  Tutoring,''  UCLA  Computer
Science Department Technical Report CSD-860097  (R-61).  

			ABSTRACT

A technical discussion often contains conversational  expressions
like  ``however,''  ``as  I  have stated before,'' ``next,'' etc.
These expressions, denoted Meta-Technical Utterances (MTUs), carry
important  information  which  the  listener uses to speed up the
comprehension process. In this research we model the  meaning  of
MTUs in terms of their anticipated effect on the listener's
comprehension, and use these predictions to select MTUs and weave
them  into  a  computer  generated  discourse.  This paradigm was
implemented  in  a  system  called   FIGMENT,   which   generates
commentaries on the solution of algebraic equations.

_______

Pearl,  J.,  ``Jeffrey's  Rule  and  the  Problem  of  Autonomous
Inference  Agents,''  UCLA Cognitive Systems Laboratory Technical
Report (R-62), UCLA CSD #860099, June 1986.

			ABSTRACT

Jeffrey's rule of belief revision was devised by philosophers  to
replace  Bayes conditioning in cases where the evidence cannot be
articulated propositionally.  This paper shows  that  unqualified
application  of this rule often leads to paradoxical conclusions,
and that to determine whether or not the rule  is  valid  in  any
specific  case,  one  must first have topological knowledge about
one's belief structure.  However, if such  topological  knowledge
is, indeed, available, belief updating can be done by traditional
Bayes conditioning; thus arises the question of whether it is
ever  necessary  to  use  Jeffrey's  rule  in  formalizing belief
revision.
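
For readers unfamiliar with the rule: given a partition {E_i} whose probabilities shift from P(E_i) to P'(E_i), Jeffrey's rule sets P'(A) = sum_i P(A | E_i) P'(E_i). A minimal numeric sketch (the cloth example and all numbers are illustrative, not from the report):

```python
# Jeffrey's rule: revise belief in A when evidence shifts the partition
# probabilities P(E_i) -> P'(E_i) without being a crisp proposition.
# A = "the cloth is green"; the partition is how the cloth looks.
p_E = {'looks_green': 0.5, 'looks_blue': 0.5}          # prior P(E_i)
p_A_given_E = {'looks_green': 0.9, 'looks_blue': 0.1}  # P(A | E_i)

# A glimpse by candlelight shifts belief about the partition itself:
p_E_new = {'looks_green': 0.7, 'looks_blue': 0.3}      # P'(E_i)

def jeffrey_update(p_A_given_E, p_E_new):
    """P'(A) = sum_i P(A | E_i) * P'(E_i).  Assumes the conditionals
    P(A | E_i) are untouched by the evidence -- the 'rigidity'
    assumption whose validity the report examines."""
    return sum(p_A_given_E[e] * p for e, p in p_E_new.items())

print(jeffrey_update(p_A_given_E, p_E))      # recovers the prior P(A)
print(jeffrey_update(p_A_given_E, p_E_new))  # revised belief, ~0.66
```

The paradoxes discussed in the report arise precisely when the rigidity assumption encoded in the docstring fails, which is a topological property of the belief structure rather than of the numbers.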

_______

Pearl, J., ``Distributed Revision of Belief Commitment in  Multi-
Hypotheses  Interpretation,''  UCLA  Computer  Science Department
Technical Report CSD-860045 (R-64), June 1986; presented  at  the
2nd  AAAI  Workshop  on  Uncertainty  in Artificial Intelligence,
Philadelphia, PA, August 1986.

			ABSTRACT

This paper extends the applications of belief-networks models  to
include the revision of belief commitments, i.e., the categorical
instantiation of a subset of hypotheses which constitute the most
satisfactory  explanation of the evidence at hand.  We show that,
in singly-connected networks, the most  satisfactory  explanation
can  be  found  in  linear  time  by  a message-passing algorithm
similar to  the  one  used  in  belief  updating.   In  multiply-
connected networks, the problem may be exponentially hard but, if
the network is sparse, topological considerations can be used  to
render  the  interpretation  task tractable.  In general, finding
the most probable combination of hypotheses is  no  more  complex
than   computing   the   degree  of  belief  for  any  individual
hypothesis.
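
The linear-time behavior on singly-connected networks can be illustrated on the simplest such network, a chain, where the message-passing computation reduces to a Viterbi-style max-product recursion. The sketch below (tables invented for illustration; not the report's algorithm verbatim) finds the most probable joint instantiation of a three-variable chain X1 -> X2 -> X3:

```python
prior = [0.6, 0.4]                    # P(X1) (invented numbers)
trans = [[0.7, 0.3], [0.2, 0.8]]      # P(X_{k+1} | X_k), same each step
n = 3                                 # chain length

# Forward pass: best[k][v] = max probability of any assignment to
# X1..Xk ending with Xk = v; arg[k][v] is the maximizing predecessor.
best = [prior[:]]
arg = [[None, None]]
for k in range(1, n):
    row, back = [], []
    for v in (0, 1):
        scores = [best[k - 1][u] * trans[u][v] for u in (0, 1)]
        u_star = max((0, 1), key=lambda u: scores[u])
        row.append(scores[u_star])
        back.append(u_star)
    best.append(row)
    arg.append(back)

# Backward pass: trace the maximizing values back along the chain.
v = max((0, 1), key=lambda u: best[n - 1][u])
assignment = [v]
for k in range(n - 1, 0, -1):
    v = arg[k][v]
    assignment.append(v)
assignment.reverse()
print(assignment)                     # most probable explanation
```

Each variable is visited once with constant work per value, hence the linear time; on trees the same recursion runs over messages between neighbors instead of a single left-to-right sweep.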

_______

Geffner, H. & Pearl, J., ``A Distributed Approach to Diagnosis,''
UCLA   Cognitive  Systems  Laboratory  Technical  Report  (R-66),
October 1986.

			ABSTRACT

The paper describes a distributed scheme  for  finding  the  most
likely  diagnosis  of  systems  with multiple faults.  The scheme
uses the independencies embedded in a  system  to  decompose  the
task of finding the best overall interpretation into smaller
sub-tasks of finding the best interpretations for subparts of the
net, then combining the results.  This decomposition yields a
globally-optimum diagnosis by local and  concurrent  computations
using  a message-passing algorithm.  The proposed scheme offers a
drastic   reduction   in   complexity   compared    with    other
methods:  attaining linear time in singly-connected networks and,
at worst, exp(|cycle-cutset|) time in multiply-connected
networks.

_______

Pearl, J., ``Evidential Reasoning Using Stochastic Simulation  of
Causal  Models,''  UCLA  Cognitive  Systems  Laboratory Technical
Report (R-68-I), October 1986.

			ABSTRACT

Stochastic simulation is a method of computing  probabilities  by
recording  the  fraction  of  time  that events occur in a random
series of scenarios generated from some causal model.  This paper
presents  an  efficient,  concurrent  method  of  conducting  the
simulation which guarantees that all generated scenarios will  be
consistent  with  the  observed  data.   It  is  shown  that  the
simulation can be performed by purely local computations, involving
products of parameters given with the initial specification of the
model.  Thus, the method
proposed  renders  stochastic  simulation a powerful technique of
coherent  inferencing,  especially  suited  for  tasks  involving
complex,  non-decomposable models where ``ballpark'' estimates of
probabilities will suffice.
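
A minimal sketch of the general idea, with an invented toy network (rain -> wet <- sprinkler) and invented parameters: the observed variable is clamped to the data, each unobserved variable is repeatedly resampled from a local conditional built from the model's own parameters, and the queried probability is read off the fraction of scenarios in which the event holds. This is illustrative only, not the report's exact procedure:

```python
import random

P_RAIN, P_SPR = 0.2, 0.3
# P(wet = 1 | rain, sprinkler) -- invented parameters
P_WET = {(0, 0): 0.05, (0, 1): 0.8, (1, 0): 0.9, (1, 1): 0.99}

def resample(var, rain, spr):
    """Sample `var` from its conditional given the other variables,
    with the evidence wet = 1 clamped.  The conditional is
    proportional to prior(var) * P(wet = 1 | rain, sprinkler),
    i.e., purely local products of model parameters."""
    if var == 'rain':
        w1 = P_RAIN * P_WET[(1, spr)]
        w0 = (1 - P_RAIN) * P_WET[(0, spr)]
    else:  # sprinkler
        w1 = P_SPR * P_WET[(rain, 1)]
        w0 = (1 - P_SPR) * P_WET[(rain, 0)]
    return 1 if random.random() < w1 / (w0 + w1) else 0

def estimate(n=20000, seed=0):
    """Ballpark estimate of P(rain = 1 | wet = 1) from the fraction
    of generated scenarios with rain = 1."""
    random.seed(seed)
    rain, spr, hits = 0, 0, 0
    for _ in range(n):
        rain = resample('rain', rain, spr)
        spr = resample('spr', rain, spr)
        hits += rain
    return hits / n

print(estimate())   # exact answer for these numbers is about 0.457
```

Because wet is clamped, every generated scenario is consistent with the observation, and each resampling step touches only the parameters local to the variable being updated.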

_______

Pearl, J., ``Legitimizing Causal Reasoning  in  Default  Logics''
(note),  UCLA  Cognitive  Systems Laboratory Technical Report (R-
69), September 1986.

			ABSTRACT

The purpose of this note is to draw attention to certain  aspects
of  causal  reasoning  which  are pervasive in ordinary discourse
yet, based on the author's  scan  of  the  literature,  have  not
received  due  treatment  by  logical  formalisms of common-sense
reasoning. In a nutshell, it appears that  almost  every  default
rule  falls  into one of two categories:   expectation-evoking or
explanation-evoking.  The  former  describes  association   among
events in the outside world (e.g., fire is typically accompanied
by smoke); the latter describes how we reason about the world
(e.g., smoke normally suggests fire).  This distinction is
clearly and reliably recognized by all people and  serves  as  an
indispensable tool for controlling the invocation of new default
rules. This note questions  the  ability  of  formal  systems  to
reflect common-sense inferences without acknowledging such a
distinction, and outlines a way in which the flow of causation can
be summoned within the formal framework of default logic.

_______

Dechter, R. & Pearl, J., ``The Cycle-Cutset Method for  Improving
Search  Performance  in AI Applications,'' UCLA Cognitive Systems
Laboratory Technical Report (R-67); submitted  to  the  3rd  IEEE
Conference on Artificial Intelligence Applications.

			ABSTRACT

This paper introduces a new way of improving search performance by
exploiting an efficient method available for solving tree-structured
problems.  The scheme is based on the following observation:  If, in
the course of a backtrack search, we remove from the constraint-graph 
the nodes corresponding to instantiated variables and find that the 
remaining subgraph is a tree, then the rest of the search can be 
completed in linear time.  Thus, rather than continue the search 
blindly, we invoke a tree-searching algorithm tailored to the topology 
of the remaining subproblem.  The paper presents this  method in detail
and evaluates its merit both theoretically and experimentally.
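
The pivotal test in the observation above is cheap: remove the instantiated variables from the constraint graph and ask whether what remains is a forest. A simplified sketch of that test (variable names and the 4-cycle example are invented for illustration), using union-find to detect a cycle among the remaining edges:

```python
def remaining_is_forest(edges, instantiated):
    """edges: iterable of (u, v) constraint pairs; instantiated: set of
    variables already assigned.  Returns True iff the subgraph induced
    by the uninstantiated variables is acyclic (a forest), in which
    case the rest of the search is solvable by a tree algorithm."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in instantiated or v in instantiated:
            continue                        # edge leaves the subproblem
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # joining them closes a cycle
        parent[ru] = rv
    return True

# Variables a..d form a 4-cycle; instantiating any one vertex cuts it.
cycle = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]
print(remaining_is_forest(cycle, set()))    # cycle still intact
print(remaining_is_forest(cycle, {'a'}))    # {'a'} is a cycle-cutset
```

Once the test succeeds, backtracking stops and the tree-searching algorithm finishes the remaining subproblem in linear time, which is the source of the exp(|cycle-cutset|) worst-case bound quoted for the companion diagnosis report.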