SASW%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU ("Steven A. Swernofsky") (12/16/86)
From: Rosemary B. Hegg <ROSIE at XX.LCS.MIT.EDU>
UNCERTAINTY SEMINAR ON MONDAY
Date: Monday, November 10, 1986
Time: 3:45 pm ... Refreshments
      4:00 pm ... Lecture
Place: NE43-512A
UNCERTAINTY IN AI:
IS PROBABILITY EPISTEMOLOGICALLY AND HEURISTICALLY ADEQUATE?
MAX HENRION
Carnegie Mellon
ABSTRACT
New schemes for representing uncertainty continue to
proliferate, and the debate about their relative merits seems to
be heating up. I shall examine several criteria for comparing
probabilistic representations to the alternatives. I shall
argue that criticisms of the epistemological adequacy of
probability have been misplaced. Indeed, there are several
important kinds of inference under uncertainty which arise
naturally from coherent probabilistic schemes but are hard or
impossible for the alternatives. These include combining
dependent evidence, integrating diagnostic and predictive
reasoning, and "explaining away" symptoms. Encoding uncertain
knowledge in predictive or causal form, as in Bayes' networks,
has important advantages over the currently more popular
diagnostic rules, as used in Mycin-like systems, which confound
knowledge about the domain with knowledge about inference methods.
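[As a concrete illustration of the "explaining away" pattern the
abstract mentions, here is a small sketch in Python using the
classic burglary/earthquake/alarm network. The network and its
numbers are an illustrative assumption of this note, not an
example from the talk itself.]

```python
# Two independent causes (Burglary, Earthquake) share one effect
# (Alarm).  Posteriors are computed by exact enumeration over the
# eight possible worlds.
from itertools import product

p_b = 0.01   # prior P(Burglary)
p_e = 0.02   # prior P(Earthquake)
# conditional table P(Alarm=True | Burglary, Earthquake)
p_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """Joint probability of one full assignment (b, e, a)."""
    pb = p_b if b else 1 - p_b
    pe = p_e if e else 1 - p_e
    pa = p_a[(b, e)] if a else 1 - p_a[(b, e)]
    return pb * pe * pa

def posterior_burglary(evidence):
    """P(Burglary = True | evidence), by summing matching worlds."""
    num = den = 0.0
    for b, e, a in product([True, False], repeat=3):
        world = {"b": b, "e": e, "a": a}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(b, e, a)
        den += p
        if b:
            num += p
    return num / den

# The alarm alone makes burglary plausible ...
p1 = posterior_burglary({"a": True})             # ~0.583
# ... but also observing an earthquake "explains away" the alarm,
# driving the burglary posterior back down.
p2 = posterior_burglary({"a": True, "e": True})  # ~0.032
```

The drop from p1 to p2 is the behavior that is natural in a
coherent probabilistic scheme but hard to reproduce with purely
diagnostic rule weights.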
Suggestions that artificial systems should try to simulate human
inference strategies, with all their documented biases and
errors, seem ill-advised. There is increasing evidence that
popular non-probabilistic schemes, including Mycin Certainty
Factors and Fuzzy Set Theory, perform quite poorly under some
circumstances. Even if one accepts the superiority of
probability on epistemological grounds, the question of its
heuristic adequacy remains. Recent work by Judea Pearl and
myself uses stochastic simulation and probabilistic logic for
propagating uncertainties through multiply connected Bayes'
networks. This aims to produce probabilistic schemes that are
both general and computationally tractable.
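[To make the stochastic-simulation idea concrete, here is a
minimal sketch, mine rather than Pearl's or Henrion's actual
method, of one simple sampling scheme (likelihood weighting)
estimating a posterior in a small assumed burglary/earthquake/
alarm network.]

```python
# Estimate P(Burglary=True | Alarm=True) by sampling the
# unobserved causes from their priors and weighting each sample
# by the likelihood of the observed evidence.
import random

p_b, p_e = 0.01, 0.02   # priors (illustrative numbers)
# P(Alarm=True | Burglary, Earthquake)
p_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def estimate_p_burglary_given_alarm(n=500_000, seed=0):
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        b = rng.random() < p_b      # sample Burglary from prior
        e = rng.random() < p_e      # sample Earthquake from prior
        w = p_a[(b, e)]             # weight = P(Alarm=True | b, e)
        den += w
        if b:
            num += w
    return num / den

print(estimate_p_burglary_given_alarm())
```

With enough samples the estimate converges on the exact posterior
(about 0.584 for these numbers), without ever enumerating the
joint distribution; that is the sense in which simulation trades
exactness for tractability in multiply connected networks.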
HOST: PROF. PETER SZOLOVITS