[comp.ai.digest] Plausibility reasoning

DON@atc.bendix.COM.UUCP (07/08/87)

>From: Jenny <ISCLIMEL%NUSVM.BITNET@wiscvm.wisc.edu>
>Subject: so what about plausible reasoning ?

>As I read articles on plausible reasoning in expert systems, I come to the
>conclusion that experts themselves do not exactly work with numbers as they
>solve problems. 

You are correct in several senses.  One, the psychology literature has
shown time and time again that human belief revision does not conform to
Bayesian evidence accumulation (e.g., Edwards, 1968; Fischhoff &
Beyth-Marom, 1983; Robinson & Hastie, 1985; Schum, Du Charme, & DePitts,
1973; Slovic & Lichtenstein, 1971).  Two, it does not appear that
humans literally use any of the formal plausibility-reasoning methods.

However, humans do appear to be weighing alternatives.  Although,
for a period, it may seem that humans are performing sequential
hypothesis testing, in stochastic domains with non-trivial uncertainty
humans gather support for a large set of hypotheses at the same time.
They may appear to gather support only for their "favorite"; however, if
asked for an ordering over the alternatives, or for how strongly they
believe each alternative, it is obvious that they have allowed the
evidence to change their beliefs about the non-favorite hypotheses as
well (e.g., Robinson & Hastie, 1985). 
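To make the Bayesian benchmark concrete, here is a toy sketch of my
own (the hypotheses and likelihood values are invented purely for
illustration) showing how each piece of evidence shifts belief in
EVERY hypothesis at once, not just the favorite:

```python
def bayes_update(priors, likelihoods):
    """Return posteriors P(H_i | e) from priors P(H_i) and
    likelihoods P(e | H_i), normalized to sum to one."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Three competing hypotheses, equal prior belief.
beliefs = [1 / 3, 1 / 3, 1 / 3]

# Two observations; each tuple gives P(e | H_i) for the three
# hypotheses (made-up numbers).
for likelihoods in [(0.8, 0.3, 0.1), (0.6, 0.5, 0.2)]:
    beliefs = bayes_update(beliefs, likelihoods)

# The favorite dominates, but the non-favorites' beliefs have been
# revised too -- the pattern Robinson & Hastie report.
print(beliefs)
```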

The question becomes, "what are they doing?"  For the sake of argument,
let's take your assertion and say they are not explicitly manipulating
numbers -- it does seem absurd that the automobile mechanic who can't
add simple integers without a calculator could possibly perform the
complex aggregations necessary to use numbers.  

Another possibility is that they are performing a type of non-monotonic
logic with the choice of assumptions and generation and testing of
possible worlds. This possibility suggests that, if the human is not
using numbers at any level, the human's choice of one assumption over
another uses a simple set of context-sensitive rules.  The only time the
human should change assumptions (generate an alternative path or
possible world) is if the current assumptions are defeated or if some
magical attentional process causes the human to arbitrarily try another
path.  When choosing another path, there should be a fixed set of
rules guiding the choice of alternative -- there can be no idea of
"this looks a little stronger than that" because such comparisons
require a comparison metric which is not built into non-monotonic
logics.
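A toy sketch of my own construction (the assumptions and their
ordering are invented) of what this rule-driven alternative would look
like: assumptions are held until defeated, and when one is defeated
the next is chosen by a FIXED preference order -- never by a "this
looks a little stronger" comparison, since the logic carries no metric:

```python
# Fixed, context-sensitive preference order over assumptions
# (hypothetical automobile-diagnosis flavor).
ASSUMPTION_ORDER = ["battery_ok", "starter_ok", "fuel_ok"]

def diagnose(defeaters):
    """Return the first assumption not defeated by the evidence,
    or None when every assumption (possible world) is defeated."""
    for assumption in ASSUMPTION_ORDER:
        if assumption not in defeaters:
            return assumption   # current possible world survives
    return None

# Evidence defeats "battery_ok"; the reasoner jumps to the next world
# in the fixed order, with no notion of relative strength.
print(diagnose({"battery_ok"}))   # -> starter_ok
```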

The psychological research on human search strategies (especially for
games such as chess) suggests that humans often abandon one search path
to test another which looks like it might be as strong or stronger and
then return to the original path.  This return to the original path
leads to a rejection of the hypothesis that humans maintain a set of
assumptions until evidence refutes those assumptions.  By my previous
argument, then, if non-monotonic logics model human decision making, the
humans must be choosing to change path generation based on an
attentional mechanism.  If numbers are not involved, then the
attentional mechanism is probably rule-driven.

Of course, I've laid out a straw man.  I've said it's either numbers
or rules; however, there are probably many other possibilities. 
The most likely possibility is an analog process, something akin to
comparing weights.  If we were to model this process in a computer,
we would use numbers; so, we're back to numbers.  The trouble with
just using numbers, of course, is determining how to combine them
under different circumstances and how to interpret them.  Plausibility
reasoning has been used because it, at least, suggests methods for
both of these processes.  Something that has validity at some level,
even as an approximation, is better than nothing.
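As a concrete instance of such a combination method, the MYCIN rule
for pooling two positive pieces of evidence for the same hypothesis
(MYCIN's factors come up again below) can be sketched as:

```python
def cf_combine(cf1, cf2):
    """MYCIN parallel combination for two factors of the same sign
    (both positive here): support accumulates but never exceeds 1."""
    return cf1 + cf2 * (1 - cf1)

# Two rules each lend moderate support; together they lend more.
print(cf_combine(0.6, 0.5))   # -> 0.8
```

The rule is commutative and bounded, which is exactly the sort of
"method for combining" that pure logical formalisms do not supply.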

Rather than turn this into a thesis, let's go on to your next point.

>And many of them are not willing to commit themselves into
>specifying a figure to signify their belief in a rule. 

Hmm, this sounds like something from Buchanan and Shortliffe.  Let's
think about the implications of this argument.  You're saying that, if
humans find it difficult to generate numbers to represent their degrees
of belief, then numbers must be ineffective.  Perhaps, at an even
higher level, if humans find some piece of knowledge or knowledge
artifact difficult to specify, then it probably is ineffective. 
What evidence do we have for these claims?  What are the implications
of these claims?  From a personal standpoint, I find any knowledge,
beyond the trivial, is difficult to specify in some external formalism
(including writing, rules, and probabilities).  It seems unlikely
that we will ever generate external formalisms which allow painless
knowledge transfer.  Does that imply that knowledge transfer is
hopeless?  Let's hope not, because that is the modus operandi of the
human species.  Granted, it will not be perfect, it will be painful,
and it will take time; but does that imply that it is worthless?

We "know" that human experts have knowledge which is effective. 
There is growing evidence that purely logical formalisms for
representing this knowledge will not work for all problem domains
due to the stochastic nature of the domains or the incomplete
understanding of the domain.  Does this mean that automated problem
solving must be limited to non-stochastic domains in which there
is a full and complete understanding of the causal relations and
elements?

I fear that I have strayed from the primary argument which I wanted to
use in response to your statement.  I looked at statements such as these and
asked myself whether "comfort" was a legitimate metric for determining
the effectiveness of knowledge.  This question suggested an experiment
in which different sets of experts were asked to generate the
comfortable MYCIN certainty factors, the uncomfortable but definable
conditional and a priori probabilities needed for Bayes' theorem, and
the interesting, but perhaps not well-defined, probability bounds for
the typical Dempster-Shafer formulation.  

I ran this experiment in which the experts were matched for knowledge in
the domain.  Each expert was asked to provide the parameters needed for
only one of the plausibility-reasoning formalisms.  The results were
that, at a superficial level, humans can provide better MYCIN and
Dempster-Shafer parameters than Bayesian numbers.  However, when
considering how these numbers are used and how errors in the numbers
propagate through repeated applications of the aggregation formulae, the
Bayesian parameters led to more effective automated decision making than
the MYCIN parameters.  The performance of the Dempster-Shafer parameters
was not significantly better or worse than either system in this test.
(This research is documented in two papers -- ask me for references.)
The conclusion: the domain expert's comfort is not a legitimate
determinant of knowledge effectiveness. 
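To give a rough feel for what "errors propagating through repeated
applications of the aggregation formulae" means, here is a toy
simulation of my own (NOT the code from the actual experiment; the
evidence value and noise level are invented): perturb each supplied
parameter slightly and watch the discrepancy after many combinations.

```python
import random

def cf_combine(cf1, cf2):
    """MYCIN parallel combination for positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

random.seed(0)
true_cf, noisy_cf = 0.0, 0.0
for _ in range(10):
    evidence = 0.3
    true_cf = cf_combine(true_cf, evidence)
    # Expert mis-specifies each parameter by up to +/- 0.1.
    noisy_cf = cf_combine(noisy_cf, evidence + random.uniform(-0.1, 0.1))

# Accumulated discrepancy after ten aggregations.
print(abs(true_cf - noisy_cf))
```

A serious comparison would, of course, run many trials per formalism
and score the resulting decisions, as the experiment above did.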

>If one obtains two conclusions with numbers indicating some significance,
>say 75 % and 80 %, can one say that the conclusion with 80% significance is
>the correct conclusion and ignore the other one ? 

There is a fundamental problem here.  If you are referring to
percentages, then the numbers cannot add to more than 100.  You are
correct in that a decision theory for plausibility reasoning must
take into account the accuracy of the parameters, and I believe that
some researchers have not considered this problem; however, most
plausibility reasoning researchers consider the decision theory to
be an important component which must be given strict attention.

>These numbers do not seem to mean much since they are just beliefs or
>probabilities. 

I alluded to this problem earlier.  Actually, if they are probabilities,
they mean a lot.  Probabilities have clear operational and theoretical
definitions.  Some, for example Shafer (1981), have suggested that
the definition of probabilities can be extended to better account
for the subjective nature of the probabilities used in most decision
support systems.  The real problem is with the MYCIN-style certainty
factors.  Although Heckerman (1986) has developed a formal interpretation
of certainty factors, the interpretation is ad hoc, and it seems
difficult to imagine that domain experts use this interpretation.
The meaningfulness of the numbers is an important criterion for
determining the successful application of the numbers and is one
of the strongest arguments for using probabilities and perhaps for
using Bayes' theorem.
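The odds-likelihood form of Bayes' theorem illustrates the "clear
operational definition" point; a small sketch (the prior and
likelihood ratio are invented for illustration):

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

# Posterior odds = prior odds * likelihood ratio.  Each factor has a
# precise operational meaning: the likelihood ratio is how much more
# probable the evidence is under the hypothesis than under its rival.
prior = 0.10
likelihood_ratio = 9.0   # evidence 9x as likely if H is true
posterior = prob(odds(prior) * likelihood_ratio)
print(round(posterior, 3))   # -> 0.5
```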

Donald H. Mitchell	  	Don@atc.bendix.com
Bendix Aero. Tech. Ctr.		Don%atc.bendix.com@relay.cs.net
9140 Old Annapolis Rd.		(301)964-4156
Columbia, MD 21045