jbn@GLACIER.STANFORD.EDU (John B. Nagle) (11/02/86)
Proper mathematical logic is very "brittle": two axioms that
contradict each other make it possible to prove TRUE=FALSE, from
which one can then prove anything. Thus, AI systems that use
traditional logic should contain mechanisms that prevent the
introduction of new axioms contradicting those already present; this
is referred to as "truth maintenance". Systems that lack such
mechanisms are prone to serious errors, even when reasoning about
things entirely unrelated to the contradictory axioms; a single
contradiction in the axioms generally destroys the system's ability
to get useful results.
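The step from a contradiction to an arbitrary conclusion is the
classical principle of explosion ("ex falso quodlibet"): from P,
infer P-or-Q by disjunction introduction; then from P-or-Q and not-P,
infer Q by disjunctive syllogism. A minimal machine-checked rendering
of that step, in the Lean proof assistant (chosen here purely for
illustration):

    -- Ex falso quodlibet: contradictory hypotheses prove any Q.
    -- Informally: from P infer P \/ Q; from P \/ Q and ~P infer Q.
    example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
      absurd hp hnp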
Non-monotonic reasoning is an attempt to make reasoning systems
less brittle, by containing the damage that can be caused by
contradiction in the axioms. The rules of inference of non-monotonic
reasoning systems are weaker than those of traditional logic. There
is no general agreement on what the rules of inference of such
systems should be. Some regard non-monotonic reasoning as hacking at
the level of mathematical logic. Non-monotonic reasoning lies in a
grey area between the worlds of logic and heuristics.
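A toy sketch of the non-monotonic flavor (purely illustrative, tied
to no particular published formalism; all names are invented for the
example): a default conclusion is withdrawn when a more specific fact
arrives, something classical logic never does.

    # Toy non-monotonic inference: the default "birds fly" holds
    # unless blocked by a more specific fact.  Illustrative only.
    def flies(animal, facts):
        if ("penguin", animal) in facts:    # exception blocks default
            return False
        return ("bird", animal) in facts    # default conclusion

    facts = {("bird", "tweety")}
    print(flies("tweety", facts))      # True: default applies
    facts.add(("penguin", "tweety"))
    print(flies("tweety", facts))      # False: adding a fact
                                       # retracted a conclusion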
					John Nagle

ether.allegra@btl.CSNET.UUCP (11/06/86)
John Nagle, in a recent posting, writes:

> Non-monotonic reasoning is an attempt to make reasoning systems
> less brittle, by containing the damage that can be caused by
> contradiction in the axioms. The rules of inference of non-monotonic
> reasoning systems are weaker than those of traditional logic.

Most nonmonotonic reasoning formalisms I know of (default logic,
autoepistemic logic, circumscription, NML I and II, ...) incorporate
a first-order logic as a subset. Their rules of inference are thus
*stronger* than traditional logics'. I think Nagle is thinking of
Relevance Logic (see Anderson & Belnap), which does make an effort to
contain the effects of contradiction by weakening the inference rules
(avoiding the paradoxes of implication).

As for truth-maintenance systems, contrary to Nagle and popular
mythology, these systems typically do *not* avoid contradictions per
se. What they *do* do is prevent one from 'believing' all of a set of
facts explicitly marked as contradictory by the system using the TMS.
These systems don't usually have any deductive power at all; they are
merely constraint satisfaction devices.

					David W. Etherington
					AT&T Bell Laboratories
					600 Mountain Avenue
					Murray Hill, NJ  07974-2070
					ether%allegra@btl.csnet
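A toy rendering of the constraint-checking behavior described above
(illustrative only; the names are invented for the sketch, and real
TMS implementations are considerably more involved): the device
deduces nothing itself, it merely refuses to label every member of a
recorded "nogood" set as believed at once.

    # Toy consistency check in the spirit of a TMS: no deduction,
    # just constraint satisfaction over recorded nogood sets.
    def admissible(beliefs, nogoods):
        # A belief set is admissible iff it contains no recorded
        # nogood set in its entirety.
        return not any(ng <= beliefs for ng in nogoods)

    nogoods = [{"P", "not-P"}]    # marked contradictory by the
                                  # problem solver, not derived here
    print(admissible({"P", "Q"}, nogoods))           # True
    print(admissible({"P", "not-P", "Q"}, nogoods))  # False: blocked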