[mod.ai] Seminar - Learning by Failing to Explain

JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU (10/21/86)

                  LEARNING BY FAILING TO EXPLAIN

                      Robert Joseph Hall
             MIT Artificial Intelligence Laboratory


Explanation-based Generalization depends on having an explanation on
which to base generalization.  Thus, a system with an incomplete or
intractable explanatory mechanism will not be able to generalize some
examples.  It is not necessary, in those cases, to give up and resort
to purely empirical generalization methods, because the system may
already know almost everything it needs to explain the precedent.
Learning by Failing to Explain is a method that exploits current
knowledge to prune complex precedents and rules, isolating their
mysterious parts.  This paper describes two techniques for Learning by
Failing to Explain: Precedent Analysis, the partial analysis of a
precedent or rule to isolate the mysterious new technique(s) it
embodies; and Rule Re-analysis, the re-analysis of old rules in terms
of new rules to obtain a more general rule set.
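
As a rough sketch (not from the paper), suppose a precedent is a set
of primitive facts and each known rule maps a pattern of facts to an
abstract description.  Precedent Analysis can then be pictured as
repeated rewriting: prune every sub-structure some rule explains,
replacing it with the rule's abstraction, and whatever facts survive
are the mysterious residue worth studying.  Rule Re-analysis reuses
the same routine on the bodies of old rules.  The Python below is a
hypothetical illustration; Rule, precedent_analysis, rule_reanalysis,
and the set-of-facts encoding are all assumptions, not the paper's
actual representation.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        head: str            # abstract description, e.g. "half-adder"
        pattern: frozenset   # the sub-structure of facts it explains

    def precedent_analysis(precedent, rules):
        # Partially analyze a precedent: whenever a rule's pattern is
        # fully present, prune that sub-structure and replace it with
        # the rule's abstract head.  Facts surviving every pass are the
        # unexplained ("mysterious") residue.  Assumes the rules are
        # acyclic, so the rewriting terminates.
        residue = set(precedent)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if rule.pattern and rule.pattern <= residue:
                    residue -= rule.pattern
                    residue.add(rule.head)
                    changed = True
        return residue

    def rule_reanalysis(old_rules, new_rules):
        # Re-express each old rule's pattern in terms of newly learned
        # rules, yielding a more abstract, hence more general, rule set.
        return [Rule(r.head,
                     frozenset(precedent_analysis(r.pattern, new_rules)))
                for r in old_rules]

    # Hypothetical example: the unexplained gate is isolated, not lost.
    rules = [Rule("half-adder", frozenset({"xor-gate", "and-gate"}))]
    print(sorted(precedent_analysis({"xor-gate", "and-gate",
                                     "mystery-gate"}, rules)))
    # -> ['half-adder', 'mystery-gate']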

Thursday, October 23, 4pm
NE-43, 8th floor playroom