[comp.ai] MBR Workshop Query

ethan@ATC.BOEING.COM (Ethan Scarl) (03/21/89)

Pat,

Your note indicating confusion about the scope of model-based reasoning
(MBR) suggests that the area seems undefined and arbitrarily wide,
encompassing all of AI.  This seems odd.  For example, the first two issues
noted in the Call for Papers are:

 * What inference mechanisms take best advantage of explicit structural and
   behavioral representations? 
 * Conversely, what do these inference mechanisms require of a structural
   representation or a behavioral simulation?  

It is not easy to see how these (or the others) are confusable with all of
AI.

Certainly, MBR may profitably use: 
>> mechanical inference, qualitative reasoning, reasoning under uncertainty, 
>> machine perception and temporal knowledge representation, among others.

But "to use" does not imply "to encompass," or "to be no different from."
MBR is a subfield of AI and has interconnections like all other subfields.
The boundaries may not always be razor sharp, but that does not imply that
MBR is indistinguishable from other subfields, much less from all of AI.

If the name itself is a source of confusion, then take "Model-Based
Reasoning" to abbreviate lengthier phrases such as "Reasoning from Models
of Structure and Behavior."  

In diagnosis, the core idea used in MBR is to compare predictions from a
simulation model of a device with the behavior of the physical device.
Discrepancies are taken to indicate device malfunction.  A common
computational technique is to use dependency traces in the simulation model
to track down possibly malfunctioning components and to generate behavioral
hypotheses to render the simulation's predictions compatible with
observations.  One interesting consequence of this approach is that a
malfunction is indicated by anything except the expected behavior; this is
quite different from traditional fault modeling, which requires a set of
pre-specified faults.
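To make the compare-and-flag idea concrete, here is a minimal sketch in modern terms (Python). The toy device (two multipliers feeding an adder), the function names, and the table-of-components representation are all illustrative assumptions on my part, not any particular MBR system:

```python
# Minimal sketch of discrepancy detection in model-based diagnosis.
# The model is a table of components: each maps input signals to an
# output signal via a transfer function.  A fault is signalled by ANY
# observed value that differs from prediction -- no pre-specified
# fault modes are required.

def predict(model, inputs):
    """Run the simulation model forward to get expected values."""
    values = dict(inputs)
    for name, (fn, in_names, out_name) in model.items():
        values[out_name] = fn(*(values[n] for n in in_names))
    return values

def discrepancies(model, inputs, observations):
    """Return each observed point whose value disagrees with prediction."""
    predicted = predict(model, inputs)
    return {pt: (predicted[pt], obs)
            for pt, obs in observations.items()
            if predicted[pt] != obs}

# A toy device: two multipliers feeding an adder.
model = {
    "M1": (lambda a, b: a * b, ("a", "b"), "x"),
    "M2": (lambda c, d: c * d, ("c", "d"), "y"),
    "A1": (lambda x, y: x + y, ("x", "y"), "f"),
}
inputs = {"a": 3, "b": 2, "c": 2, "d": 3}
print(discrepancies(model, inputs, {"f": 10}))  # -> {'f': (12, 10)}
```

The point of the sketch is only the shape of the computation: predict, compare, and treat every mismatch as evidence of malfunction.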

Simulation models have been used previously, of course; the ones used here
are constrained by the desire to trace dependencies and generate behavioral
hypotheses.  This in turn encourages building models that structurally
mimic the device and the constraints upon its operation.  E.g., each
component of the model corresponds to a component of the actual device.  A
component's behavior is modeled as a transfer function, whose result is
passed to components structurally connected to it.
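Dependency tracing falls out of this structural style almost for free: if each value carries the set of components it passed through, a discrepancy at an observed point implicates exactly the upstream components. The following sketch (again illustrative; the circuit and names are mine) shows one way to do it:

```python
# Sketch of dependency tracing in a structural model.  Each computed
# value is paired with the set of components that produced it; the
# suspects for a discrepancy are the components supporting every
# discrepant observation point.

def simulate(components, inputs):
    """Propagate (value, supporting-component-set) pairs forward."""
    values = {n: (v, frozenset()) for n, v in inputs.items()}
    for name, (fn, in_names, out_name) in components.items():
        args = [values[n][0] for n in in_names]
        support = frozenset().union(*(values[n][1] for n in in_names))
        values[out_name] = (fn(*args), support | {name})
    return values

def suspects(components, inputs, observations):
    """Intersect the supports of all discrepant observation points."""
    values = simulate(components, inputs)
    candidates = None
    for pt, obs in observations.items():
        pred, support = values[pt]
        if pred != obs:
            candidates = support if candidates is None else candidates & support
    return candidates or frozenset()

# A multiplier feeding an adder.
components = {
    "M1": (lambda a, b: a * b, ("a", "b"), "x"),
    "A1": (lambda x, c: x + c, ("x", "c"), "f"),
}
print(suspects(components, {"a": 2, "b": 3, "c": 4}, {"f": 9}))
# predicted f = 10; observed 9, so both M1 and A1 are suspects
```

Because the model mimics the device's structure, the trace through the model is simultaneously a trace through the device, which is what makes candidate generation tractable.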

This is markedly different from modeling systems using differential
equations.  It is also clearly distinguished from associative reasoning of
the sort typically captured in production rules, and avoids some serious
difficulties encountered by those approaches (e.g., in discovering and
operating with untrustworthy sensory data).

Beyond diagnosis, the approach has been applied to:  generating diagnostics
and distinguishing tests, plan debugging, design, and theory debugging
(supposing that the model is wrong when device and model differ).

As for references, work under this general rubric has been going on for
almost a decade, and there are several landmark papers; an easy place
to start is Davis and Hamscher's survey in Shrobe's "Exploring AI"
(Morgan Kaufmann, San Mateo, 1988) collection of survey talks from AAAI.

Ethan Scarl						Randy Davis
Walter Hamscher						Dan Dvorak

- on behalf of the MBR Workshop program committee

dambrosi@mist.cs.orst.edu (Bruce D'Ambrosio) (03/22/89)

I think the problem pointed to by Pat Hayes is more fundamental than
your response might indicate.  The key question is: "what is a model?"
Seems like just about any symbolic (and non-symbolic?) representation
intended to correspond to some aspect of what's "out there" is a model.

Notions like "structure" and "function" help pin things down somewhat,
but unfortunately they too become very vague and all-encompassing when
examined closely.

My thesis was about fuzzy extensions to QP theory.  I told people I was
working in model-based reasoning.  I now work on resource-bounded 
construction and evaluation of decision-bases.  I still say I am working
in model-based reasoning.

This is not to say I object to the title of the workshop; I think I know
what you mean.  But it might be an interesting exercise in the workshop
to attempt to formulate a definition of the field that captures the
intuition.  It might be difficult.

Bruce D'Ambrosio
dambrosi@cs.orst.edu