[mod.ai] Seminar - Representational Alignment

admin%cogsci.Berkeley.EDU@UCBVAX.BERKELEY.EDU.UUCP (03/11/87)

SESAME Colloquium 3/16

Jeff Shrager
Xerox Palo Alto Research Center
Monday 16 March 1987
2515 Tolman Hall
4:00 pm

Abstract

Analogy and conceptual combination operate over more than one knowledge
structure, and these mechanisms can generally combine only structures
that are built from the same terms and relations.  To make conceptual
combination work smoothly with large, representationally heterogeneous
knowledge bases, I am working toward automated high-level-to-high-level
representational alignment.  My approach is based upon an
intuitive model of how two speakers would communicate if they had
incompatible understandings of some domain.  The process involves
"grounding" terms and relations in the high-level representations into
common lower-level representations and then constructing constraints
based upon the structure of this grounding trace.   This talk will focus
on the cognitive motivations for grounding and ground-directed alignment
and on the cognitive implications of the requirements that
ground-directed alignment imposes on mental models.  Grounding
highlights a distinction among the content terms of mental models,
grounded versus ungrounded terms, which has a counterpart in the
distinction between empirical and derived terms in qualitative mental
models.  I show how the grounding of such models into animations gives
us a concrete handle on the relationship between imagery and symbolic
processes.
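
As a rough illustration of the grounding-trace idea described above,
here is a minimal Python sketch (the domain terms, the low-level
primitives, and the overlap heuristic are my own illustrative
assumptions, not Shrager's system): two speakers' high-level terms are
grounded into a shared low-level vocabulary, and candidate alignment
constraints are read off from the overlap of the grounding traces.

    # Hypothetical sketch: ground each speaker's high-level terms into
    # sets of shared low-level primitives, then propose alignments
    # between terms whose grounding traces overlap strongly.

    # Each speaker grounds high-level terms into low-level primitives.
    speaker_a = {
        "heat-flow":  {"contact", "temperature-difference", "energy-transfer"},
        "insulation": {"contact", "blocked-energy-transfer"},
    }
    speaker_b = {
        "warming": {"temperature-difference", "energy-transfer", "contact"},
        "barrier": {"blocked-energy-transfer"},
    }

    def alignment_constraints(kb_a, kb_b, threshold=0.5):
        """Pair high-level terms whose grounding traces overlap enough.

        The score is the Jaccard similarity of the two grounding sets;
        the threshold is an arbitrary cutoff for proposing an alignment.
        """
        constraints = []
        for term_a, ground_a in kb_a.items():
            for term_b, ground_b in kb_b.items():
                overlap = len(ground_a & ground_b) / len(ground_a | ground_b)
                if overlap >= threshold:
                    constraints.append((term_a, term_b, round(overlap, 2)))
        return constraints

    print(alignment_constraints(speaker_a, speaker_b))
    # [('heat-flow', 'warming', 1.0), ('insulation', 'barrier', 0.5)]

This toy version grounds only terms and compares sets; the approach
described in the abstract also grounds relations and constructs
constraints from the structure of the grounding trace itself, not just
from its overlap.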