[mod.ai] ai.bib42AB

leff%smu@csnet-relay.UUCP (12/02/86)

%A Ralph Grishman
%A Richard Kittredge
%T Analyzing Language in Restricted Domains
%I Lawrence Erlbaum Associates Inc.
%C Hillsdale, NJ
%K AI02 AA01
%D 1986
%X ISBN 0-89859-620-3, 1986, 264 pages, $29.95
.TS
tab(~);
l l.
N. Sager~T{
Sublanguage: Linguistic Phenomenon, Computational Tool
T}
J. Lehrberger~Sublanguage Analysis
T{
E. Fitzpatrick
.br
J. Bachenko
.br
D. Hindle
T}~T{
The Status of Telegraphic Sublanguages
T}
J. R. Hobbs~Sublanguage and Knowledge
T{
D. E. Walker
.br
R. A. Amsler
T}~T{
The Use of Machine Readable Dictionaries in Sublanguage Analysis
T}
C. Friedman~T{
Automatic Structuring of Sublanguage Information: Application to
Medical Narrative
T}
E. Marsh~T{
General Semantic Patterns in Different Sublanguages
T}
T{
C. A. Montgomery
.br
B. C. Glover
T}~T{
A Sublanguage for Reporting and Analysis of Space Events
T}
T. W. Finin~T{
Constraining the Interpretation of Nominal Compounds in a Limited Context
T}
G. Dunham~T{
The Role of Syntax in the Sublanguage of Medical Diagnostic Statements
T}
J. Slocum~T{
How One Might Automatically Identify and Adapt to a Sublanguage
T}
L. Hirschman~T{
Discovering Sublanguage Structures
T}
.TE

%A Janet L. Kolodner
%A Christopher K. Riesbeck
%T Experience, Memory, and Reasoning
%I Lawrence Erlbaum Associates Inc.
%C Hillsdale, NJ
%D 1986
%K AI15
%X ISBN 0-89859-664-0, 1986, 272 pages, $29.95
.TS
tab(~);
l l.
R. Wilensky~T{
Knowledge Representation - A Critique and a Proposal
T}
T{
R. H. Granger
.br
D. M. McNulty
T}~T{
Learning and Memory in Machines and Animals: An AI Model that
Accounts for Some Neurobiological Data
T}
T{
V. Sembugamoorthy
.br
B. Chandrasekaran
T}~T{
Functional Representation of Devices and Compilation of
Diagnostic Problem Solving Systems
T}
.TE

%T Recovering from Execution Errors in \s-1SIPE\s0
%A  David E. Wilkins
%J Computational Intelligence
%V 1
%D 1985
%K AI07 AI09
%X
In real-world domains (a mobile robot is used as a motivating example), things
do not always proceed as planned.  Therefore it is important to develop better
execution-monitoring techniques and replanning capabilities. This paper
describes the execution-monitoring and replanning capabilities of the
\s-1SIPE\s0 planning system.  (\s-1SIPE\s0 assumes that new information to the
execution monitor is in the form of predicates, thus avoiding the difficult
problem of how to generate these predicates from information provided by
sensors.)  The execution-monitoring module takes advantage of the rich
structure of \s-1SIPE\s0 plans (including a description of the plan rationale),
and is intimately connected with the planner, which can be called as a
subroutine.  The major advantages of embedding the replanner within the
planning system itself are:
.IP 1.
The replanning module can take advantage of the efficient frame reasoning
mechanisms in \s-1SIPE\s0 to quickly discover problems and potential fixes.
.IP 2.
The deductive capabilities of \s-1SIPE\s0 are used to provide a reasonable
solution to the truth maintenance problem.
.IP 3.
The planner can be called as a subroutine to solve problems after the
replanning module has inserted new goals in the plan.
.LP
Another important contribution is the development of a general set of
replanning actions that will form the basis for a language capable of
specifying error-recovery operators, and a general replanning capability that
has been implemented using these actions.

%T  Plan Parsing for Intended Response Recognition
in Discourse
%A  Candace L. Sidner
%J Computational Intelligence
%V 1
%D 1985
%K Discourse task-oriented dialogues intended meaning AI02
speaker's plans discourse understanding plan parsing discourse markers
%X
In a discourse, the hearer must recognize the response intended by the speaker.
To perform this recognition, the hearer must ascertain what plans the speaker
is undertaking and how the utterances in the discourse further that plan. To do
so, the hearer can parse the initial intentions (recoverable from the
utterance) and recognize the plans the speaker has in mind and intends the
hearer to know about. This paper reports on a theory of parsing the intentions
in discourse. It also discusses the role of another aspect of discourse,
discourse markers, which are valuable to intended response recognition.

%T  Knowledge Organization and its Role
in Representation and Interpretation for
Time-Varying Data: The \s-1ALVEN\s0 System
%A  John K. Tsotsos
%J Computational Intelligence
%V 1
%D 1985
%K Knowledge Representation, Expert Systems, Medical Consultation
Systems, Time-Varying Interpretation, Knowledge-Based Vision. AI01 AI06 AA01
%X
The so-called ``first generation'' expert systems were rule-based and offered a
successful framework for building application systems for certain kinds of
tasks.  Spatial, temporal, and causal reasoning, knowledge abstractions, and
structuring are among the topics of research for ``second generation'' expert
systems.
.sp
It is proposed that one of the keys for such research is \fIknowledge
organization\fP.  Knowledge organization determines control structure design,
explanation and evaluation capabilities for the resultant knowledge base, and
has strong influence on system performance.  We are exploring a framework for
expert system design that focuses on knowledge organization for a specific
class of input data, namely, continuous, time-varying data (image sequences or
other signal forms).  Such data is rich in temporal relationships as well as
temporal changes of spatial relations and is thus a very appropriate testbed
for studies involving spatio-temporal reasoning.  In particular, the
representation facilitates and enforces the semantics of the organization of
knowledge classes along the relationships of generalization/specialization,
decomposition/aggregation, temporal precedence, instantiation, and
expectation-activated similarity.
.sp
A hypothesize-and-test control structure is
driven by the class organizational principles, and includes several interacting
dimensions of search (data-driven, model-driven, goal-driven, temporal, and
failure-driven search).  The hypothesis ranking scheme is based on temporal
cooperative computation, with hypothesis ``fields of influence'' being defined
by the hypotheses' organizational relationships.  This control structure has
proven robust enough to handle a variety of interpretation tasks for
continuous temporal data.
.sp
A particular incarnation, the \s-1ALVEN\s0 system, for left ventricular
performance assessment from X-ray image sequences, will be highlighted in this
paper.

%T  On the Adequacy of Predicate Circumscription
for Closed-World Reasoning
%A  David W. Etherington
%A Robert E. Mercer
%A Raymond Reiter
%J Computational Intelligence
%V 1
%D 1985
%K AI15 AI16
%X We focus on McCarthy's method of predicate circumscription in order to
establish various results about its consistency, and about its ability to
conjecture new information. A basic result is that predicate circumscription
cannot account for the standard kinds of default reasoning.  Another is that
predicate circumscription yields no new information about the equality
predicate. This has important consequences for the unique names and domain
closure assumptions.

%T  What is a Heuristic?
%A Francis Jeffry Pelletier
%A Marc H. J. Romanycia
%J Computational Intelligence
%V 1
%N 2
%D May 1985
%K AI16
%X From the mid-1950's to the present, the notion of a heuristic has
played a crucial role in AI researchers' descriptions of their
work.  What has not been generally noticed is that different
researchers have often applied the term to rather different aspects
of their programs.  Things that would be called a heuristic by
one researcher would not be so called by others.  This is because
many heuristics embody a variety of different features, and the
various researchers have emphasized different ones of these
features as being essential to being a heuristic.  This paper
steps back from any particular research programme and investigates
the question of what things, historically, have been thought to be
central to the notion of a heuristic, and which ones conflict with
others.  After analyzing the previous definitions and examining
current usage of the term, a synthesizing definition is provided.
The hope is that with this broader account of `heuristic' in hand,
researchers can benefit more fully from the insights of others,
even if those insights are couched in a somewhat alien vocabulary.

%T  Analysis by Synthesis in Computational Vision
with Application to Remote Sensing
%A  Robert Woodham
%A E. Catanzariti
%A  Alan Mackworth
%J Computational Intelligence
%V 1
%N 2
%D May 1985
%K AI06
%X
The problem in vision is to determine surface properties from
image properties.  This is difficult because the problem, formally
posed, is underconstrained.  Methods that infer scene properties
from image properties make assumptions about how the world
determines what we see.  In this paper, some of these assumptions
are dealt with explicitly, using examples from remote sensing.
Ancillary knowledge of the scene domain, in the form of a digital
terrain model and a ground cover map, is used to synthesize an
image for a given date and time.  The synthesis process assumes
that surface material is Lambertian and is based on simple models of
direct sun illumination, diffuse sky illumination, and atmospheric path
radiance.  Parameters of the model are estimated from the real image.
A statistical comparison of the real image and the synthetic image
is used to judge how well the model represents the mapping from
scene domain to image domain.
.sp 1
The methods presented for image synthesis are similar to those
used in computer graphics.  The motivation, however, is different.
In graphics, the goal is to produce an effective rendering of the
scene domain.  Here, the goal is to predict properties of real
images.  In vision, one must deal with a confounding of effects
due to surface shape, surface material, illumination, shadows
and atmosphere.  These effects often detract from, rather than
enhance, the determination of invariant scene characteristics.
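The synthesis model sketched in this abstract can be illustrated with a toy calculation (mine, not the authors'): predicted brightness of a Lambertian facet is proportional to the cosine between the surface normal and the sun direction, plus sky and path terms.  All coefficients below are invented for illustration.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def predicted_brightness(normal, sun_dir, albedo, sky=0.1, path=0.05):
    """Lambertian direct term + diffuse sky term, scaled by albedo,
    plus additive atmospheric path radiance (all values illustrative)."""
    n, s = norm(normal), norm(sun_dir)
    direct = max(0.0, dot(n, s))        # facets facing away get no direct term
    return albedo * (direct + sky) + path

# A facet tilted away from the sun is predicted darker than one facing it.
facing = predicted_brightness((0, 0, 1), (0, 0, 1), albedo=0.4)
tilted = predicted_brightness((1, 0, 1), (0, 0, 1), albedo=0.4)
```

In the paper, such predicted values (with parameters estimated from the real image) are compared statistically against observed pixel values.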

%T  A Functional Approach to Non-Monotonic Logic
%A Erik Sandewall
%J Computational Intelligence
%V 1
%N 2
%D May 1985
%K AI15 AI16
%X
Axiom sets and their extensions are viewed as functions from the
set of formulas in the language to a set of four truth-values: \fIt\fP,
\fIf\fP, \fIu\fP for undefined, and \fIk\fP for contradiction.  Such functions
form a lattice, with `contains less information' as the partial
order \(ib and `combination of several sources of knowledge' as the
least-upper-bound operation \(cu.
We demonstrate the relevance of this
approach by giving concise proofs for some previously known results
about normal default rules.  For non-monotonic rules in general
(not only normal default rules), we define a stronger version of the
minimality requirement on consistent fixpoints, and prove that
it is sufficient for the existence of a derivation of the fixpoint.

%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%T Generating paraphrases from
meaning-text semantic networks
%A Michel Boyer
%A Guy Lapalme
%K T02
%X
This paper describes a first attempt to base a paraphrase generation system
upon Mel'cuk and Zolkovskij's linguistic Meaning-Text (\s-1MT\s0) model whose
purpose is to establish correspondences between meanings, represented by
networks, and (ideally) all synonymous texts having this meaning.  The system
described in the paper contains a Prolog implementation of a small explanatory
and combinatorial dictionary (the \s-1MT\s0 lexicon) and, using unification
and backtracking, generates from a given network the sentences allowed by the
dictionary and the lexical transformations of the model.  The passage from the
net to the final texts is done through a series of transformations of
intermediary structures that closely correspond to \s-1MT\s0 utterance
representations (semantic, deep-syntax, surface-syntax and morphological
representations).  These are graphs and trees with labeled arcs.  The Prolog
unification (equality predicate) was extended to extract information from these
representations and build new ones.  The notion of utterance path, used by many
authors, is replaced by that of ``covering by defining subnetworks''.


%T Spatiotemporal inseparability in early vision:
Centre-surround models and velocity selectivity
%A David J. Fleet
%A Allan D. Jepson
%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%K AI08 AI06
%X
Several computational theories of early visual processing, such as Marr's
zero-crossing theory, are biologically motivated and based largely on the
well-known difference of Gaussians (\s-1DOG\s0) receptive field model of early
retinal processing.  We examine the physiological relevance of the \s-1DOG\s0,
particularly in the light of evidence indicating significant spatiotemporal
inseparability in the behaviour of retinal cell types.
.LP
From the form of the inseparability we find that commonly accepted functional
interpretations of retinal processing based on the \s-1DOG\s0, such as the
Laplacian of a Gaussian and zero-crossings, are not valid for time-varying
images.  In contrast to current machine-vision approaches, which attempt to
separate form and motion information at an early stage, it appears that this is
not the case in biological systems.  It is further shown that the qualitative
form of this inseparability provides a convenient precursor to the extraction
of both form and motion information.  We show the construction of efficient
mechanisms for the extraction of orientation and 2-D normal velocity through
the use of a hierarchical computational framework.  The resultant mechanisms
are well localized in space-time, and can be easily tuned to various degrees of
orientation and speed specificity.
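The spatial difference-of-Gaussians profile this abstract builds on can be sketched in a few lines (a 1-D illustration of mine, not the authors' code): a narrow excitatory centre Gaussian minus a broader inhibitory surround Gaussian.  The sigma values are arbitrary illustrative choices.

```python
import math

def gaussian(x, sigma):
    """Normalized 1-D Gaussian."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def dog(x, sigma_c=1.0, sigma_s=2.0):
    """Centre-surround (DOG) response at spatial offset x:
    narrow centre minus broad surround."""
    return gaussian(x, sigma_c) - gaussian(x, sigma_s)

# Excitatory at the centre, inhibitory in the surround,
# decaying to zero far from the receptive-field centre.
assert dog(0.0) > 0
assert dog(3.0) < 0
```

The paper's point is that a purely spatial profile like this one is inadequate for time-varying input, where the centre and surround also differ temporally.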

%T A theory of schema labelling
%A William Havens
%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%K AI16 AI06 AA04
%X
Schema labelling is a representation theory that focuses on composition and
specialization as two major aspects of machine perception.  Previous research
in computer vision and knowledge representation has identified computational
mechanisms for these tasks.  We show that the representational adequacy of
schema knowledge structures can be combined advantageously with the constraint
propagation capabilities of network consistency techniques.  In particular,
composition and specialization can be realized as mutually interdependent
cooperative processes which operate on the same underlying knowledge
representation.  In this theory, a schema is a generative representation for
a class of semantically related objects.  Composition builds a structural
description of the scene from rules defined in each schema.  The scene
description is represented as a network consistency graph which makes
explicit the objects found in the scene and their semantic relationships.
The graph is hierarchical and describes the input scene at varying levels
of detail.  Specialization applies network consistency techniques to refine
the graph towards a global scene description.  Schema labelling is being used
for interpreting hand-printed Chinese characters, and for recognizing
\s-1VLSI\s0 circuit designs from their mask layouts.

%T Hierarchical arc consistency:
Exploring structured domains
in constraint satisfaction problems
%A Alan K. Mackworth
%A Jan A. Mulder
%A  William S. Havens
%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%K AI03 AI16 AI06
%X
Constraint satisfaction problems can be solved by network consistency
algorithms that eliminate local inconsistencies before constructing global
solutions.  We describe a new algorithm that is useful when the variable
domains can be structured hierarchically into recursive subsets with common
properties and common relationships to subsets of the domain values for related
variables.  The algorithm, \s-1HAC\s0, uses a technique known as hierarchical
arc consistency.  Its performance is analyzed theoretically and the conditions
under which it is an improvement are outlined.  The use of \s-1HAC\s0 in a
program for understanding sketch maps, Mapsee3, is briefly discussed and
experimental results consistent with the theory are reported.
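For background, ordinary (flat) arc consistency, which hierarchical arc consistency refines, can be sketched in the AC-3 style below.  This is illustrative standard material, not the \s-1HAC\s0 algorithm of the paper; `domains` maps variables to candidate-value sets and `constraints` maps ordered variable pairs to binary predicates.

```python
from collections import deque

def ac3(domains, constraints):
    """Prune values with no support until every arc is consistent."""
    queue = deque(constraints)              # all arcs (x, y)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Keep only values of x supported by some value of y.
        revised = {v for v in domains[x]
                   if any(pred(v, w) for w in domains[y])}
        if revised != domains[x]:
            domains[x] = revised
            # Re-examine arcs pointing into x.
            queue.extend((z, x) for (z, w) in constraints if w == x)
    return domains

# Example: X < Y over small integer domains.
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
cons = {("X", "Y"): lambda a, b: a < b,
        ("Y", "X"): lambda a, b: a > b}
result = ac3(doms, cons)
# X loses 3 (no larger Y remains), Y loses 1 (no smaller X remains).
```

\s-1HAC\s0 improves on this scheme when domain values fall into recursive subsets that can be pruned wholesale rather than value by value.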

%T  Expression of Syntactic and Semantic Features
in Logic-Based Grammars
%A  Patrick Saint-Dizier
%J Computational Intelligence
%V 2
%N 1
%D February 1986
%K AI02
%X In this paper we introduce and motivate a formalism to represent syntactic
and semantic features in logic-based grammars.  We also introduce technical
devices to express relations between features and inheritance mechanisms.
This leads us to propose some extensions to the basic unification mechanism
of Prolog.  Finally, we consider the problem of long-distance dependency
relations between constituents in Gapping Grammar rules from the point of
view of morphosyntactic features that may change depending on the position
occupied by the ``moved'' constituents.  What we propose is not a new
linguistic theory about features, but rather a formalism and a set of tools
that we think will be useful to grammar writers in describing features and
their relations in grammar rules.

%T  Natural Language Understanding and
Theories of Natural Language Semantics
%A  Per-Kristian Halvorsen
%J Computational Intelligence
%V 2
%N 1
%D February 1986
%K AI02
%X
In these short remarks, I examine the connection between Montague grammar, one
of the most influential theories of natural language semantics during the past
decade, and natural language understanding, one of the most recalcitrant
problems in AI and computational linguistics for more than the last decade.
When we view Montague grammar in light of the requirements of a theory of
natural language understanding, new traits become prominent, and highly touted
advantages of the approach become less significant.  What emerges is a new
set of criteria to apply to theories of natural language understanding.  Once
one has this measuring stick in hand, it is impossible to withstand the
temptation of also applying it to the emerging contender to Montague grammar
as a semantic theory, namely situation semantics.

%T  Unrestricted Gapping Grammars
%A  Fred Popowich
%J Computational Intelligence
%V 2
%N 1
%D February 1986
%K AI02
%X
Since Colmerauer's introduction of metamorphosis grammars (MGs), with
their associated type-0-like grammar rules, there has been a desire
to allow more general rule formats in logic grammars.  Gap symbols were added
to the MG rule by Pereira, resulting in extraposition grammars (XGs).
Gaps, which are referenced by gap symbols, are sequences of zero or more
unspecified symbols which may be present anywhere in a sentence or in a
sentential form.  However, XGs imposed restrictions on the position of gap
symbols and on the contents of gaps.  With the introduction of gapping
grammars (GGs) by Dahl, these restrictions were removed, but the rule was
still required to possess a nonterminal symbol as the first symbol on the
left-hand side.  This restriction is removed with the introduction of
unrestricted gapping grammars.  FIGG, a Flexible Implementation of Gapping
Grammars, possesses a bottom-up parser which can process a large subset of
unrestricted GGs for describing phenomena of natural languages such as free
word order, and partially free word or constituent order.  It can also be used
as a programming language to implement natural language systems which are
based on grammars (or metagrammars) that use the gap concept, such
as Gazdar's generalized phrase structure grammars.