E1AR0002@SMUVM1.BITNET (10/31/86)
TECHNICAL NOTE: 121\hfill PRICE: \$20.00\\[0.01in]
\noindent TITLE: MSYS: A SYSTEM FOR REASONING ABOUT SCENES\\
AUTHORS: HARRY G. BARROW and J. MARTIN TENENBAUM\\
DATE: APRIL 1976\\[0.01in]
ABSTRACT: MSYS is a system for reasoning with uncertain
information and inexact rules of inference. Its major application, to
date, has been to the interpretation of visual features (such as
regions) in scene analysis. In this application, features are
assigned sets of possible interpretations with associated likelihoods
based on local attributes (e.g., color, size, and shape).
Interpretations are related by rules of inference that adjust the
likelihoods up or down in accordance with interpretation likelihoods
of related features. An asynchronous relaxation process repeatedly
applies the rules until a consistent set of likelihood values is
attained. At this point, several alternative interpretations still
exist for each feature. One feature is chosen and the most likely of
its alternatives is assumed. The rules are then used in this more
precise context to determine likelihoods for the interpretations of
remaining features by a further round of relaxation. The selection
and relaxation steps are repeated until all features have been
interpreted.
Scene interpretation typifies constraint optimization problems
involving the assignment of values to a set of mutually constrained
variables. For an interesting class of constraints, MSYS is
guaranteed to find the optimal solution with less branching than
conventional heuristic search methods.
MSYS is implemented as a network of asynchronous parallel
processes. The implementation provides an effective way of using
data-driven systems with distributed control for optimal stochastic search.\\
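An illustrative sketch of the relaxation-and-selection cycle described
above (the features, labels, compatibilities, and update formula are
invented for illustration; MSYS itself ran as a network of asynchronous
parallel processes, not the synchronous loop shown here):
\begin{verbatim}
# Synchronous caricature of relaxation followed by selection.

def relax(beliefs, neighbors, compat, iterations=50):
    """Repeatedly adjust each interpretation's likelihood toward the support
    it receives from the current likelihoods of neighboring features."""
    for _ in range(iterations):
        updated = {}
        for feat, labels in beliefs.items():
            raw = {}
            for label, p in labels.items():
                support = sum(compat[(label, nl)] * np
                              for nb in neighbors[feat]
                              for nl, np in beliefs[nb].items())
                raw[label] = p * (1.0 + support)
            total = sum(raw.values()) or 1.0
            updated[feat] = {l: v / total for l, v in raw.items()}
        beliefs = updated
    return beliefs

def interpret(beliefs, neighbors, compat):
    """Alternate relaxation with committing the most confident feature to its
    most likely interpretation, until every feature is interpreted."""
    fixed = {}
    while len(fixed) < len(beliefs):
        beliefs = relax(beliefs, neighbors, compat)
        feat = max((f for f in beliefs if f not in fixed),
                   key=lambda f: max(beliefs[f].values()))
        best = max(beliefs[feat], key=beliefs[feat].get)
        fixed[feat] = best
        beliefs[feat] = {l: float(l == best) for l in beliefs[feat]}
    return fixed

# Two adjacent regions with hypothetical label likelihoods and compatibilities.
beliefs = {"r1": {"sky": 0.6, "wall": 0.4}, "r2": {"sky": 0.3, "wall": 0.7}}
neighbors = {"r1": ["r2"], "r2": ["r1"]}
compat = {("sky", "sky"): 0.2, ("sky", "wall"): 0.8,
          ("wall", "sky"): 0.1, ("wall", "wall"): 0.5}
print(interpret(beliefs, neighbors, compat))
\end{verbatim}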
\pagebreak
--------------------------------------------------------------------------------
-------------------------------------------------\\
TECHNICAL NOTE: 123\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: EXPERIMENTS IN INTERPRETATION-GUIDED SEGMENTATION\\
AUTHORS: J. MARTIN TENENBAUM and HARRY G. BARROW \\
DATE: MARCH 1976\\[0.01in]
ABSTRACT: This paper presents a new approach for integrating the
segmentation and interpretation phases of scene analysis. Knowledge
from a variety of sources is used to make inferences about the
interpretations of regions, and regions are merged in accordance with
their possible interpretations.
The deduction of region interpretations is performed using a
generalization of Waltz's filtering algorithm. Deduction proceeds by
eliminating possible region interpretations that are not consistent
with any possible interpretation of an adjacent region. Different
sources of knowledge are expressed uniformly as constraints on the
possible interpretations of regions. Multiple sources of knowledge
can thus be combined in a straightforward way such that incremental
additions of knowledge (or equivalently, human guidance) will effect
incremental improvements in performance.
Experimental results are reported in three scene domains (landscapes,
mechanical equipment, and rooms), using, respectively, a human
collaborator, a geometric model, and a set of relational constraints
as sources of knowledge. These experiments demonstrate that
segmentation is much improved when integrated with interpretation, at
modest additional computational overhead over unguided segmentation.
Applications of the approach in cartography, photointerpretation,
vehicle guidance, medicine, and motion picture analysis are suggested.\\
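A minimal sketch of the filtering idea from the second paragraph (the
regions, labels, and compatibility relation below are invented; in the
paper the constraints also come from geometric models, relational
knowledge, and human guidance): each region keeps a set of candidate
interpretations, and a label is discarded when no remaining label of
some adjacent region is compatible with it.
\begin{verbatim}
# Waltz-style filtering over region interpretations (illustrative data).

def filter_labels(candidates, adjacent, compatible):
    """Delete any label of a region that is compatible with no remaining
    label of some adjacent region; repeat until nothing changes."""
    changed = True
    while changed:
        changed = False
        for region, labels in candidates.items():
            for label in list(labels):
                for other in adjacent[region]:
                    if not any(compatible(label, l2) for l2 in candidates[other]):
                        labels.discard(label)
                        changed = True
                        break
    return candidates

# Hypothetical adjacency compatibilities between labels.
ALLOWED = {("sky", "tree"), ("tree", "sky"), ("tree", "ground"),
           ("ground", "tree"), ("tree", "tree")}

def compatible(a, b):
    return (a, b) in ALLOWED

candidates = {"r1": {"sky", "ground", "tree"}, "r2": {"sky", "ground"}}
adjacent = {"r1": ["r2"], "r2": ["r1"]}
print(filter_labels(candidates, adjacent, compatible))
# r1 is reduced to {'tree'}: its other labels fit no label of r2.
\end{verbatim}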
--------------------------------------------------------------------------------
-------------------------------------------------\\
\noindent TECHNICAL NOTE: 124\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: SUBJECTIVE BAYESIAN METHODS FOR RULE-BASED INFERENCE SYSTEMS\\
AUTHORS: RICHARD O. DUDA, PETER E. HART, and NILS J. NILSSON\\
DATE: JANUARY 1976\\[0.01in]
ABSTRACT: The general problem of drawing inferences from
uncertain or incomplete evidence has invited a variety of technical
approaches, some mathematically rigorous and some largely informal and
intuitive. Most current inference systems in artificial intelligence
have emphasized intuitive methods, because the absence of adequate
statistical samples forces a reliance on the subjective judgment of
human experts. We describe in this paper a subjective Bayesian
inference method that realizes some of the advantages of both formal
and informal approaches. Of particular interest are the modifications
needed to deal with the inconsistencies usually found in collections
of subjective statements.\\
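For readers unfamiliar with the odds-likelihood form of Bayes' rule on
which such systems build, the following sketch shows the basic update
together with one simple piecewise-linear interpolation for uncertain
evidence; the rule strengths and probabilities are invented, and the
paper's treatment of inconsistent subjective estimates is more careful
than this sketch.
\begin{verbatim}
# Odds-likelihood updating with invented rule strengths; the interpolation
# for uncertain evidence is one simple scheme, not necessarily the paper's.

def odds(p):
    return p / (1.0 - p)

def prob(o):
    return o / (1.0 + o)

def update(prior_h, prior_e, ls, ln, p_e_given_obs):
    """P(H | observations): interpolate between the E-false, prior, and
    E-true values according to how strongly the observations support E."""
    p_h_given_e = prob(ls * odds(prior_h))       # O(H|E)  = LS * O(H)
    p_h_given_not_e = prob(ln * odds(prior_h))   # O(H|~E) = LN * O(H)
    if p_e_given_obs <= prior_e:
        # Observations argue against E: move from P(H|~E) toward the prior.
        return p_h_given_not_e + (prior_h - p_h_given_not_e) * (p_e_given_obs / prior_e)
    # Observations favor E: move from the prior toward P(H|E).
    return prior_h + (p_h_given_e - prior_h) * ((p_e_given_obs - prior_e) / (1.0 - prior_e))

# Hypothetical rule: E strongly suggests H (LS = 20); its absence argues
# mildly against H (LN = 0.4).  Evidence E is judged 80% likely to hold.
print(update(prior_h=0.03, prior_e=0.10, ls=20.0, ln=0.4, p_e_given_obs=0.8))
\end{verbatim}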
--------------------------------------------------------------------------------
-------------------------------------------------\\
TECHNICAL NOTE: 127\hfill PRICE: \$10.00\\[0.01in]
\begin{tabbing}
\noindent TITLE: \= APPLICATION OF INTERACTIVE SCENE ANALYSIS TECHNIQUES\\
\> TO CARTOGRAPHY\\
AUTHORS: THOMAS D. GARVEY and J. MARTIN TENENBAUM\\
DATE: SEPTEMBER 1976\\[0.01in]
\end{tabbing}
ABSTRACT: One of the most time-consuming and labor-intensive
steps in map production involves the delineation of cartographic and
cultural features such as lakes, rivers, roads, and drainages in
aerial photographs. These features are usually traced manually on a
digitizing table in painstaking detail. This paper investigates an
alternative, interactive approach in which an operator graphically
designates a feature of interest by pointing at or crudely tracing it with a display
cursor. Using this input as a guide, the system employs context-dependent,
scene-analysis techniques to extract a detailed outline of
the feature. The results are displayed so that errors can be
corrected by further interaction, for example, by tracing small
sections of the boundary in detail. This interactive approach appears
applicable to many other problem domains involving large quantities of
graphic or pictorial data, which are difficult to extract in digital
form by either strictly manual or strictly automatic means.\\
--------------------------------------------------------------------------------
-------------------------------------------------\\
TECHNICAL NOTE: 132\hfill PRICE: \$10.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE: \= IS SOMETIME SOMETIMES BETTER THAN ALWAYS?\\
\> INTERMITTENT ASSERTIONS IN PROVING PROGRAM CORRECTNESS\\
AUTHORS: ZOHAR MANNA and RICHARD WALDINGER\\
DATE: JUNE 1976\\[-0.15in]
\end{tabbing}
ABSTRACT: This paper explores a technique for proving the
correctness and termination of programs simultaneously. This
approach, which we call the intermittent-assertion method, involves
documenting the program with assertions that must be true at some time
when control is passing through the corresponding point, but that need
not be true every time. The method, introduced by Knuth and further
developed by Burstall, promises to provide a valuable complement to
the more conventional methods.
We first introduce and illustrate the technique with a number of
examples. We then show that a correctness proof using the invariant
assertion method or the subgoal induction method can always be
expressed using intermittent assertions instead, but that the reverse
is not always the case. The method can also be used just to prove
termination, and any proof of termination using the conventional
well-founded sets approach can be rephrased as a proof using
intermittent assertions. Finally, we show how the method can be
applied to prove the validity of program transformations and the
correctness of continuously operating programs.\\
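A toy illustration of the distinction (the program and the assertions
below are ours, not examples from the paper): an invariant assertion
must hold every time control reaches its point, whereas an intermittent
assertion claims only that control will at some time reach the point
with the stated property, which is enough to argue termination here.
\begin{verbatim}
# Toy program contrasting invariant and intermittent assertions.

def count_up_then_down(n):
    """Counts 0..n and back down to 0, then terminates (for n >= 1)."""
    i = 0
    going_up = True
    while i > 0 or going_up:
        # Invariant assertion: 0 <= i <= n holds EVERY time control gets here.
        assert 0 <= i <= n
        # Intermittent assertion: SOMETIME control gets here with i == n.
        # Once that happens, i strictly decreases, so the loop terminates.
        if i == n:
            going_up = False
        i = i + 1 if going_up else i - 1
    return i

print(count_up_then_down(3))   # prints 0 and, more to the point, terminates
\end{verbatim}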
--------------------------------------------------------------------------------
-------------------------------------------------\\
TECHNICAL NOTE: 134\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: EXPERIMENTS IN SPEECH UNDERSTANDING SYSTEM CONTROL\\
AUTHOR: WILLIAM H. PAXTON\\
DATE: AUGUST 1976\\[0.01in]
ABSTRACT: A series of experiments was performed concerning
control strategies for a speech understanding system. The main
experiment tested the effects on performance of four major choices:
focus attention by inhibition or use an unbiased best-first method,
island-drive or process left to right, use context checks in
priority setting or do not, and map words all at once or map only as
called for. Each combination of choices was tested with 60
simulated utterances of lengths varying from 0.8 to 2.3 seconds. The
results include analysis of the effects and interactions of the design
choices with respect to aspects of system performance such as overall
sentence accuracy, processing time, and storage. Other experiments
include tests of acoustic processing performance and a study of the
effects of increased vocabulary and improved acoustic accuracy.\\
--------------------------------------------------------------------------------
-------------------------------------------------\\
TECHNICAL NOTE: 136\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: SEMANTIC NETWORK REPRESENTATIONS IN RULE-BASED INFERENCE
SYSTEMS\\
AUTHOR: RICHARD O. DUDA\\
DATE: JANUARY 1977\\[0.01in]
ABSTRACT: Rule-based inference systems allow judgmental knowledge
about a specific problem domain to be represented as a collection of
discrete rules. Each rule states that if certain premises are known,
then certain conclusions can be inferred. An important design issue
concerns the representational form for the premises and conclusions of
the rules. We describe a rule-based system that uses a partitioned
semantic network representation for the premises and conclusions.
Several advantages can be cited for the semantic network
representation. The most important of these concern the ability to
represent subset and element taxonomic information and the potential
for a smooth interface with natural language
subsystems. This representation is being used in a system currently
under development at SRI to aid a geologist in the evaluation of the
mineral potential of exploration sites. The principles behind this
system and its current implementation are described in the paper.\\
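A schematic, purely invented sketch of the representational idea (the
relations, classes, rule, and matcher below are hypothetical, and real
partitioned networks carry far more structure than flat triples):
premises and conclusions are small networks rather than flat attribute
tests, and applying a rule means matching its premise network against
the known facts.
\begin{verbatim}
# Invented illustration of a rule whose premise and conclusion are networks
# of (node, relation, node) triples.

premise = [
    ("?x", "element-of", "IntrusiveRock"),   # taxonomic (element) link
    ("?x", "adjacent-to", "?y"),
    ("?y", "element-of", "SulfideDeposit"),
]
conclusion = [("?x", "suggests", "FavorableExplorationSite")]
rule = {"if": premise, "then": conclusion, "strength": 0.7}

def match(premise, facts):
    """Naive matcher: bind the ?variables so that every premise triple
    appears among the facts (no partitioning or taxonomy inheritance)."""
    def extend(bindings, triples):
        if not triples:
            return [bindings]
        (a, rel, b), rest = triples[0], triples[1:]
        results = []
        for fa, frel, fb in facts:
            if frel != rel:
                continue
            new = dict(bindings)
            ok = True
            for term, value in ((a, fa), (b, fb)):
                if term.startswith("?"):
                    if new.get(term, value) != value:
                        ok = False
                        break
                    new[term] = value
                elif term != value:
                    ok = False
                    break
            if ok:
                results.extend(extend(new, rest))
        return results
    return extend({}, list(premise))

facts = [("stock-17", "element-of", "IntrusiveRock"),
         ("stock-17", "adjacent-to", "zone-3"),
         ("zone-3", "element-of", "SulfideDeposit")]
print(match(rule["if"], facts))   # [{'?x': 'stock-17', '?y': 'zone-3'}]
\end{verbatim}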
--------------------------------------------------------------------------------
-------------------------------------------------\\
TECHNICAL NOTE: 137\hfill PRICE: \$10.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE: INTERACTIVE AIDS FOR CARTOGRAPHY AND PHOTO INTERPRETATION\\
AUTHORS: \= HARRY BARROW, THOMAS GARVEY, JAN KREMERS,\\
\> J. MARTIN TENENBAUM and HELEN C. WOLF\\
DATE: JANUARY 1977\\[-0.15in]
\end{tabbing}
ABSTRACT: This report covers the six-month period October 1975 to
April 1976. During this period, the application areas of ARPA-supported
Machine Vision work at SRI were changed to Cartography and
Photointerpretation. This change entailed general familiarization
with the new domains, exploration of their current practices and uses,
and determination of outstanding problems. In addition, some
preliminary tool-building and experimentation have been performed with
a view to determining feasibility of various AI approaches to the
identified problems. The work of this period resulted in the
production and submission to ARPA of a proposal for research into
Interactive Aids for Cartography and Photointerpretation. This report
will not reiterate in detail the content of the proposal, but will
refer the reader to it for further information.\\
--------------------------------------------------------------------------------
-------------------------------------------------\\
TECHNICAL NOTE: 138\hfill PRICE: \$15.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE: \= LIFER MANUAL: A GUIDE TO BUILDING PRACTICAL\\
\> NATURAL LANGUAGE INTERFACES\\
AUTHOR: GARY G. HENDRIX\\
DATE: FEBRUARY 1977\\[-0.15in]
\end{tabbing}
ABSTRACT: This document describes an application-oriented system
for creating natural language interfaces between existing computer
programs (such as data base management systems) and casual users. The
system is easy to use and flexible, offering a range of capabilities
that support both simple and complex interfaces. This range of
capabilities allows beginning interface builders to rapidly define
workable subsets of English and gives experienced builders the tools to create more advanced language
definitions. The system includes an automatic mechanism for handling
certain classes of elliptical (incomplete) inputs, a spelling
corrector, a grammar editor, and a mechanism that allows even novices,
through the use of paraphrase, to extend the language recognized by
the system. Experience with the system has shown that for many
applications, very practicable interfaces may be created in a few
days.\\
--------------------------------------------------------------------------------
------------------------------------------------\\
TECHNICAL NOTE: 139\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: HUMAN ENGINEERING FOR APPLIED NATURAL LANGUAGE PROCESSING\\
AUTHOR: GARY G. HENDRIX\\
DATE: MARCH 1977\\[0.01in]
ABSTRACT: Human engineering features for enhancing the usability
of practical natural language systems are described. Such features
include spelling correction, processing of incomplete (elliptical)
inputs, interrogation of the underlying language definition through
English queries, and an ability for casual users to extend the
language accepted by the system through the use of synonyms and
paraphrases. All of the features described are incorporated in LIFER,
an applications-oriented system for creating natural language
interfaces between computer programs and casual users. LIFER's
methods for realizing the more complex human engineering features are
presented.\\
--------------------------------------------------------------------------------
------------------------------------------------\\
TECHNICAL NOTE: 140\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: LANGUAGE ACCESS TO DISTRIBUTED DATA WITH ERROR RECOVERY\\
AUTHOR: EARL D. SACERDOTI\\
DATE: APRIL 1977\\[0.01in]
ABSTRACT: This paper discusses an effort in the application of
artificial intelligence to the access of data from a large,
distributed data base over a computer network. A running system is
described that provides access to multiple instances of a data base
management system over the ARPANET in real time. The system accepts a
rather wide range of natural language questions, plans a sequence of
appropriate queries to the data base management
system to answer the question, determines on which machines to carry
out the queries, establishes links to those machines over the ARPANET,
monitors the prosecution of the queries and recovers from certain
errors in execution, and prepares a relevant answer to the original
question. In addition to the functional components that make up the
demonstration system, equivalent functional components with higher
levels of sophistication are discussed and proposed.\\
--------------------------------------------------------------------------------
------------------------------------------------\\
TECHNICAL NOTE: 142\hfill PRICE: \$25.00\\[0.01in]
\noindent TITLE: A FRAMEWORK FOR SPEECH UNDERSTANDING\\
AUTHOR: WILLIAM H. PAXTON\\
DATE: JUNE 1977\\[0.01in]
ABSTRACT: This paper reports the author's results in designing,
implementing, and testing a framework for a speech-understanding
system. The work was done as part of a multi-disciplinary effort
based on state-of-the-art advances in computational linguistics,
artificial intelligence, systems programming, and speech science. The
overall project goal was to develop one or more computer systems that
would recognize continuous speech uttered in the context of some
well-specified task by making extensive use of grammatical, semantic,
and contextual constraints. We call a system emphasizing such
linguistic constraints a `speech-understanding system' to distinguish
it from speech-recognition systems which rely on acoustic information
alone.
Two major aspects of a framework for speech understanding are
integration (the process of forming a unified system out of a
collection of components) and control (the dynamic direction of the
overall activity of the system during the processing of an input
utterance). Our method of system integration gives a central role to
the input-language definition, which is based on augmented
phrase-structure rules. A rule consists of a phrase-structure
declaration together with statements for computing `attributes' and `factors.'
Attribute statements determine the properties of particular phrases
constructed by the rule; factor statements make acceptability
judgments on phrases. Together these statements contain specifications for
most of the potential interactions among system components.
Our approach to system control centers on a system `Executive'
responsible for applying the rules of the language definition, organizing hypotheses
and results, and assigning priorities. Phrases with their attributes
and factors are the basic entities manipulated by the Executive, which
takes on the role of a parser in carrying out its integration and
control functions. The Executive controls the overall activity of the
system by setting priorities on the basis of acoustic and linguistic
acceptability judgments. These data are combined to form scores and
ratings. A phrase score reflects a quality judgment independent of
the phrase's context and gives useful local information; a phrase rating also takes into account the surrounding
sentential context. To get early and efficient access to the
contextual information, we have developed a technique for calculating
phrase ratings by a heuristic search of possible interpretations that
would use the phrase. One of our experiments shows that this
context-checking method results in significant improvements in system
performance.\\
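The score/rating distinction can be caricatured in a few lines (our own
simplification with invented categories, words, scores, and patterns;
not the mechanism of the thesis): a phrase's score depends only on the
phrase itself, while its rating is the best combined score of any
sentence pattern that could use the phrase, with unfilled slots scored
optimistically.
\begin{verbatim}
phrase_scores = {("VERB", "show"): 0.9, ("DET", "the"): 0.8,
                 ("NOUN", "ships"): 0.7, ("NOUN", "chips"): 0.4}
sentence_patterns = [("VERB", "DET", "NOUN")]
OPTIMISTIC = 1.0   # assumed score for a slot with no candidate phrase yet

def rating(phrase, patterns=sentence_patterns, scores=phrase_scores):
    """Best combined score over patterns that could use the phrase."""
    category, _ = phrase
    best = 0.0
    for pattern in patterns:
        if category not in pattern:
            continue
        total = scores[phrase]
        for slot in pattern:
            if slot == category:
                continue          # this slot is filled by the phrase itself
            others = [s for (c, _), s in scores.items() if c == slot]
            total *= max(others) if others else OPTIMISTIC
        best = max(best, total)
    return best

for phrase, score in phrase_scores.items():
    print(phrase, "score:", score, "rating:", round(rating(phrase), 3))
\end{verbatim}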
--------------------------------------------------------------------------------
------------------------------------------------\\
TECHNICAL NOTE: 145\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: IDA\\
AUTHOR: DANIEL SAGALOWICZ\\
DATE: JUNE 1977\\[0.01in]
ABSTRACT: IDA was developed at SRI to allow a casual user to
retrieve information from a data base, knowing the fields present in
the data base, but not the structure of the data base itself. IDA is
part of a system that allows the user to express queries in a
restricted subset of English, about a data base of fourteen files
stored on CCA's Datacomputer. IDA's input is a very simple, formal
query language which is essentially a list of restrictions on fields
and queries about fields, with no mention of the structure of the data
base. It produces a series of DBMS queries, which are transmitted
over the ARPA network. The results of these queries are combined by
IDA to provide the answer to the user's query. In this paper, we will
define the input language, and give examples of IDA's behavior. We
will also present our representation of the ``structural schema,'' which
is the information needed by IDA to know how the data base is actually
organized. We will give an idea of some of the heuristics which are
used to produce a program in the language of the DBMS. Finally, we
will discuss the limitations of this approach, as well as future
research areas.\\
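To suggest the flavor of a structure-free query and a structural schema
(the field names, files, link key, and query syntax below are invented,
and are nothing like actual Datacomputer requests): the user mentions
only fields, and the schema tells the planner which files hold those
fields and how the files can be joined.
\begin{verbatim}
# Toy illustration of planning file-level requests from a structure-free query.

# Structural schema: which file holds each field, and how files link up.
schema = {
    "fields": {"ship_name": "SHIPFILE", "ship_class": "SHIPFILE",
               "port_name": "PORTFILE", "country": "PORTFILE"},
    "links": {("SHIPFILE", "PORTFILE"): "home_port_id"},
}

# A query in the spirit of "restrictions on fields and requests for fields,
# with no mention of the structure of the data base":
query = {"restrict": [("ship_class", "=", "tanker"), ("country", "=", "Norway")],
         "retrieve": ["ship_name", "port_name"]}

def plan(query, schema):
    """Group the query's fields by file and emit one pseudo-DBMS request per
    file, plus the join key needed to combine the answers."""
    per_file = {}
    for field, op, value in query["restrict"]:
        spec = per_file.setdefault(schema["fields"][field], {"where": [], "select": []})
        spec["where"].append((field, op, value))
    for field in query["retrieve"]:
        spec = per_file.setdefault(schema["fields"][field], {"where": [], "select": []})
        spec["select"].append(field)
    joins = [key for (a, b), key in schema["links"].items()
             if a in per_file and b in per_file]
    return per_file, joins

requests, joins = plan(query, schema)
for filename, spec in requests.items():
    print(filename, spec)
print("combine on:", joins)
\end{verbatim}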
--------------------------------------------------------------------------------
-------------------------------------------------\\