leff@smu.UUCP (Laurence Leff) (12/29/88)
Subject: AI-Related Dissertations from SIGART No. 102, part 1 of 3
The following is a list of dissertation titles and
abstracts related to Artificial Intelligence taken
from the Dissertation Abstracts International
(DAI) database. The list is assembled by Susanne
Humphrey and myself and is published in the SIGART
Newsletter (that list doesn't include the abstracts).
The dissertation titles and abstracts contained here
are published with the permission of University Microfilms
International, publishers of the DAI database. University
Microfilms has granted permission for this list to be
redistributed electronically and for extracts and
hardcopies to be made of it, provided that this notice
is included and provided that the list is not sold.
Copies of the dissertations may be obtained by
addressing your request to:
University Microfilms International
Dissertation Copies
Post Office Box 1764
Ann Arbor, Michigan 48106
or by telephoning (toll-free) 1-800-521-3042
(except for Michigan, Hawaii, and Alaska).
In Canada: 1-800-268-6090.
From SIGART Newsletter No. 102
File 1 of 3
Agriculture to Computer Science
------------------------------------------------------------------------
AN University Microfilms Order Number ADG87-08112.
AU SCHMOLDT, DANIEL LEE.
IN The University of Wisconsin - Madison Ph.D. 1987, 232 pages.
TI EVALUATION OF AN EXPERT SYSTEM APPROACH TO FOREST PEST MANAGEMENT OF
RED PINE (PINUS RESINOSA).
SO DAI V48(02), SecB, pp314.
DE Agriculture, Forestry and Wildlife.
AB Significant time and effort are required by forest managers to
seek, organize, and synthesize information pertinent to making
forestry decisions. Managers must also rely on the knowledge and
experience of human experts, a resource in short supply, which
requires many years to develop, and is concentrated in a few
individuals. This research effort suggests expert systems as one
potential solution to the problems of technology transfer and
maintaining a consistent and usable form of expertise.
Expert systems are computer programs which solve problems that
were previously considered only solvable by human expertise. A
number of expert system development tools are commercially available
for microcomputers; these tools contain an inference engine and the
facilities necessary for creation of a knowledge base. Two of these
tools were applied to the problem of pest diagnosis of Wisconsin red
pine (Pinus resinosa).
Twenty-eight pests were incorporated into PREDICT (Pinus
Resinosa Expert Diagnostic Consultation Tool). Pests were selected
on the basis of economic importance, potential confusion with the
former, and high visual impact. Insects, diseases, mammals, and
abiotic conditions were included.
A methodology was developed for transforming current literature
knowledge and the private knowledge of human experts into decision
rules. Factors relevant to pest diagnosis decisions were identified.
English-like decision rules were constructed, and the human experts
were asked to review these rules. The process of encoding these
rules into the format of the two development tools produced a number
of interesting comparisons between the capabilities and utility of
these tools. A lack of test cases with definitive answers and
inconclusive aspects of the adopted diagnosis paradigm influenced
the use of a decreasing diagnostic content procedure for suggesting
rule revisions.
In a blind experiment, two teams of evaluators rated PREDICT's
diagnoses and the diagnoses of five other experts, pest specialists
and field foresters. Both binary and comprehensive evaluation
scores were recorded. PREDICT's scores were found comparable to those
of the pest specialists and superior to those of the field foresters.
These results
indicate that performance comparable to human expertise, in this
specialized problem area, has been captured by a computer program.
Success in this project should encourage other researchers to
implement this approach for other forestry tasks.
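For readers unfamiliar with the approach sketched in the abstract above,
English-like decision rules of this kind are typically encoded as
condition/conclusion pairs. A minimal sketch follows; the pests, symptoms,
and rules are invented for illustration and are not taken from PREDICT's
knowledge base.

# Minimal sketch of rule-based diagnosis, in the spirit of the expert
# system approach described above. The rules below are invented for
# illustration only; they are NOT taken from PREDICT.

RULES = [
    # (pest hypothesis, set of required observations)
    ("hypothetical_shoot_borer", {"dead terminal shoot", "pitch mass on shoot"}),
    ("hypothetical_needle_rust", {"yellow needle spots", "orange pustules"}),
    ("hypothetical_root_rot",    {"thinning crown", "resin flow at root collar"}),
]

def diagnose(observations):
    """Return pest hypotheses whose conditions are all satisfied."""
    observations = set(observations)
    return [pest for pest, required in RULES if required <= observations]

if __name__ == "__main__":
    findings = ["dead terminal shoot", "pitch mass on shoot", "thinning crown"]
    print(diagnose(findings))   # -> ['hypothetical_shoot_borer']
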
AN University Microfilms Order Number ADG87-12168.
AU MARCHANT, GARRY ALLEN.
IN The University of Michigan Ph.D. 1987, 184 pages.
TI ANALOGICAL REASONING AND ERROR DETECTION.
SO DAI V48(02), SecA, pp432.
DE Business Administration, Accounting.
AB The auditor is required to provide a satisfactory explanation
for any exception that might indicate a possible material
misstatement of the financial statements. Since errors are
relatively infrequent events, even seasoned auditors may have no
direct experience with many types of errors. This thesis addresses
the question of how the auditor generates a potential explanation
and suggests analogical reasoning is an effective means for dealing
with these rare events.
The first experiment examines the use of analogical reasoning in
error detection. The second experiment examines whether two prior
analogs are more effective than one prior analog. The transfer
paradigm, based on Gick and Holyoak (1982), was used in both
experiments. The detected error in the prior analog was the only
difference between the two treatment conditions (timing and
performance) and was for each condition analogous to only one of the
alternative solutions. In the comparison between the timing and
control conditions, experts' mean probability estimate for the target
error increased when a prior analog was available, indicating that
experts were using the timing analogy. Novices showed no change
across conditions. In the comparison between the performance and
control conditions, neither experts nor novices showed any change
across conditions. In the second experiment, experts showed no
change between the one prior analog and two prior analog conditions.
Novices, with a high quality schema, showed an increase in the use
of the analogy in the two prior analog condition.
Experts may have been affected by the frequency of occurrence of
the target error. The error contained in the timing condition is a
high frequency error and the error contained in the performance and
two prior analog condition is a low frequency error. When the target
is a high frequency error, experts will use the analogy since a high
frequency target has a high activation strength. When the target is
a low frequency error, high frequency alternatives (with a higher
activation strength) will interfere with the generation of the
target. Expert auditors' use of analogy is mediated by the strength
of activation of the target error.
AN University Microfilms Order Number ADG87-08218.
AU BASU, AMIT.
IN The University of Rochester Ph.D. 1986, 180 pages.
TI IMPRECISE REASONING IN INTELLIGENT DECISION SUPPORT SYSTEMS.
SO DAI v47(12), SecA, pp4439.
DE Business Administration, Management.
AB In this thesis, a formal methodology to support reasoning with
imprecise knowledge in computer based decision support systems is
developed. Many important decision problems are highly
unstructured, and cannot be solved adequately using preset
algorithms. Much of the complexity of such problems lies in the
reasoning needed to determine how to solve individual problem
instances. Existing decision support systems do not provide much
reasoning support, largely due to the difficulty of representing and
manipulating the fragmented and imprecise knowledge that is
generally available. The methodology developed in this dissertation
provides a basis for the design of Intelligent Decision Support
Systems (IDSS) in which heuristic problem solving methods can be
used to support reasoning as well as data retrieval and numerical
computation.
The dissertation consists of three parts. First, a logic based
framework for reasoning is developed. The basic constructs of First
Order Logic (FOL) are supplemented with constructs and mechanisms
for automatic model manipulation, resulting in a powerful framework
for IDSS development. Next, the need to distinguish between two
different sources of imprecision, namely fuzziness and uncertainty,
is established, and methods for formally representing and
mechanically manipulating fuzzy and/or uncertain knowledge within
the logic framework are developed. Finally, the strengths of the
imprecise reasoning methodology are demonstrated by implementing a
prototype IDSS to support imprecise reasoning and examining the
prototype's performance on sample problems.
This research shows how IDSS can be developed for unstructured
problems even when the available knowledge is imprecise, and also
demonstrates the versatility of such a system. For instance, the
imprecision measures provide useful bases for comparing alternative
solutions, even solutions that are "close misses"; evaluation of
solutions is also possible for each subproblem. Information about
imprecision can be used not only to interpret solutions, but also to
control the problem solving process itself. Furthermore, the
generation of useful results is often possible even if some of the
available information is highly imprecise, sometimes even if some
information is missing. Such features can be very useful in
supporting unstructured decision making, yet cannot readily be
supported by a system limited to precise reasoning.
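The distinction drawn above between fuzziness (vagueness of a predicate)
and uncertainty (doubt about a proposition) can be illustrated with a
small sketch. The membership function and the min-based combination below
are arbitrary choices for illustration, not the dissertation's formal
methodology.

# Toy illustration of fuzziness vs. uncertainty. The membership
# function and the min-based combination below are illustrative
# assumptions, not the IDSS methodology itself.

def membership_tall(height_cm):
    """Fuzziness: degree to which a height counts as 'tall' (0..1)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

def qualified_conclusion(fuzzy_degree, certainty):
    """Uncertainty: confidence (0..1) that the reported height is correct.
    One simple (assumed) way to combine the two is to take the minimum."""
    return min(fuzzy_degree, certainty)

if __name__ == "__main__":
    degree = membership_tall(178)        # vague predicate: 0.6 "tall"
    confidence = 0.8                     # uncertain measurement
    print(degree, qualified_conclusion(degree, confidence))
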
AN University Microfilms Order Number ADG87-05977.
AU CHAPMAN, BRUCE LEROY.
IN The University of Texas at Austin Ph.D. 1986, 585 pages.
TI KISMET: A KNOWLEDGE-BASED SYSTEM FOR THE SIMULATION OF DISCRETE
MANUFACTURING OPERATIONS.
SO DAI v47(12), SecA, pp4440.
DE Business Administration, Management.
AB Artificial Intelligence (AI) tools and techniques were used to
construct KISMET, a discrete manufacturing simulation system.
Because of the high degree of isolation between the system's
scheduling knowledge and mechanisms used to apply this knowledge,
KISMET allows quick and easy examination, modification, and
replacement of all or part of the system's knowledge by people or
LISP-based computer programs.
KISMET incorporates several novel manufacturing concepts,
including a clear, efficient language for the representation of
multiple alternative production sequences, and a distributed
manufacturing control philosophy that is a superset of current
manufacturing philosophies.
This dissertation discusses the fundamental AI concepts germane
to KISMET; examines the nature and source of scheduling knowledge;
explains the theory, structure, and operation of the system;
presents simulations of various manufacturing shops; and explores
the use of KISMET as the basis of more advanced manufacturing AI
systems.
AN University Microfilms Order Number ADG87-09879.
AU ZACARIAS, PRUDENCE TANGCO.
IN Purdue University Ph.D. 1986, 148 pages.
TI A SCRIPT-BASED KNOWLEDGE REPRESENTATION FOR INTELLIGENT OFFICE
INFORMATION SYSTEMS.
SO DAI V48(01), SecA, pp174.
DE Business Administration, Management.
AB Intelligent Office Information Systems integrate problem
solving, natural language processing, knowledge representation,
information management and other capabilities that are necessary for
supporting the various functions of an organization. This research
focuses on the problem solving aspect, and attempts to model
organizational problem solving behavior, in planning and acting,
using script-based knowledge representation techniques. The
philosophy of object-oriented programming languages is useful in
describing the behavior of the different parts of the organization
that coordinate and cooperate in a problem solving situation.
Problem solving in office information systems calls for natural
language processing facilities for testing the effectiveness of the
proposed model. Natural language processing is a
problem solving activity and theories for representing knowledge in
NLP provide the basis for developing a unified theory of
representing and using knowledge that is appropriate for intelligent
OISs for audit support. The components of the proposed OIS for
audit problem solving are based on Discourse Representation Theory,
Conceptual Graph Theory, and Scripts. Queries involving direct data
retrieval, postcondition and precondition analysis, and deduction
are processed in the system.
AN University Microfilms Order Number ADG87-09482.
AU ALOIMONOS, JOHN.
IN The University of Rochester Ph.D. 1987, 261 pages.
TI COMPUTING INTRINSIC IMAGES.
SO DAI V48(01), SecB, pp183.
DE Computer Science.
AB Low-level modern computer vision is not domain dependent, but
concentrates on problems that correspond to identifiable modules in
the human visual system. Several theories have been proposed in the
literature for the computation of shape from shading, shape from
texture, retinal motion from spatiotemporal derivatives of the image
intensity function, and the like.
The problems with the existing approach are basically the
following: (1) The employed assumptions are very strong (they do not
hold for a large subset of real images), and so most of the
algorithms fail when applied to real images. (2) Usually the
constraints from the geometry and the physics of the problem are not
enough to guarantee uniqueness of the computed parameters. In this
case, strong additional assumptions about the world are used, in
order to restrict the space of all solutions to a unique value. (3)
Even if no assumptions at all are used and the physical constraints
are enough to guarantee uniqueness of the computed parameters,
in most cases the resulting algorithms are not robust, in the sense
that if there is a slight error in the input (i.e. small amount of
noise in the image), this results in a catastrophic error in the
output (computed parameters).
It turns out that if several available cues are combined, then
the above-mentioned problems disappear; the resulting algorithms
compute uniquely and robustly the intrinsic parameters (shape,
depth, motion, etc.).
In this thesis the problem of machine vision is explored from
its basics. A low level mathematical theory is presented for the
unique and robust computation of intrinsic parameters. The
computational aspect of the theory envisages a cooperative highly
parallel implementation, bringing in information from five different
sources (shading, texture, motion, contour and stereo), to resolve
ambiguities and ensure uniqueness and stability of the intrinsic
parameters. The problems of shape from texture, shape from shading
and motion, visual motion analysis and shape and motion from contour
are analyzed in detail.
AN University Microfilms Order Number ADG87-10326.
AU CHEN, JENN-NAN.
IN Northwestern University Ph.D. 1987, 137 pages.
TI VERIFICATION AND TRANSLATION OF DISTRIBUTED COMPUTING SYSTEM
SOFTWARE DESIGN.
SO DAI V48(01), SecB, pp184.
DE Computer Science.
AB In this dissertation, a methodology is presented for generating a
distributed computing system application program from a design
specification based on modified Petri nets. There are four major
stages in this methodology: (1) to build a structured graphics
specification model, (2) to verify abstract data types and detect
deadlock in the model, (3) to define communication among individual
processes within the model, and (4) to translate the symbolic
representation into a program in a specified high-level target
language. In this dissertation, Ada is used as the specified
high-level target language.
The structured graphics promote intelligibility because
hierarchical decomposition into functional modules is encouraged and the
behavior of each process can be easily extracted from the net as a
separate view of the system. The formal method described in this
dissertation uses symbolic formal representation to represent the
design specification of distributed computing systems. This
symbolic representation is then translated into an equivalent Ada
program structure, especially with the features of concurrency and
synchronization.
Artificial intelligence techniques are employed to verify and to
detect deadlock properties in a distributed computing system
environment. In the aspect of verification, the axioms of abstract
data types are translated into PROLOG clauses and some inquiries are
tested to prove correctness of abstract data types. An algorithm
for detecting the communication deadlock among tokens, running
concurrently, of the Petri Nets in a distributed computing system is
presented. Both have been implemented in PROLOG and OPS5, running
on a VAX/VMS system. Some examples are given to illustrate the
various concepts presented in each stage of the approach of this
dissertation.
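The abstract does not reproduce the Petri-net deadlock algorithm itself.
As a generic point of reference only, communication deadlock is commonly
detected as a cycle in a wait-for graph; the sketch below shows that
generic check (in Python rather than the PROLOG/OPS5 of the dissertation)
over an invented set of processes.

# Generic wait-for-graph cycle check, shown only as a point of
# reference for communication deadlock detection. This is not the
# dissertation's Petri-net algorithm, and the graph is hypothetical.

def has_deadlock(wait_for):
    """wait_for maps each process to the processes it is blocked on.
    A cycle in this graph indicates a (communication) deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GREY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GREY:
                return True              # back edge: cycle found
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

if __name__ == "__main__":
    # p1 waits for p2, p2 waits for p3, p3 waits for p1: deadlock.
    print(has_deadlock({"p1": ["p2"], "p2": ["p3"], "p3": ["p1"]}))  # True
    print(has_deadlock({"p1": ["p2"], "p2": [], "p3": ["p2"]}))      # False
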
AN University Microfilms Order Number ADG87-11310.
AU CHRISTENSEN, MARGARET H.
IN Temple University Ph.D. 1987, 276 pages.
TI EXPLANATION GENERATION FROM ALGEBRAIC SPECIFICATION THROUGH
HYPERRESOLUTION AND DEMODULATION: AUTOMATED HEURISTIC ASSISTANCE.
(Volumes I and II).
SO DAI V48(02), SecB, pp493.
DE Computer Science.
AB AHA delivers semantic help to users of interactive systems. It
possesses the following six capabilities: (1) It can report the
user's current state. (2) It can list all of the commands which are
legal in the user's current state. (3) It can explain the meaning
of a given command. (4) It can explain how the user got to the
current state. (5) It can explain the consequences of the issuance
of hypothetical commands from the current state. (6) It can tell
the user how to get to a goal state, and if requested, explain why
this will do the job. Knowledge about the software is represented
through algebraic specification and question answering is handled by
a resolution-based theorem prover with demodulation for the
treatment of equality. A demonstration version is implemented for a
subset of the DEC command language.
AN University Microfilms Order Number ADG87-07050.
AU DEBRAY, SAUMYA KANTI.
IN State University of New York at Stony Brook Ph.D. 1986, 246
pages.
TI GLOBAL OPTIMIZATION OF LOGIC PROGRAMS.
SO DAI v47(12), SecB, pp4957.
DE Computer Science.
AB Logic programming languages have several very attractive
features, such as separation of program logic from control,
declarative reading of programs, and ease of understanding and
maintenance. Despite this, however, the pragmatic problem of
efficiency has hampered the widespread acceptance of such languages
as general-purpose programming languages, and has led to the
proliferation of "impure" language features whose use compromises
precisely those properties of these languages that made them
attractive in the first place.
In this dissertation, we take the view that rather than leave
the burden of writing efficient programs to the programmer, and
force him to deal with impure language features, it is preferable to
have the system perform such optimizations for him. It is argued
that the model theory of first order logic, traditionally used to
specify the semantics of logic programs, is too weak to allow us to
reason about the computational behaviour of programs. A classical
denotational semantics is developed for Prolog, and notions of
strong and weak program equivalence are defined. A general
framework for dataflow analysis of logic programs, based on this
semantics, is developed. Several analysis problems considered in
the literature, such as mode inference, data dependency analysis,
type inference, functor propagation, and dead and unreachable code
detection, are shown to be instances of this framework. The
property of functionality, related to but more general than that of
determinacy, is discussed, and an algorithm given for its inference.
Several optimizing transformations of logic programs based on these
analyses are discussed. Finally, some experimental results are
given, showing the effects of such transformations on the
performance of programs. These results support our view that it is
possible, in general, to take "pure" logic programs and have an
intelligent compiler generate code that is quite efficient, thereby
achieving efficiency without having to sacrifice the advantages of
purity.
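A brief sketch may make dataflow analyses such as mode inference more
concrete. The following propagates groundness through a single invented
clause body using assumed success patterns; real mode inference, as in
the dissertation, analyzes whole programs over an abstract domain, so
this is only a toy.

# A much-simplified sketch of groundness propagation over one clause
# body. The clause, the entry modes, and the per-predicate success
# patterns below are invented for illustration; they are not the
# dissertation's analysis framework.

# Success patterns: predicate -> list of (args needed ground, args ground after)
SUCCESS_PATTERNS = {
    # length(L, N): if L is ground on entry, N is ground on success.
    "length": [({0}, {0, 1})],
    # plus(A, B, C): any two ground arguments ground the third.
    "plus": [({0, 1}, {0, 1, 2}), ({0, 2}, {0, 1, 2}), ({1, 2}, {0, 1, 2})],
}

def propagate(body, ground_vars):
    """body: list of (predicate, [vars]); returns vars ground on exit."""
    ground = set(ground_vars)
    for pred, args in body:
        for needed, produced in SUCCESS_PATTERNS.get(pred, []):
            if all(args[i] in ground for i in needed):
                ground.update(args[i] for i in produced)
                break
    return ground

if __name__ == "__main__":
    # p(Xs, N2) :- length(Xs, N), plus(N, N, N2).   (hypothetical clause)
    body = [("length", ["Xs", "N"]), ("plus", ["N", "N", "N2"])]
    print(sorted(propagate(body, {"Xs"})))   # ['N', 'N2', 'Xs']
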
AN This item is not available from University Microfilms International
ADG05-60031.
AU ETHERINGTON, DAVID WILLIAM.
IN The University of British Columbia (Canada) Ph.D. 1986.
TI REASONING WITH INCOMPLETE INFORMATION: INVESTIGATIONS OF
NON-MONOTONIC REASONING.
SO DAI V48(01), SecB, pp185.
DE Computer Science.
AB Intelligent behaviour relies heavily on the ability to reason in
the absence of complete information. Until recently, there has been
little work done on developing a formal understanding of how such
reasoning can be performed. We focus on two aspects of this
problem: default or prototypical reasoning, and closed-world or
circumscriptive reasoning.
After surveying the work in the field, we concentrate on
Reiter's default logic and the various circumscriptive formalisms
developed by McCarthy and others. Taking a largely semantic
approach, we develop and/or extend model-theoretic semantics for the
formalisms in question. These and other tools are then used to
chart the capabilities, limitations, and interrelationships of the
various approaches.
It is argued that the formal systems considered, while
interesting in their own rights, have an important role as
specification/evaluation tools vis-a-vis explicitly computational
approaches. An application of these principles is given in the
formalization of inheritance networks in the presence of exceptions,
using default logic.
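The standard example of inheritance with exceptions is the "birds fly,
penguins are birds, penguins do not fly" network. The sketch below is
only a naive, most-specific-class-wins lookup over an invented network;
it does not implement Reiter's default logic or circumscription.

# Naive inheritance with exceptions: the most specific class that says
# anything about a property wins. This is only a toy illustration of
# the problem domain, not Reiter's default logic or circumscription.

ISA = {"penguin": "bird", "robin": "bird", "bird": "animal"}
DEFAULTS = {
    ("bird", "flies"): True,
    ("penguin", "flies"): False,   # exception to the bird default
}

def holds(kind, prop):
    """Walk up the ISA chain; the first class with an opinion decides."""
    while kind is not None:
        if (kind, prop) in DEFAULTS:
            return DEFAULTS[(kind, prop)]
        kind = ISA.get(kind)
    return None   # the network says nothing either way

if __name__ == "__main__":
    print(holds("robin", "flies"))    # True  (inherited bird default)
    print(holds("penguin", "flies"))  # False (exception overrides)
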
AN University Microfilms Order Number ADG87-09383.
AU FREDERKING, ROBERT ERIC.
IN Carnegie-Mellon University Ph.D. 1986, 173 pages.
TI NATURAL LANGUAGE DIALOGUE IN AN INTEGRATED COMPUTATIONAL MODEL.
SO DAI V48(01), SecB, pp186.
DE Computer Science.
AB Natural language dialogue is a continuous, unified phenomenon.
Speakers use their conversational context to simplify individual
utterances through a number of linguistic devices, including
ellipsis and definite references. Yet most computational systems
for using natural language treat individual utterances as separate
entities, and have distinctly separate processes for handling
ellipsis, definite references, and other dialogue phenomena.
The computational system presented here, Psli3, uses the uniform
framework of a production system architecture to carry out natural
language understanding and generation in a well-integrated way.
This is demonstrated primarily using intersentential ellipsis
resolution, in addition to examples of definite reference resolution
and interactive error correction. The system's conversational
context arises naturally as the result of the persistence of the
internal representations of previous utterances in working memory.
Natural language input is interpreted within this framework using a
modification of the syntactic technique of chart parsing, extended
to include semantics, and adapted to the production system
architecture. It provides a graceful way of handling ambiguity
within this architecture, and allows separate knowledge sources to
interact smoothly across different utterances in a highly integrated
fashion.
The design of this system demonstrates how flexible and natural
user interactions can be carried out using a system with a naturally
flexible control structure. A processing-based taxonomy for
ellipsis resolution that we developed is used to analyze our
coverage of intersentential ellipsis. The semantic chart parser is
further extended to allow several closely related sentences to be
treated in a single chart. This allows the relationship between the
sentences to be used in a simple way to select between competing
alternative interpretations, and provides a natural means of
resolving complex elliptical utterances.
We describe this system in detail, and include a number of
extensive examples of the system's processing during user
interactions.
AN University Microfilms Order Number ADG87-08227.
AU FRISCH, ALAN MARK.
IN The University of Rochester Ph.D. 1986, 127 pages.
TI KNOWLEDGE RETRIEVAL AS SPECIALIZED INFERENCE.
SO DAI v47(12), SecB, pp4957.
DE Computer Science.
AB Artificial Intelligence reasoning systems commonly contain a
large corpus of declarative knowledge, called a knowledge base (KB),
and provide facilities with which the system's components can
retrieve this knowledge. This thesis sets out to study the very
nature of retrieval. Formal specifications that capture certain
informal intuitions about retrieval are developed, studied, and
implemented by retrieval algorithms.
Consistent with the necessity for fast retrieval is the guiding
intuition that a retriever is, at least in simple cases, a pattern
matcher, though in more complex cases it may perform selected
inferences such as property inheritance.
Seemingly at odds with this intuition, this thesis views the
entire process of retrieval as a form of inference and hence the KB
as a representation, not merely a data structure. A retriever makes
a limited attempt to prove that a queried sentence is a logical
consequence of the KB. When constrained by the no-chaining
restriction, inference becomes indistinguishable from
pattern-matching. Imagining the KB divided into quanta, a retriever
that respects this restriction cannot combine two quanta in order to
derive a third.
The techniques of model theory are adapted to build
non-procedural specifications of retrievability relations, which
determine what sentences are retrievable from the KB.
Model-theoretic specifications are presented for four retrievers,
each extending the capabilities of the previous one. Each is
accompanied by a rigorous investigation into its properties, and a
presentation of an efficient, terminating algorithm that provably
meets the specification.
The first retriever, which operates on a propositional language,
handles only yes/no queries, the second also handles wh-queries, and
the third allows quantifiers in the KB and the query. Each is shown
to be, in some sense, the strongest retriever that meets the
no-chaining restriction.
The third retriever forms an excellent basis for integrating a
specialized set of inferences that chain in a controllable manner.
This is achieved by incorporating taxonomic inference, such as
inheritance, to form the fourth retriever, an idealized version of
the retriever incorporated in the ARGOT natural language dialogue
system. It is characterized by its ability to infer all
consequences of its taxonomic knowledge without otherwise chaining.
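As a rough picture of retrieval under a no-chaining restriction, the
sketch below answers a query by matching it against individual KB
sentences and never combines two of them. It corresponds only loosely to
the simplest retriever described above, and the KB contents are invented.

# Sketch of a retriever that respects a no-chaining restriction: a
# query is answered by matching it against single KB sentences, never
# by combining two of them. Simplified (ground facts, variables only
# in the query); the KB below is invented for illustration.

KB = [
    ("isa", "clyde", "elephant"),
    ("isa", "fred", "giraffe"),
    ("color", "clyde", "grey"),
]

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def match(query, fact):
    """Return a binding of query variables to fact constants, or None."""
    if len(query) != len(fact):
        return None
    bindings = {}
    for q, f in zip(query, fact):
        if is_var(q):
            if bindings.setdefault(q, f) != f:
                return None              # same variable, conflicting values
        elif q != f:
            return None
    return bindings

def retrieve(query):
    """All bindings under which the query matches some single KB fact."""
    return [b for fact in KB if (b := match(query, fact)) is not None]

if __name__ == "__main__":
    print(retrieve(("isa", "?x", "elephant")))   # [{'?x': 'clyde'}]
    print(retrieve(("color", "clyde", "?c")))    # [{'?c': 'grey'}]
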
AN University Microfilms Order Number ADG87-02889.
AU GUPTA, ANOOP.
IN Carnegie-Mellon University Ph.D. 1986, 264 pages.
TI PARALLELISM IN PRODUCTION SYSTEMS.
SO DAI v47(12), SecB, pp4959.
DE Computer Science.
AB Production system programs, on the surface, appear to be capable
of using large amounts of parallelism--it is possible to match each
production in a program to the data memory in parallel. The thesis
shows that in practice, however, the speed-up obtainable from
parallelism is quite limited, around 15-fold as compared to initial
expectations of 100-fold to 1000-fold. The main reasons for the
limited speed-up from parallelism are: (1) there are only a small
number of productions that require significant processing as a
result of a change to working memory; and (2) there is a large
variation in the processing requirement of these productions. To
obtain a large fraction of the limited speed-up that is available,
the thesis proposes a parallel version of the Rete match algorithm
which exploits parallelism at a very fine grain. It further
suggests that a suitable architecture to exploit the fine-grained
parallelism is a shared-memory multiprocessor, with 32-64 high
performance processors. For scheduling the fine grained tasks
(consisting of about 50-100 instructions), a hardware task scheduler
is proposed.
The thesis presents a large set of simulation results for
production systems exploiting different sources of parallelism. The
thesis points out the features of existing programs that limit the
speed-up obtainable from parallelism and suggests solutions for some
of the bottlenecks. The simulation results show that using the
suggested multiprocessor architecture (with individual processors
performing at 2 MIPS), it is possible to obtain execution speeds of
5000-27000 working memory element changes per second. This
corresponds to a speed-up of 5-fold over the best known sequential
implementation using a 2 MIPS processor. This performance is also
higher than that obtained by other proposed parallel implementations
of production systems.
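The limitation described above, that only a few productions are affected
by each working-memory change and their processing costs vary widely, can
be seen with a back-of-the-envelope calculation; the cost figures below
are invented purely to illustrate the arithmetic.

# Back-of-the-envelope illustration of why per-production parallelism
# gives limited speed-up: the slowest affected production bounds the
# parallel time. The cost figures are invented for illustration.

def matching_speedup(costs):
    """Speed-up from matching the affected productions in parallel."""
    serial_time = sum(costs)
    parallel_time = max(costs)   # the slowest production is the critical path
    return serial_time / parallel_time

if __name__ == "__main__":
    # Only a handful of productions are affected by a working-memory
    # change, and one of them dominates the processing cost.
    costs = [500, 50, 30, 10, 10]      # hypothetical instruction counts
    print(matching_speedup(costs))     # 1.2, far below the 5-fold ideal
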
AN University Microfilms Order Number ADG87-11810.
AU HOFF, WILLIAM ALBERT.
IN University of Illinois at Urbana-Champaign Ph.D. 1987, 162
pages.
TI SURFACES FROM STEREO: AN INTEGRATED APPROACH.
SO DAI V48(02), SecB, pp494.
DE Computer Science.
AB Stereo vision provides the capability of determining
three-dimensional distance of objects from a stereo pair of images.
The usual approach is to first identify corresponding features
between the two images, then interpolate to obtain a complete
distance or depth map. Traditionally, the most difficult problem
has been to correctly match the corresponding features. Also,
occluding and ridge contours (depth and orientation discontinuities)
have not been explicitly detected and this has made surface
interpolation difficult. The approach described in this thesis is
novel in that it integrates the processes of feature matching,
contour detection, and surface interpolation. Integration is
necessary to ensure that the detected surface is smooth. The
surface interpolation process takes into account the detected
occluding and ridge contours in the scene; interpolation is
performed within regions enclosed by these contours. Planar and
quadratic patches are used as local models of the surface. Occluded
regions in the image are identified and are not used for matching
and interpolation.
The approach described is fairly domain-independent since it
uses no constraint other than the assumption of piecewise smoothness.
A coarse-to-fine algorithm is presented that requires no human
intervention other than an initial rough estimate of depth. The
surface estimate obtained at any given level of resolution is used
to predict the expected locations of the matches at the next finer
level. As the final result, a multiresolution hierarchy of surface
maps is generated, one at each level of resolution. Experimental
results are given for a variety of stereo images.
AN University Microfilms Order Number ADG87-07139.
AU HOFFMAN, RICHARD LEE.
IN Michigan State University Ph.D. 1986, 252 pages.
TI OBJECT RECOGNITION FROM RANGE IMAGES.
SO DAI v47(12), SecB, pp4959.
DE Computer Science.
AB The recognition of objects in 3-dimensional space is an
essential capability of the ideal computer vision system. Range
images directly measure 3D surface coordinates of the visible
portion of a scene and are well suited for this task. We report a
procedure to identify 3D objects in range images, which makes use of
four key processes. The first process segments the range image into
"surface patches" by a clustering algorithm using surface points and
associated surface normals. The second process classifies these
patches as planar, convex, or concave based on a nonparametric
statistical test for trend. The third process derives patch
boundary information, and the results of this and the second process
are used to merge compatible patches to produce reasonable object
faces. The fourth process takes the patch and boundary information
provided by the earlier stages and derives a representation of the
range image. A list of salient features of the various objects in
the database forms the core of an object recognition system, which
looks for instances of these features in the representation.
Occurrences of these salient features are interpreted as evidence
for or against the hypothesis that a given object occurs in the
scene. A measure of similarity between the set of observed features
and the set of salient features for a given object in the database
is used to determine the identity of an object in the scene or
reject the object(s) in the scene as unknown. This evidence-based
object recognition system correctly identified objects in 30 out of
31 range images. Four range images showing objects not included in
the object database were rejected, as desired. Recognition degraded
only slightly when evidence weights were perturbed.
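A highly simplified rendering of the evidence-based identification step
might look like the following. The objects, salient features, and weights
are fabricated to illustrate the scoring idea; the dissertation's actual
similarity measure is not reproduced here.

# Toy evidence combination for object identification. Objects,
# salient features, and weights are invented; the actual similarity
# measure of the dissertation is not reproduced here.

SALIENT = {
    # object -> {feature: evidence weight (+ supports, - refutes)}
    "cup":   {"cylindrical patch": +0.6, "planar patch": +0.2, "concave patch": +0.4},
    "block": {"planar patch": +0.8, "cylindrical patch": -0.5},
}

def score(observed_features, weights):
    """Sum evidence for the observed features."""
    return sum(w for f, w in weights.items() if f in observed_features)

def identify(observed_features, threshold=0.5):
    """Pick the best-scoring object, or reject the scene as unknown."""
    best = max(SALIENT, key=lambda obj: score(observed_features, SALIENT[obj]))
    best_score = score(observed_features, SALIENT[best])
    return best if best_score >= threshold else "unknown"

if __name__ == "__main__":
    print(identify({"cylindrical patch", "concave patch"}))  # cup
    print(identify({"spherical patch"}))                     # unknown
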
AN University Microfilms Order Number ADG87-10462.
AU IBERALL, ALTHEA RUTH.
IN University of Massachusetts Ph.D. 1987, 214 pages.
TI A NEURAL MODEL OF HUMAN PREHENSION.
SO DAI V48(02), SecB, pp495.
DE Computer Science. Psychology, Psychobiology.
AB Behavioral studies have shown that as the human hand reaches out
to grasp an object, it preshapes for the anticipated interaction.
As it gets close to the object, it encloses it. In order to
evaluate such studies, a model is needed which can be used to
analyze grasping behavior characteristics as they relate to object
and task properties. We present here a framework for studying human
prehensile postures and movements, which is based on architectural
features of the hand. While a grasping task will involve many
fingers, we note that fingers group together to apply an opposing
force against other fingers or against task torques. We call these
groupings virtual fingers, and we call the coordinate frame in which
they move an opposition space. This space provides a description of
the ways virtual fingers can apply oppositions, as well as a method
for defining the relevant object/task properties. In order to test
this style of analysis, we performed behavioral experiments, and
show here how our framework can be used by motor and sensory
psychologists as an effective method for analyzing grasping
behaviors.
To account for the coordination and control of these complex
movements, we use schema theory. A perceptual schema must choose
where to place the hand relative to the object. We show how such a
choice could be performed by a neural network, simulated on a
computer here using an Amari-Arbib cooperation/competition model.
The preshape schema drives the virtual fingers into general ballpark
configurations, and the enclose schema fine tunes the virtual finger
configurations until the object is stably grasped. The interactions
of these schemas form a coordinated control program.
Finally, we look for neural correlates to schemas in cortical
areas of the primate central nervous system. We suggest that
transcortical loops within levels of the CNS act in a distributed
fashion to program different sensorimotor systems at appropriate
times during a task.
AN University Microfilms Order Number ADG87-12145.
AU KASPER, ROBERT T.
IN The University of Michigan Ph.D. 1987, 150 pages.
TI FEATURE STRUCTURES: A LOGICAL THEORY WITH APPLICATION TO LANGUAGE
ANALYSIS.
SO DAI V48(02), SecB, pp495.
DE Computer Science.
AB Feature structures are used for the representation of linguistic
information in several grammar formalisms for natural language
processing. These structures are a type of directed graph, in which
arcs are labeled by names of features, and nodes correspond to
values of features.
As a step in constructing a parser for a large Systemic
Functional Grammar of English, a general mapping is described from
systemic descriptions to the type of feature structures used in
Functional Unification Grammar. Experiments carried out with a
trial version of the parser revealed that existing methods of
unification could not effectively handle descriptions containing a
large amount of disjunction. Subtle difficulties were also
discovered in defining a precise interpretation for some kinds of
disjunctive feature descriptions.
In order to clarify the interpretation of feature descriptions,
a new sort of logic is developed. The formulas of this logic can be
precisely interpreted as descriptions of sets of feature structures.
A complete calculus of equivalences is defined for these formulas,
providing a sound basis for the simplification of feature
descriptions. The consistency problem for formulas of the logic is
shown to be NP-complete, with disjunction as the dominant source of
complexity. This result indicates that any complete unification
algorithm for disjunctive descriptions will probably require
exponential time in the worst case. However, an algorithm has been
designed with a much better average performance, by factoring
formulas according to laws of equivalence and using a method of
successive approximation. This algorithm has been implemented and
tested as part of the experimental parser for Systemic Functional
Grammar with favorable results. The implementation also extends the
PATR-II grammar formalism, by providing an effective way to use
disjunction in the statement of a grammar. The methods presented
are generally applicable to any computational system which uses
feature structures, as well as to the description of large grammars
for natural language analysis.
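For readers who have not seen feature structures, the following is a
minimal sketch of unification over nested attribute-value structures
represented as Python dictionaries. It omits reentrancy and disjunction,
which are precisely where the complexity results above apply, and the
example features are invented.

# Minimal unification of feature structures encoded as nested dicts.
# This sketch omits reentrancy and disjunction (the source of the
# NP-completeness result discussed above); the example is invented.

class UnificationFailure(Exception):
    pass

def unify(fs1, fs2):
    """Unify two feature structures; atomic values must be equal."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feature, value in fs2.items():
            if feature in result:
                result[feature] = unify(result[feature], value)
            else:
                result[feature] = value
        return result
    if fs1 == fs2:
        return fs1
    raise UnificationFailure(f"{fs1!r} conflicts with {fs2!r}")

if __name__ == "__main__":
    np = {"cat": "NP", "agreement": {"number": "singular"}}
    vp = {"cat": "VP", "agreement": {"number": "singular", "person": 3}}
    print(unify(np["agreement"], vp["agreement"]))
    # {'number': 'singular', 'person': 3}
    try:
        unify({"number": "singular"}, {"number": "plural"})
    except UnificationFailure as e:
        print("failed:", e)
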
AN University Microfilms Order Number ADG87-06042.
AU KIM, MYUNG WON.
IN The University of Texas at Austin Ph.D. 1986, 168 pages.
TI ON AUTOMATICALLY GENERATING AND USING EXAMPLES IN A COMPUTATIONAL
LOGIC SYSTEM.
SO DAI v47(12), SecB, pp4960.
DE Computer Science.
AB Examples are an important tool in Artificial Intelligence,
especially in automated reasoning and machine learning. Early work
has shown that examples can be effectively used in automatic theorem
proving. This thesis describes research on automatic example
generation and applications of examples in a formal domain, namely,
the Boyer-Moore theory.
A system called EGS has been implemented which automatically
generates examples for a constraint stated in the theory. Examples
are generated by successively refining a given constraint guided by
certain kinds of knowledge. This knowledge includes function
definitions, known examples, and equation solving procedures. EGS
also rewrites constraint formulas and lemmas into simpler forms.
EGS combines the methods of testing known examples and expanding
definitions to achieve both efficiency and generality. Experiments
have been performed using the EGS system to control backward
chaining, one of the critical problem areas in the Boyer-Moore
theorem prover. It has been found that irrelevant literals in
conjectures, which are often introduced by the induction strategy of
the theorem prover, prohibit effective use of examples in pruning
unpromising backward chainings. Other potential applications of
examples are also discussed.
AN University Microfilms Order Number ADG87-06048.
AU KORNER, KIM M.
IN The University of Texas at Austin Ph.D. 1986, 197 pages.
TI AN INTELLIGENT REMOTE FILE SERVER.
SO DAI v47(12), SecB, pp4960.
DE Computer Science.
AB Limitations of current disk block caching strategies are
discussed. A new model for providing remote file service using
knowledge based caching algorithms is proposed. The knowledge based
algorithms generate expectations of user process behavior which are
used to provide hints to the file server. Surplus resources of the
remote file server permit incorporation of these hints into caching
algorithms. The research involves gathering trace data from a
modified Unix kernel and trace driven simulation of remote file
server models. Comparisons are made between conventional, knowledge
based and optimal models. Further applications of knowledge based
strategies in operating systems are discussed.
AN University Microfilms Order Number ADG87-10092.
AU KUMAR, GANAPATHY SWAMINATHA.
IN Case Western Reserve University Ph.D. 1987, 201 pages.
TI A PREDICTIVE MONITORING ARCHITECTURE FOR EXPERT SYSTEMS.
SO DAI V48(01), SecB, pp187.
DE Computer Science.
AB This thesis proposes an architecture for expert systems that is
based on a predictive monitoring control strategy. It has proved
effective in a manufacturing control task. The problem solving
activity can be described as iterative cycles of
"Diagnose-Prescribe-Monitor" or "Design-Analyze-Monitor" steps. The
architecture employs multiple models of the domain, each model
reflecting different aspects of problem solving activity of the
expert system. The performance of these models is monitored to
recognize situations where the knowledge base has failed to produce
the correct answer. These failures are used as a guide to improve
performance in the next cycle.
Different models, views of knowledge, are specialized for
different types of problem solving tasks. Thus multiple models are
needed to represent knowledge in expert systems in a perspicuous
manner. The use of multiple models provides for the integration of
expert systems technology with conventional numerical software systems.
This integration is important for solving complex problems, where a
variety of techniques need to be brought to bear on the problem to
be solved.
The heuristic knowledge in any expert system is essentially
incomplete, with many implicit conditions for the application of
such knowledge. In order for expert systems to be used with
confidence, performance monitoring capabilities are needed.
Adaptive mechanisms, such as those proposed in this thesis, enable an
expert system to gracefully degrade its performance, as problem
instances arise that the knowledge base is not designed to handle.
We have implemented an expert system employing the above
architecture to control the wax injection process. This process
produces wax patterns to be used in the production of mono-crystal
components for high performance parts. Fuzzy Logic formalism is
used to deal with imprecision and uncertainty in the knowledge base.
The solution is developed iteratively, improving the pattern in
successive iterations.
AN University Microfilms Order Number ADG87-12163.
AU LILLY, PATRICE MARIE.
IN The University of Michigan Ph.D. 1987, 207 pages.
TI AUTOMATIC CONTOUR DEFINITION ON LEFT VENTRICULOGRAMS BY IMAGE
EVIDENCE AND A MULTIPLE TEMPLATE-BASED MODEL.
SO DAI V48(02), SecB, pp496.
DE Computer Science. Engineering, Biomedical.
AB Extraction of the left ventricular silhouette from angiographic
images of the left ventricle of the heart is a required step in the
use of contour information for the evaluation of left ventricular
function. Since left ventricular function is the most significant
indicator of overall cardiac function, assessment of the former is
crucial to diagnostic cardiovascular evaluation. Typically, the
left ventricular contour is extracted by a tedious and time
consuming process of manual tracing. This thesis presents an
automatic technique for computer generation of left ventricular
contours from digital angiographic images. The image processing
techniques are applicable to the detection of objects in images
other than the left ventriculogram. The research presented is based
on the hypothesis that it is possible to derive left ventricular
contours automatically and that the results of such automation are
not prone to the human subjectivity and variability of manually
defined contours.
A new endocardial edge detection method for digital subtraction
ventriculography has been developed for volume determination and
wall motion studies. The procedure involves the extraction and
processing of chords of collinear sets of pixels in order to locate
gradient-based edge points and threshold-based border points which
represent the ventricular edge. The method not only provides data
reduction, but it also effectively permits the search for gradients
perpendicular to edge crossings. A small set of candidate border
points is determined for each chord, threshold algorithms are
invoked, and various path cost functions, reflecting expectations
about the left ventricular contour, are applied in order to locate
the best border candidates. Border candidate points are used as
input to a voting mechanism which seeks points of agreement between
knowledge-based cost functions. A powerful model-based template
correction procedure is invoked which permits border definition in
regions of insufficient image evidence. Thus major concepts include
the notions of image data reduction based on knowledge of the object
to be recognized; edge detection and thresholding, also based on a
priori knowledge of the image domain; dramatic reduction of the edge
candidate space; combined use of a variety of path cost functions
based on knowledge about object shape; template-based recognition
and correction of possible versus impossible object shape.
The algorithm was tested by application to three test sets of 25
images each. Test sets 1 and 2 were used as training sets for
creation of the model for contour correction. Model-based
correction produced dramatic reduction of error in the final
contours. Future work based on the developed algorithms is
discussed.
AN University Microfilms Order Number ADG87-11369.
AU LU, CHENGZHAO.
IN Temple University Ph.D. 1987, 213 pages.
TI AND/OR: A SYSTEM FOR MANUFACTURING PROCESS CONTROL AND SIMULATION.
SO DAI V48(02), SecB, pp496.
DE Computer Science.
AB The AND/OR System can control or simulate the shop floor
activities of Flexible Manufacturing Systems (FMSs). It applies
Artificial Intelligence techniques to manufacturing automation.
An AND/OR system consists of three major modules: AND/OR
Programs, in the form of AND/OR networks; a Manufacturing
Configuration Model (MCM), in the form of a data base; and a
rule-based AND/OR Monitor. With this decomposition, the FMS
software can be developed by people who need to use only knowledge
from their own domain of expertise.
Within the MCM, the FMS devices are modeled in terms of the
operations they can carry out and of their relationships and attributes.
The AND/OR programs are defined in terms of device independent
operations. The AND/OR monitor is able to dispatch and route
processes dynamically: it fully utilizes the flexibility afforded by
the FMS layout.
Active Procedures (AP) are introduced to represent all the parts
that can be moved within the FMS and the manufacturing activities
that they undergo. Activities that can be carried out concurrently
are represented by APs that can execute concurrently. A semantic
driven method based on the states of the APs is used to resolve
conflicts in requests for resources.
The same AND/OR programs can be used either for real-time
process control or for simulation. The AND/OR system can be used to
evaluate job schedules in an environment exactly like the one
encountered in real time operation.
A prototype AND/OR System has been implemented. The
implementation uses expert system techniques and the entire system
is coded in LISP.
AN University Microfilms Order Number ADG87-11830.
AU LUBARS, MITCHELL DOUGLAS.
IN University of Illinois at Urbana-Champaign Ph.D. 1987, 210
pages.
TI A KNOWLEDGE-BASED DESIGN AID FOR THE CONSTRUCTION OF SOFTWARE
SYSTEMS.
SO DAI V48(02), SecB, pp497.
DE Computer Science.
AB This thesis addresses the use of knowledge-based techniques in
providing high level support for software design activities. A
knowledge-based refinement paradigm of software development is
introduced that alleviates some of the problems of the traditional
software life cycle. A design system, IDeA, is presented as a
prototypical environment that supports this paradigm.
Within IDeA, design knowledge is encoded in schematic forms that
abstract out the common design features across related application
domains. These design schemas represent design families and provide
significant potential for design reuse, since they can be customized
and refined to satisfy a particular user's design requirements.
This is accomplished through the application of specialization and
refinement rules that are associated with the schemas. Design
schemas also represent shared constraints and design decisions.
This schema representation is based on a notion of polymorphic data
typing with multiple constraints.
IDeA supplements the knowledge-based techniques with a schema
selection strategy that facilitates user selection of design
fragments. In addition, constraint propagation and planning
techniques provide support for intelligent design activities. These
techniques are integrated with other support aspects, such as
clerical, organizational, and analysis support, to present IDeA as a
complete environment for software design.
AN University Microfilms Order Number ADG87-10209.
AU MIRANKER, DANIEL PAUL.
IN Columbia University Ph.D. 1987, 191 pages.
TI TREAT: A NEW AND EFFICIENT MATCH ALGORITHM FOR AI PRODUCTION SYSTEMS.
SO DAI V48(01), SecB, pp188.
DE Computer Science.
AB Due to the dramatic increase in computing power and the
concomitant decrease in computing cost that has occurred over the
last decade, many researchers are attempting to design computing
systems to solve complicated problems or execute tasks that have in
the past been performed by human experts. The focus of Knowledge
Engineering is the construction of such complex "expert system"
programs.
This thesis will describe the architecture and the software
systems embodying the DADO machine, a parallel tree-structured
computer designed to provide significant performance improvements
over serial computers of comparable hardware complexity in the
execution of large expert systems implemented in production system
form.
The central contribution of this thesis is a new match algorithm
for executing production systems, TREAT, that will be presented and
comparatively analyzed with the RETE match algorithm. TREAT,
originally designed specifically for the DADO machine architecture,
can handle efficiently both temporally redundant and nontemporally
redundant production system programs. Although the development of
the algorithm was motivated by the inadequacies of the parallel
versions of existing production system algorithms, it is shown that
the TREAT algorithm performs better than the best known sequential
algorithm, the RETE match, even on a sequential machine.
AN University Microfilms Order Number ADG87-07450.
AU MURRAY, KAREN JEAN BURNS.
IN Texas A&M University Ph.D. 1986, 151 pages.
TI KNOWLEDGE-BASED MODEL CONSTRUCTION: AN AUTOMATIC PROGRAMMING
APPROACH TO SIMULATION MODELING.
SO DAI v47(12), SecB, pp4961.
DE Computer Science.
AB A knowledge-based model construction (KBMC) system has been
developed to allow the automatic construction of complete discrete
simulation models for queueing systems. While this work focuses on
the domain of simulation modeling, it closely parallels software
engineering research toward an automated software production
paradigm, and in particular, automatic programming research. This
knowledge-based approach to automatic programming facilitates
automation of the specification acquisition process in conjunction
with automatic simulation model construction.
Three types of knowledge are used in the specification and
construction of a simulation model: domain knowledge, in this case,
queueing system knowledge; general simulation modeling knowledge;
and target language knowledge, specifically knowledge of the
simulation language SIMAN. This knowledge has been incorporated
into the underlying rule base of the KBMC system and implemented via
the production system paradigm of OPS83.
In general, domain knowledge and modeling knowledge were
combined to form extraction rules, and modeling and target language
knowledge form construction rules. The extraction rules guide an
interactive session with the user and allow the acquisition of a
complete internal model specification (IMS) of the desired model via
structured dialog and menu selection. In effect, the user describes
what the model is to do in response to queries driven by the
extraction rules. Construction rules transform the internal model
specification into a basic simulation model, then into constructs of
the target language SIMAN to produce an executable simulation model
that meets the specification requirements.
The KBMC system is fully operational for a restricted subset of
queueing system problems. A complete sample case scenario,
including the specification extraction dialog, the resulting IMS,
and an overview of model construction, is presented. The model
constructed by the KBMC system and output from model execution are
included for the sample case and five additional cases. An analysis
of the current implementation is provided, as are suggestions for
future work.
This research represents a consolidation of automatic
programming with a current expert system building tool for the
domain of simulation modeling. The KBMC system has demonstrated the
feasibility and utility of the knowledge-based approach to enable a
user who lacks simulation language expertise to quickly build
executable models.
AN University Microfilms Order Number ADG87-11844.
AU O'RORKE, PAUL VINCENT.
IN University of Illinois at Urbana-Champaign Ph.D. 1987, 193
pages.
TI EXPLANATION-BASED LEARNING VIA CONSTRAINT POSTING AND PROPAGATION.
SO DAI V48(02), SecB, pp497.
DE Computer Science.
AB Researchers in a new subfield of Machine Learning called
Explanation-Based Learning have begun to utilize explanations as a
basis for powerful learning strategies. The fundamental idea is
that explanations can be used to focus on essentials and to strip
away extraneous details--obviating the need to search for
generalizations based on similarities and differences among large
numbers of examples.
This thesis presents an idealized model of explanation-based
learning centered on the notion of constraint posting and
propagation. In this paradigm, problems are solved by posting
constraints (specifying more and more precise descriptions of
solutions). Solutions are generalized by eliminating unnecessary
constraints. This view of explanation-based generalization is shown
to have advantages over back-propagation approaches to
generalization.
The results of experiments which demonstrate the power of the
learning method are also presented. One experiment compares the
performances of non-learning, rote-learning, and EBL versions of
Newell, Shaw, and Simon's LOGIC-THEORIST on problems from Whitehead
and Russell's Principia Mathematica. Another experiment involves an
interactive automated apprentice called LA.
AN University Microfilms Order Number ADG87-06650.
AU PFEIFFER, JOSEPH JOHN, JR.
IN University of Washington Ph.D. 1986, 262 pages.
TI INTEGRATING HIGH LEVEL AND LOW LEVEL COMPUTER VISION.
SO DAI v47(12), SecB, pp4962.
DE Computer Science.
AB The process of computer vision may be divided into two phases.
In the first phase, measurement and classification operations are
performed on data represented in the form of an array, and in the
second, data manipulations are applied to graph structures derived
from the image array. We refer to the first phase as "low-level"
vision, and the second phase as "high-level" vision. The task of
converting between the representations is "intermediate-level"
vision.
A number of computer architectures have been developed for both
low-level and high-level vision. For low-level work, these have
included such designs as cellular processors, while for high-level
work various networks of processors have been proposed. However,
little has been done in developing intermediate-level architectures
capable of bridging the gap between the low- and high-level
computers.
In this dissertation, we explore the question of
intermediate-level vision. We examine the relationship between the
low-level and the high-level representations, and develop
information-preserving techniques for reducing the amount of data in
the high-level representation. Algorithms for converting image
representations between the low- and the high-level are studied, and
architectural support in the form of custom VLSI is designed. This
permits the development of a "network vision computer", combining a
cellular processor with a network of graph structure processors.
The architecture developed is evaluated by examining three
algorithms. These include a technique for improving the results of
edge detection by use of a hypothesis-and-test paradigm, an
implementation of Nevatia and Binford's algorithm for ribbon
extraction, and an exploration of heuristic improvements to
region-filling algorithms.
Suggestions for future work are presented.
AN University Microfilms Order Number ADG87-10494.
AU POE, MICHAEL DOUGLAS.
IN University of Massachusetts Ph.D. 1987, 474 pages.
TI A FRAMEWORK FOR PRACTICAL MICROCODE SYNTHESIS.
SO DAI V48(02), SecB, pp497.
DE Computer Science.
AB This thesis describes a knowledge engineering approach to
microcode synthesis. The implementation of the thesis is a
rule-based expert system called UMS (Universal Microcode
Synthesizer). UMS takes a high-level procedural description of the
desired microcode and target hardware, and produces a microprogram
for that hardware. The synthesis process uses a minor amount of
example-dependent information. An example of synthesizing microcode
to emulate a PDP-8 shows that procedural hardware and
instruction-set descriptions contain enough information for
microcode synthesis.
The UMS code analysis techniques for procedural descriptions
create highly detailed control-and-data-dependency graphs used in
later synthesis processing. Description techniques are provided in
the thesis for many advanced microengine features not usually
discussed in the microcode compilation and synthesis literature.
The code analysis techniques are demonstrated to be powerful enough
for independently written descriptions in a synthesis example.
General truth-preserving code-transformation rules are applied
to the microcode description until it becomes the desired
microprogram. This transformation process has a large search space
of three interrelated tasks: (1) determine how to change the
operations specified in the high-level microcode to those available
in hardware; (2) search the hardware for ways to make these
operations actually occur; and (3) search to make the resulting
microprogram efficient in speed and space.
The design of the UMS software kernel, called the inference
engine, is a primary contribution of the thesis. Using cost
analysis, the engine manages search to make microcode synthesis
practical. Specialized search techniques control how the
synthesizer inference engine matches parts of the instruction-set
control-and-data-dependency graph to its hardware description
counterparts. Many novel equivalence classes for semantically
similar graph fragments allow matches between graph fragments that
are not literal matches. Graph-correspondence search is decomposed
into multiple node and arc types, arranged by cost of heuristic
search. Arc feature constraints are accumulated while node choice
is delayed as long as possible. Based on search cost estimates, a
search path may be suspended and possibly resumed later. The engine
tries to find a way for each search path to fail. Such a failure
triggers a shift in search direction.
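The cost-managed search described above, in which a search path may be
suspended and resumed later according to cost estimates, can be
sketched generically as a best-first search over a priority queue. The
Python fragment below is a hypothetical illustration, not the UMS
inference engine itself.
    import heapq

    # Sketch only: each partial search path is "suspended" in a priority
    # queue keyed by its estimated cost and resumed only when it is the
    # cheapest candidate remaining; a path fails by producing no successors.
    def cost_managed_search(start, expand, is_goal, estimate):
        frontier = [(estimate(start), start)]
        while frontier:
            cost, state = heapq.heappop(frontier)   # resume cheapest path
            if is_goal(state):
                return state
            for nxt in expand(state):
                heapq.heappush(frontier, (estimate(nxt), nxt))
        return None                                 # every path failed

    # Toy usage: grow a string of four 'a' characters.
    print(cost_managed_search(
        "", lambda s: [s + "a"] if len(s) < 4 else [],
        lambda s: len(s) == 4, lambda s: 4 - len(s)))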
AN University Microfilms Order Number ADG87-07736.
AU RUSSELL, STUART JONATHAN.
IN Stanford University Ph.D. 1987, 245 pages.
TI ANALOGICAL AND INDUCTIVE REASONING.
SO DAI v47(12), SecB, pp4963.
DE Computer Science.
AB The first problem discussed in this thesis is the logical
problem of analogy, which, given a formal definition of analogical
inference, asks under what conditions such inferences may be
justified. By showing the inadequacy of approaches based on the
degree of similarity between analogues, the importance of relevance
between known and inferred similarities is highlighted. The need
for a logical semantics for relevance motivates the definition of
determinations, first-order expressions capturing the idea of
relevance between predicate schemata.
Determinations are shown to justify analogical inferences and
single-instance generalizations non-trivially, and to express an
apparently common form of knowledge hitherto overlooked in
knowledge-based systems. Analogical reasoning is implemented in
MRS, a logic programming system, and shown to be more efficient than
simple rule-based methods for some important inference tasks. The
ability to acquire and use determinations is shown strictly to
increase the inferences a system can make from a given set of data.
Programs are described for the inductive acquisition of
determinations and their subsequent use in analogical reasoning to
aid in the construction of a large knowledge base.
The second problem, suggested by and subsuming the first, is to
identify the ways in which existing knowledge can be used to help a
system to learn from experience. A method is given for enumerating
the types of knowledge, of which determinations are but one, that
contribute to learning, and a general inductive inference machine is
described based on these ideas.
The application of a logical, knowledge-based approach to the
problems of analogy and induction thus shows the need for a system
to be able to detect as many logical forms of regularity as possible
in order to maximize its inferential capability. The possibility
that important aspects of 'common sense' are captured by complex,
abstract regularities suggests further empirical research to
identify this knowledge.
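A determination can be read as "the value of one predicate schema
fixes the value of another", which licenses analogy from a single
instance. The sketch below (a hypothetical example in the spirit of
"nationality determines language", not code from the thesis) shows the
inference pattern in Python.
    # Sketch only: given the determination lhs >- rhs, two individuals
    # that agree on lhs are inferred to agree on rhs.
    facts = {
        ("nationality", "fernando", "brazil"),
        ("language",    "fernando", "portuguese"),
        ("nationality", "paulo",    "brazil"),
    }

    def infer_by_determination(facts, lhs, rhs, target):
        target_lhs = {v for (p, x, v) in facts if p == lhs and x == target}
        inferred = set()
        for (p, x, v) in facts:
            if p == lhs and x != target and v in target_lhs:
                # x matches target on lhs; copy x's rhs values by analogy.
                inferred |= {(rhs, target, w) for (q, y, w) in facts
                             if q == rhs and y == x}
        return inferred

    # One instance suffices: paulo is inferred to speak portuguese.
    print(infer_by_determination(facts, "nationality", "language", "paulo"))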
AN This item is not available from University Microfilms International
ADG05-60086.
AU STOREY, VEDA CATHERINE.
IN The University of British Columbia (Canada) Ph.D. 1986.
TI AN EXPERT VIEW CREATION SYSTEM FOR DATABASE DESIGN.
SO DAI V48(01), SecB, pp189.
DE Computer Science.
AB The process of generating user views during logical database
design is formalized and expressed as a set of rules which comprise
the knowledge base of an expert system. This system, called the
View Creation System, engages the user in a dialogue to determine
information requirements. These requirements are then translated
into a set of Fourth Normal Form relations representing a view. The
data model on which the system is based is the Entity-Relationship
Model. Using this model, the system elicits entities, attributes
and relationships while trying to detect and rectify inconsistencies
and ambiguities in the user's input. With the aid of the user,
functional dependencies are isolated and resolved before the final
set of relations is produced.
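A rough idea of the translation from an Entity-Relationship
description into a set of relations can be given by the generic
mapping sketched below; it is an illustration only and does not
reproduce the View Creation System's rules or its handling of
functional dependencies.
    # Sketch only: entities become relations keyed on an identifier, and
    # many-to-many relationships become relations over the participants'
    # keys plus their own attributes.
    def er_to_relations(entities, relationships):
        relations = {}
        for name, attrs in entities.items():
            relations[name] = [name.lower() + "_id"] + attrs
        for name, (e1, e2, attrs) in relationships.items():
            relations[name] = [e1.lower() + "_id", e2.lower() + "_id"] + attrs
        return relations

    view = er_to_relations(
        entities={"STUDENT": ["name"], "COURSE": ["title"]},
        relationships={"ENROLLS": ("STUDENT", "COURSE", ["grade"])})
    for rel, attrs in sorted(view.items()):
        print(rel, attrs)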
AN University Microfilms Order Number ADG87-09467.
AU SUBBARAO, MURALIDHARA.
IN University of Maryland Ph.D. 1986, 179 pages.
TI INTERPRETATION OF VISUAL MOTION: A COMPUTATIONAL STUDY.
SO DAI V48(01), SecB, pp189.
DE Computer Science.
AB A changing scene produces a changing image or visual motion on
the eye's retina. The human visual system is able to recover useful
three-dimensional information about the scene from this
two-dimensional visual motion. This thesis is a study of this
phenomenon from an information processing point of view. A
computational theory is formulated for recovering the scene from
visual motion. This formulation deals with determining the local
geometry and the rigid body motion of surfaces from spatio-temporal
parameters of visual motion. In particular, we provide solutions to
the problem of determining the shape and rigid motion of planar and
curved surfaces and characterize the conditions under which these
solutions are unique. The formulation is generalized to the case of
non-uniform (i.e. accelerated) and non-rigid motion of surfaces.
This serves to address the two fundamental questions: What scene
information is contained in the visual motion field? How can it be
recovered from visual motion? The theory exposes the well known
fact that the general problem of visual motion interpretation is
inherently ill-posed. Furthermore, it indicates the minimum number
of additional constraints (in the form of assumptions about the
scene) necessary to interpret visual motion. It is found that, in
general, the assumption that objects in the scene are rigid is
sufficient to recover the scene uniquely.
A computational approach is given for the interpretation of
visual motion. An important characteristic of this approach is a
uniform representation scheme and a unified algorithm which is both
flexible and extensible. This approach is implemented on a computer
system and demonstrated on a variety of cases. It provides a basis
for further investigations into both understanding human vision, and
building machine vision systems.
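The spatio-temporal parameters referred to above can be made concrete
with one standard formulation of image motion under perspective
projection (signs and notation vary between authors and need not match
the thesis). The Python sketch below differentiates the projection
x = X/Z, y = Y/Z for a point moving rigidly relative to the camera.
    import numpy as np

    # Sketch only: image velocity of a scene point P = (X, Y, Z) when the
    # camera translates with velocity T and rotates with angular velocity
    # omega; the point's velocity in the camera frame is -(T + omega x P).
    def image_velocity(P, T, omega):
        X, Y, Z = P
        Xd, Yd, Zd = -(np.asarray(T, float) + np.cross(omega, P))
        u = (Xd * Z - X * Zd) / Z ** 2      # d/dt (X/Z)
        v = (Yd * Z - Y * Zd) / Z ** 2      # d/dt (Y/Z)
        return u, v

    # Example: forward translation toward a frontal plane at Z = 2 gives
    # a flow field expanding away from the image centre.
    for x, y in [(-0.5, 0.0), (0.5, 0.0), (0.0, 0.5)]:
        Z = 2.0
        print((x, y), image_velocity((x * Z, y * Z, Z),
                                     T=(0.0, 0.0, 1.0),
                                     omega=(0.0, 0.0, 0.0)))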
AN University Microfilms Order Number ADG87-07751.
AU TREITEL, RICHARD JAMES.
IN Stanford University Ph.D. 1987, 182 pages.
TI SEQUENTIALIZATION OF LOGIC PROGRAMS.
SO DAI v47(12), SecB, pp4964.
DE Computer Science.
AB Logical inference can be done in two directions: either forwards
from facts already known to other facts that may be of interest, or
backwards from goals whose answers are needed to subgoals whose
answers may be available. Some programs and languages use one
direction exclusively because it is clearly better (in the sense of
computational efficiency) for their purposes. There are, however,
problems that are better solved by using some rules for forwards
inference and others backwards. In theorem-proving and artificial
intelligence work over the past two decades, numerous systems have
been developed which allow inference to be drawn either forwards or
backwards, and a number of heuristics have evolved for deciding
which direction to use a rule in.
In this dissertation I attempt to put these decisions on a
quantitative footing, by developing algorithms for estimating the
computational cost of a set of directions for rules, and applying
standard optimisation techniques to derive the best set of choices.
In ascending order of difficulty, it is assumed first that no
forward rule uses any facts deduced by a backward rule; then that
directions can be chosen freely; and finally that each rule can be
split up and used partly forwards, partly backwards. All of these
problems, except for a highly constrained version of the first one,
are shown to be NP-complete. For each of them, one or more
optimisation techniques are given, with some comments on their
efficiency. Techniques for "ameliorating" or "satisficing" the
problem, namely returning a non-optimal solution that is good enough
for the purpose at hand, are also discussed, since the cost of
finding the exactly optimal solution may be excessive.
A related issue is the ordering of the terms in a rule, which
can have a strong effect on the cost of using the rule. Algorithms
for ordering terms optimally are known, but they rely on the
direction of inference being fixed in advance, and they apply only
to one rule considered in isolation. A more general algorithm is
developed, and a way is shown to incorporate it into the choice of
rule directions.
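The distinction between the two directions of inference can be shown
with a small propositional example (illustrative only, and much simpler
than the rule sets the dissertation analyzes): the same rules are run
forwards from the known facts or backwards from a goal.
    # Sketch only: rules are (head, body) pairs read as "body -> head".
    RULES = [("c", ["a", "b"]),       # a & b -> c
             ("d", ["c"]),            # c -> d
             ("e", ["d", "f"])]       # d & f -> e
    FACTS = {"a", "b"}

    def forward(facts, rules):
        """Fire every rule forwards until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                if head not in derived and all(p in derived for p in body):
                    derived.add(head)
                    changed = True
        return derived

    def backward(goal, facts, rules):
        """Prove a goal by recursively proving the body of a rule for it."""
        if goal in facts:
            return True
        return any(all(backward(p, facts, rules) for p in body)
                   for head, body in rules if head == goal)

    print(forward(FACTS, RULES))        # derives c and d; e is blocked
    print(backward("d", FACTS, RULES))  # True, using only rules behind d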
AN University Microfilms Order Number ADG87-08507.
AU TYLER, SHERMAN WILLIAM.
IN University of Pittsburgh Ph.D. 1986, 189 pages.
TI SAUCI: SELF-ADAPTIVE USER-COMPUTER INTERFACE.
SO DAI v47(12), SecB, pp4964.
DE Computer Science.
AB Different approaches to the design of the human-computer
interface have been taken in the past. These can be organized into
four broad categories: tack-on; intuitive/empirical; formal; and
conversational. There are several important interface design
criteria that have never been adequately attained in any of these
approaches. One is modularity, that is, maintaining a clear
separation between the interface and its target system. A second
criterion is self-adaptation, or the ability of the interface to
modify its own behavior to suit a given individual user. Two
further criteria relate to the interface's potential to guide users
in performing typical high-level tasks on the target system and to
provide intelligent advice on the use of that system.
This research was focused on developing an integrated technique
for achieving these four design criteria. To that end, an abstract
architecture called SAUCI, or the Self-Adaptive User-Computer
Interface, was proposed, embodying a knowledge-based,
object-oriented approach to interface design. The foundation of
this approach rests upon information encoded within sets of objects.
This information includes separate knowledge bases describing the
individual users, the commands of the target system, and the
high-level tasks appropriate for that system. The behavior of the
interface is controlled by various methods which call upon the
knowledge bases in a rule-governed manner to decide what interface
features should be present at each phase of the user's dialogue with
the target system.
To test the feasibility of the proposed architecture, a working
interface was implemented on a Xerox 1108 computer in the LOOPS
language, with a UNIX operating system running on a separate
minicomputer as the target system. An empirical evaluation of this
prototype revealed clear advantages over the standard interface.
Closer examination pointed to each of the factors of modularity,
task guidance, and user-tailored assistance as playing a significant
role in these effects.
A discussion of additional applications of this architecture and
of areas for future development is offered as further evidence of
the value of this approach as a general framework for human-computer
interface design.
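The architecture's reliance on separate knowledge bases for users,
commands, and tasks can be caricatured in a few lines; the fragment
below is a hypothetical Python illustration of the idea, not SAUCI's
LOOPS implementation.
    # Sketch only: interface features are chosen by consulting a user
    # model and a command knowledge base at each step of the dialogue.
    USERS = {"novice": {"expertise": 1}, "expert": {"expertise": 3}}
    COMMANDS = {"rm": {"risk": "high"}, "ls": {"risk": "low"}}

    def interface_features(user, command):
        features = []
        if USERS[user]["expertise"] < 2:
            features.append("offer task guidance")
        if COMMANDS[command]["risk"] == "high":
            features.append("ask for confirmation")
        return features or ["plain prompt"]

    print(interface_features("novice", "rm"))
    print(interface_features("expert", "ls"))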
AN University Microfilms Order Number ADG87-07756.
AU VAN GELDER, ALLEN.
IN Stanford University Ph.D. 1987, 111 pages.
TI LOGIC PROGRAMMING FOR PARALLEL IMPLEMENTATION.
SO DAI v47(12), SecB, pp4964.
DE Computer Science.
AB This report explores several topics concerning the parallel
implementation of logic programming. In the introductory chapter we
define logic programs and describe a logic programming language that
is suitable for parallel implementation. In the second chapter we
treat the semantics of our language with particular attention to the
role of negation as failure. The third chapter is an investigation
of the parallel complexity of a class of logic programs that we call
logical query programs, using the theoretical and abstract PRAM
model of computation. In this chapter we show that certain logical
query programs are in the complexity class NC, meaning that they
have very fast execution times given enough processors. However,
"enough processors" for inputs of size n turns out to be a
polynomial in n of degree 6 (or more), an unrealistic resource
requirement. Consequently, in the fourth and fifth chapters, we
investigate more realistic approaches to implementation, based on
transforming the logic program into a rule/goal graph, and embedding
that graph into a general purpose processor interconnection network
in the form of a rectangular grid.
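For a flavour of the logical query programs discussed in the third
chapter, consider transitive closure (an illustration only, not taken
from the report). With the doubling rule path(X,Z) <- path(X,Y),
path(Y,Z), each round of the evaluation below doubles the path lengths
considered, so the number of rounds is logarithmic; given enough
processors each round can be performed in parallel, which is the kind
of structure behind membership in NC.
    # Sketch only: round-based evaluation of transitive closure using
    # the doubling rule path(X,Z) <- path(X,Y), path(Y,Z).
    def transitive_closure(edges):
        paths = set(edges)
        while True:
            new = {(x, z) for (x, y1) in paths for (y2, z) in paths
                   if y1 == y2}
            if new <= paths:
                return paths
            paths |= new

    print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4), (4, 5)})))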
AN University Microfilms Order Number ADG87-09470.
AU VELAUTHAPILLAI, MAHENDRAN.
IN University of Maryland Ph.D. 1986, 96 pages.
TI ON THE INDUCTIVE INFERENCE OF PROGRAMS WITH ANOMALIES.
SO DAI V48(01), SecB, pp190.
DE Computer Science.
AB Inductive inference machines (IIMs) synthesize programs when
given examples of their input/output behavior. Several different
criteria for successful synthesis by IIMs have been studied. Two
new types of criteria of successful inference are defined. One of
the types (density) admits a successful inference when the result is
a program which computes the function to be inferred everywhere,
except on a sparse (but infinite) subset of its domain. The other
type (uniform density) requires in addition that the program compute
the function to be inferred "uniformly" everywhere, except on a sparse
subset of its domain. In previous work sparse was taken
to mean finite. The density criteria are distinguished from one
another by the bounds placed on the "density" of correctness in the
final program, and the number of allowable conjecture changes. The
same holds true for uniform density. Comparison of the density
criteria yields a two-dimensional lattice structure, and so does
comparison of the uniform density criteria. When the two types of
criteria were compared, the result was a three-dimensional lattice.
The above two types of criteria
were found to be incomparable to the behaviorally correct inference
classes. A team of IIMs synthesizes programs for a set of
functions, if and only if for each function in the set, at least one
of the IIMs in the team successfully synthesizes the proper program.
Team inference classes are distinguished from one another by the
number of IIM's and the criteria of success specified. Simulating a
team with a single IIM using the EX criteria for success has been
studied. The minimum number of mind changes necessary for a single
machine was obtained. Precise bounds on the number of mind changes
and the number of anomalies necessary for the simulation were found.
Classes inferred by the behaviorally correct teams are shown to be
incomparable to classes inferred by the density/uniform density
teams.
AN University Microfilms Order Number ADG87-09400.
AU WAIBEL, ALEXANDER.
IN Carnegie-Mellon University Ph.D. 1986, 225 pages.
TI PROSODY AND SPEECH RECOGNITION.
SO DAI V48(01), SecB, pp190.
DE Computer Science.
AB Although numerous studies have demonstrated that prosody is
critical to human speech perception, many automatic speech
recognition systems process only spectral/phonetic cues. They
ignore or deliberately remove prosodic cues such as pitch,
intensity, rhythm, temporal relationships, and stress. Extending
speech recognition systems to human performance levels, however,
will require exploiting all available cues and sources of knowledge.
This work demonstrates the power of prosodic constraints in
computer speech recognition systems. We first show theoretically
that prosodic patterns can discriminate between words of large
vocabularies (vocabularies the size an adult typically commands).
We then introduce several novel algorithms to extract prosodic
parameters reliably. These include segmentation algorithms for
detecting syllable boundaries and major segment boundaries, and
algorithms for measuring pitch contours, intensity contours, and
lexical stress levels. Extensive performance
evaluation of these algorithms is presented. We then implement and
evaluate prosodic knowledge sources that apply the extracted
parameters at appropriate processing levels including the lexical,
syntactic and sentential levels. To permit large vocabulary
capability, the knowledge source designs emphasize minimizing lexical
search, exploiting parallelism, and speaker-independent and/or
template-independent operation.
Major contributions of this research include several implemented
knowledge sources and insights for further application of prosodic
information to speech understanding. For lexical access, temporal
knowledge sources restrict the word candidate set by 50% to 93%.
Intensity- and stress-based knowledge sources also each reduce
possible word candidate sets by about 50%. The lexical, prosodic
knowledge sources were combined and compared with a phonetic word
hypothesizer currently under development. The results show that the
average rank of the correct word hypothesis can be reduced to almost
1/3 when prosodic knowledge sources are added to the phonetic word
hypothesizer. At the sentential level of processing, a pitch
contour knowledge source reduces syntactic and pragmatic ambiguity
by discriminating between statement and question "tunes". We have
examined the role of stress at distinct levels of speech processing.
At the acoustic/phonetic level, we have reevaluated phonemic and
phonetic consistency in stressed and unstressed syllables in terms
of phonetic substitution and omission rates. Our results indicate
that speaking rate, more than stress, predicts the rate of segmental
omissions. At the syntactic level, automatically detected stress
levels provide an acoustic cue to distinguishing between content
words and function words in a spoken sentence.
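A tiny, hypothetical illustration of how a prosodic knowledge source
restricts lexical search: if a syllable count and stress pattern can be
extracted reliably, they prune the candidate word set before any
expensive phonetic matching is attempted (the lexicon entries below are
invented for the example and are not Waibel's knowledge sources).
    # Sketch only: words annotated with (syllable count, stress pattern),
    # where "1" marks a stressed syllable and "0" an unstressed one.
    LEXICON = {
        "computer":    (3, "010"),
        "science":     (2, "10"),
        "recognition": (4, "0010"),
        "speech":      (1, "1"),
    }

    def prosodic_prune(lexicon, syllables, stress):
        return [w for w, (n, s) in lexicon.items()
                if n == syllables and s == stress]

    # A detected 3-syllable, weak-strong-weak pattern leaves one candidate.
    print(prosodic_prune(LEXICON, 3, "010"))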
AN University Microfilms Order Number ADG87-06124.
AU WANG, TIE-CHENG.
IN The University of Texas at Austin Ph.D. 1986, 154 pages.
TI SEMANTICALLY GUIDED HIERARCHICAL DEDUCTION AND EQUALITY CONDITIONAL
RESOLUTION.
SO DAI v47(12), SecB, pp4965.
DE Computer Science.
AB This dissertation is concerned with research in designing
computer programs which are effective in finding, or helping to
find, proofs of theorems of first order logic. Discussed herein are
three interrelated topics. The first topic is the design of a new,
goal-oriented proof procedure--hierarchical deduction. The second
topic is semantically guided hierarchical deduction, which concerns
issues related to designing models for guiding the search of a
particular version of this procedure. The third topic is equality
conditional resolution, which provides a control strategy useful for
further development of hierarchical deduction.
The hierarchical deduction procedure (HD) proves a theorem by
searching for a proof acceptable to a hierarchical deduction
structure; those derivations which are irrelevant to this proof are
limited by means of a set of completeness-preserving refinements.
In addition to the basic algorithm, there is a partial
set-of-support strategy which provides a simple but effective way to
incorporate semantics and human factors.
Semantically guided hierarchical deduction (SHD) is defined from
HD for incorporating domain dependent knowledge presented in models.
A set of rules by which a user designs models for SHD is
investigated and applied to design models that help an SHD-prover
efficiently prove a series of non-trivial theorems. In order to
incorporate type information and other human factors into models, a
three-valued interpretation is introduced.
An ECR strategy is described through studying an equality
conditional resolution proof procedure (ECR). ECR incorporates a
user's knowledge concerning the different roles of input equations
and other hypotheses in a proof. The input equations are
transformed into different classes of rules according to the roles
they will play. The conditions on the application of these rules
control their inference, and prevent inappropriate use of them. An
ECR-prover, which is a modified SHD-prover enhanced by the ECR
strategy, is implemented and efficiently proves a number of theorems
from mathematics.
For each of these topics, a summary of results obtained by the
computer implementations is presented, along with concluding remarks
comparing their performance with that of other systems. Issues related
to the extension and further development of SHD and ECR proof
procedures are discussed in the final chapter. The proofs
concerning the completeness of the basic algorithm of the
hierarchical deduction are included in an appendix.
AN University Microfilms Order Number ADG87-07765.
AU WINSLETT, MARIANNE SOUTHALL.
IN Stanford University Ph.D. 1987, 174 pages.
TI UPDATING DATABASES WITH INCOMPLETE INFORMATION.
SO DAI v47(12), SecB, pp4966.
DE Computer Science.
AB Suppose one wishes to construct, use, and maintain a database of
facts about the real world, even though the state of that world is
only partially known. In the AI domain, this problem arises when an
agent has a base set of beliefs that reflect partial knowledge about
the world, and then tries to incorporate new, possibly contradictory
knowledge into this set of beliefs. In the database domain, one
facet of this situation is the well-known null values problem. We
choose to represent such a database as a logical theory, and view
the models of the theory as representing possible states of the
world that are consistent with all known information.
How can new information be incorporated into the database? For
example, given the new information that "b or c is true," how can
one get rid of all outdated information about b and c, add the new
information, and yet in the process not disturb any other
information in the database? In current-day database management
systems, the burden of determining exactly what to add and remove
from the database is placed on the user.
Our research has produced a formal method of specifying the
desired change intensionally, by stating a well-formed formula that
the state of the world is now known to satisfy. The database update
algorithms we provide will automatically accomplish that change.
Our approach embeds the incomplete database and the incoming
information in the language of mathematical logic, and gives formal
definitions of the semantics of our update operators, along with
proofs of correctness for their associated algorithms. We assess
the computational complexity of the algorithms, and propose a means
of lazy evaluation to avoid undesirable expense during execution of
updates. We also examine means of enforcing integrity constraints
as the database is updated.
This thesis also examines the question of choices of semantics
for update operators for databases with incomplete information, and
proposes a framework for evaluation of competing candidate semantics.
Several candidate semantics are evaluated with respect to that
framework.
An experimental implementation of our method has been
constructed, and we include the results of test runs on a range of
patterns of queries and updates.
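The flavour of such an update can be shown on a toy propositional
database (a simplified possible-worlds update in the spirit of the
thesis, not its actual operators or algorithms): each world consistent
with the database is replaced by the worlds satisfying the new formula
that differ from it on the fewest atoms, so unrelated information is
left undisturbed.
    from itertools import product

    ATOMS = ("a", "b", "c")

    def worlds_of(formula):
        """All truth assignments over ATOMS satisfying the formula."""
        return {w for w in product((False, True), repeat=len(ATOMS))
                if formula(dict(zip(ATOMS, w)))}

    def update(db_worlds, formula):
        targets = worlds_of(formula)
        result = set()
        for w in db_worlds:
            diff = lambda t: sum(x != y for x, y in zip(w, t))
            dmin = min(diff(t) for t in targets)
            result |= {t for t in targets if diff(t) == dmin}
        return result

    # The database knows only that a is true; update with "b or c is true".
    db = worlds_of(lambda v: v["a"])
    updated = update(db, lambda v: v["b"] or v["c"])
    print(sorted(updated))   # a remains true in every resulting world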
AN University Microfilms Order Number ADG87-02918.
AU WISE, BEN PAUL.
IN Carnegie-Mellon University Ph.D. 1986, 260 pages.
TI AN EXPERIMENTAL COMPARISON OF UNCERTAIN INFERENCE SYSTEMS.
SO DAI v47(12), SecB, pp4966.
DE Computer Science.
AB Uncertainty is a pervasive feature of the domains in which
expert systems are supposed to function. There are several
mechanisms for handling uncertainty, of which the oldest and most
widely used is probability theory. It is the only one which is
derived from a formal description of rational behavior. For use in
pattern-directed inference systems, or rule-based inference engines,
artificial intelligence researchers have favored others, largely for
reasons of simplicity and speed. We have developed techniques which
measure how these alternatives approximate the results of
probability theory, assess how well they perform by those measures,
and find out what underlying features of a problem affect
performance.
Because the amount of data required to fully specify a
probability distribution is enormous, some technique must be used to
estimate a distribution when only partial information is given. We
give intuitive and axiomatic arguments, algebraic analysis, and
numerical examples, that fitting maximum entropy priors and using
minimum cross entropy updating are the most appropriate ways to do
so.
For several uncertain inference systems, detailed analyses of
operations have been performed to elucidate both which basic problem
features bias the answers and the directions of the biases.
We present and discuss both the motivation and design of our
analysis techniques, and the specific structures which were found to
have strong effects on performance. The techniques have also been
tried on several variations of a fragment from a real expert system,
with qualitatively similar results.
We have found that the newer uncertain inference systems often
re-incorporated features of general probability theory which had
been eliminated in earlier systems. Moreover, we found that newer
systems sometimes retained exactly the features which they were
supposed to eliminate, albeit in different notation. For every
simple uncertain inference system, we found not only structures for
which the performance was very good, but also structures for which
performance was worse than random guessing, or systematic bias was
present, or data was interpreted as having the opposite of its true
import.
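As a small, standard example of minimum cross-entropy updating (not
code from this work): when the new information fixes the probability of
an event E at alpha, the minimum cross-entropy posterior is obtained by
rescaling the prior inside E to mass alpha and outside E to mass
1 - alpha, which coincides with Jeffrey conditioning.
    # Sketch only: minimum cross-entropy update of a discrete prior q
    # subject to the single constraint P(event) = alpha.
    def min_cross_entropy_update(q, event, alpha):
        q_event = sum(p for x, p in q.items() if x in event)
        return {x: alpha * p / q_event if x in event
                else (1 - alpha) * p / (1 - q_event)
                for x, p in q.items()}

    prior = {"flu": 0.1, "cold": 0.3, "healthy": 0.6}
    # Uncertain evidence: the probability of some illness is now 0.7.
    posterior = min_cross_entropy_update(prior, {"flu", "cold"}, 0.7)
    print(posterior)   # flu:cold odds stay 1:3; their total mass is 0.7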
end of part 1 of 3
*********************