leff@smu.UUCP (Laurence Leff) (11/08/88)
Received: from Mars.UCC.UMass.EDU by UMass (outbound name server)
with BSMTP; 3 Mar 88 01:37:33 EST
Date: Thu, 3 Mar 88 00:59 EDT
From: krovetz@UMass
To: e1ar0002@smuvm1.bitnet
Subject: AI-Related Dissertations
The following is a list of dissertation titles and
abstracts related to Artificial Intelligence taken
from the Dissertation Abstracts International
(DAI) database. The list is assembled by Susanne
Humphrey and me and is published in the SIGART
Newsletter (that list doesn't include the abstracts).
The dissertation titles and abstracts contained here
are published with the permission of University Microfilms
International, publishers of the DAI database. University
Microfilms has granted permission for this list to be
redistributed electronically and for extracts and
hardcopies to be made of it, provided that this notice
is included and provided that the list is not sold.
Copies of the dissertations may be obtained by
addressing your request to:
University Microfilms International
Dissertation Copies
Post Office Box 1764
Ann Arbor, Michigan 48106
or by telephoning (toll-free) 1-800-521-3042
(except for Michigan, Hawaii, and Alaska).
In Canada: 1-800-268-6090.
From SIGART Newsletter, No. 101
File 1 of 3
Business Admin through Comput Sci
----------------------------------------------------------------
AN University Microfilms Order Number ADG87-00409.
AU LEE, JAE BEOM.
IN New York University, Graduate School of Business Administration
Ph.D. 1986, 228 pages.
TI INTELLIGENT DECISION SUPPORT SYSTEMS FOR BUSINESS APPLICATIONS: WITH
AN EXAMPLE OF PORTFOLIO MANAGEMENT DECISION MAKING.
SO DAI V47(09), SecA, pp3473.
DE Business Administration, General. Information Science.
AB This study involves exploratory research to develop more
effective approaches to designing man-machine interfaces. A key
objective is to improve DSS development methodologies to allow
incorporation of expert system (ES) and artificial intelligence (AI)
techniques. We identify the following problems in using AI
techniques for DSS development: (1) selection of the software
component types (a database model, management science model, or
knowledge representation and inferencing scheme) that best fit the
tasks to be performed and are appropriate for the users; (2)
acquisition of the appropriate predefined software components or
construction of new ones; (3) combination of the heterogeneous
components into a useful system.
An Intelligent Decision Support System (IDSS) has been proposed
to solve the above problems. A rough architecture for an IDSS has
been developed that supports the business problem-solving
environment by employing a useful subset of knowledge representation
and inference techniques. A prototype system for portfolio
management decision making has been implemented in Prolog to
illustrate and validate the approach. The research makes several
practical contributions in the area of DSS analysis and design.
Among them are: (1) a new ES development strategy for business
problem-solving environments; (2) the architecture for the IDSS
which recognizes the need for multiple knowledge representation
schemes; (3) the knowledge engineering techniques which can be used
as guidelines for other ES developers.
AN University Microfilms Order Number ADG87-03235.
AU LIANG, TING-PENG.
IN University of Pennsylvania Ph.D. 1986, 223 pages.
TI TOWARD THE DEVELOPMENT OF A KNOWLEDGE-BASED MODEL MANAGEMENT SYSTEM.
SO DAI V47(10), SecA, pp3803.
DE Business Administration, General.
AB Model management systems (MMS) are the most important but least
researched components of decision support systems. Recently,
research in MMS design has increased dramatically because
significant progress in artificial intelligence, especially in the
areas of knowledge representation, heuristics, and automatic
reasoning, and experience gained from developing knowledge-based
systems or expert systems, have provided a very good basis for
developing MMSs. Successful development of an MMS has now become a
promising and challenging research topic for researchers in
information system areas.
Because of the similarity between a data base and a model base,
many previous researchers have focused on applying existing data
models, such as the relational model, to the development of model
management systems. However, in addition to the functions similar
to data base management systems, such as model storage, retrieval,
execution, and maintenance, a knowledge-based model management system
needs the following two capabilities: (1) Model Integration. A
mechanism for integrating existing models so that the model in the
model base is not only a stand-alone model but also a module for
creating ad hoc models. (2) Model Selection. A mechanism that
facilitates the process of model selection.
This dissertation focuses on applying artificial intelligence
techniques, especially the automated reasoning capabilities for
model integration and selection. It first proposes a conceptual
framework for MMS design which integrates four different
considerations: three user roles, three levels of models, three
views of a model base, and two roles of model management systems.
Secondly, a graph-based approach to model management is developed.
The approach formulates the modeling process as a process for the
creation of a directed network graph, which represents all candidate
models for solving a problem, and the selection of a path on the
network. Mechanisms and strategies for formulating a model graph
are discussed.
Finally, two prototypes, TIMMS (The Integrated Model Management
System) and MODELCAL are presented to demonstrate the feasibility of
the framework developed in this research. TIMMS is implemented in
PROLOG and MODELCAL is developed in TURBO PASCAL.
AN University Microfilms Order Number ADG86-29661.
AU SVIOKLA, JOHN JULIUS.
IN Harvard University D.B.A. 1986, 448 pages.
TI PlanPower, XCON, and MUDMAN: AN IN-DEPTH ANALYSIS INTO THREE
COMMERCIAL EXPERT SYSTEMS IN USE.
SO DAI V47(09), SecA, pp3473.
DE Business Administration, General.
AB The objective of this thesis is to generate knowledge about the
effects of ESs on the organizations which use them. Three field
sites with expert systems in active use are examined, and the
implications for management are drawn from the empirical
observations.
This thesis uses a comparative, three-site, pre-post exploratory
design to describe and compare the effects of ES use on three
organizations: The Financial Collaborative (using the PlanPower
system), Digital (XCON) and Baroid (MUDMAN). The study is guided by
the notions of organizational programs as defined by March and
Simon, and the information-processing capacity of the firm, as
defined by Galbraith, to organize, describe, and compare the effects
of ES use across the sites. Eleven exploratory hypotheses act as a
basis for theory-building and further hypothesis generation.
ESs address ill-structured problems. Ill-structured problems
are those problems for which the solution methods and criteria are
either ill-defined or non-existent. In investigating three
large-scale ESs in use, this researcher discovered that these
systems seem to create a phenomenon referred to as "progressive
structuring." This process alters the nature of the underlying task
and the organizational mechanisms which support it. This phenomenon
is dynamic and evolves over time.
All the ESs seemed to increase the effectiveness and efficiency
of the user firm. The price of the benefits was an increased
rigidity in the task. In considering ESs a manager should be
concerned not only with the ES itself, but with the process by which
the ES is adapted, and the overall process of creating and using the
ES. In addition, the manager needs to consider the effects of the
ES on the uncertainty associated with the task and should
consciously manage that uncertainty to foster the level of
adaptation necessary to keep the ES alive and viable in the
organization.
AN University Microfilms Order Number ADG87-01132.
AU ANDERSON, CHARLES WILLIAM.
IN University of Massachusetts Ph.D. 1986, 260 pages.
TI LEARNING AND PROBLEM SOLVING WITH MULTILAYER CONNECTIONIST SYSTEMS.
SO DAI V47(09), SecB, pp3846.
DE Computer Science.
AB The difficulties of learning in multilayered networks of
computational units have limited the use of connectionist systems in
complex domains. This dissertation elucidates the issues of
learning in a network's hidden units, and reviews methods for
addressing these issues that have been developed through the years.
Issues of learning in hidden units are shown to be analogous to
learning issues for multilayer systems employing symbolic
representations.
Comparisons of a number of algorithms for learning in hidden
units are made by applying them in a consistent manner to several
tasks. Recently developed algorithms, including Rumelhart et al.'s
error back-propagation algorithm and Barto et al.'s
reinforcement-learning algorithms, learn the solutions to the tasks
much more successfully than methods of the past. A novel algorithm
is examined that combines aspects of reinforcement learning and a
data-directed search for useful weights, and is shown to outperform
reinforcement-learning algorithms.
A connectionist framework for the learning of strategies is
described which combines the error back-propagation algorithm for
learning in hidden units with Sutton's AHC algorithm to learn
evaluation functions and with a reinforcement-learning algorithm to
learn search heuristics. The generality of this hybrid system is
demonstrated through successful applications to a numerical,
pole-balancing task and to the Tower of Hanoi puzzle. Features
developed by the hidden units in solving these tasks are analyzed.
Comparisons with other approaches to each task are made.
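As an illustration of the error back-propagation algorithm named
above, here is a minimal two-layer network with sigmoid hidden units
learning XOR. This is a sketch, not Anderson's code; the network
size, learning rate, and iteration count are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # input -> hidden
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        h = sigmoid(X @ W1 + b1)      # hidden-unit activations
        out = sigmoid(h @ W2 + b2)    # network output
        # Back-propagate the output error through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]

The hidden units must develop intermediate features that no single
layer can represent, which is the learning-in-hidden-units issue the
dissertation examines.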
AN University Microfilms Order Number ADG86-29669.
AU BASU, DIPAK.
IN City University of New York Ph.D. 1986, 184 pages.
TI MECHANIZATION OF DATA MODEL DESIGN: A PETRI NET BASED APPROACH FOR
LEARNING.
SO DAI V47(09), SecB, pp3846.
DE Computer Science.
AB The development and design of data models play an important role
in the mechanization of problem solving. In this dissertation, we
discuss mechanization of the design of data models.
We focus our attention on the micro-world of combinatorial
problems, their solutions, and the data models for the solutions.
We show how a model can be constructed for the micro-world. We
discuss how a machine can learn to construct such a model when it is
provided with a rudimentary data model consisting of rules and
definitions of a problem.
For this purpose, we interpret the states of the problem and the
actions that connect the states, as place-nodes and transition-nodes
respectively, of a Petri net: a bipartite directed multi-graph. The
Petri net is thought to represent the dynamics of the problem. A
compatible data model based on the Petri net is constructed which
supports and drives the Petri net. This enables the machine to
solve the combinatorial problem at hand, proving the effectiveness of
the data model.
We use a hierarchical learning process to enable the machine to
construct the Petri net and the corresponding data model. This
evolutionary approach to data model design is viewed as
mechanization of design of such models.
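For illustration, the place/transition reading of a problem described
above can be sketched as follows. This is a toy, not Basu's model;
the class and state names are hypothetical.

    class PetriNet:
        def __init__(self):
            self.marking = {}      # place-node -> token count
            self.transitions = {}  # action -> (input places, output places)

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking.get(p, 0) >= 1 for p in inputs)

        def fire(self, name):
            inputs, outputs = self.transitions[name]
            assert self.enabled(name), "transition not enabled"
            for p in inputs:   # consume a token from each input place
                self.marking[p] -= 1
            for p in outputs:  # deposit a token in each output place
                self.marking[p] = self.marking.get(p, 0) + 1

    # Toy use: one action moves the problem from state 'start' to 'goal'.
    net = PetriNet()
    net.marking = {"start": 1}
    net.add_transition("move", inputs=["start"], outputs=["goal"])
    net.fire("move")
    print(net.marking)   # {'start': 0, 'goal': 1}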
AN University Microfilms Order Number ADG87-02684.
AU BELEW, RICHARD KUEHN.
IN The University of Michigan Ph.D. 1986, 328 pages.
TI ADAPTIVE INFORMATION RETRIEVAL: MACHINE LEARNING IN ASSOCIATIVE
NETWORKS.
SO DAI V47(10), SecB, pp4216.
DE Computer Science.
AB One interesting issue in artificial intelligence (AI) currently
is the relative merits of, and relationship between, the "symbolic"
and "connectionist" approaches to intelligent systems building. The
performance of more traditional symbolic systems has been striking,
but getting these systems to learn truly new symbols has proven
difficult. Recently, some researchers have begun to explore a
distinctly different type of representation, similar in some
respects to the nerve nets of several decades past. In these
massively parallel, connectionist models, symbols arise implicitly,
through the interactions of many simple and sub-symbolic elements.
One of the advantages of using such simple elements as building
blocks is that several learning algorithms work quite well. The
range of application for connectionist models has remained limited,
however, and it has been difficult to bridge the gap between this
work and standard AI.
The AIR system represents a connectionist approach to the
problem of free-text information retrieval (IR). Not only is this
an increasingly important type of data, but it provides an excellent
demonstration of the advantages of connectionist mechanisms,
particularly adaptive mechanisms. AIR's goal is to build an
indexing structure that will retrieve documents that are likely to
be found relevant. Over time, by using users' browsing patterns as
an indication of approval, AIR comes to learn what the keywords
(symbols) mean so as to use them to retrieve appropriate documents.
AIR thus attempts to bridge the gap between connectionist learning
techniques and symbolic knowledge representations.
The work described was done in two phases. The first phase
concentrated on mapping the IR task into a connectionist network; it
is shown that IR is very amenable to this representation. The
second, more central phase of the research has shown that this
network can also adapt. AIR translates the browsing behaviors of
its users into a feedback signal used by a Hebbian-like local
learning rule to change the weights on some links. Experience with
a series of alternative learning rules is reported, and the results
of experiments using human subjects to evaluate the results of AIR's
learning are presented.
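A Hebbian-like local learning rule of the general kind described
above might look as follows. This is a sketch under assumed names;
the abstract does not give AIR's actual rule or weighting scheme.

    def hebbian_update(weights, activity, feedback, lr=0.1):
        """weights: dict (src, dst) -> float; activity: dict node -> float.
        feedback is positive when browsing signals approval, negative
        otherwise."""
        for (src, dst), w in weights.items():
            # Strengthen links whose endpoints were co-active under
            # approval; weaken them under negative feedback.
            delta = lr * feedback * activity.get(src, 0.0) * activity.get(dst, 0.0)
            weights[(src, dst)] = w + delta
        return weights

    w = {("keyword:retrieval", "doc:42"): 0.2}
    a = {"keyword:retrieval": 1.0, "doc:42": 0.8}
    print(hebbian_update(w, a, feedback=1.0))   # the co-active link grows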
AN This item is not available from University Microfilms International
ADG05-59521.
AU CHAN, KWOK HUNG.
IN The University of Western Ontario (Canada) Ph.D. 1986.
TI FOUNDATIONS OF LOGIC PROGRAMMING WITH EQUALITY.
SO DAI V47(10), SecB, pp4217.
DE Computer Science.
AB An obstacle to practical logic programming systems with equality
is infinite computation. In the dissertation we study three
strategies for eliminating infinite searches in Horn clause logic
programming systems and develop an extension of Prolog that has the
symmetry, transitivity and predicate substitutivity of equality
built-in. The three strategies are: (1) Replacing logic programs
with infinite search trees by equivalent logic programs with finite
search trees; (2) Building into the inference machine the axioms
that cause infinite search trees; (3) Detecting and failing searches
of infinite branches.
The dissertation consists of two parts. General theories of the
three strategies identified above are developed in Part I. In Part
II we apply these strategies to the problem of eliminating infinite
loops in logic programming with equality.
Part I. General Theories. We introduce the notion of
CAS-equivalent logic programs: logic programs with identical correct
answer substitutions. Fixpoint criteria for equivalent logic
programs are suggested and their correctness is established.
Semantic reduction is introduced as a means of establishing the
soundness and completeness of extensions of SLD-resolution. The
possibility of avoiding infinite searches by detecting infinite
branches is explored. A class of SLD-derivations called repetitive
SLD-derivations is distinguished. Many infinite derivations are
instances of repetitive SLD-derivations. It is demonstrated that
pruning repetitive SLD-derivations from SLD-trees does not cause
incompleteness.
Part II. Extended Unification for Equality. An extension of
SLD-resolution called SLDEU-resolution is presented. The symmetry,
transitivity and predicate substitutivity of equality are built into
SLDEU-resolution by extended unification. Extended unification, if
unrestricted, also introduces infinite loops. We can eliminate some
of these infinite loops by restricting SLDEU-resolution to
non-repetitive right recursive SLDEU-resolution; this forbids
extended unification of the first terms in equality subgoals and has
a built-in mechanism for detecting repetitive derivations. The
soundness and completeness of non-repetitive right recursive
SLDEU-resolution are proved.
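The third strategy, detecting and failing repetitive derivations, can
be illustrated with a toy propositional Horn-clause interpreter; this
is far simpler than SLDEU-resolution, and the rule base below is
hypothetical.

    def solve(goal, rules, seen=frozenset()):
        """rules: dict head -> list of bodies (lists of subgoals);
        propositional Horn clauses only, for illustration."""
        if goal in seen:   # repetitive derivation: fail this branch
            return False
        for body in rules.get(goal, []):
            if all(solve(g, rules, seen | {goal}) for g in body):
                return True
        return False

    # 'q :- q.' would loop forever without the repetition check.
    rules = {"p": [["q"]], "q": [["q"], []]}   # second body: 'q.' as a fact
    print(solve("p", rules))                   # True, with the loop pruned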
AN University Microfilms Order Number ADG87-01100.
AU COOPER, NELL.
IN The University of Texas at Arlington Ph.D. 1986, 117 pages.
TI A FORMAL DESCRIPTION AND THEORY OF KNOWLEDGE REPRESENTATION
METHODOLOGIES.
SO DAI V47(09), SecB, pp3847.
DE Computer Science.
AB The absence of a common and consistently applied terminology in
discussions of knowledge representation techniques and the lack of a
unifying theory or approach are identified as significant needs in
the area of knowledge representation. Knowledge representation
viewed as a collection of levels is presented as an alternative to
traditional definitions. The levels and their associated primitives
are discussed. The concept of levels within each knowledge
representation technique provides resolution to many of the
controversies and disagreements that have existed among researchers
concerning the equivalency of representation methodologies.
A statement of the equivalence of a certain class of frame
knowledge representation and a certain class of logic based
knowledge representation is presented. Definitions of the classes
are included. Algorithms to convert from each class to the other
are given as evidence of their equivalence.
AN University Microfilms Order Number ADG87-03200.
AU DURRANT-WHYTE, HUGH FRANCIS.
IN University of Pennsylvania Ph.D. 1986, 235 pages.
TI INTEGRATION, COORDINATION AND CONTROL OF MULTI-SENSOR ROBOT SYSTEMS.
SO DAI V47(10), SecB, pp4219.
DE Computer Science.
AB This thesis develops a theory and methodology for integrating
observations from multiple disparate sensor sources. An
architecture for a multi-sensor robot system is proposed, based on
the idea of a coordinator guiding a group of expert sensor agents,
communicating through a blackboard facility. A description of the
robot environment is developed in terms of a topological network of
uncertain geometric features. Techniques for manipulating,
transforming and comparing these representations are described,
providing a mechanism for combining disparate observations. A
general model of sensor characteristics is developed that describes
the dependence of sensor observations on the state of the
environment, the state of the sensor itself, and other sensor
observations or decisions. A constrained Bayesian decision
procedure is developed to cluster and integrate sparse, partial,
uncertain observations from diverse sensor systems. Using the
network topology of the world model, a method is developed
for updating uncertain geometric descriptions of the environment in
a manner that maintains a consistent interpretation for observations.
A team theoretic representation of dynamic sensor operation is used
to consider competitive, complementary, and cooperative elements of
multi-sensor coordination and control. These descriptions are used
to develop algorithms for the dynamic exchange of information
between sensor systems and the construction of active sensor
strategies. This methodology is implemented on a distributed
computer system using an active stereo camera and a robot-mounted
tactile gripper.
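As a minimal illustration of combining disparate uncertain
observations, consider fusing two independent Gaussian estimates of
one geometric quantity. This is a sketch under a Gaussian assumption,
not the constrained Bayesian procedure of the thesis; the numbers are
invented.

    def fuse(mean_a, var_a, mean_b, var_b):
        """Inverse-variance weighting: the fused estimate trusts the
        tighter observation more, and its variance shrinks."""
        var = 1.0 / (1.0 / var_a + 1.0 / var_b)
        mean = var * (mean_a / var_a + mean_b / var_b)
        return mean, var

    # Stereo camera: edge at 1.00 m (loose); tactile probe: 1.05 m (tight).
    print(fuse(1.00, 0.04, 1.05, 0.01))   # (1.04, 0.008): pulled toward touch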
AN University Microfilms Order Number ADG87-02882.
AU EBELING, WILLIAM HENRY CARL.
IN Carnegie-Mellon University Ph.D. 1986, 187 pages.
TI ALL THE RIGHT MOVES: A VLSI ARCHITECTURE FOR CHESS.
SO DAI V47(10), SecB, pp4219.
DE Computer Science.
AB Hitech, the Carnegie-Mellon chess program that recently won the
ACM computer chess championship and holds a USCF rating of 2352, owes
its success in large part to an architecture that embraces both move
generation and position evaluation. Previous programs have been
subject to a tradeoff between speed and knowledge: applying more
chess knowledge to position evaluation necessarily slows the search.
Recent experience with chess programs such as Belle, Cray Blitz and
BEBE has shown that a deep search solves many problems that a
shallow search with deep understanding cannot cope with. With this
new architecture, Hitech is able to search both deeply and
knowledgeably.
Chapter 2 gives some background and describes previous hardware
move generators. This chapter discusses the requirements of the
move generator in light of the performance of the (alpha)-(beta)
search. Chapter 3 presents a new architecture for move generation
which allows fine-grained parallelism to be applied with very
effective results. Although the amount of hardware required is
substantial, the architecture is eminently suited to VLSI. This
chapter also gives the details of the move generator used by Hitech,
which comprises 64 identical custom VLSI chips. This move generator
is able to judge moves much more effectively than previous move
generators because it knows all the moves available for each side.
Since the efficiency of the (alpha)-(beta) search depends on the
order in which moves are examined, this ability of the move
generator to order moves extremely well results in a very efficient
search.
Chapter 4 describes the requirements of position evaluation and
discusses how this architecture can be used to perform evaluation as
well. This includes a description of a VLSI implementation that we
propose for position evaluation. Chapter 5 describes the other
Hitech hardware and software. Chapter 6 presents a performance
analysis of Hitech as a whole and the move generator in particular.
Based on these measurements, some ways to improve the move generator
performance are discussed. Finally, we draw some conclusions about
the effect of the architecture presented in this thesis on the
problem of chess.
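The dependence of (alpha)-(beta) efficiency on move ordering can be
seen in a small software sketch. Hitech's move generator is custom
hardware; the negamax formulation and the toy game tree below are
illustrative only.

    def alphabeta(state, depth, alpha, beta, evaluate, moves, apply_move, order):
        ms = moves(state)
        if depth == 0 or not ms:
            return evaluate(state)
        for m in sorted(ms, key=order, reverse=True):  # best guesses first
            score = -alphabeta(apply_move(state, m), depth - 1, -beta, -alpha,
                               evaluate, moves, apply_move, order)
            if score >= beta:
                return score       # cutoff: the opponent avoids this line
            alpha = max(alpha, score)
        return alpha

    # Game tree as nested lists; leaf values are static evaluations.
    def moves(s): return s if isinstance(s, list) else []
    def apply_move(s, m): return m
    def evaluate(s): return s if not isinstance(s, list) else 0
    def order(m): return m if not isinstance(m, list) else 0

    tree = [[3, 5], [2, [9, 1]]]
    print(alphabeta(tree, 4, float("-inf"), float("inf"),
                    evaluate, moves, apply_move, order))   # 3

Ordering strong moves first tightens the window early, so later
siblings are cut off sooner; this is why a move generator that ranks
moves well makes the whole search more efficient.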
AN University Microfilms Order Number ADG87-01496.
AU GREENBAUM, STEVEN.
IN University of Illinois at Urbana-Champaign Ph.D. 1986, 259
pages.
TI INPUT TRANSFORMATIONS AND RESOLUTION IMPLEMENTATION TECHNIQUES FOR
THEOREM PROVING IN FIRST-ORDER LOGIC.
SO DAI V47(09), SecB, pp3848.
DE Computer Science.
AB This thesis describes a resolution based theorem prover designed
for users with little or no knowledge of automated theorem proving.
The prover is intended for high speed solution of small to moderate
sized problems, usually with no user guidance. This contrasts with
many provers designed to use substantial user guidance to solve hard
or very hard problems, often having huge search spaces. Such
provers are often weak when used without user interaction. Many of
our methods should be applicable to large systems as well.
Our prover uses a restricted form of locking resolution,
together with an additional resolution step. Pending resolvents are
ordered using a priority-based search strategy which considers a
number of factors, including clause complexity measures, derivation
depth of the pending resolvent, and other features.
Also described are transformations that convert formulas from
one form to another. One is a nonstandard clause-form translation which
often avoids the loss of structure and increase in size resulting
from the conventional translation, and also takes advantage of
repeated subexpressions. Another transformation replaces operators
in first-order formulas with their first-order definitions, before
translation to clause form. This works particularly well with the
nonstandard clause-form translation. There is also a translation
from clauses to other clauses that, when coupled with some prover
extensions, is useful for theorem proving with equality. The
equality method incorporates Knuth-Bendix completion into the proof
process to help simplify the search.
Some implementation methods are described. Data structures that
allow fast clause storage and lookup, and efficient implementation
of various deletion methods, are discussed. A modification of
discrimination networks is described in detail.
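As a toy illustration of discrimination networks for clause lookup,
the sketch below indexes flat literals (a predicate plus atomic
arguments) by their symbol path, with '*' standing for variables.
The dissertation's modification handles full first-order terms; all
names here are hypothetical.

    class DiscNet:
        def __init__(self):
            self.root = {}

        @staticmethod
        def path(literal):
            # ('p', 'X', 'a') -> ['p', '*', 'a']; capitalized = variable
            return [literal[0]] + ['*' if a[:1].isupper() else a
                                   for a in literal[1:]]

        def insert(self, literal, clause):
            node = self.root
            for sym in self.path(literal):
                node = node.setdefault(sym, {})
            node.setdefault(None, []).append(clause)

        def candidates(self, literal):
            """Coarse filter: follow exact symbols or '*' at each
            position; full unification still runs on the survivors."""
            found, stack = [], [(self.root, self.path(literal))]
            while stack:
                node, syms = stack.pop()
                if not syms:
                    found += node.get(None, [])
                    continue
                keys = {syms[0], '*'} if syms[0] != '*' else node.keys() - {None}
                for key in keys:
                    if key in node:
                        stack.append((node[key], syms[1:]))
            return found

    net = DiscNet()
    net.insert(('p', 'a', 'b'), 'clause1')
    net.insert(('p', 'X', 'b'), 'clause2')
    print(net.candidates(('p', 'a', 'b')))   # both clauses (order may vary)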
AN University Microfilms Order Number ADG87-01292.
AU HARBISON-MOSS, KARAN ANN.
IN The University of Texas at Arlington Ph.D. 1986, 220 pages.
TI MAINTAINING CURRENT STATUS IN A TIME-CONSTRAINED KNOWLEDGE-BASED
SYSTEM CHARACTERIZED BY CONTINUOUSLY INCOMING TEMPORAL DATA.
SO DAI V47(09), SecB, pp3849.
DE Computer Science.
AB Reasoning processes for knowledge-based systems have in the past
focused on maintaining a current database with a single context and
a given set of data. These methods for reason maintenance do not
suffice in domains in which data is acquired during the solution
process and in which there is a constraint on time to decision.
A reasoning process is proposed for these data acquisition
time-constrained problems that allows multiple contexts and
contradictions to exist. This flexibility, in turn, simplifies the
retraction of data for nonmonotonic inferencing and allows direct
assessment of goal state progression.
This reasoning process is designed to function within the
architecture of a knowledge-based system which itself was developed
to meet the requirements of data acquisition time-constrained
domains.
AN University Microfilms Order Number ADG87-00203.
AU HERMENEGILDO, MANUEL VICTOR.
IN The University of Texas at Austin Ph.D. 1986, 268 pages.
TI AN ABSTRACT MACHINE BASED EXECUTION MODEL FOR COMPUTER ARCHITECTURE
DESIGN AND EFFICIENT IMPLEMENTATION OF LOGIC PROGRAMS IN PARALLEL.
SO DAI V47(09), SecB, pp3849.
DE Computer Science.
AB The term "Logic Programming" refers to a variety of computer
languages and execution models which are based on the traditional
concept of Symbolic Logic. The expressive power of these languages
promises to be of great assistance in facing the programming
challenges of present and future symbolic processing applications in
Artificial Intelligence, Knowledge-based systems, and many other
areas of computing. The sequential execution speed of logic
programs has been greatly improved since the advent of the first
interpreters. However, higher inference speeds are still required
in order to meet the demands of applications such as those
contemplated for next generation computer systems. The execution of
logic programs in parallel is currently considered a promising
strategy for attaining such inference speeds. Logic Programming in
turn appears as a suitable programming paradigm for parallel
architectures because of the many opportunities for parallel
execution present in the implementation of logic programs.
This dissertation presents an efficient parallel execution model
for logic programs. The model is described from the source language
level down to an "Abstract Machine" level, suitable for direct
implementation on existing parallel systems or for the design of
special purpose parallel architectures. Few assumptions are made at
the source language level and therefore the techniques developed and
the general Abstract Machine design are applicable to a variety of
logic (and also functional) languages. These techniques offer
efficient solutions to several areas of parallel Logic Programming
implementation previously considered problematic or a source of
considerable overhead, such as the detection and handling of
variable binding conflicts in AND-Parallelism, the specification of
control and management of the execution tree, the treatment of
distributed backtracking, and goal scheduling and memory management
issues.
A parallel Abstract Machine design is offered, specifying data
areas, operation, and a suitable instruction set. This design is
based on extending to a parallel environment the techniques
introduced by the Warren Abstract Machine, which have already made
very fast and space efficient sequential systems a reality.
Therefore, the model herein presented is capable of retaining
sequential execution speed similar to that of high performance
sequential systems, while extracting additional gains in speed by
efficiently implementing parallel execution. These claims are
supported by simulations of the Abstract Machine on sample programs.
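One common test behind AND-parallel execution can be sketched in
isolation: two goals may run in parallel when every variable they
share is already bound, so neither branch can bind a variable the
other reads. This illustrates the binding-conflict issue only; it is
not the thesis's Abstract Machine, and the term conventions
(capitalized strings as variables) are assumed.

    def variables(term):
        """Collect variable names (capitalized strings) from a nested
        term written as (functor, arg1, arg2, ...)."""
        if isinstance(term, str):
            return {term} if term[:1].isupper() else set()
        vs = set()
        for t in term[1:]:
            vs |= variables(t)
        return vs

    def independent(goal_a, goal_b, bound):
        shared = variables(goal_a) & variables(goal_b)
        return shared <= bound   # all shared variables already bound

    g1 = ('append', 'Xs', 'Ys', 'Zs')
    g2 = ('length', 'Zs', 'N')
    print(independent(g1, g2, bound=set()))    # False: both may bind Zs
    print(independent(g1, g2, bound={'Zs'}))   # True: Zs already ground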
AN University Microfilms Order Number ADG87-01190.
AU LEE, YILLBYUNG.
IN University of Massachusetts Ph.D. 1986, 150 pages.
TI A NEURAL NETWORK MODEL OF FROG RETINA: A DISCRETE TIME-SPACE
APPROACH.
SO DAI V47(09), SecB, pp3852.
DE Computer Science.
AB Most computational models of nervous systems in the past
have been developed at the level of a single cell or at the level of
a population of uniform elements. But neither the absolute temporal
activity of a single neuron nor some steady state of a population of
neurons seems to be of utmost importance. A spatio-temporal pattern
of activities of neurons and the way they interact through various
connections appear to matter most in the neuronal computations of
the vertebrate retina.
A population of neurons are modelled based on a connectionist
scheme and on experimental data to provide a spatio-temporal pattern
of activities for every cell involved in the cone-pathways for a
patch of frog's retina. The model has discrete representations for
both space and time. The density of each type/subtype of neuron and
the existence and the size of the connections are based on
anatomical data. Individual neurons are modelled as variations of a
leaky-capacitor model of neurons. Parameters for each type of model
neuron are set so that their temporal activities approximate the
typical intracellular recording for the corresponding neurons given
the known visual/electrical stimulus patterns. Connectivity was
considered the single most important factor for network computation.
Computer simulation results of the model are compared with
well-known physiologic data.
The results show that a network model of coarse individual
neuronal models based on known structures of the vertebrate retina
approximates the overall system behavior successfully reproducing
the observed functions of many of the cell types, thus showing that
the connectionist approach can be applied successfully to neural
network modeling and provide an organizing theory of how individual
neurons interact in a population of neurons.
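A minimal discrete-time leaky-capacitor unit of the kind the model
varies per cell type might be written as follows; the constants are
illustrative, not the thesis's fitted parameters.

    def simulate(inputs, tau=5.0, dt=1.0, v_rest=0.0):
        """Membrane potential leaks toward rest while integrating input:
        v[t+1] = v[t] + (dt/tau) * (v_rest - v[t] + i[t])."""
        v, trace = v_rest, []
        for i in inputs:
            v += (dt / tau) * (v_rest - v + i)
            trace.append(v)
        return trace

    # A brief flash: the potential charges up, then leaks back to rest.
    stimulus = [1.0] * 10 + [0.0] * 10
    for t, v in enumerate(simulate(stimulus)):
        print(t, round(v, 3))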
AN University Microfilms Order Number ADG87-02078.
AU LEE, YONG-BOK.
IN Case Western Reserve University Ph.D. 1986, 104 pages.
TI CONSTRAINT PROPAGATION IN A PATHOPHYSIOLOGIC CAUSAL NETWORK.
SO DAI V47(10), SecB, pp4221.
DE Computer Science.
AB The causal model approach to expert knowledge representation and
reasoning, which is based on making the causal domain relationships
explicit, is a focus of current research in expert systems.
Currently existing model-based algorithms are, however, limited in
the complexity of domains to which they can be applied. Recently, a
semiquantitative simulation method integrated with a symbolic
modeling approach based on functional and organizational primitives
has been described. It has the ability to handle problems in
complex domains involving nonlinear relationships between the
causally related nodes. Its performance, however, requires the
availability of the initial states used in the simulation. The term
"initial condition" is used here to mean the specification of the
values of all variables in the model at a given instant in time.
These values, when used for simulation, are "initial" in that
they precede all simulated values.
This thesis describes a new algorithm, called semi-quantitative
inverse reasoning, for deriving a complete set of possible current
state descriptions of an arbitrary complex causal model from partial
specifications of the current state. Algorithms of constraint
propagation by inference and hypothesis, hypothesis generation, and
hypothesis confirmation are developed to support the
semi-quantitative inverse reasoning technique.
Therefore in application to the medical domain, this technique
can derive a complete set of primary diagnoses given medical data
and an appropriate causal model.
AN University Microfilms Order Number ADG87-00230.
AU LEI, CHIN-LAUNG.
IN The University of Texas at Austin Ph.D. 1986, 171 pages.
TI TEMPORAL LOGICS FOR REASONING UNDER FAIRNESS ASSUMPTIONS.
SO DAI V47(09), SecB, pp3852.
DE Computer Science.
AB In this dissertation, we consider the problem of whether the
branching time or linear time framework is more appropriate for
reasoning about concurrent programs in light of the criteria of
expressiveness and complexity. We pay special attention to the
problem of temporal reasoning under (various) fairness assumptions.
In particular, we focus on the following: (1) The Model Checking
Problem--Given a formula p and a finite structure M, does M define a
model of p? (2) The Satisfiability Problem--Given a formula p, does
there exist a structure M which defines a model of p? Algorithms for
the model checking problem are useful in mechanical verification of
finite state concurrent systems. Algorithms for testing
satisfiability have applications not only to the automation of
verification of (possibly infinite state) concurrent programs, but
also in mechanical synthesis of concurrent programs where the
decision procedure is used to construct a model of the specification
formula from which a concurrent program is extracted.
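For a concrete sense of the model checking problem, here is a toy
check of reachability on a finite structure, i.e. the branching-time
formula EF p. The logics treated in the dissertation are far richer;
the structure below is hypothetical.

    def ef(succ, labels, p, start):
        """succ: state -> list of successor states; labels: state ->
        set of atomic propositions. Searches M for a state satisfying p."""
        seen, frontier = {start}, [start]
        while frontier:
            s = frontier.pop()
            if p in labels[s]:
                return True
            for t in succ[s]:
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return False

    succ = {0: [1], 1: [1, 2], 2: [0]}
    labels = {0: set(), 1: set(), 2: {"done"}}
    print(ef(succ, labels, "done", 0))   # True: state 2 is reachable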
AN University Microfilms Order Number ADG86-20896.
AU LI, ZE-NIAN.
IN The University of Wisconsin - Madison Ph.D. 1986, 200 pages.
TI PYRAMID VISION: USING KEY FEATURES AND EVIDENTIAL REASONING.
SO DAI V47(09), SecB, pp3852.
DE Computer Science.
AB Pyramid programs and multicomputers appear to offer a number of
interesting possibilities for computer visual perception. This
thesis takes a pyramidal approach for the analysis of the images of
cells and outdoor scenes. Transforms that compute relatively local
brain-like functions are used to extract and combine features at the
successive layers in the pyramid. They are applied in a parallel
and hierarchical manner to model living visual systems.
The use of 'key features' in this thesis is an exploitation of
the generation and use of 'focus of attention' techniques for visual
perception in a pyramid vision system. In contrast to many other
systems, key features are used as the central threads for the
control process. They embed naturally into the pyramid structure,
organizing bottom-up, top-down and lateral searches and
transformations into a well-integrated structure of processes.
Moreover, they are also incorporated into the knowledge
representation and reasoning techniques proposed for the pyramid
vision system.
The term 'evidential reasoning' refers to the reasoning process
conducted by a system on the basis of uncertain and incomplete data
and world knowledge. The Dempster-Shafer Theory of Evidence is
adapted for evidential reasoning in a multi-level pyramid vision
system where images are analyzed and recognized using micro-modular
production-like transforms. The system's knowledge is treated as a
belief function, which keeps its form compact. The reasoning mechanism
extends the use of the belief function and the Dempster Combination
Rule. While other approaches leave a gap between the feature space
and the object space, the present mapping between these two spaces
makes smooth transitions. The new knowledge representation
technique and its reasoning mechanism take advantage of the
set-theoretic formalism, while still maintaining modularity and
flexibility. The comparison between the evidential reasoning
approach and a simple weight combination method shows that this new
approach makes better use of the world knowledge, and offers a good
way to use key features.
The pyramid vision programs using key features and evidential
reasoning were used successfully on two biomedical images and four
outdoor-scene images. The results indicate that this new approach
is efficient and effective for the analysis of complex real-world
images.
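Dempster's Rule of Combination, central to the evidential reasoning
above, can be stated compactly. This is a textbook sketch, not the
thesis's extended mechanism, and the example frame of discernment is
invented.

    def combine(m1, m2):
        """m1, m2: dicts frozenset -> mass, each summing to 1. Intersect
        focal elements; renormalize by 1 - K, where K is the mass that
        fell on conflicting (empty) intersections."""
        raw, conflict = {}, 0.0
        for a, x in m1.items():
            for b, y in m2.items():
                inter = a & b
                if inter:
                    raw[inter] = raw.get(inter, 0.0) + x * y
                else:
                    conflict += x * y
        return {s: v / (1.0 - conflict) for s, v in raw.items()}

    # Two feature transforms weigh in on a region being cell vs. debris.
    m1 = {frozenset({'cell'}): 0.7, frozenset({'cell', 'debris'}): 0.3}
    m2 = {frozenset({'cell'}): 0.5, frozenset({'debris'}): 0.2,
          frozenset({'cell', 'debris'}): 0.3}
    print(combine(m1, m2))   # belief concentrates on {'cell'}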
AN This item is not available from University Microfilms International
ADG05-59587.
AU LU, SIWEI.
IN University of Waterloo (Canada) Ph.D. 1986.
TI ATTRIBUTED HYPERGRAPH REPRESENTATION AND RECOGNITION OF 3-D OBJECTS
FOR COMPUTER VISION.
SO DAI V47(10), SecB, pp4221.
DE Computer Science.
AB This thesis presents a robot vision system which is capable of
recognizing objects in a 3-D scene and interpreting their spatial
relation even though some objects in the scene may be partially
occluded by other objects. In my system, range data for a
collection of 3-D objects placed in proximity is acquired by a laser
scanner. A new algorithm is developed to transform the geometric
information from the range data into an attributed hypergraph
representation (AHR). The AHR is a unique representation of 3-D
object which is invariant to orientation. A hypergraph monomorphism
algorithm is used to compare the AHR of objects in the scene with
the complete AHR of a set of prototypes in a database. Through a
hypergraph monomorphism, it is possible to recognize any view of an
object and also classify the scanned objects into classes which
consist of similar shapes. The system can acquire representation
for unknown objects. Several AHR's of the various views of an
unknown object can be synthesized into a complete AHR of the object
which can then be included in the model database. A scene
interpretation algorithm is developed to locate and recognize
objects in the scene even though some of them are partially occluded.
The system is implemented in PASCAL on a VAX11/750 running VMS, and
the image results are displayed on a Grinnell 270 display device.
AN University Microfilms Order Number ADG87-01195.
AU LYONS, DAMIAN MARTIN.
IN University of Massachusetts Ph.D. 1986, 255 pages.
TI RS: A FORMAL MODEL OF DISTRIBUTED COMPUTATION FOR SENSORY-BASED
ROBOT CONTROL.
SO DAI V47(09), SecB, pp3853.
DE Computer Science.
AB Robot systems are becoming more and more complex, both in terms
of available degrees of freedom and in terms of sensors. It is no
longer possible to continue to regard robots as peripheral devices
of a computer system, and to program them by adapting
general-purpose programming languages. This dissertation analyzes
the inherent computing characteristics of the robot programming
domain, and formally constructs an appropriate model of computation.
The programming of a dextrous robot hand is the example domain for
the development of the model.
This model, called RS, is a model of distributed computation:
The basic mode of computation is the interaction of concurrent
computing agents. A schema in RS describes a class of computing
agents. Schemas are instantiated to produce computing agents,
called SIs, which can communicate with each other via input and
output ports. A network of SIs can be grouped atomically together
in an Assemblage, and appears externally identical to a single SI.
The sensory and motor interface to RS is a set of primitive,
predefined schemas. These can be grouped arbitrarily with built-in
knowledge in assemblages to form task-specific object models. A
special kind of assemblage called a task-unit is used to structure
the way robot programs are built.
The formal semantics of RS is automata theoretic; the semantics
of an SI is a mathematical object, a Port Automaton. Communication,
port connections, and assemblage formation are among the RS concepts
whose semantics can be expressed formally and precisely. A temporal
logic specification and verification method is constructed using the
automata semantics as a model. While the automata semantics allows
the analysis of the model of computation, the temporal logic method
allows the top-down synthesis of programs in the model.
A computer implementation of the RS model has been constructed,
and used in conjunction with a graphic robot simulation, to
formulate and test dextrous hand control programs. In general, RS
facilitates the formulation and verification of versatile robot
programs, and is an ideal tool with which to introduce AI constructs
to the robot domain.
AN University Microfilms Order Number ADG87-04023.
AU MEREDITH, MARSHA JEAN EKSTROM.
IN Indiana University Ph.D. 1986, 205 pages.
TI SEEK-WHENCE: A MODEL OF PATTERN PERCEPTION.
SO DAI V47(11), SecB, pp4584.
DE Computer Science.
AB Seek-Whence is an inductive learning program that serves as a
model of a new approach to the programming of "intelligent" systems.
This approach is characterized by: (1) structural representation of
concepts; (2) the ability to reformulate concepts into new, related
concepts; (3) a probabilistic, biologically-inspired approach to
processing; (4) levels of abstraction in both representation and
processing.
The program's goals are to discover patterns, describe them as
structural pattern concepts, and reformulate those concepts, when
appropriate. The system should model human performance as closely
as possible, especially in the sense of generating plausible
descriptions and ignoring implausible ones. Description development
should be strongly data-driven. Small, special-purpose tasks
working at different levels of abstraction with no overseeing agent
to impose an ordering eventually guide the system toward a correct
and concise pattern description.
The chosen domain is that of non-mathematically-sophisticated
patterns expressed as sequences of nonnegative integers. A user
presents a patterned number sequence to the system, one term at a
time. Seek-Whence then either ventures a guess at the pattern,
quits, or asks for another term. Should the system guess a pattern
structure different from the one the user has in mind, the system
will attempt to reformulate its faulty perception.
Processing occurs in two stages. An initial formulation must
first evolve; this is the work of stage one, culminating in the
creation of a hypothesis for the sequence pattern. During stage
two, the hypothesis is either verified or refuted by new evidence.
Consistent verification will tend to confirm the hypothesis, and the
system will present the user with its hypothesis. An incorrect
guess or refutation of the hypothesis by new evidence will cause the
system to reformulate or abandon the hypothesis.
Reformulation of the hypothesis causes related changes
throughout the several levels of Seek-Whence structures. These
changes can in turn cause the noticing of new perceptions about the
sequence, creating an important interplay among the processing
levels.
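The two-stage hypothesize-and-verify loop can be caricatured in a few
lines. This is a toy, vastly simpler than Seek-Whence's structural
pattern concepts; the pattern classes and names are hypothetical.

    def hypothesize(terms):
        """Guess a pattern: arithmetic progression or cyclic repetition."""
        diffs = {b - a for a, b in zip(terms, terms[1:])}
        if len(diffs) == 1:
            d = diffs.pop()
            return lambda i: terms[0] + d * i
        for k in range(1, len(terms)):
            if all(terms[i] == terms[i % k] for i in range(len(terms))):
                return lambda i: terms[i % k]
        return None

    def seek(stream, minimum=3):
        """Stage one: form a hypothesis; stage two: verify it against
        new terms, reformulating whenever a term refutes it."""
        seen, hyp = [], None
        for t in stream:
            if hyp and hyp(len(seen)) != t:
                hyp = None             # refuted: abandon and reformulate
            seen.append(t)
            if hyp is None and len(seen) >= minimum:
                hyp = hypothesize(seen)
        return hyp

    h = seek([1, 2, 1, 2, 1, 2])
    print([h(i) for i in range(8)])    # [1, 2, 1, 2, 1, 2, 1, 2]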
AN University Microfilms Order Number ADG87-00791.
AU MITCHELL, JOSEPH S. B.
IN Stanford University Ph.D. 1986, 143 pages.
TI PLANNING SHORTEST PATHS.
SO DAI V47(09), SecB, pp3853.
DE Computer Science.
AB Recent research in the algorithmic aspects of robot motion and
terrain navigation has resulted in a number of interesting variants
of the shortest path problem. A problem that arises when planning
shortest collision-free paths for a robot is the following: Find the
shortest path from START to GOAL for a point moving in two or three
dimensions and avoiding a given set of polyhedral obstacles. In
this thesis we survey some of the techniques used and some of our
recent results in shortest path planning. We introduce a useful
generalization of the shortest path problem, the "weighted region
problem". We describe a polynomial-time algorithm which finds a
shortest path through "weighted" polygonal regions, that is, which
minimizes the sum of path lengths multiplied by the respective
weight factors of the regions through which the path passes. Our
algorithm exploits the fact that optimal paths obey Snell's Law of
Refraction when passing through region boundaries. We also give an
O(n^2 log n) algorithm for the special case of the
three-dimensional shortest path problem in which paths are
constrained to lie on the surface of a given (possibly non-convex)
polyhedron. Both algorithms make use of a new technique of solving
shortest path problems; we call this technique a "continuous
Dijkstra algorithm", as it closely resembles the method used by
Dijkstra to solve simple shortest path problems in a graph.
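The Snell's-Law condition that optimal weighted-region paths obey can
be computed directly: with region weights w1 and w2 playing the role
of refractive indices, w1*sin(theta1) = w2*sin(theta2) at the
boundary. The sketch and its numbers are illustrative.

    import math

    def refraction_angle(w1, w2, theta1):
        """Angle from the boundary normal on the far side; None means
        no crossing at this incidence is optimal (total reflection)."""
        s = (w1 / w2) * math.sin(theta1)
        return math.asin(s) if abs(s) <= 1.0 else None

    # Leaving cheap terrain (w=1) for costly terrain (w=3) at 45 degrees,
    # the optimal path bends toward the normal to shorten the costly leg.
    print(math.degrees(refraction_angle(1.0, 3.0, math.radians(45))))  # ~13.6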
AN University Microfilms Order Number ADG86-29095.
AU MORGADO, ERNESTO JOSE MARQUES.
IN State University of New York at Buffalo Ph.D. 1986, 234 pages.
TI SEMANTIC NETWORKS AS ABSTRACT DATA TYPES.
SO DAI V47(11), SecB, pp4584.
DE Computer Science.
AB Abstraction has often been used to permit one to concentrate on
the relevant attributes of the domain and to disregard the
irrelevant ones. This is accompanied by a reduction in the
complexity of the domain. Researchers have extensively studied the
use of abstraction in programming languages to allow programmers to
develop software that is precise, reliable, readable, and
maintainable. In spite of the amount of research that it has been
subjected to, data abstraction has been largely neglected by
programmers, when compared with other abstract methodologies used in
programming. One problem is that it is not always easy to
characterize the correct set of operations that defines an abstract
data type; and, although many definitions have been presented, no
precise methodology has ever been proposed to hint at the choice of
those operations. A second problem is that there is a discrepancy
between the formalism used to define an abstract specification and
the architecture of the underlying virtual machine used to implement
it. This discrepancy makes it difficult for the programmer to map
the abstract specification, written at design time, into a concrete
implementation, written at coding time. In order to correct these
problems, a theory of data abstraction is presented, which includes
a new definition of abstract data type and a methodology to create
abstract data types.
Because of their complexity, semantic networks are defined in
terms of a variety of interrelated data types. The preciseness of
the abstract data type formalism, and its emphasis on the behavior
of the data type operations, rather than on the structure of its
objects, make the semantics of semantic networks clearer. In
addition, the design, development, and maintenance of a semantic
network processing system requires an appropriate software
engineering environment. The methodology of data abstraction, with
its philosophy of modularity and independence of representations,
provides this kind of environment. On the other hand, the
definition of a semantic network as an abstract data type and its
implementation using the methodology of data abstraction provide
insights on the development of a new theory of abstract data types
and the opportunity for testing and refining that theory. (Abstract
shortened with permission of author.)
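As a small illustration of the thesis's theme, a semantic network can
be exposed as an abstract data type whose behavior, not its internal
structure, defines it. The operation names and representation here
are hypothetical, not Morgado's.

    class SemanticNet:
        """An abstract data type: callers see only assert_/ask, never
        the underlying representation (here, a hidden set of triples)."""
        def __init__(self):
            self._triples = set()

        def assert_(self, node, relation, other):
            self._triples.add((node, relation, other))

        def ask(self, node, relation):
            return {o for (n, r, o) in self._triples
                    if n == node and r == relation}

    net = SemanticNet()
    net.assert_("canary", "isa", "bird")
    net.assert_("bird", "has", "wings")
    print(net.ask("canary", "isa"))   # {'bird'}

Because clients depend only on the operations, the hidden set of
triples could be swapped for any other representation without
touching client code, which is the independence the methodology aims
for.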
AN University Microfilms Order Number ADG87-05476.
AU PEPER, GERRI L.
IN Colorado State University Ph.D. 1986, 174 pages.
TI INEXACT REASONING IN AN INDUCTIVE LEARNING ENVIRONMENT.
SO DAI V47(11), SecB, pp4585.
DE Computer Science.
AB For large expert systems it is well known that better methods
for acquiring expert decision-making knowledge are needed to speed
up the development cycle. For this reason, there has been
significant interest shown in the possibilities of using an
inductive learning approach to ease this knowledge acquisition
bottleneck. Although quite successful in their ability to generate
correct and efficient rules, the initial attempts at inductive
learning systems have failed to take into consideration a very
important aspect of expert systems, namely the ability to accept
and reason with uncertain knowledge. This is known as the inexact
reasoning problem.
This thesis describes an approach to inexact reasoning which is
designed for an expert system environment that allows inductive
learning as one method of knowledge acquisition. The system
presented in this thesis is KNET, a generalized expert system shell
which provides full support for both knowledge acquisition and
consultation. It allows knowledge to be expressed in two forms,
either as a set of examples or as a decision network.
Transformations are allowed from one form to another. Previously
existing methods of inexact reasoning have not directly dealt with
these forms of knowledge representation.
Three phases of the inexact reasoning problem are addressed:
obtaining probabilistic knowledge during the creation of the
knowledge base; using and preserving this knowledge during
transformations from one form of knowledge to another; and reasoning
with the inexact knowledge during the consultation. A general
approach for dealing with inexact knowledge in each of these phases
is presented. In addition to presenting this general approach to
inexact reasoning, special consideration is given to the problem of
representing uncertainty during the consultation. Emphasis is
placed on ensuring that the degree of uncertainty reflected by the
user's answers is also clearly reflected in the certainty assigned
to each of the possible conclusions presented by the system.
Several possible techniques for accomplishing this task are explored.
These are presented as two different models for reasoning with
uncertainty.
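The abstract leaves KNET's two models unspecified; for reference, the
classic MYCIN-style certainty-factor combination is one standard
treatment of the consultation phase. This is an illustration of the
genre, not KNET's method.

    def combine_cf(cf1, cf2):
        """Merge two certainty factors in [-1, 1] for one conclusion."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    print(combine_cf(0.6, 0.5))    # 0.8: agreeing evidence reinforces
    print(combine_cf(0.6, -0.4))   # 0.33: conflicting evidence attenuates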
AN University Microfilms Order Number ADG87-00810.
AU RENNELS, GLENN DOUGLAS.
IN Stanford University Ph.D. 1986, 259 pages.
TI A COMPUTATIONAL MODEL OF REASONING FROM THE CLINICAL LITERATURE.
SO DAI V47(09), SecB, pp3854.
DE Computer Science.
AB This dissertation explores the premise that a formalized
representation of empirical studies can play a central role in
computer-based decision support. The specific motivations
underlying this research include the following propositions: (1)
Reasoning from experimental evidence contained in the clinical
literature is central to the decisions physicians make in patient
care. Previous researchers in medical artificial intelligence,
concentrating on issues such as causal modeling, have not adequately
addressed the role of experimental evidence in medical reasoning.
(2) A computational model, based upon a declarative representation
for published reports of clinical studies, can drive a computer
program that selectively tailors knowledge of the clinical
literature as it is applied to a particular case. (3) The
development of such a computational model is an important first step
toward filling a void in computer-based decision support systems.
Furthermore, the model may help us better understand the general
principles of reasoning from experimental evidence both in medicine
and other domains.
Roundsman is a developmental computer system which draws upon
structured representations of the clinical literature in order to
critique plans for the management of primary breast cancer. A
distance metric has been developed to help assess the relevance of a
published study to a particular clinical decision. A general model
of choice and explanation in medical management has also been
adapted for application to this task domain. Roundsman is able to
produce patient-specific analyses of breast cancer management
options based on the 24 clinical studies currently encoded in its
knowledge base.
Medicine will repeatedly present problem domains for which there
are no reliable causal models, and in which reasoning from
experimental evidence may be pivotal to problem-solving. The
Roundsman system is a first step in exploring how the computer can
help to bring a critical analysis of the relevant literature to the
physician, structured around a particular patient and treatment
decision.
AN University Microfilms Order Number ADG87-05480.
AU RICHARDSON, RAY CHARLES.
IN Colorado State University Ph.D. 1986, 238 pages.
TI INTELLIGENT COMPUTER AIDED INSTRUCTION IN STATICS.
SO DAI V47(11), SecB, pp4586.
DE Computer Science.
AB The increased emphasis on fifth-generation computers has
drawn much attention to the study of artificial intelligence and
the sub-field of expert systems. Expert systems are computer
programs which solve expert problems using expert knowledge. The
primary emphasis of these programs is human knowledge representation
of problems that humans solve. One of the areas where expert
systems have been used is in education. The linking of expert
systems and traditional Computer Aided Instruction is known as
Intelligent Computer Aided Instruction. The purpose of this study
is to demonstrate the feasibility of an expert system applied to
undergraduate instruction.
An expert system was developed to model the problem solving
knowledge of Dr. J. L. Meriam from his text Engineering Mechanics
Volume I, Statics. The rules and heuristics for solving
two-dimensional truss problems were then implemented in the MRS
language. The expert system was then validated by solving problems
from the text in the same manner as Meriam. Linked to the expert
system were three learning style modules. The learning styles
modeled in this study were drill-and-practice, learning-by-example,
and a new style called buddy-study. The buddy-study learning style
represents an implementation of the Whimbley-pairs technique for
computer based learning. The learning system comprising the expert
system, learning style modules, and associated support programs was
then tested for correctness and completeness.
The results of the expert system validation demonstrated a
system capable of solving problems within the domain as Meriam did.
The learning style module testing showed procedures commensurate
with accepted classroom uses of the styles. The buddy-study method
demonstrated a computer learning strategy which places the expert
system and the student user as colleagues in a problem solving
environment. The results of the testing indicate the feasibility of
such a system for inclusion in undergraduate statics courses.
AN University Microfilms Order Number ADG86-25748.
AU TSAO, THOMAS T.
IN University of Maryland Ph.D. 1985, 151 pages.
TI THE DESIGN AND ANALYSIS OF PARALLEL ADAPTIVE ALGORITHMS FOR
COMPOSITE DECISION PROCESSES.
SO DAI V47(09), SecB, pp3857.
DE Computer Science.
AB This dissertation presents new approaches to the design and
analysis of parallel adaptive algorithms for multiple instruction
stream multiple data stream (MIMD) machines.
A composite decision process (cdp) is a model for many problems
from the field of artificial intelligence. The mathematical
structure modeled by a cdp includes both the algebraic structure of
the domain set of the problem and the functional structure of the
problem.
A dynamic algorithm is a parallel algorithm whose control
structure consists of (1) an adaptive task structure, and (2) an
eager-computation enabling mechanism. Eager computation enables
parallel computations as processor availability permits.
In the algorithm analysis, we also focus on the utility of
computing power in the designed algorithm and the utility of the
accumulated information in reducing the cost of search effort. The
relation of these two aspects to the speed-up ratio is investigated.
We call the analysis a dynamic analysis because it focuses on these
major dynamic features of the parallel processes. A survey of the
literature shows that very little previous work is available along
these lines.
The quantitative analysis presented in this dissertation
confirms that in parallel adaptive search, to increase parallelism
we must accept dynamic task assignment, and must have dynamic
modification of global tasks in order to make best use of
accumulated information.
The key contributions of this dissertation to the state of the
art of parallel search in A.I. are: (1) A new approach to the use
of accumulated information in parallel search, whereby information
accumulated during the execution of processes is used to change the
specification of tasks which are remaining or unfinished. (2) A new
approach to dynamic parallel algorithm design, which combines the
use of accumulated (task related) information with eager
computation, so that information developed during search may be
employed to achieve maximum possible parallelism in a given
environment. (3) A new and precise measure for speed-up obtained by
a parallel algorithm. (4) A new approach to the comparative
analysis of parallel algorithms. (Abstract shortened with
permission of author.)
AN University Microfilms Order Number ADG87-01639.
AU TUCERYAN, MIHRAN.
IN University of Illinois at Urbana-Champaign Ph.D. 1986, 171
pages.
TI EXTRACTION OF PERCEPTUAL STRUCTURE IN DOT PATTERNS.
SO DAI V47(10), SecB, pp4224.
DE Computer Science.
AB Perceptual grouping is an important mechanism of early visual
processing. This thesis presents a computational approach to
perceptual grouping in dot patterns. Detection of perceptual
organization is done in two steps. The first step, called the
lowest level grouping, extracts the perceptual segments of dots that
group together because of their relative locations. The grouping is
accomplished by interpreting dots as belonging to the interior or border
of a perceptual segment, or being along a perceived curve, or being
isolated. The Voronoi neighborhood of a dot is used to represent
its local geometric environment. The grouping is seeded by
assigning to dots their locally evident perceptual roles and
iteratively modifying the initial estimates to enforce global
Gestalt constraints. This is done through independent modules that
possess narrow expertise for recognition of typical interior dots,
border dots, curve dots and isolated dots, from the properties of
the Voronoi neighborhoods. The results of the modules are allowed
to influence and change each other so as to result in perceptual
components that satisfy global, Gestalt criteria such as border or
curve smoothness and component compactness. Thus, an integration is
performed of multiple constraints, active at different perceptual
levels and having different scopes in the dot pattern, to infer the
lowest level perceptual structure. The result of the lowest level
grouping phase is the partitioning of a dot pattern into different
perceptual segments or tokens.
The second step further groups the lowest level tokens to
identify any hierarchical structure present. The grouping among
tokens is done based on a variety of constraints including their
proximity, orientations, sizes, and terminations, integrated so as
to mimic the perceptual roles of these criteria. This results in a
new set of larger tokens. The hierarchical grouping process repeats
until no new groupings are formed. The final result of the
implementation described here is a hierarchical representation of
the perceptual structure in a dot pattern. Our representation of
perceptual structure allows for "focus of attention" through the
presence of multiple levels, and for "rivalry" of groupings at a
given level through the probabilistic interpretation of groupings
present.
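The Voronoi neighborhoods that seed the lowest level grouping can be
recovered from the dual Delaunay triangulation. The sketch below uses
scipy and shows only this geometric substrate, not the
role-assignment modules themselves.

    import numpy as np
    from scipy.spatial import Delaunay

    def voronoi_neighbors(points):
        """Two dots are Voronoi neighbours exactly when they share an
        edge of the Delaunay triangulation of the dot pattern."""
        tri = Delaunay(points)
        neighbors = {i: set() for i in range(len(points))}
        for simplex in tri.simplices:   # each simplex is a triangle
            for i in simplex:
                for j in simplex:
                    if i != j:
                        neighbors[int(i)].add(int(j))
        return neighbors

    pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
    for dot, nbrs in voronoi_neighbors(pts).items():
        print(dot, sorted(nbrs))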
AN University Microfilms Order Number ADG87-01283.
AU YODER, CORNELIA MARIE.
IN Syracuse University Ph.D. 1986, 383 pages.
TI AN EXPERT SYSTEM FOR PROVIDING ON-LINE INFORMATION BASED ON
KNOWLEDGE OF INDIVIDUAL USER CHARACTERISTICS.
SO DAI V47(09), SecB, pp3858.
DE Computer Science.
AB In many interactive systems which provide information, such as
HELP systems, the form and content of the information presented
always seems to satisfy some people and frustrate others. Human
Factors textbooks and manuals for interactive systems focus on the
need for consistency and adherence to some standard. This
implicitly assumes that if the optimum format and level of detail
could be found for presenting information to a user, interactive
systems would only need to adhere to the standard to be optimum for
everyone. This approach neglects one of the most important factors
of all--differences in people. If these individualizing differences
in people could be identified, a system could be designed with
options built into it to accommodate different users. The role of
the intelligent active system should be more like that of a human
expert or consultant, who answers questions by first interpreting
them in terms of the user's knowledge and the context of his
activities and then recommending actions which may be otherwise
unknown to the user.
The HELP system developed in this study is an Expert System
written in PROLOG which uses logic programming rules to
intelligently provide needed information to a terminal user. It
responds to a request with a full screen display containing
information determined by the request, the user's cognitive style
and the user's experience level. The investigation studies the
relationship between cognitive-style and experience-level
parameters and individual preferences and efficacy with an
interactive computer information system. These factors are measured
by the ability of an individual user to perform unfamiliar tasks
using a HELP function as information source. The format of the
information provided by the HELP function is varied along three
dimensions and the content of the information is varied by three
levels of detail.
Experiments were performed with the system and experimental
results are presented which show some trends relating cognitive
style and individual preferences and performance using the system.
In addition, it is argued that an Expert System can perform such a
function effectively.
AN University Microfilms Order Number ADG87-03940.
AU YOSHII, RIKA.
IN University of California, Irvine Ph.D. 1986, 152 pages.
TI JETR: A ROBUST MACHINE TRANSLATION SYSTEM.
SO DAI V47(11), SecB, pp4586.
DE Computer Science.
AB This dissertation presents an expectation-based approach to
Japanese-to-English translation which deals with grammatical as well
as ungrammatical sentences and preserves the pragmatic, semantic and
syntactic information contained in the source text. The approach is
demonstrated by the JETR system, which is composed of the
particle-driven analyzer, the simultaneous generator and the context
analyzer. The particle-driven analyzer uses the forward
expectation-refinement process to handle ungrammatical sentences in
an elegant and efficient manner without relying on the presence of
particles and verbs in the source text. To achieve extensibility
and flexibility, ideas such as the detachment of control structure
from the word level, and the combination of top-down and bottom-up
processing have been incorporated. The simultaneous generator
preserves the syntactic style of the source text without carrying
syntactic information in the internal representation of the text.
No source-language parse tree needs to be constructed for the
generator. The context analyzer is able to provide contextual
information to the other two components without fully understanding
the text. JETR operates without pre-editing and post-editing, and
without interacting with the user except in special cases involving
unknown words.