leff@smu.UUCP (Laurence Leff) (12/20/88)
Subject: AI-Related Dissertations from SIGART No. 103 (only one file)
The following is a list of dissertation titles and
abstracts related to Artificial Intelligence taken
from the Dissertation Abstracts International
(DAI) database. The list is assembled by Susanne
Humphrey and myself and is published in the SIGART
Newsletter (that list doesn't include the abstracts).
The dissertation titles and abstracts contained here
are published with the permission of University Microfilms
International, publishers of the DAI database. University
Microfilms has granted permission for this list to be
redistributed electronically and for extracts and
hardcopies to be made of it, provided that this notice
is included and provided that the list is not sold.
Copies of the dissertations may be obtained by
addressing your request to:
University Microfilms International
Dissertation Copies
Post Office Box 1764
Ann Arbor, Michigan 48106
or by telephoning (toll-free) 1-800-521-3042
(except for Michigan, Hawaii, and Alaska).
In Canada: 1-800-268-6090.
From SIGART Newsletter No. 103
File 1 of 1
------------------------------------------------------------------------
AN University Microfilms Order Number ADG87-13824.
AU KRAEMER, JAMES RICHARD.
IN The University of Oklahoma Ph.D. 1987, 166 pages.
TI ADMINISTRATIVE CONTROL BY EXPERT SYSTEM: A FRAMEWORK FOR EXPERT
SYSTEMS IN MANAGEMENT.
SO DAI V48(03), SecA, pp694.
DE Business Administration, Management.
AB Administrative Control Expert Systems (ACES) can be developed to
control an administrative process in the same way that PERT, CPM,
and Gantt charts control a project. The fundamental difference
between an administrative process and a project is that tasks to be
performed are determined by policy and procedures, not a fixed
schedule. Traditional administrative control systems are driven by
data and rules, called policy, procedures, objectives, and budgets.
Expert systems are, likewise, driven by their data and rule base.
The ACES framework makes explicit the implicit similarity between
administrative control systems and expert systems.
The overall design of the ACES framework and each of its five
major subsystems are described. A methodology is presented to allow
the integration of an ACES into an existing transaction processing
environment. Extensions to the expert system methodology are
presented to provide capabilities to specifically support
administrators. Security issues for expert systems are discussed,
along with the integration of microcomputers into a mainframe ACES
environment.
A limited example of ACES rule structure, data, and processing
is included in an appendix.
AN University Microfilms Order Number ADG87-14769.
AU OLIVERO, RAMON ALFREDO.
IN University of Houston Ph.D. 1987, 272 pages.
TI SELECTION OF EXPERIMENTAL DESIGNS FOR ANALYTICAL CHEMISTRY WITH THE
AID OF AN EXPERT SYSTEM.
SO DAI V48(04), SecB, pp1028.
DE Chemistry, Analytical.
AB An expert system for selecting statistical experimental designs
in analytical chemistry has been designed and implemented.
The resulting expert system computer program (DXPERT) uses
information about the characteristics and constraints of the
analytical chemical system as well as information about the user's
interests and resources to assess the suitability of each of
thirteen candidate experimental designs.
A dedicated "inference engine" was constructed to utilize a
knowledge base containing the experience of an expert in the field
of statistical experimental design, the knowledge of this writer,
and information from the literature.
The selection of experimental designs is determined by the
answers given by the analytical chemist to the questions posed by
the expert system in an interactive consultation session. The
questions are presented in an order determined by a criterion of
maximum potential information gain. Fuzzy set logic and arithmetic
are applied to the knowledge representation and to the calculation
of the experimental designs' desirabilities.
The program operates on an IBM-PC (TM) or compatible personal
computer and is written in the Pascal language. It has user-friendly
features like "why" explanations, a help facility, reviewing and
revision options, and menus.
A number of test runs with representative problems were carried
out to validate the system and to evaluate its performance. It was
found that the system assigned appropriate desirabilities to
experimental designs in these test cases, as determined by
comparison with the solutions recommended by human experts.
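
The idea of ranking candidate experimental designs by an aggregated
desirability in [0, 1] can be illustrated with a small sketch. This is
not DXPERT itself: the design names, criteria, and weights below are
hypothetical, and a simple weighted average stands in for the fuzzy
arithmetic described in the dissertation.

    # Illustrative sketch only (not DXPERT): rank candidate experimental
    # designs by an aggregated "desirability" in [0, 1].
    # Criteria, weights, and membership degrees below are hypothetical.

    CRITERIA_WEIGHTS = {"few_runs": 0.3, "interactions": 0.4, "robustness": 0.3}

    # Degree (0..1) to which each candidate design satisfies each criterion.
    DESIGNS = {
        "full factorial": {"few_runs": 0.2, "interactions": 1.0, "robustness": 0.8},
        "fractional factorial": {"few_runs": 0.7, "interactions": 0.6, "robustness": 0.7},
        "Plackett-Burman": {"few_runs": 0.9, "interactions": 0.2, "robustness": 0.6},
    }

    def desirability(scores, weights):
        """Weighted-average aggregation of membership degrees."""
        return sum(weights[c] * scores[c] for c in weights)

    def rank_designs(designs, weights):
        ranked = sorted(designs, key=lambda d: desirability(designs[d], weights),
                        reverse=True)
        return [(d, round(desirability(designs[d], weights), 3)) for d in ranked]

    if __name__ == "__main__":
        for name, score in rank_designs(DESIGNS, CRITERIA_WEIGHTS):
            print(f"{name:22s} desirability = {score}")
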
AN University Microfilms Order Number ADG87-13877.
AU ACKLEY, DAVID HOWARD.
IN Carnegie-Mellon University Ph.D. 1987, 238 pages.
TI STOCHASTIC ITERATED GENETIC HILLCLIMBING.
SO DAI V48(03), SecB, pp808.
DE Computer Science.
AB In the "black box function optimization" problem, a search
strategy is required to find an extremal point of a function without
knowing the structure of the function or the range of possible
function values. Solving such problems efficiently requires two
abilities. On the one hand, a strategy must be capable of learning
while searching: It must gather global information about the space
and concentrate the search in the most promising regions. On the
other hand, a strategy must be capable of sustained exploration: If
a search of the most promising region does not uncover a
satisfactory point, the strategy must redirect its efforts into
other regions of the space.
This dissertation describes a connectionist learning machine
that produces a search strategy called stochastic iterated genetic
hillclimbing (SIGH). Viewed over a short period of time, SIGH
displays a coarse-to-fine searching strategy, like simulated
annealing and genetic algorithms. However, in SIGH the convergence
process is reversible. The connectionist implementation makes it
possible to diverge the search after it has converged, and to
recover coarse-grained information about the space that was
suppressed during convergence. The successful optimization of a
complex function by SIGH usually involves a series of such
converge/diverge cycles.
SIGH can be viewed as a generalization of a genetic algorithm
and a stochastic hillclimbing algorithm, in which genetic search
discovers starting points for subsequent hillclimbing, and
hillclimbing biases the population for subsequent genetic search.
Several search strategies--including SIGH, hillclimbers, genetic
algorithms, and simulated annealing--are tested on a set of
illustrative functions and on a series of graph partitioning
problems. SIGH is competitive with genetic algorithms and simulated
annealing in most cases, and markedly superior in a function where
the uphill directions usually lead away from the global maximum. In
that case, SIGH's ability to pass information from one
coarse-to-fine search to the next is crucial. Combinations of
genetic and hillclimbing techniques can offer dramatic performance
improvements over either technique alone.
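
The closing observation, that genetic search can supply starting
points for hillclimbing and hillclimbing can bias the next round of
genetic search, can be illustrated with a small generic hybrid
optimizer. The sketch below is not SIGH (it has none of the
connectionist machinery); the bit-string objective, population size,
and operator settings are assumptions.

    # Illustrative genetic-search/hillclimbing hybrid (NOT the SIGH machine).
    # Objective, population size, and operator settings are assumptions.
    import random

    N = 20                                    # bit-string length

    def fitness(bits):
        """Toy objective: number of 1-bits (maximum at all ones)."""
        return sum(bits)

    def hillclimb(bits):
        """Single-bit-flip hillclimbing until no flip improves fitness."""
        improved = True
        while improved:
            improved = False
            for i in range(len(bits)):
                cand = bits[:]
                cand[i] ^= 1
                if fitness(cand) > fitness(bits):
                    bits, improved = cand, True
        return bits

    def genetic_step(pop):
        """One generation: tournament selection, one-point crossover, mutation."""
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < len(pop):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, N)
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.1:             # occasional mutation
                child[random.randrange(N)] ^= 1
            new_pop.append(child)
        return new_pop

    if __name__ == "__main__":
        pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(30)]
        for generation in range(20):
            pop = genetic_step(pop)
            # genetic search proposes starting points; hillclimbing refines
            # the best one, and the refined point biases the next generation
            best = max(pop, key=fitness)
            pop[pop.index(best)] = hillclimb(best)
        print("best fitness:", fitness(max(pop, key=fitness)))
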
AN This item is not available from University Microfilms International
ADG05-60485.
AU BAPA RAO, KOTCHERLAKOTA V.
IN University of Southern California Ph.D. 1987.
TI AN EXTENSIBLE OBJECT-ORIENTED FRAMEWORK FOR ENGINEERING DESIGN
DATABASES.
SO DAI V48(04), SecB, pp1095.
DE Computer Science.
AB This thesis describes DOM (Design Object Model), a model of
objects in a database to support computer-aided design of complex
artifacts such as VLSI chips and software systems. Most database
models are designed with administrative domains in view and hence
are ill-suited to cope with the complex structural hierarchies,
multiple representations, and incremental evolution at both the
object and meta-object (schema) levels that are typical of design
objects. DOM aims to provide a generic framework of high-level
concepts for representing these aspects of data and meta-data in the
design environment. Important considerations in the design of DOM
are uniformity of representation, integration of concepts, and the
ability to represent design data and the more conventional kinds of
data in a common framework. DOM is object-oriented in that it seeks
to directly capture the properties of real-world objects; it is
extensible in that a DOM database and schema can be incrementally
extended to accommodate evolution in the real world.
DOM has been developed in two phases. First, the conceptual
requirements of design data models are formulated as a set of
abstract concepts obtained by analyzing the properties of design
environments. These are organized into four dimensions: (1) Static
structure describing the design object considered as a static
entity; (2) Evolution structure describing the evolutionary stages
of design and their relationships; (3) Level denoting whether an
object is an extensional object or a schema; (4) Originality
denoting whether an object is a design in its own right or an
instantiation of a design.
In the second phase, the abstract concepts are mapped to a
simple object-based data model, and thus articulated as concrete
concepts. Simple and compound objects realize the static structure
dimension. Generic and realization objects implement evolution
structure permitting multiple evolutionary alternatives. Schema
objects represent meta-designs, and copy objects represent
instantiations. These concepts are extended in the static structure
dimension to enable the description of design objects via
abstractions such as interface, implementation, and views, the last
denoting multiple representations. The application of DOM is
demonstrated by modelling the domain of VLSI design objects.
(Copies available exclusively from Micrographics Department, Doheny
Library, USC, Los Angeles, CA 90089-0182.).
AN University Microfilms Order Number ADG87-13885.
AU CHRIST, JAMES P.
IN Carnegie-Mellon University Ph.D. 1987, 138 pages.
TI SHAPE ESTIMATION AND OBJECT RECOGNITION USING SPATIAL PROBABILITY
DISTRIBUTIONS.
SO DAI V48(03), SecB, pp809.
DE Computer Science.
AB This thesis describes an algorithm for performing object
recognition and shape estimation from sparse sensor data. The
algorithm is based on a spatial likelihood map which estimates the
probability density for the surface of the object in space. The
spatial likelihood map is calculated using an iterative, finite
element approach based on a local probabilistic model for the
object's surface. This algorithm is particularly useful for
problems involving tactile sensor data. An object classification
algorithm using the spatial likelihood map was developed and
implemented using simulated tactile data. The implementation for
the tactile problem was in two dimensions for the sake of clarity
and computational speed, and is easily generalized to three
dimensions. The spatial likelihood map is also useful for
multi-sensor data fusion problems. This is illustrated with an
application drawn from the study of mobile robots.
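
A crude two-dimensional illustration of the underlying idea,
accumulating sparse contact measurements into a grid of
surface-likelihood values, is given below. The Gaussian spreading and
normalization used here are assumptions and merely stand in for the
dissertation's iterative finite-element computation.

    # Crude 2-D illustration of building a surface-likelihood grid from
    # sparse sensor contacts.  The Gaussian update is an assumption, not
    # the dissertation's finite-element formulation.
    import math

    GRID = 32
    SIGMA = 1.5        # assumed sensor position uncertainty (grid cells)

    def likelihood_map(contacts):
        grid = [[0.0] * GRID for _ in range(GRID)]
        for cx, cy in contacts:               # each contact spreads probability
            for y in range(GRID):
                for x in range(GRID):
                    d2 = (x - cx) ** 2 + (y - cy) ** 2
                    grid[y][x] += math.exp(-d2 / (2 * SIGMA ** 2))
        total = sum(sum(row) for row in grid)  # normalise so the map sums to 1
        return [[v / total for v in row] for row in grid]

    if __name__ == "__main__":
        # three simulated tactile contacts along one face of an object
        m = likelihood_map([(10, 8), (10, 16), (10, 24)])
        peak = max(max(row) for row in m)
        print(f"peak cell likelihood: {peak:.4f}")
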
AN University Microfilms Order Number ADG87-14481.
AU KELLER, RICHARD MICHAEL.
IN Rutgers University, The State U. of New Jersey (New Brunswick)
Ph.D. 1987, 352 pages.
TI THE ROLE OF EXPLICIT CONTEXTUAL KNOWLEDGE IN LEARNING CONCEPTS TO
IMPROVE PERFORMANCE.
SO DAI V48(03), SecB, pp811.
DE Computer Science.
AB This dissertation addresses some of the difficulties encountered
when using artificial intelligence-based, inductive concept learning
methods to improve an existing system's performance. The underlying
problem is that inductive methods are insensitive to changes in the
system being improved by learning. This insensitivity is due to the
manner in which contextual knowledge is represented in an inductive
system. Contextual knowledge consists of knowledge about the
context in which concept learning takes place, including knowledge
about the desired form and content of concept descriptions to be
learned (target concept knowledge), and knowledge about the system
to be improved by learning and the type of improvement desired
(performance system knowledge). A considerable amount of contextual
knowledge is "compiled" by an inductive system's designers into its
data structures and procedures. Unfortunately, in this compiled
form, it is difficult for the learning system to modify its
contextual knowledge to accommodate changes in the learning context
over time.
This research investigates the advantages of making contextual
knowledge explicit in a concept learning system by representing that
knowledge directly, in terms of express declarative structures. The
thesis of this research is that aside from facilitating adaptation
to change, explicit contextual knowledge can support two additional
capabilities not supported in most existing inductive systems.
First, using explicit contextual knowledge, a system can learn
approximate concept descriptions when necessary or desirable in
order to improve performance. Second, with explicit contextual
knowledge, a learning system can generate its own concept learning
tasks.
To investigate the thesis, this study introduces an alternative
concept learning framework--the concept operationalization
framework--that requires various types of contextual knowledge as
explicit inputs. To test this new framework, an existing inductive
concept learning system (the LEX system [Mitchell et al. 81]) was
rewritten as a concept operationalization system (the MetaLEX
system). This document describes the design of MetaLEX and reports
the results of several experiments performed to test the system.
Results confirm the utility of explicit contextual knowledge, and
suggest possible improvements in the representations and methods
used by the system.
AN University Microfilms Order Number ADG87-14072.
AU LANKA, SITARAMASWAMY VENKATA.
IN University of Pennsylvania Ph.D. 1987, 148 pages.
TI AN AID TO DATABASE DESIGN: AN INDUCTIVE INFERENCE APPROACH.
SO DAI V48(03), SecB, pp811.
DE Computer Science.
AB The conventional approach to the design of databases has the
drawback that, to specify a database schema, the user is required to
have knowledge about both the domain and the data model. That is,
the onus of encoding the domain information in terms of concepts
foreign to the domain falls on the user. The goal of this research
is to free the user of such burdens. We propose a system that
designs a database based on its functional requirements. The user
need only provide information on how the database is expected to be
used, and the system infers a schema from this. Furthermore, the
information is expressed in a language which is independent of the
underlying data model.
The above problem has been cast as an inductive inference
problem. The input is in the form of Natural Language (English)
queries and a conceptual database schema is inferred from this. The
crux of the inference mechanism is that the hypotheses are
synthesized compositionally and this is described in terms of
Knuth's attribute grammars.
In certain situations the inference mechanism has the potential
to synthesize false hypotheses. We have advanced a method to detect
these potentially false hypotheses, and refine them to obtain
acceptable hypotheses.
A prototype of such a system has been implemented on the
Symbolics Lisp machine.
AN University Microfilms Order Number ADG87-12346.
AU MICHON, GERARD PHILIPPE.
IN University of California, Los Angeles Ph.D. 1983, 125 pages.
TI RECURSIVE RANDOM GAMES: A PROBABILISTIC MODEL FOR PERFECT
INFORMATION GAMES.
SO DAI V48(03), SecB, pp813.
DE Computer Science.
AB A simple probabilistic model for game trees is described which
exhibits features likely to be found in realistic games. The model
allows any node to have n offspring (including n = 0) with
probability f(n) and assigns each terminal node a WIN status with
probability p and a LOSS status with probability q = 1 - p. Our
model may include infinite game trees and/or games that never end
when played perfectly. The statistical properties of games and the
computational complexities of various game solving approaches are
quantified and compared. A simple analysis of game pathology and
quiescence is also given. The model provides a theoretical
justification for the observed good behavior of game-playing
programs whose search horizon is not rigid. Pathological features
that were recently found to be inherent in some former game models
are put in a new perspective.
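
The model can be made concrete with a short Monte Carlo sketch: every
node independently receives n children with probability f(n),
childless nodes are a WIN for the side to move with probability p,
and a position is won exactly when some move leads to a lost position
for the opponent. The particular f and p values, and the depth cutoff
used to keep sampled trees finite, are illustrative assumptions.

    # Monte Carlo sketch of the recursive random game model.  The f(n)
    # distribution, p, and the depth cutoff are illustrative assumptions.
    import random

    F = {0: 0.3, 2: 0.4, 3: 0.3}       # offspring-count distribution f(n)
    P_WIN = 0.5                        # terminal WIN probability p

    def num_children():
        r, acc = random.random(), 0.0
        for n, prob in F.items():
            acc += prob
            if r < acc:
                return n
        return 0

    def solve(depth_limit=10):
        """True if the side to move wins the randomly generated (sub)game."""
        n = num_children() if depth_limit > 0 else 0
        if n == 0:
            return random.random() < P_WIN        # terminal node
        # a node is a win iff some move leads to a loss for the opponent
        return any(not solve(depth_limit - 1) for _ in range(n))

    if __name__ == "__main__":
        trials = 1000
        wins = sum(solve() for _ in range(trials))
        print(f"estimated P(first player wins) ~= {wins / trials:.3f}")
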
AN University Microfilms Order Number ADG87-13872.
AU MUELLER, ERIK THOMAS.
IN University of California, Los Angeles Ph.D. 1987, 763 pages.
TI DAYDREAMING AND COMPUTATION: A COMPUTER MODEL OF EVERYDAY
CREATIVITY, LEARNING, AND EMOTIONS IN THE HUMAN STREAM OF THOUGHT.
SO DAI V48(03), SecB, pp813.
DE Computer Science.
AB This dissertation presents a computational theory of
daydreaming: the spontaneous human activity--carried out in a stream
of thought--of recalling past experiences, imagining alternative
versions of past experiences, and imagining possible future
experiences. Although on the face of it, daydreaming may seem like
a useless distraction from a task being performed, we argue that
daydreaming serves several important functions for both humans and
computers: (1) learning from imagined experiences, (2) creative
problem solving, and (3) a useful interaction with emotions.
The theory is implemented within a computer program called
DAYDREAMER which models the daydreaming of a human in the domain of
interpersonal relations and common everyday occurrences. As input,
DAYDREAMER takes English descriptions of external world events. As
output, the program produces English descriptions of (1) actions it
performs in the external world and (2) its internal "stream of
thought" or "daydreams": sequences of events in imaginary past and
future worlds.
Five major research issues are considered: (1) the generation
and incremental modification of realistic and fanciful solutions or
daydreams, (2) focusing attention in the presence of multiple active
problems, (3) the recognition and exploitation of accidental
relationships among problems, (4) the use of previous solutions or
daydreams in generating new solutions or daydreams, and (5) the
interaction between emotions and daydreaming.
DAYDREAMER consists of a collection of processing mechanisms and
strategies which address each of the above issues: (1) a planner, a
collection of personal goals, daydreaming goals, and planning and
inference rules for the domain, and a mutation mechanism; (2) a
control mechanism based on emotions as motivation; (3) a serendipity
mechanism; (4) an analogical planner which stores, retrieves, and
applies solutions or daydreams in a long-term episodic memory; and
(5) mechanisms for initiating and modifying emotions during
daydreaming and for influencing daydreaming in response to emotions.
DAYDREAMER is able to generate a number of daydreams and
demonstrate how daydreaming enables learning, creative problem
solving, and a useful interaction with emotions.
AN University Microfilms Order Number ADG87-14099.
AU NADATHUR, GOPALAN.
IN University of Pennsylvania Ph.D. 1987, 169 pages.
TI A HIGHER-ORDER LOGIC AS THE BASIS FOR LOGIC PROGRAMMING.
SO DAI V48(03), SecB, pp813.
DE Computer Science.
AB The objective of this thesis is to provide a formal basis for
higher-order features in the paradigm of logic programming. Towards
this end, a non-extensional form of higher-order logic that is based
on Church's simple theory of types is used to provide a
generalisation to the definite clauses of first-order logic.
Specifically, a class of formulas that are called higher-order
definite sentences is described. These formulas extend definite
clauses by replacing first-order terms by the terms of a typed
lambda-calculus and by providing for quantification over predicate
and function variables. It is shown that these formulas, together
with the notion of a proof in the higher-order logic, provide an
abstract description of computation that is akin to the one in the
first-order case. While the construction of a proof in a
higher-order logic is often complicated by the task of finding
appropriate substitutions for predicate variables, it is shown that
the necessary substitutions for predicate variables can be tightly
constrained in the context of higher-order definite sentences. This
observation enables the description of a complete theorem-proving
procedure for these formulas. The procedure constructs proofs
essentially by interweaving higher-order unification with
backchaining on implication, and constitutes a generalisation, to
the higher-order context, of the well-known SLD-resolution procedure
for definite clauses. The results of these investigations are used
to describe a logic programming language called lambda Prolog. This
language contains all the features of a language such as Prolog,
and, in addition, possesses certain higher-order features. The
nature of these additional features is illustrated, and it is shown
how the use of the terms of a (typed) lambda-calculus as data
structures provides a source of richness to the logic programming
paradigm.
AN University Microfilms Order Number ADG87-13897.
AU OFLAZER, KEMAL.
IN Carnegie-Mellon University Ph.D. 1987, 210 pages.
TI PARTITIONING IN PARALLEL PROCESSING OF PRODUCTION SYSTEMS.
SO DAI V48(03), SecB, pp814.
DE Computer Science.
AB This thesis presents research on certain issues related to
parallel processing of production systems. It first presents a
parallel production system interpreter that has been implemented on
a four-processor multiprocessor. This parallel interpreter is based
on Forgy's OPS5 interpreter and exploits production-level
parallelism in production systems. Runs on the multiprocessor
system indicate that it is possible to obtain speed-up of around 1.7
in the match computation for certain production systems when
productions are split into three sets that are processed in parallel.
However, for production systems that are already relatively fast on
uniprocessors, the communication overhead imposed by the
implementation environment essentially offsets any gains when
productions are split for parallel match.
The next issue addressed is that of partitioning a set of rules
to processors in a parallel interpreter with production-level
parallelism, and the extent of additional improvement in performance.
The partitioning problem is formulated and an algorithm for
approximate solutions is presented. Simulation results from a
number of OPS5 production systems indicate that partitionings using
information about run time behaviour of the production systems can
improve the match performance by a factor of 1.10 to 1.25, compared
to partitionings obtained using various simpler schemes.
The thesis next presents a parallel processing scheme for OPS5
production systems that allows some redundancy in the match
computation. This redundancy enables the processing of a production
to be divided into units of medium granularity each of which can be
processed in parallel. Subsequently, a parallel processor
architecture for implementing the parallel processing algorithm is
presented. This architecture is based on an array of simple
processors which can be clustered into groups of potentially
different sizes, each group processing an affected production during
a cycle of execution. Simulation results for a number of production
systems indicate that the proposed algorithm performs better than
other proposed massively parallel architectures such as DADO or
NON-VON, which use a much larger number of processors. However, for
certain systems, the performance is in the same range as, or
sometimes worse than, that obtainable with a parallel interpreter
based on Forgy's RETE algorithm, such as an interpreter using
production-level parallelism implemented on a small number of
powerful processors, or an interpreter based on Gupta's parallel
version of the RETE algorithm implemented on a shared-memory
multiprocessor with 32-64 processors.
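
The flavor of partitioning productions across processors by measured
match cost can be conveyed with a standard greedy load-balancing
heuristic. The sketch below is not the dissertation's algorithm; the
production names and per-production costs are made up.

    # Illustrative greedy partitioning of productions across processors by
    # measured match cost (longest-processing-time heuristic).  Names and
    # costs are made up; the dissertation's actual algorithm differs.
    import heapq

    def partition(costs, num_processors):
        """costs: {production: avg match cost}.  Returns one list per processor."""
        heap = [(0.0, i) for i in range(num_processors)]   # (load, processor id)
        heapq.heapify(heap)
        buckets = [[] for _ in range(num_processors)]
        for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
            load, i = heapq.heappop(heap)                  # least-loaded processor
            buckets[i].append(name)
            heapq.heappush(heap, (load + cost, i))
        return buckets

    if __name__ == "__main__":
        measured = {"p1": 9.0, "p2": 7.5, "p3": 4.0, "p4": 3.5, "p5": 1.0, "p6": 0.5}
        for i, bucket in enumerate(partition(measured, 3)):
            print(f"processor {i}: {bucket}")
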
AN University Microfilms Order Number ADG87-14990.
AU POTTER, WALTER DONNELL.
IN University of South Carolina Ph.D. 1987, 247 pages.
TI A KNOWLEDGE-BASED APPROACH TO ENTERPRISE MODELING: THE FOUNDATION.
SO DAI V48(04), SecB, pp1097.
DE Computer Science.
AB This dissertation describes the Knowledge/Data Model. The
description includes the modeling foundation and primitives, the
representational paradigm, a formal schema specification language,
and a prototype implementation based upon the model. The
Knowledge/Data Model captures both knowledge semantics, as specified
in Knowledge Based Systems, and data semantics, as represented by
Semantic Data Models. The Knowledge/Data Model can be thought of as
an instance of a newly defined class of data models, called
Hyper-Semantic Data Models, that facilitate the incorporation of
knowledge in the form of heuristics, uncertainty, constraints and
other Artificial Intelligence Concepts, together with
object-oriented concepts found in Semantic Data Models. The unified
knowledge/data modeling features provide a novel mechanism for
combining Artificial Intelligence and Database Management techniques
to establish the foundation of a Knowledge/Data Model for an Expert
Database System. These features are provided via the constructs of
the specification language, called the Knowledge/Data Language.
The Knowledge/Data Language is the formal specification language
for the Knowledge/Data Model. It is characterized as a context-free
language and is represented by a collection of grammar rules that
specify the syntax of the language. The constructs of the language
allow the features of the Knowledge/Data Model to be utilized in a
modeling situation. In addition to being context-free, the
Knowledge/Data Language is self-descriptive (sometimes referred to
as self-referential). Throughout the dissertation, modeling
examples, including the prototype application description, are
presented using the language.
AN University Microfilms Order Number ADG87-14121.
AU SCHOCKEN, SHIMON.
IN University of Pennsylvania Ph.D. 1987, 308 pages.
TI ON THE UNDERLYING RATIONALITY OF NON-DETERMINISTIC RULE-BASED
INFERENCE SYSTEMS: A DECISION SCIENCES PERSPECTIVE.
SO DAI V48(04), SecB, pp1099.
DE Computer Science.
AB This research investigates the underlying rationality of several
leading mechanisms in artificial intelligence designed to elicit,
represent, and synthesize experts' belief: Bayesian inference, the
certainty factors (CF) calculus, and an ad-hoc Bayesian inference
mechanism. The research methodology includes a review of the
philosophical foundations of these "belief languages," a
mathematical analysis of their proximity to a classical Bayesian
belief updating model, and an empirical comparison of their
performance in a controlled experiment involving human subjects and
their corresponding computer-based expert systems.
The major analytic finding is that the certainty factors
language is a special case of the Bayesian language. This implies
that the certainty factors language is consistent with its Bayesian
interpretation if and only if it is restricted to a very small
subset of realistic inference problems. However, the widely-used CF
language might perform better than its Bayesian counterpart due to
the greater semantic appeal of the former.
With this in mind, the thesis compares the descriptive and
external validity of the three languages in a controlled experiment.
The major empirical results are (a) within the limited context of
this experiment, neither the certainty factors nor the Bayesian
language dominates the other in terms of descriptive validity,
defined as the proximity of the system's judgment to actual experts'
judgment; and (b) the correlation between the computer-based
Bayesian judgment and the pooled expert judgments is significantly
greater than the corresponding CF correlation.
To sum up, this research shows that the classical Bayesian
approach to rule-based inference appears to dominate the certainty
factors language, both on analytic and empirical grounds. At the
same time, the proven success of CF-based systems (e.g. MYCIN) and
their wide popularity suggest that the CF approach to inference is
indeed appealing to many designers and users of expert systems. It
is suggested that future research attempt to formulate a synthetic
approach to knowledge engineering, i.e. one that combines the
attractive descriptive features of the CF language with the
normative rigor of a Bayesian design. It is hoped that this will
strike a balance between preserving the intuitive element of human
reasoning, and, at the same time, enforcing a certain degree of
normative rationality.
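
The two belief languages being compared can be summarized by their
textbook update rules: the EMYCIN-style combination of certainty
factors and odds-likelihood Bayesian updating. The sketch below shows
both side by side; the numerical inputs are illustrative only.

    # Side-by-side sketch of the two belief-update schemes compared above.
    # Standard textbook formulas; the numbers are illustrative only.

    def combine_cf(cf1, cf2):
        """EMYCIN-style combination of two certainty factors."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    def bayes_update(prior, likelihood_ratio):
        """Odds-likelihood Bayesian update: posterior odds = LR * prior odds."""
        odds = prior / (1 - prior)
        post_odds = likelihood_ratio * odds
        return post_odds / (1 + post_odds)

    if __name__ == "__main__":
        print("CF(0.6) then CF(0.4)  ->", round(combine_cf(0.6, 0.4), 3))
        p = bayes_update(0.30, 4.0)      # prior 0.30, evidence with LR = 4
        p = bayes_update(p, 2.0)         # second piece of evidence, LR = 2
        print("Bayesian posterior    ->", round(p, 3))
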
AN University Microfilms Order Number ADG87-15371.
AU UMRIGAR, ZERKSIS DHUNJISHA.
IN Syracuse University Ph.D. 1986, 259 pages.
TI AUTOMATION OF HARDWARE-CORRECTNESS PROOFS.
SO DAI V48(04), SecB, pp1099.
DE Computer Science.
AB The ubiquity of the digital computer and its use in critical
applications make verification of its correctness an extremely
important issue. Unfortunately, present verification methodologies,
which rely almost exclusively on simulation, have difficulty
handling the complexity of modern hardware designs. In this
dissertation we explore an alternate verification methodology in
which the functional correctness of a design is proved using formal
proof techniques.
To prove the correctness of a design, a formal hardware
verification system is given two formal descriptions of the design
which correspond to a functional specification and an implementation.
It must then establish an implication or equivalence between these
two descriptions. This can be done using exhaustive simulation, but
this is slow and cannot be used to verify parameterized circuits. A
more general method is to use algebraic simulation to derive
verification conditions and then use a theorem prover to establish
the validity of these verification conditions.
An interactive general purpose theorem prover which is a partial
decision procedure for first-order logic is used as a shell for more
efficient but specialized algorithms. A specialized algorithm,
called the bounds algorithm, is used to establish the validity of
formulas involving universally quantified linear inequalities over
the integer domain. This algorithm is goal-directed and is easily
extended to handle some properties of interpreted functions.
Theoretical properties of these theorem proving procedures are
established.
The usefulness of the formal verification system is limited by
its theorem proving component. It has successfully been used to
verify the functional correctness of simple arithmetic circuits,
including an array multiplier.
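
The exhaustive-simulation baseline mentioned above, checking that an
implementation and its functional specification agree on every input,
is easy to illustrate for a tiny combinational circuit. The sketch
below checks a gate-level full adder against an arithmetic
specification; it shows only the brute-force method the dissertation
contrasts with, not the theorem-proving approach it develops.

    # Minimal sketch of exhaustive-simulation equivalence checking for a
    # tiny combinational circuit: gate-level full adder vs arithmetic spec.
    from itertools import product

    def spec(a, b, cin):
        s = a + b + cin
        return s & 1, (s >> 1) & 1             # (sum, carry)

    def implementation(a, b, cin):
        # gate-level full adder
        p = a ^ b
        total = p ^ cin
        carry = (a & b) | (p & cin)
        return total, carry

    if __name__ == "__main__":
        ok = all(spec(a, b, c) == implementation(a, b, c)
                 for a, b, c in product((0, 1), repeat=3))
        print("implementation matches specification:", ok)
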
AN University Microfilms Order Number ADG87-13707.
AU ZERNIK, URI.
IN University of California, Los Angeles Ph.D. 1987, 346 pages.
TI STRATEGIES IN LANGUAGE ACQUISITION: LEARNING PHRASES IN CONTEXT.
SO DAI V48(03), SecB, pp815.
DE Computer Science.
AB How is language acquired by people, and how can we make
computers simulate language acquisition? Although current
linguistic models have extensively investigated parsing and
generation, so far there has been no model of learning new lexical
phrases from examples in context.
We have identified four issues in language acquisition. (a) How
can a phrase be extracted from a single example? (b) How can
phrases be refined as further examples are provided? (c) How can
the context be incorporated as part of a new phrase? (d) How can
acquired phrases be used in parsing and in generation?
In solving these problems, we have established three theoretical
points. (a) We have shown how a dynamic lexicon is structured as a
phrasal hierarchy. (b) We have constructed strategies for learning
phrases. (c) We have constructed a parsing mechanism which can
operate even in the presence of lexical gaps.
The program RINA has incorporated these elements in modeling a
second-language speaker who augments her lexical knowledge by being
exposed to examples in context.
AN University Microfilms Order Number ADG87-13708.
AU CRAIG, ELAINE M.
IN University of California, Los Angeles Ph.D. 1987, 283 pages.
TI EXPERT AND NOVICE PROBLEM SOLVING IN A COMPLEX COMPUTER GAME.
SO DAI V48(03), SecA, pp600.
DE Education, Psychology.
AB This study examined the problem solving processes involved in
playing a complex computer game and explored the utility of computer
games for research on problem solving and for instruction in problem
solving skills. The study characterized and compared the problem
solving behaviors of "expert" and "novice" game players. It
compared expert/novice contrasts in computer game players with
expert/novice problem solving differences in other domains such as
physics, computer programming, and errand planning. The study also
looked at the changes in problem solving behaviors that occurred
when novices moved toward expert play and considered the potential
for incorporating computer game activities in problem solving
instructional programs.
The Opportunistic Planning Model (OPM) (Hayes-Roth & Hayes-Roth,
1978, 1979) provided the theoretical basis and the methodological
framework for the study which looked at the problem solving
behaviors of 18 university undergraduates playing an "off-the-shelf"
computer game. Measures of subjects' problem solving behaviors
included audio recordings of what they said while playing the game
("think aloud" protocols), detailed observations of their game play,
and interviews before and after game play. Data were analyzed using
t-tests and chi-square tests.
The study found the following problem solving behaviors to be
associated with success at a computer game: making high level
decisions, exploiting world knowledge, showing sensitivity to
constraints, clustering tasks, using a system to organize
information, considering alternatives, and assessing the state of
one's knowledge. The study found very few increases in problem
solving behaviors as subjects became more experienced with the game.
It also found that computer game play involved subjects in many of
the same activities that are incorporated in problem solving
instructional programs.
AN University Microfilms Order Number ADG87-13882.
AU BUSHNELL, MICHAEL LEE.
IN Carnegie-Mellon University Ph.D. 1987, 250 pages.
TI ULYSSES -- AN EXPERT-SYSTEM BASED VLSI DESIGN ENVIRONMENT.
SO DAI V48(03), SecB, pp833.
DE Engineering, Electronics and Electrical.
AB Ulysses is a VLSI computer-aided design (CAD) environment which
effectively addresses the problems associated with CAD tool
integration. Specifically, Ulysses allows the integration of CAD
tools into a design automation (DA) system, the codification of a
design methodology, and the representation of a design space.
Ulysses keeps track of the progress of a design and allows
exploration of the design space. The environment employs artificial
intelligence techniques, functions as an interactive expert system,
and interprets descriptions of design tasks encoded in the scripts
language.
An integrated circuit silicon compilation task is presented as
an example of the ability of Ulysses to automatically execute CAD
tools to solve a problem where inferencing is required to obtain a
viable VLSI layout. The inferencing mechanism, in the form of a
controlled production system, allows Ulysses to recover when routing
channel congestion or over-constrained leaf-cell boundary conditions
make it impossible for CAD tools to complete layouts. Also, Ulysses
allows the designer to intervene while design activities are being
carried out. Consistency maintenance rules encoded in the scripts
language enforce geometric floor plan consistency when CAD tools
fail and when the designer makes adjustments to a VLSI chip layout.
Consistency maintenance is discussed extensively using floor
planning, leaf-cell synthesis, and channel routing tasks as
examples.
Ulysses has been implemented as a computer program and a chip
layout that was semi-automatically generated by Ulysses is presented
to illustrate the performance of the program.
AN University Microfilms Order Number ADG87-13900.
AU SAUK, BRIAN EDWARD.
IN Carnegie-Mellon University Ph.D. 1987, 98 pages.
TI LEILA: AN EXPERT SYSTEM FOR ESTIMATING CHEMICAL REACTION RATES.
SO DAI V48(03), SecB, pp840.
DE Engineering, Electronics and Electrical.
AB This work describes an expert system, named Leila, capable of
estimating chemical reaction rates. These estimates are based on
fundamental data and a hierarchy of reaction rate theories. The
theories are encoded in the form of production rules, and the expert
system methodology chosen for Leila is that of a production rule
system. Unlike most production systems, the rules in Leila are
segmented into nodes. Nodes represent knowledge about a specific
area of the reaction rate domain. During a rate determination,
attention is focused on only one node at a time, thus minimizing the
number of rules that need to be considered at each step. In
addition, since nodes represent a specific area of expertise,
extensions and modifications are simplified, since they only deal
with a small portion of the knowledge base.
Leila also provides a model for the solution of reaction rate
problems. The steps of this model are defined by rules, thereby
enabling modifications to the model without extensive recoding.
During a rate determination, Leila checks for balanced reactions,
classifies reactions, performs rate determinations based on
hierarchies of theories, estimates unknown data, performs any unit
conversions, and shows the solution path taken by the determination,
if requested.
The rate theories present in Leila deal primarily with
low-pressure gas phase reactions, and in particular, recombination
and ionization reactions. A summary of the reactions that Leila can
handle is given. For some reactions, many theories apply, while for
others, only one theory can be used.
A number of comparisons to experimental data are also presented.
In many cases, the theoretical estimates are in good agreement with
experiment, while for others agreement is poor. Reasons for
disagreement are given.
AN University Microfilms Order Number ADG87-13817.
AU CHEN, JEN-GWO.
IN The University of Oklahoma Ph.D. 1987, 143 pages.
TI PROTOTYPE EXPERT SYSTEM FOR PHYSICAL WORK STRESS ANALYSIS (dBASE
III).
SO DAI V48(03), SecB, pp845.
DE Engineering, Industrial.
AB This research involves the development of an interactive
knowledge-based Ergonomics Analysis SYstem (EASY) for physical work
stress analysis. EASY was written in dBASE III and BASIC for IBM-PC
compatible microcomputers. The system consists of three major
components: the Physical Work Stress Index (PWSI) used by the
supervisor or ergonomist for further investigation of problem
situations, the Ergonomics Information Analysis System (EIAS) for
evaluation of tasks by the worker, and the Dynamic Lifting Analysis
System (DLAS) for manual material handling tasks.
The Physical Work Stress Index is an observational method of
physical work stress analysis which possesses the ease of
application of traditional work study techniques but provides better
accounting of human and task variables. The technique involves
activity sampling of various physical components of the work
including body location, base of support, orientation, hand
position, acceleration and thermal load. The PWSI is derived from
observational data and is classified into six different levels: very
low, low, moderate, high, very high and extremely high. The EIAS
includes four sections: case identification, problem description,
job description and operator-operation interaction. The last two
sections record quantitative data as opposed to the qualitative data
collected in the first two sections. The quantitative data consists
of a 5-point scale which describes the seriousness of each aspect of
the problem. The EIAS provides general guidelines to tell the user
how to avoid unnecessary problems and improve performance. The DLAS
includes three components: lifting capacity analysis, biomechanical
analysis and NIOSH guidelines analysis.
Extensive use of menus for database entry/editing and analysis
provides an efficient and friendly interface design. The system was
evaluated by comparing the results of EASY, and of individuals with
an introductory knowledge of ergonomics, with experts' conclusions for
nine test jobs involving a variety of physical work stressors. The
evaluation indicated that 83% of EASY's diagnoses were accepted by
the experts with some variation between individual experts and
between EASY and the other diagnosticians.
AN University Microfilms Order Number ADG87-14897.
AU WU, SZU-YUNG DAVID.
IN The Pennsylvania State University Ph.D. 1987, 235 pages.
TI AN EXPERT SYSTEM APPROACH FOR THE CONTROL AND SCHEDULING OF FLEXIBLE
MANUFACTURING CELLS.
SO DAI V48(04), SecB, pp1125.
DE Engineering, Industrial.
AB An expert system is a computer program that uses knowledge and
inference procedures to solve problems. Today, most expert systems
contain a substantial amount of domain expertise (i.e., knowledge)
organized for efficient problem solving. However, most of the
existing design philosophies for expert systems do not lend
themselves to real-time control environments. Expert systems are
currently being touted as a means of resolving factory scheduling
problems. Unfortunately, the expert systems developed to date are
neither generic nor responsive enough to be used for on-line system
control.
In this research, an architecture is created which takes
advantage of both expert system technology and discrete event
simulation. The simulation is used as a prediction mechanism to
evaluate several possible control alternatives provided by the
expert system. A performance measure is obtained from the
simulation for each of the suggested alternatives. A control
effector is then employed to affect the physical control of the cell
based on the performance measure. This performance measure is worth
a great deal of domain-specific knowledge that would otherwise have
to be included in the expert knowledge base. The integration of the
expert control system, the simulation, and the control effectors
forms a system called MPECS. MPECS is used to control Flexible
Manufacturing Cells (FMC).
Specific software and algorithms are developed to define and
implement the system. The control architecture is examined using
the information from an existing FMC to demonstrate its feasibility.
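
The evaluate-alternatives-by-simulation loop described above can be
illustrated with a toy one-machine dispatching example. The job data,
candidate dispatching rules, and performance measure (total
tardiness) below are assumptions, not part of MPECS.

    # Toy sketch of evaluating candidate control alternatives by simulation
    # and picking the best.  Jobs, rules, and the performance measure are
    # illustrative assumptions, not MPECS itself.

    JOBS = [{"id": "J1", "proc": 5, "due": 9},
            {"id": "J2", "proc": 2, "due": 4},
            {"id": "J3", "proc": 8, "due": 20},
            {"id": "J4", "proc": 3, "due": 7}]

    # candidate control alternatives proposed by the "expert system"
    ALTERNATIVES = {
        "SPT": lambda jobs: sorted(jobs, key=lambda j: j["proc"]),
        "EDD": lambda jobs: sorted(jobs, key=lambda j: j["due"]),
        "FIFO": lambda jobs: list(jobs),
    }

    def simulate(sequence):
        """Deterministic one-machine simulation; returns total tardiness."""
        clock, tardiness = 0, 0
        for job in sequence:
            clock += job["proc"]
            tardiness += max(0, clock - job["due"])
        return tardiness

    if __name__ == "__main__":
        scored = {name: simulate(rule(JOBS)) for name, rule in ALTERNATIVES.items()}
        best = min(scored, key=scored.get)
        print("performance (total tardiness):", scored)
        print("control effector applies rule:", best)
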
AN University Microfilms Order Number ADG87-13891.
AU GURSOZ, ESAT LEVENT.
IN Carnegie-Mellon University Ph.D. 1987, 193 pages.
TI EXPERT TASK PLANNING IN ROBOTIC CUTTING OPERATIONS.
SO DAI V48(03), SecB, pp854.
DE Engineering, Mechanical.
AB In this thesis, an expert system is developed for a class of
automated cutting operations. These operations include plasma
cutting, oxy-fuel cutting, laser cutting and water-jet cutting. The
common features in these processes, which define this class of
cutting operations, are the following: first, the work material is
cut by the sweeping action of a line segment emanating from the
process tool; second, the cutting effect terminates at an imprecise
point along that cutting segment; and third, the cutting task at
hand can be fully described by the surface-boundary representation
of the workpiece and the surface to be cut. The surface-boundary
representation is a fairly standard form of modeling in CAD systems.
Hence the description of the cutting task can easily be supplied by
a CAD database if it exists or can be interactively defined within a
CAD system. The specific concern in this thesis is robotic
applications in such tasks. Given such a CAD description of the
cutting task, we have developed an expert system to generate the
robot program which shall execute the desired cut. This overall
transformation from the task description to the robot program can be
naturally divided into two phases. In the first phase, the cutting
task is formulated in a manipulator independent fashion to the level
where relative movements of the cutting segment are prescribed. In
the second phase, a robot program which articulates the prescribed
cutting segment motions is generated. The focus of this study
deals with the first phase in which the cutting task is planned.
The fundamental problem in such a planning task is that neither a
strictly geometrical analysis, nor a purely heuristic approach is a
sufficient basis when considered alone. Commonly, geometric modeling
is used in simulating manufacturing operations. Knowledge-based
robot task planning, on the other hand, has usually been implemented
for the cases where complicated spatial reasoning is not required.
In this thesis, we have developed a knowledge-based system which
blends heuristics with spatial reasoning within the framework of a
solid modeling system. Although an implementation of robotic flame
cutting of structural beams is used to provide the fundamental
knowledge and the context, this system is constructed in a general
fashion to cover all of the addressed cutting operations.
Furthermore, it is possible to extend the developed planning
concepts to other manufacturing applications where spatial reasoning
is crucial.
AN University Microfilms Order Number ADG87-10627.
AU STANDLER, NANCY ANN.
IN The University of Rochester Ph.D. 1986, 508 pages.
TI AUTOMATED IDENTIFICATION OF A COMBINED POPULATION OF NEURONS AND
ASTROCYTES: APPLICATION OF A PROGRAMMING APPROACH SUITABLE FOR HIGH
RESOLUTION HISTOLOGICAL SPECIMENS.
SO DAI V48(03), SecB, pp710.
DE Health Sciences, Pathology.
AB A combined population of neurons and astrocytes in semithin (1
micrometer) sections of mouse cortex is automatically identified
with greater than 95% accuracy. The computer algorithms use a new
programming approach that shows promise of being applicable to the
identification of a wide variety of structures in complex, high
resolution images of histological sections. The approach stresses
the use of histologically meaningful distinctions between similar
sites in cells and in other structures in the histological section.
The use of logical trees to identify the cells enables the
algorithms to tolerate large variations in appearance from cell to
cell while retaining the ability to make subtle distinctions between
particular cells and non-cell structures with very similar
appearance. Difficulties with segmenting the cells from the
background are avoided by using branch point tests in the logical
trees that do not require segmented images.
AN University Microfilms Order Number ADG87-16349.
AU CHANG, HSI ALEX.
IN The University of Arizona Ph.D. 1987, 407 pages.
TI AN ARCHITECTURE FOR ELECTRONIC MESSAGING IN ORGANIZATIONS: A
DISTRIBUTED PROBLEM-SOLVING PERSPECTIVE.
SO DAI V48(04), SecA, pp768.
DE Information Science.
AB This dissertation provides a foundation for electronic
information management in organizations. It focuses on the
relationships among communication, control, and information flows of
the organization. The main thesis addresses the question of how
electronic mail messages may be managed according to their contents,
ensuring at the same time, the preservation of organizational and
social relationships.
A taxonomy for the management of unstructured electronic
information relevance based on the treatment of information is
derived from current research. Among the three paradigms, the
information processing, the information distribution, and the
information sharing paradigms, the inadequacy of the first two is
recognized, and the treatment of information in its active mode is
proposed. This taxonomy can be used to quickly differentiate one
line of research from another and evaluate its adequacy.
Three concepts, four cornerstones, and an architecture
constitute our framework of information relevance management. The
cornerstones are knowledge of the organization, knowledge of the
individual, information construction, and information interpretation.
Through knowledge of the organization and the individual, the
machine production systems are able to distribute and manage
information according to the logic of human production systems. The
other two cornerstones together improve the unity of interpretation
among the organizational members.
The physical architecture can accommodate a number of applications,
each of which may not only have different knowledge representations
and inference methods, but may also co-exist in the system
simultaneously. An integrated knowledge-based electronic messaging
system, the AI-MAIL system, is built, tested, and evaluated through
a case study to demonstrate the feasibility of the architecture and
its applicability to the real-world environment.
The three operating levels, interorganizational,
intraorganizational, and individual, are illustrated through a study
of the U.S. Army. From three large scale field studies, the
existing AUTODIN I system, a backbone of the Army's communications,
is analyzed and evaluated to illustrate the applicability and
benefits of the three operating levels.
This dissertation contributes to the field of Management
Information Systems by offering a methodology, a taxonomy, a new
paradigm, a framework, and a system for information management and a
method of adaptive organizational design. In addition, it points
toward future research directions. Among them are research to deal
with ethical issues, organizational research, knowledge engineering,
multi-processor configuration, and internal protocols for
applications.
AN University Microfilms Order Number ADG87-16352.
AU FJELDSTAD, OYSTEIN DEVIK.
IN The University of Arizona Ph.D. 1987, 394 pages.
TI ON THE REAPPORTIONMENT OF COGNITIVE RESPONSIBILITIES IN INFORMATION
SYSTEMS.
SO DAI V48(04), SecA, pp768.
DE Information Science.
AB As the number of information system users increases, we are
witnessing a related increase in the complexity and the diversity of
their applications. The increasing functional complexity amplifies
the degree of functional and technical understanding required of the
user to make productive use of the application tools. Emerging
technologies, increased and varied user interests and radical
changes in the nature of applications give rise to the opportunity
and necessity to re-examine the proper apportionment of cognitive
responsibilities in human-system interaction.
We present a framework for the examination of the allocation of
cognitive responsibilities in information systems. These cognitive
tasks involve skills associated with the models and tools that are
provided by information systems and the domain knowledge and problem
knowledge that are associated with the user. The term cognitor is
introduced to refer to a cognitive capacity for assuming such
responsibilities. These capacities are resident in the human user
and they are now feasible in information system architectures.
Illustrations are given of how this framework can be used in
understanding and assessing the apportionment of responsibilities.
Implications of shifting and redistributing cognitive tasks from the
system-user environment to the system environment are discussed.
Metrics are provided to assess the degree of change under
alternative architectures.
An architecture for the design of alternative responsibility
allocations, named Reapportionment of Cognitive Activities (RCA),
is presented. The architecture describes knowledge and
responsibilities associated with facilitating dynamic allocation of
cognitive responsibilities. Knowledge bases are used to support and
describe alternative apportionments. RCA illustrates how knowledge
representations, search techniques and dialogue management can be
combined to accommodate multiple cooperating cognitors, each
assuming unique roles, in an effort to share the responsibilities
associated with the use of an information system. A design process
for responsibility allocation is outlined.
Examples of alternative responsibility allocation feasible
within this architecture are provided. Cases implementing the
architecture are described. We advocate treating the allocation of
cognitive responsibilities as a design variable and illustrate
through the architecture and the cases the elements necessary in
reapportioning these responsibilities in information systems
dialogues.
AN University Microfilms Order Number ADG87-12660.
AU CRITTENDEN, CHARLOTTE CLEMENTS.
IN University of Georgia Ph.D. 1987, 181 pages.
TI A STUDY OF SIX PRONOUN USAGES: FOR PRACTICAL PURPOSES.
SO DAI V48(03), SecA, pp640.
DE Language, Linguistics.
AB This study covers six areas of pronoun usage: broad reference of
which, that, and this; impersonal you; agreement with indefinites;
agreement with collective nouns; whose as genitive of which; and the
and which or and who constructions. The method used was to edit
approximately one million words of selected material published in
1983. Three types of primary sources have contributed to this
survey: twenty nonfiction best sellers, articles from ten
periodicals chosen from a variety of readership levels, and
newspaper editorials from five representative geographical areas.
The editing identified usages that are different from those advanced
by a number of traditional grammar books and handbooks used on the
college level.
Included in this study is historical information from the OED
and from scholars like Otto Jespersen, Albert Marckwardt, Fred G.
Walcott, and J. Lesslie Hall. Additionally, a number of
twentieth-century usage studies were surveyed, including, among
others, those of Paul Roberts, C. C. Fries, Robert Pooley, Margaret
Nicholson, and Bergen and Cornelia Evans. Several studies written
by journalists, e.g., Roy Copperud and Wilson Follett, contribute
added perspective. Further descriptive information comes from two
dictionaries often used on the college level: Webster's Third and
the American Heritage.
After listing the approximate number of identified examples of
each usage being investigated in all three types of primary sources
and citing typical quotations, this study makes observations about
the actual use of each pronoun construction in relation to its
history, reports from usage studies, dictionary notes, and handbook
information. The study finally draws general conclusions and
discusses implications appropriate for an effective approach in
using and teaching these six areas of pronoun reference.
AN This item is not available from University Microfilms International
ADG05-60501.
AU HALL, CHRISTOPHER JOHN.
IN University of Southern California Ph.D. 1987.
TI LANGUAGE STRUCTURE AND EXPLANATION: A CASE FROM MORPHOLOGY.
SO DAI V48(04), SecA, pp914.
DE Language, Linguistics.
AB This investigation examines the contribution of psycholinguistic
and diachronic factors to the development across languages of a
preference for suffixing over prefixing. It argues for an approach
to explanation in linguistics that stresses: (a) the need for an
investigation of potential underlying psychological or functional
principles, involving the cooperation of the various subdisciplines
of linguistics; and (b) the need for an explicit description of the
mechanism(s) of "linkage" between structure and explanation, i.e.,
an account of how languages developed the properties in question.
The investigation draws on principles of lexical processing,
diachronic change, universals/typology, theoretical morphology, and
semantics in order to provide a fuller and more motivated
explanation than has previously been offered. It critically
evaluates the major prior effort to explain the suffixing preference
provided by Cutler, Hawkins & Gilligan (1985). The discussion
presented here suggests that, although their fundamental insights
were correct and provide the basis for the present work, there are
three areas of inadequacy: (a) the processing explanation is
inaccurate in some details; (b) it is incomplete in that no
explanation of the mechanism of linkage is provided; and (c) the
Head Ordering Principle, formulated to "explain" the basic pattern
of the crosslinguistic data, is based on an erroneous assumption,
and is, in any case, more a statement of a generalisation than an
explanation of the facts.
The explanation offered in the present work refines the
processing explanation and introduces factors from language change
into the explanatory hypothesis. It is argued that the suffixing
preference is, in actuality, a prefixing dispreference that
ultimately derives from the conflict between two driving forces of
language change, namely, the opposing pressures of economy and
clarity. The former leads to semantic redundancy and phonological
reduction within words, and this interacts with the latter which
leads to maintenance of stem initial strength and a resistance to
prefixing, for reasons of efficient processing.
Two original experiments on word recognition in English are also
reported. Experiment I examines the processing of prefixed words at
various stages of reduction; Experiment II focuses on the
hypothesised locus of the dispreference for prefixing. The results
yield initial support for the account proposed. (Copies available
exclusively from Micrographics Department, Doheny Library, USC, Los
Angeles, CA 90089-0182.).
AN University Microfilms Order Number ADG87-12184.
AU OSHIRO, MARIAN MIDORI.
IN The University of Michigan Ph.D. 1987, 264 pages.
TI A TAGMEMIC ANALYSIS OF CONVERSATION.
SO DAI V48(03), SecA, pp641.
DE Language, Linguistics.
AB The tagmemic method of linguistic analysis as developed by
Kenneth L. Pike is applied to the analysis of informal multi-party
verbal interaction ('conversation'). The three part-whole
hierarchies of units of tagmemic analysis--grammatical, referential,
and phonological--are each discussed with reference to prior
analysts' choices of units. Methodological problems of analyzing
conversation are discussed and the hierarchies reevaluated and
modified in response to them.
Methodological questions include (1) identification of nuclei
and margins, and boundary definitions of units, (2) differences
between written and oral texts, and implications of the presence of
hearer/respondent(s) in spontaneous verbal interactions, and (3) the
nature of cohesion and the degree and kind of convergence of the
three hierarchies at their upper levels.
A central question is how to treat speakership in the analysis.
The conclusions reached are that alternation of speakers should not
be used as a feature of grammatical units; that speakership is
reflected in the purpose (an element of cohesion) of the 'move',
which is a unit of the referential analysis; and that the individual
speaker's voice is a feature of the unit labeled the 'turn' in the
phonological hierarchy of units.
Although the word 'turn' is used in this dissertation as a
technical term limited to a single hierarchy, the tri-hierarchical
approach of tagmemic analysis is found to contribute toward an
understanding of what is commonly referred to as a turn (an
interactional component). The analysis of speech into three
distinctive systems clarifies the problem of defining a turn by
identifying multiple points in an interaction--hierarchical unit
boundaries--at which a change of speakers may take place.
All three hierarchies as constructed for conversational analysis
include the Episode and History as their highest-level units. The
other units of the revised grammatical hierarchy are the Morpheme,
Morpheme Cluster, Word, Grammatical Phrase, Grammatical Clause,
Grammatical Sentence, and Grammatical Paragraph. For the
referential hierarchy, the other units are the Concept, Concept
Complex, Monolog, Exchange, Interlogue, and Speech Event. For the
phonological hierarchy, they are the Phoneme, Syllable, Word,
Phonological Phrase, Phonological Clause, Phonological Sentence,
Turn, Phonological Paragraph (projected), and Conversation.
AN University Microfilms Order Number ADG87-13789.
AU RAVIN, YAEL.
IN City University of New York Ph.D. 1987, 319 pages.
TI A DECOMPOSITIONAL APPROACH TO PREDICATES DENOTING EVENTS.
SO DAI V48(03), SecA, pp641.
DE Language, Linguistics.
AB The semantic representation of predicates has received renewed
attention in recent linguistic research, following the 1981
publication of Chomsky's Lectures on Government and Binding. One of
the major features of Chomsky's new theory is the reinstitution of
thematic roles, such as Agent and Patient, to express semantic
relations between predicates and their arguments. These roles are
posited as primitives and play a prominent part in the derivation of
syntactic structures. The first part of this dissertation argues
against theories such as Chomsky's, which rely on thematic roles.
It is shown that their underlying Restrictive approach prevents them
from accounting for the syntax and semantics of propositions
denoting events. The second part of the dissertation argues in
favor of a Nonrestrictive, non-thematic approach to semantics. J.
Katz's Decompositional Theory is the Nonrestrictive model adopted
here. The meaning of several predicates and propositions denoting
events is analyzed and represented in terms of Katz's Theory. The
Decompositional analysis is contrasted with the different thematic
analyses to reveal a formal system for semantic representation which
is complete and consistent and a set of principles which determine
semantic properties and relations.
AN University Microfilms Order Number ADG87-15458.
AU BRINGSJORD, SELMER C.
IN Brown University Ph.D. 1987, 226 pages.
TI THE FAILURE OF COMPUTATIONALISM.
SO DAI V48(04), SecA, pp937.
DE Philosophy.
AB This dissertation is composed of a number of arguments against
the thesis that persons are automata.
AN University Microfilms Order Number ADG87-14418.
AU CLING, ANDREW DEAN.
IN Vanderbilt University Ph.D. 1987, 216 pages.
TI DISAPPEARANCE AND KNOWLEDGE: AN EXAMINATION OF THE EPISTEMOLOGICAL
IMPLICATIONS OF ELIMINATIVE MATERIALISM.
SO DAI V48(03), SecA, pp667.
DE Philosophy.
AB The purpose of this dissertation is to consider Paul
Churchland's arguments for eliminative materialism and for the
abolition of traditional epistemology. It is shown that these
arguments are faulty and that there is more to be said for our
commonsense conception of mentality than the eliminative materialist
supposes.
The essay begins by explaining the eliminative materialists'
claim that our commonsense conception of mentality is an outmoded
theory which will, or at least should, be replaced by a theory to be
drawn from completed brain science. Drawing on contemporary work in
metaphysics and the philosophy of science, it is shown that
supervenience is an important intertheoretical relation which is not
equivalent to reduction or elimination. Supervenience allows us to
reconcile the claim that everything is physical with the claim that
not all properties are expressible in the language of physics.
Using this result, I argue that three of Churchland's four
arguments for eliminative materialism rest on the dubious
metaphysical assumption that all theories will either reduce to or
be eliminated by completed physical science. It is shown that this
failure is deeply ironic given Churchland's claim that disputes in
the philosophy of mind are largely empirical in character. It is
also shown, however, that eliminative materialists can easily
respond to charges that their view is somehow self-referentially
incoherent.
It is shown that Churchland's fourth argument for eliminative
materialism, and for the claim that traditional epistemology should
be abolished, depends upon his first three arguments and is,
therefore, flawed. It is also shown that the argument is a failure
in its own right. The essay concludes by showing that there are
some important respects in which our commonsense conception of
mentality and traditional epistemology are superior to purely
materialistic accounts. This superiority stems, in large part, from
the availability of intentional states such as beliefs.
AN University Microfilms Order Number ADG87-16367.
AU DADZIE, S. S.
IN Temple University Ph.D. 1987, 164 pages.
TI THE GRICE PROBLEM: A CRITICAL ANALYSIS OF THE CAUSAL THEORY OF
PERCEPTION.
SO DAI V48(04), SecA, pp937.
DE Philosophy.
AB The essay examines H. P. Grice's attempt to formulate the
necessary and sufficient conditions of perceiving in purely causal
terms. It involves appraisal of P. F. Strawson's criticism of the
thesis as inherently circular; George Pitcher's defence of it
against Strawson's challenge; Alvin I. Goldman's Historical
Reliabilism, a causal-cum-belief theory of knowledge which had
started off as a strictly Gricean analysis; and, finally, Donald
Davidson's theory of the explanation of action which construes
reasons as causes and, hence, explanation by reasons as only a
species of ordinary causal explanation.
According to our finding, Grice's thesis is indeed vulnerable to
Strawson's objection; Pitcher fails to deflect the force of
Strawson's attack, and his own composite account of perception (couched
in causal, behavioral and direct realist terms) fails to improve the
prospects of Grice's doctrine; and its merits notwithstanding,
Strawson's critique lacks the wherewithal to make it a decisive
argument against the causal program. Our argument thence: the
necessary and sufficient conditions of perception cannot be provided
in causal terms; an adequate account has to be non-causal or, at
least, include (or reflect) factors which are demonstrably
refractory to causal analysis (for example, the concept's integrally
cognitive force, plus its intensional properties).
The study does not pretend to offer a comprehensive theory,
however, specifying the necessary and sufficient conditions of
perception in non-causal terms; it merely sketches the lines along
which such an account might be developed, were that viable. The results are
fruitful, nonetheless: for, along with its central task of settling
a heretofore unresolved dispute in perception theory proper (that
between the Strawsons and the Pitchers), the study affords a sense
of the interconnections among seemingly disparate issues,
illuminating some age-old puzzles in philosophical debate. Notable
among these is, of course, the two-fold flaw disclosed in the
causalist's program (Grice's as well as Goldman's and Davidson's):
its weak grasp of the intensional complexity of the concepts in
question and, thence, its move from the general concession that
causal factors are relevant to the conclusion--without sufficient
argument--that a causal theory of those concepts is adequate.
AN University Microfilms Order Number ADG87-16113.
AU HARBORT, ROBERT A., JR.
IN Emory University Ph.D. 1987, 213 pages.
TI APPLICATION OF HERMENEUTICS TO MODELS OF MEDICAL INFORMATION.
SO DAI V48(04), SecA, pp939.
DE Philosophy.
AB A hermeneutic is an interpretation of something that integrates
understanding and application. Derived from the name of the god
Hermes, and referring to his bringing the gift of language to
humanity, it most often refers to interpretation and application of
biblical texts. From the late nineteenth century it has been applied
by philosophers and literary critics to a wider field of
interpretation. It is different from exegesis or explanation in the
scientific sense, which is divorced from practicality.
Hans-Georg Gadamer has been instrumental in linking the idea of
interpretation as the integration of explanation and application
with Aristotle's idea of "practical philosophy" as found in the
Ethics. He used analogies with everyday activities to illustrate
ideas about interpretation of text; I turned the process around to
ask whether there would be any advantage to modeling certain
nonliterary activities as interpretive processes. In particular, I
was interested in modeling various processes associated with
medicine. The hermeneutic model does not necessarily generate more
precision (in the scientific sense) in descriptions of medical
activities, but it does allow the model to include self-awareness.
This has not been available to models of medical activity with any
degree of objective content, yet treatises on the philosophy of
medicine list it as an important characteristic.
Medicine is an example of a hermeneutic activity at several
levels. Medical education, the individual practice of medicine by
one physician with one patient, the health care delivery system, and
medical ethics are all examples of medicine as hermeneutics.
Previous work in modeling of information and information processing
in medicine has been based primarily on scientific or existential
epistemologies. I will examine hermeneutics as a context in which
models of medical information and information processing are to be
judged for effectiveness. The purpose of the dissertation is to
establish the validity of the hermeneutic model and to use it to
evaluate several models of information and information processing in
medicine.
AN University Microfilms Order Number ADG87-12439.
AU SAUNDERS, RUTH ANN.
IN The University of Wisconsin - Madison Ph.D. 1987, 278 pages.
TI KNOWLEDGE WITHOUT BELIEF: FODOR VERSUS PIAGET ON COGNITIVE
EXPLANATIONS OF COGNITIVE CHANGE.
SO DAI V48(03), SecA, pp669.
DE Philosophy.
AB Jerry Fodor has recently argued for a version of nativism based
on the claim that it is impossible to give a cognitive account of
how new cognitive powers are acquired. Piaget has insisted that
without such an account, it is impossible to understand what
cognition is. My main concern in this work has been to expose and
clarify the deeper philosophical disagreements that underlie the
surface dispute.
This work brings to light basic disagreements over the nature of
knowledge, over what the fundamental units of cognitive psychology
are, and over what cognitive psychology ought to explain. For each
side of the dispute, I devote two chapters to articulating a set of
basic assumptions, defending their prima facie plausibility, and
showing how they lead to either Fodor's or Piaget's claim. Fodor's
nativism is presented as a true claim about the logical character of
certain sorts of representational theories of cognition. Piaget's
theory is interpreted as an account of increasing knowledge of
objects rather than as an account of internal mental organization.
So interpreted, Piaget's theory avoids Fodor's charge of
incoherence, avoids some common objections to the notions of stage
and equilibration, and presents a radically new understanding of
knowledge and cognition.
To explicate Fodor's claim, I show how it arises from one line
of thought within standard views about the nature of epistemology
and cognitive psychology. In the process, I identify assumptions
that are crucial for understanding the conflict between Fodor and
Piaget. The contrasting assumptions I develop to make sense of
Piaget's claim are that: (1) knowledge of objects is direct (rather
than mediated by knowledge of facts about the objects); (2) the
fundamental units of cognitive psychology are interactions and
interaction patterns (i.e., relationships between knowers and known
objects, rather than internal causal states with narrow content);
and, (3) cognitive explanations show how present interaction
patterns and the nature of the known object generate new cognitive
powers (rather than showing how processes of belief formation and
manipulation issue in behavior and new beliefs).
AN University Microfilms Order Number ADG87-12973.
AU SAYRE, PATRICIA ANN WHITE.
IN University of Notre Dame Ph.D. 1987, 260 pages.
TI MICHAEL DUMMETT ON THE THEORY OF MEANING.
SO DAI V48(03), SecA, pp669.
DE Philosophy.
AB The dissertation examines Dummett's recommendations regarding
the construction of a theory of meaning. It begins by taking up the
question of why a theory of meaning is wanted. It is argued that
the sense in which Dummett is concerned with meaning is broad enough
to give no offense to those with Quinean prejudices against
"meanings". It is also argued that the sense in which Dummett is
concerned to construct a theory is narrow enough to place a number
of constraints on the construction of a theory of meaning. Many of
these constraints may appear arbitrary at first, but can be given a
rationale by leaning hard on Dummett's suggestion that an adequate
theory of meaning must have a "genuinely scientific character".
This rationale can be extended to provide a basis for Dummett's
objections to Davidson's truth-conditional theory of meaning,
namely, his objections on the grounds that the theory is modest,
holistic, and faces difficulties in dealing with undecidable
sentences. Unfortunately, the rationale also provides a basis for
objections to Dummett's verificationist and falsificationist
alternatives to Davidson's theory. Dummett's alternatives are
explicitly designed to be neither modest nor holistic, but they do
face difficulties when it comes to undecidable sentences. It is
argued that although these difficulties are not in principle
insuperable, they do suggest that Dummett's constraints on the
construction of a theory of meaning may make such a theory
impossible to construct.
AN University Microfilms Order Number ADG87-16392.
AU SEREMBUS, JOHN HERMAN.
IN Temple University Ph.D. 1987, 214 pages.
TI ABSOLUTE COMPARATIVE PROBABILISTIC SEMANTICS.
SO DAI V48(04), SecA, pp941.
DE Philosophy.
AB The thesis of the dissertation is that relations between
statements of a formal language, which are suitably constrained to
mirror the non-quantitative probability relation 'is not more
probable than', can serve as a semantics for that language and that
this absolute, comparative, probabilistic semantics is a
generalization of absolute, quantitative, probabilistic semantics,
that is, the semantics for a formal language that employs one-place
functions that obey the laws of the probability calculus.
Chapter one provides an historical sketch of the area to which
the dissertation is a contribution. It traces the development of
what came to be known as probabilistic semantics from the work of
Sir Karl Popper through Robert Stalnaker, William Harper, Hartry
Field, Kent Bendall, and Hugues Leblanc. It also provides a brief
history of probability as a non-quantitative (comparative) concept
by discussing the work of Bruno De Finetti, Bernard Koopman, and
Charles Morgan. It concludes by explaining the thesis of the
dissertation in light of the just-sketched tradition and spells out
the program for the rest of the work.
Chapter two presents the syntax of a propositional language PL
and provides an absolute comparative probabilistic semantics for it.
It then shows that the language is sound and complete with respect
to that semantics. The last section gives an account of
generalization and argues that this semantics is a generalization of
the absolute quantitative probabilistic semantics for PL. This
amounts to claiming that for every probability function there is a
corresponding probability relation and for every member of a proper
subset of probability relations, namely, that set which contains
only comparable relations, there is at least one probability
function corresponding to it.
Chapter three offers the same kind of results obtained in
chapter two for a first order language FL.
The final chapter offers a summation of the results and
highlights some of the features of absolute comparative
probabilistic semantics such as the intensionality of the logical
operators and the existence of what are termed 'assumption sets'.
It also suggests possible avenues of application and research
involving the new semantics.
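As a minimal illustration of the correspondence just described (the
notation below is assumed for exposition and is not taken from the
dissertation): given a probability function P on the sentences of PL,
one may define a comparative relation by

    A \preceq_P B \iff P(A) \le P(B)

so that \preceq_P reads "A is not more probable than B". Any relation
obtained this way makes every pair of sentences comparable, whereas a
probability relation in general need not; hence only the comparable
relations are guaranteed to have at least one corresponding probability
function, which is the sense in which the comparative semantics
generalizes the quantitative one.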
AN University Microfilms Order Number ADG87-15497.
AU GOLDEN, RICHARD MARK.
IN Brown University Ph.D. 1987, 122 pages.
TI MODELLING CAUSAL SCHEMATA IN HUMAN MEMORY: A CONNECTIONIST APPROACH.
SO DAI V48(04), SecB, pp1174.
DE Psychology, Experimental.
AB Causal schemata represent knowledge of routine event sequences,
and are constructed from causal relationships. A computational
theory of causal schemata is proposed for controlling behavior and
recalling actions from memory. Within this theory, learning is
viewed as a procedure that involves estimating the probability
distribution of causal relationships in the world. The memory
recall process is a complementary procedure that uses the
probability distribution function estimated by the learning process
to select the most probable action to be executed or recalled within
a particular situation. A neurophysiological implementation of this
computational theory involving Anderson, Silverstein, Ritz, and
Jones's (1977, Psychological Review, 84, 413-451)
Brain-State-in-a-Box neural model and a procedure for representing
causal schemata as sets of neural activation patterns is proposed.
An important feature of the resulting system is that actions are
indirectly linked together through commonalities in the internal
structure of situations associated with those actions. The model
successfully accounts for the gap size effect in causal schemata (G.
H. Bower, J. B. Black, & T. J. Turner, 1979, Cognitive Psychology,
11, 177-220), effects of causal relatedness (J. M. Keenan, S. D.
Baillet, & P. Brown, 1984, Journal of Verbal Learning and Verbal
Behavior, 23, 115-126), certain types of confusion errors in human
memory for stories (Bower et al., 1979), and characteristics of
human memory for obstacles and irrelevancies in stories (Bower et
al., 1979; A. C. Graesser, S. B. Woll, D. J. Kowalski, & D. A.
Smith, 1980, Journal of Experimental Psychology: Human Learning and
Memory, 6, 503-515).
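The Brain-State-in-a-Box model cited above has a simple dynamics that
can be sketched in a few lines. The following is a rough illustration
of that general class of model, not of Golden's actual implementation;
the learning rule, parameters, and toy patterns are assumptions made
here for exposition.

    import numpy as np

    def train_bsb(patterns, lr=0.1, epochs=50):
        # Error-correction (Widrow-Hoff-style) learning of an auto-associative
        # weight matrix, one common choice for storing patterns in BSB-type models.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for _ in range(epochs):
            for p in patterns:
                W += lr * np.outer(p - W @ p, p)
        return W

    def recall_bsb(W, x, alpha=0.2, steps=50):
        # BSB recall: repeatedly amplify the state along the learned weights
        # while clipping each component to the "box" [-1, +1].
        for _ in range(steps):
            x = np.clip(x + alpha * (W @ x), -1.0, 1.0)
        return x

    # Store two toy situation-action patterns as corners of the box and
    # recall the first from a degraded cue.
    patterns = np.array([[1, -1, 1, -1], [-1, 1, 1, 1]], dtype=float)
    W = train_bsb(patterns)
    cue = np.array([0.6, -0.4, 0.3, -0.2])
    print(recall_bsb(W, cue))

In a schema model of this kind, the stored corners stand for actions in
their situational contexts, so recall of the most probable action amounts
to settling into the nearest stored corner.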
AN University Microfilms Order Number ADG87-13615.
AU WILLIAMS, PATRICK SWINNY.
IN Texas Tech University Ph.D. 1987, 257 pages.
TI QUALITY IN LINGUISTIC METAPHORS.
SO DAI V48(03), SecB, pp903.
DE Psychology, Experimental.
AB This study examined a neglected question about linguistic
metaphor: What are the psycholinguistic characteristics accounting
for some metaphors being better than others? Metaphor quality was
hypothesized to be a function of three
components--comprehensibility, aptness, and novelty. Nine other
variables were hypothesized to influence quality in metaphors.
Subjects rated a set of constructed metaphors on these 12
variables. Correlational analyses revealed that metaphor quality
was primarily a function of comprehensibility and novelty; higher
quality metaphors were highly comprehensible and familiar. Metaphor
quality, so defined, was found to be influenced primarily by
denotative and connotative similarity between a metaphor's subject
and predicate.
A second experiment examined hypotheses derived from Ortony's
compactness, inexpressibility, and vividness theses. It was
predicted that high quality metaphors would differ from low quality
metaphors by (1) being more difficult to paraphrase, (2) having
topics which undergo greater connotative meaning change, and (3)
being easier to recall. Differences between high and low quality
metaphors on connotative meaning change were significant.
In sum, the hypothesis that metaphor quality is a function of
comprehensibility, aptness, and novelty was partially supported.
Regarding the effects of quality level, it was concluded that
metaphors which are low in quality due to extremely high or low
levels of comprehensibility, aptness, and novelty differ from each
other in their effects on ease of paraphrase and recall. Truly high
quality metaphors, however, are hypothesized to have moderately high
levels of comprehensibility, aptness, and novelty. Such metaphors
differ from metaphors of moderately low quality primarily on
dimensions of connotative meaning.
AN University Microfilms Order Number ADG87-15237.
AU SCHOEN, LAWRENCE MICHAEL.
IN Kansas State University Ph.D. 1987, 81 pages.
TI SEMANTIC REPRESENTATION: TYPICALITY, FLEXIBILITY, AND VARIABLE
FEATURES.
SO DAI V48(04), SecB, pp1180.
DE Psychology, Personality.
AB Previous research has demonstrated that apparently simple
lexical items (e.g., piano) can be instantiated in very different
ways as a function of context. Schoen (1986) gathered salience
ratings of words' properties across various sentential contexts.
Using a variable feature system, he described the instantiated
meaning of a word as the collection of the mean salience ratings of
its properties for a given context. The present study continues and
expands upon this research by (1) examining shared properties of
multiple exemplars (both high and low levels of typicality) from
within the same category (e.g., robin and chicken), across a variety
of contexts; (2) correlating salience weights of semantic properties
with typicality ratings of category exemplars; (3) gathering
salience ratings of properties pertaining to superordinate
categories themselves (e.g., bird) and comparing these ratings to
ratings obtained for individual category exemplars, and (4)
exploring the merits of two different methodological procedures for
gathering property salience ratings. The results of these
experiments were discussed in terms of current models of semantic
representation, as well as a new perspective, the variable feature
approach.
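The variable feature computation described above is simple arithmetic:
an instantiated meaning is the vector of mean salience ratings of a
word's properties in a given context, and such vectors (or individual
property weights) can then be correlated with typicality ratings. A
minimal sketch under assumed toy data (the properties, ratings, and
exemplars below are illustrative, not Schoen's materials):

    import numpy as np

    # Subjects' salience ratings of properties of "piano", by sentential context (toy data).
    ratings = {
        "The movers lifted the piano": {"is heavy": [6, 7, 6], "makes music": [3, 2, 3]},
        "The pianist tuned the piano": {"is heavy": [2, 3, 2], "makes music": [7, 6, 7]},
    }

    def instantiated_meaning(context_ratings):
        # The word's meaning in this context: mean salience rating per property.
        return {prop: float(np.mean(vals)) for prop, vals in context_ratings.items()}

    meanings = {ctx: instantiated_meaning(r) for ctx, r in ratings.items()}
    print(meanings)

    # Correlate one property's salience across category exemplars with their
    # typicality ratings for the category (values assumed for illustration).
    salience_of_flies = np.array([6.4, 6.1, 1.8])   # robin, sparrow, chicken
    typicality_as_bird = np.array([6.7, 6.2, 3.1])
    print(np.corrcoef(salience_of_flies, typicality_as_bird)[0, 1])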
End of File