leff@smu.UUCP (Laurence Leff) (12/15/88)
Subject: AI-Related Dissertations from Sigart No. 102, part 2 of 3
The following is a list of dissertation titles and
abstracts related to Artificial Intelligence taken
from the Dissertation Abstracts International
(DAI) database. The list is assembled by Susanne
Humphrey and myself and is published in the SIGART
Newsletter (the Newsletter list does not include the abstracts).
The dissertation titles and abstracts contained here
are published with the permission of University Microfilms
International, publishers of the DAI database. University
Microfilms has granted permission for this list to be
redistributed electronically and for extracts and
hardcopies to be made of it, provided that this notice
is included and provided that the list is not sold.
Copies of the dissertations may be obtained by
addressing your request to:
University Microfilms International
Dissertation Copies
Post Office Box 1764
Ann Arbor, Michigan 48106
or by telephoning (toll-free) 1-800-521-3042
(except for Michigan, Hawaii, and Alaska).
In Canada: 1-800-268-6090.
From SIGART Newsletter No. 102
part 2 of 3
Economics to Linguistics
------------------------------------------------------------------------
AN University Microfilms Order Number ADG87-09801.
AU HALL, HOMER KEITH, JR.
IN Purdue University Ph.D. 1986, 196 pages.
TI THE PROCESS OF DECISION MAKING UNDER UNCERTAINTY WITH AN APPLICATION
TO THE THEORY OF EXPERT SYSTEMS.
SO DAI V48(01), SecA, pp194.
DE Economics, Theory.
AB The broad objective of this dissertation is to examine the
process of decision making under uncertainty. First, within the
framework of the economics of information, a theoretical model is
developed that describes rational choice as a process of gathering
information in a sequence of r > 1 actions, determining an
appropriate stopping time for the information collection, and then
making a final decision. The formulation of an a priori strategy is
described where each information gathering action, as well as the
final decision, is contingent upon information signals received from
previous actions. As information is collected during the execution
of such a strategy, uncertainty is reduced. Therefore, at any step
in the sequence of information gathering actions the remaining
portion of the strategy, called a substrategy, is seen to solve a
subproblem of the original problem. It is shown that if a strategy
is optimal for a decision problem (maximizes the expected net
payoff), then any of its substrategies will be optimal for its
associated subproblem.
Second, the possible use of this, or some other
decision-theoretical model, as a framework for computer based
decision making systems is discussed. Since very little work has
been done on the decision-theoretic basis of expert systems as a
subject of artificial intelligence, this approach is first compared
to the approach taken in most current expert systems. The major
advantage is seen to be the focus on the efficiency of the decision
process and therefore necessarily on the trade-off between the
"correctness" and the cost of making a decision. Then, some of the
computational aspects of using our theoretical model as a basis for
an expert system are discussed. A method for acquiring sufficient
information from a decision maker to calculate a gross optimal joint
information and decision strategy (maximizing the expected gross
payoff of the decision process) is discussed that does not depend
upon fully specifying an outcome function.
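
The substrategy result above is, in effect, Bellman's principle of
optimality applied to information gathering. As a minimal sketch of
the idea in C (the two-state payoffs, signal accuracy Q, and cost
COST below are invented for illustration and are not taken from the
dissertation), backward induction over the number of remaining
signals chooses at each belief between deciding now and buying one
more observation:

    #include <stdio.h>

    #define Q    0.8   /* accuracy of each signal (assumed)          */
    #define COST 0.05  /* cost of one information-gathering action   */

    /* payoff of a final decision now: accept pays +1 if the state is
       Good (held with belief p), -1 otherwise; reject pays 0        */
    static double stop_value(double p) {
        double accept = p - (1.0 - p);
        return accept > 0.0 ? accept : 0.0;
    }

    /* V(p, r): value of the best substrategy with belief p and up to
       r signals left.  The max below is the abstract's result in
       miniature: the optimal strategy restricted to this point is
       itself optimal for the remaining subproblem.                  */
    static double value(double p, int r) {
        double stop = stop_value(p), p_pos, cont;
        if (r == 0) return stop;
        p_pos = Q * p + (1.0 - Q) * (1.0 - p); /* chance of "good"   */
        cont = p_pos * value(Q * p / p_pos, r - 1)
             + (1.0 - p_pos) * value((1.0 - Q) * p / (1.0 - p_pos), r - 1)
             - COST;
        return cont > stop ? cont : stop;      /* optimal stopping   */
    }

    int main(void) {
        int r;
        for (r = 0; r <= 4; r++)
            printf("prior 0.5, up to %d signals: value %.3f\n",
                   r, value(0.5, r));
        return 0;
    }
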
AN University Microfilms Order Number ADG87-10235.
AU WRITT, PATRICK JAMES.
IN Columbia University Ph.D. 1987, 149 pages.
TI MATHEMATICAL PROBLEM SOLVING: AN EXPLORATION OF THE RELATIONSHIP
BETWEEN STRATEGIES AND HEURISTICS.
SO DAI V48(01), SecA, pp72.
DE Education, Mathematics.
AB The purpose of this study was to explore the effect strategy has
on the problem solving process, its relationship to the heuristic
process and, in particular, its effect on each of Polya's four
phases of problem solving: Understanding the Problem, Devising a
Plan, Carrying Out the Plan, and Looking Back. By strategy is meant
a problem specific procedure which will solve the problem correctly
and by heuristic is meant a non-problem specific procedure which
does not necessarily solve it.
To investigate the effect of strategy on the heuristic process a
computer program, The Square Problem, was written to measure
both the subjects' strategies and heuristic processes. The Square
Problem could be solved using four strategies--three successful and
one unsuccessful--and 12 heuristics, each assigned by a group of
experts to one of Polya's four phases of problem solving.
Seventy-five high school seniors and juniors attending seven
different independent high schools in the New York City metropolitan
area solved the problem four different times.
The results of the study indicate that there is a relationship
between the strategy used to solve the problem and the heuristic
process, with observed differences in the heuristic process directly
linked to the strategy used to solve the problem. The parts of the
process that were similar tended to be related more to the problem
itself, while the differences tended to be related to the strategy
employed to solve the problem. Examined from the perspective of
Polya's four-phase model, most of the differences among the three
successful strategy groups are associated with the phase Carrying
Out the Plan.
Unsuccessful problem solvers showed no difference on the phase
Carrying Out the Plan but did spend more time than the successful
problem solvers on the phases Understanding the Problem and
Devising a Plan. None of the strategy groups employed the phase
Looking Back. Based on the results it is recommended that
strategy--determined by a logical analysis of the problem--be taken
into consideration when examining the heuristic process.
AN University Microfilms Order Number ADG87-07794.
AU IRZIK, GUROL.
IN Indiana University Ph.D. 1986, 111 pages.
TI CAUSAL MODELING IN THE SOCIAL SCIENCES: FOUNDATIONS AND APPLICATIONS
TO PHILOSOPHY.
SO DAI v47(12), SecA, pp4317.
DE Education, Philosophy of.
AB Although causal modeling employed in the social sciences is
highly relevant to philosophical issues of statistical explanation
and probabilistic causation, it has gone largely unnoticed by
philosophers. The few exceptions to this were critical of the
enterprise of causal modeling as a whole. However, the similarities
between the sociological applications of causal modeling and recent
philosophical theories of causality extend beyond the technical
details, to cover the fundamental intuitions behind them: from the
use of statistical machinery to the principle of common cause, and
to the idea of process as the basis of causal relationships.
Accordingly, the first part of this dissertation includes a
discussion of different types of models (such as path models) and
various techniques associated with them and defends causal modeling
against charges of being methodologically defective, empiricist, and
reductionist.
The second part uses causal methodology to modify and extend
Wesley C. Salmon's model of statistical explanation, and to
establish the proper connection between causal and probabilistic
claims.
Finally, it is concluded that an intuitive and irreducible idea
of causality based on causal processes provides a suitable framework
both for models of scientific explanation and causal methodology in
the sciences.
AN University Microfilms Order Number ADG87-10163.
AU TULLY, MARIANNE C.
IN Columbia University Teachers College Ed.D. 1987, 195 pages.
TI THE EVENT OF KNOWING: AN EDUCATOR'S PERSPECTIVE ON HEIDEGGER AND
HERMENEUTICS.
SO DAI V48(01), SecA, pp75.
DE Education, Philosophy of.
AB A phenomenological description of the event of knowing yields
the essential characteristic of resonance. Resonance is stipulated
to be a sense of already being familiar with the world to be known.
Martin Heidegger's writings on knowledge and knowing support this
description, especially as regards his notion of knowing as a
founded mode of being-in-the-world, of knowing as ontological
understanding, and of knowing as a hermeneutic experience. The
Heideggerian images of the broken hammer and the clearing illustrate
each successive level of the explanation of what knowing is
according to Heidegger, and in turn offer a connection to the
educating experience. Certain conditions, created by the teacher,
may facilitate an educating experience in which the students
resonate with the lesson. Heidegger's views on knowing provide a
philosophical grounding for these conditions of learning.
AN University Microfilms Order Number ADG87-09066.
AU GAGE, BARBARA ANN.
IN University of Maryland Ph.D. 1986, 184 pages.
TI AN ANALYSIS OF PROBLEM SOLVING PROCESSES USED IN COLLEGE CHEMISTRY
QUANTITATIVE EQUILIBRIUM PROBLEMS.
SO DAI V48(01), SecA, pp94.
DE Education, Sciences.
AB This study investigates and compares the problem solving
behavior of college chemistry faculty (experts) and undergraduate
chemistry students (novices) in solving three quantitative
homogeneous gas phase equilibrium problems. Steps and sequence
taken by experts (n = 5) and novices (n = 20) were compared to a
standard general college chemistry textbook presentation for three
problem types: (1) computing K_c from equilibrium concentrations
of all species; (2) calculating new equilibrium concentrations of
species when a product is added to a system at equilibrium; (3)
calculating species equilibrium concentrations starting with amount
of one reactant.
Subjects interviewed during solution of the problems were asked
to think aloud as they progressed, explaining each step taken.
Interviews were tape-recorded and transcribed. Resulting protocols
were analyzed to: (1) identify procedural steps taken; (2) record
sequence of steps taken; (3) compare expert and novice sequences to
textbook model; (4) identify procedural and conceptual errors made
by novices.
Textbook solution presentations were found to represent the step
sequence taken by experts. Novice approaches varied from textbook
and expert approaches in sequence. Step sequence was generally not
related to novice success. Experts consistently wrote chemical
equations for each problem while textbook presentations and novices
did not.
Major errors were committed by novices, independent of their
previous chemistry grades. Novices recognized problem types and
applied learned algorithms rather than analyzing problem systems.
When presented with a "disturbed equilibrium system" problem,
novices had difficulty visualizing the system and quantitatively
adjusting for new concentrations. Students confused amount with
concentration but generally knew that concentrations are used in
K_c expressions.
Results of this study support previous findings that novices are
algorithm or rule learners. Novices depend on problem type
recognition and recall of algorithms rather than analysis of problem
systems. No other general heuristics were found.
Findings also confirm that students do not apply the
implications of Le Chatelier's Principle consistently but employ
algorithms instead of analysis in dealing with quantitative shifts
in equilibrium problems.
Further work is recommended in equilibrium problem type
recognition, problem system visualization, and effects of problem
descriptions on novice performance.
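
For concreteness, problem type (1) reduces to substituting
equilibrium concentrations into the mass-action expression; the
reaction and the numbers below are illustrative and are not drawn
from the study's instruments:

    % illustrative worked example, not from the study
    % N2(g) + 3 H2(g) <=> 2 NH3(g)
    K_c = \frac{[\mathrm{NH_3}]^2}{[\mathrm{N_2}][\mathrm{H_2}]^3}
        = \frac{(0.10)^2}{(0.20)(0.30)^3} \approx 1.9
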
AN University Microfilms Order Number ADG87-11457.
AU SKOLYAN, KENNETH STEPHEN.
IN United States International University Ed.D. 1987, 199 pages.
TI ASSESSING AND FORECASTING THE IMPLICATIONS OF ARTIFICIAL
INTELLIGENCE SYSTEMS ON PEDAGOGY IN THE PUBLIC SECTOR.
SO DAI V48(02), SecA, pp371.
DE Education, Technology.
AB The Problem. The purpose of this study was to assess and
forecast the implications Artificial Intelligence Systems would have
on teaching methods, teacher training, curriculum and
teacher-student roles in public education.
Method. A three round Delphi Study was conducted. The Delphi
Panel included 23 participants: A. I. researchers, computer-using
teachers from universities and high schools, and writers in the
field.
Results. (I) Teacher Training and Reaction. (1) Resistance to
the introduction of Artificial Intelligence Systems will occur. (2)
Additional training for teachers in computers is likely. (3) The
structure of teacher training will change from informational
transmittal to learning how people learn and the structure of what
they know. (II) Curriculum. (1) Schools will not be radically
changed. The goals of education and the importance of reading and
writing would remain. (2) Students would not be learning a large
portion of their lesson at home. (3) Elitist separation by subject
matter or social adjustment would not occur. (4) The classroom
would change. Multi-leveled, multi-topical learning centers would
be developed where students could learn at their own pace. Changes
in testing would occur along with deeper student involvement with
subject matter. (5) New subjects and increases in strategy
development and problem solving would occur as a result of A. I.
systems in the classroom. (6) The A. I. system would become an
indispensable tool, with frequent upgrading of usage skills and
development of new subjects. (III) Teacher/Student Roles.
(1) Teacher roles would not change from delivery of knowledge and
skills to parent, counselor or psychologist. (2) A. I. systems
would not fill the role of parent, counselor or psychologist nor
transmit values. (3) The role of teachers would change from
learning director to co-problem solver. (4) A. I. systems would
greatly assist teachers in improving learning experiences,
diagnosing problems and assisting students with learning handicaps.
AN University Microfilms Order Number ADG87-10410.
AU FRESNEDA, PAULO SERGIO VILCHES.
IN The George Washington University D.Sc. 1986, 281 pages.
TI ASSESSING THE POTENTIAL OF MICROCOMPUTER-BASED EXPERT SYSTEMS IN THE
PROCESS OF AGRICULTURAL TECHNOLOGY TRANSFER IN BRAZIL.
SO DAI V48(01), SecB, pp193.
DE Engineering, Agricultural.
AB This study focused on technology transfer problems in the
agricultural sector. The research hypotheses were to assess the
potential use of microcomputer-based expert systems as (1)
mechanisms for transferring technical information between
agricultural research and rural extension programs, (2) training
aids for extensionists' (county agents') programs, and (3) tools for
gathering relevant information from farmers and extensionists for
research and extension management. The study also addressed the
integrative role that expert systems technology plays in the overall
process of technology transfer in the agricultural area, as well as
the self-improving feature the technology introduces to the Total
System (Research + Extension + Farmers) of agricultural technology
development.
A prototype expert system was developed for diagnosing and
recommending treatment for selected potato diseases. In an
experiment carried out in an extension organization in Brazil, 56
extensionists used the prototype and filled out a questionnaire
designed to test the research hypotheses. Forty-five agricultural
researchers and university professors in the agricultural field were
also interviewed. Categorical data analysis procedures and
chi-square tests were used to test the research hypotheses and to
check for relationships between the various variables.
The findings of the study indicate that microcomputer-based
expert system technology has the potential to accomplish the three
objectives presented above. Research results have also indicated
that microcomputer and expert system technology can not only
integrate the information flow between research centers, extension
programs, and farmers, but can also introduce a tool for
self-improvement in the agricultural technology development system.
AN University Microfilms Order Number ADG87-06268.
AU KASSEM, MOSALLEM D.
IN The Catholic University of America Ph.D. 1987, 220 pages.
TI APPLICATION OF ARTIFICIAL INTELLIGENCE APPROACH TO DESIGN OF ONE-WAY
FLEXURAL MEMBERS IN REINFORCED CONCRETE.
SO DAI v47(12), SecB, pp4990.
DE Engineering, Civil.
AB This dissertation presents a methodology to use the principles
of artificial intelligence in engineering design using the design of
reinforced concrete one-way flexural members as a specific
application. The end product is a LISP-based, AI-aided computer
program for the design of beams and slabs bending essentially in one
direction. The computer program was designed to act somewhat like a
human designer in that it simulated the learning or gaining
experience part of the human ability. This was done by introducing
into the American Concrete Institute (ACI) design procedures a
"preliminary work" module. The key element of the module was the
introduction of a feedback mechanism composed of three steps:
acquisition of experience, application of experience, and database
management. The final product was a program capable of applying
pattern recognition to obtain an educated, experience-based,
preliminary estimate of the cross-sectional dimensions, thus
considerably increasing design efficiency.
As the program is used, it gathers design experience in the form
of a database which selectively stores the input and output data of
a processed design problem. The benefits of this experience are fed
back to enhance the processing of subsequent designs.
It appears from this work that it is possible to incorporate the
use of some AI techniques in the process of engineering design and
thus gain considerable operational efficiency.
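
A minimal sketch of the feedback idea, assuming (since the abstract
does not give them) a nearest-case similarity measure and invented
field names for the stored design experience:

    #include <stdio.h>
    #include <math.h>

    struct design_case {              /* one processed design problem  */
        double span_m, load_kn_per_m; /*   stored input                */
        double width_mm, depth_mm;    /*   stored output cross-section */
    };

    static struct design_case db[] = {  /* accumulated experience      */
        { 4.0, 10.0, 250.0, 450.0 },
        { 6.0, 15.0, 300.0, 600.0 },
        { 8.0, 20.0, 350.0, 750.0 },
    };

    /* index of the stored case most similar to the new problem        */
    static int nearest(double span, double load) {
        int i, best = 0;
        double d, bestd = 1e30;
        for (i = 0; i < (int)(sizeof db / sizeof db[0]); i++) {
            d = pow(db[i].span_m - span, 2.0)
              + pow(db[i].load_kn_per_m - load, 2.0);
            if (d < bestd) { bestd = d; best = i; }
        }
        return best;
    }

    int main(void) {
        int k = nearest(7.0, 18.0);  /* new problem: 7 m span, 18 kN/m */
        printf("preliminary section: %.0f x %.0f mm (from case %d)\n",
               db[k].width_mm, db[k].depth_mm, k);
        return 0;
    }
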
AN University Microfilms Order Number ADG87-12388.
AU MASOOD, MUHAMMAD TAHIR.
IN Virginia Polytechnic Institute and State University Ph.D.
1987, 187 pages.
TI FURTHER DEVELOPMENT AND APPLICATION OF COMPUTER-ASSISTED CREATIVITY
TO RURAL ROAD RESOURCES MANAGEMENT PROJECTS.
SO DAI V48(02), SecB, pp519.
DE Engineering, Civil. Transportation.
AB Artificial Intelligence (AI) is the part of computer science
concerned with designing intelligent computer systems, that is, systems that
exhibit the characteristics we associate with intelligence in human
behavior--understanding language, learning, reasoning, solving
problems, and so on. Many believe that insights into the nature of
the mind can be gained by studying the operation of such programs.
The AI concept has formed the basis for developing the
computer-assisted creativity techniques called The Computer
Consultant (TCC), and The Idea Machine (TIM).
TIM has, so far, been applied to topics in the engineering and
"hard sciences" fields. In this study these techniques are
presented/reviewed in detail and the research concentrated on the
expansion/development of a methodology for computer-assisted
creativity. This research will help in further evolution of TIM
into a richer process for idea generation and general problem
solving, and in enhancing the application capabilities. This is
done by: (1) expanding the conceptual and ideas data bases from
which analogies can be drawn; (2) conducting comprehensive trials
with TIM to establish its strengths and limitations; and (3)
researching techniques for the screening and packaging of ideas.
Rural road projects are an important part of rural development
programs in Third World countries. For some years the
construction of such road projects, funded in part by international
donor agencies, has been a subject of some controversy. Most policy
makers in the developing or underdeveloped countries support the
practice of expanding the rural dirt (unpaved) roads rather than
spending limited resources on maintenance. Some donor agencies are
now inclined to only support maintenance-biased road projects.
A similar situation arose in Pakistan where the U.S. Agency for
International Development (USAID) proposed to fund a road resources
development project in the Sind Province. This real-life situation
is selected as a basis for developing a road resources management
model, and generating ideas using TIM. These ideas are screened and
packaged to be used in revising the model for further trials.
The application of TIM to this problem from the civil
engineering field results in some useful outputs. This study
provides a good basis for further enhancing TIM capabilities.
AN University Microfilms Order Number ADG87-09476.
AU WHARTON, STEPHEN WAYNE.
IN University of Maryland Ph.D. 1986, 169 pages.
TI A SPECTRAL TARGET RECOGNITION EXPERT FOR URBAN LAND COVER
DISCRIMINATION IN HIGH RESOLUTION REMOTELY SENSED DATA.
SO DAI V48(01), SecB, pp214.
DE Engineering, Civil.
AB Parametric methods for spectral classification use statistics or
other thresholds to identify spectral categories in multispectral
remotely sensed data. A limitation of statistical classification
methods is that the analyst cannot utilize nonparametric spectral
knowledge in the classification process. It is necessary to treat
each set of observations as a separate case in which the
relationship between the observed spectral patterns and the set of
categories must be empirically defined. The results are not
generally applicable to other areas or dates because of spectral
variations induced by differing atmospheric or illumination
conditions.
A prototype expert system was developed to demonstrate the
feasibility of classifying multispectral remotely sensed data on the
basis of knowledge of the spectral relationships within and between
land cover categories. The spectral expert was developed and tested
with Thematic Mapper Simulator (TMS) data having eight spectral
bands and a spatial resolution of 5 m. A knowledge base was developed
that describes the target categories in terms of three types of
spectral features: band to band relations that describe the shape of
the reflectance curve; category to background relations that
describe local contrast; and category to category relations that
describe contrast with other designated categories. The knowledge
base is used to direct the accumulation of spectral evidence for
each target category. The system makes classification decisions on
the basis of convergent evidence, i.e., two or more features that
support the same category. The spectral expert achieved an accuracy
of 80 percent correct or higher in recognizing eleven spectral
categories in TMS data for the Washington D.C. area.
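
A toy rendering of the convergent-evidence rule, with invented
categories, thresholds, and an eight-band toy spectrum (the actual
knowledge base and feature tests are not reproduced in the abstract):

    #include <stdio.h>

    #define NCAT 3
    static const char *name[NCAT] = { "water", "grass", "pavement" };

    /* number of spectral feature tests supporting a category; the
       band-to-band and contrast relations below are illustrative     */
    static int feature_votes(const double band[8], int cat) {
        int votes = 0;
        switch (cat) {
        case 0: /* water: low near-IR, falling reflectance curve      */
            votes += band[6] < 0.10;
            votes += band[6] < band[2];
            break;
        case 1: /* grass: near-IR peak well above red                 */
            votes += band[6] > 2.0 * band[3];
            votes += band[6] > 0.30;
            break;
        case 2: /* pavement: flat, moderate reflectance curve         */
            votes += band[3] > 0.15 && band[3] < 0.35;
            votes += band[6] - band[3] < 0.05 && band[6] - band[3] > -0.05;
            break;
        }
        return votes;
    }

    int main(void) {
        double pixel[8] = { .08, .09, .10, .12, .25, .30, .40, .35 };
        int c;
        for (c = 0; c < NCAT; c++)
            if (feature_votes(pixel, c) >= 2)  /* convergent evidence */
                printf("classified as %s\n", name[c]);
        return 0;
    }
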
AN University Microfilms Order Number ADG87-09669.
AU PARRA-LOERA, RAMON.
IN New Mexico State University Ph.D. 1986, 94 pages.
TI AN AUTOMATIC OBJECT RECOGNITION SYSTEM.
SO DAI V48(01), SecB, pp223.
DE Engineering, Electronics and Electrical.
AB The problem of devising a fast object recognition system is
addressed here. An implementation of the system is presented and
discussed. The concept of Normalized Interval/Vertex Descriptors
(NI/VD) is presented and used to represent contours. Two systems
are presented: a Global Shape Recognition System and a Partial
Shape Recognition System (for occluded objects). Polygonal approximation
methods for contour representation are discussed and some of them
are implemented. Presorting techniques to increase classification
speed are developed. Data base management system concepts and
structural pattern recognition concepts are used in the development
of the presorting algorithms. Experimental results that demonstrate
the performance of the present system are given. Comparison of the
presented approach with different existing approaches is also
included. The system is implemented in the "C" language and runs
under Berkeley Unix 4.2 on a VAX-11/750.
AN University Microfilms Order Number ADG87-05726.
AU TKACIK, THOMAS E.
IN University of Virginia Ph.D. 1986, 170 pages.
TI MACHINE VISION FOR MOBILE ROBOTS.
SO DAI V48(01), SecB, pp225.
DE Engineering, Electronics and Electrical.
AB A vision system using a pair of Linear Image Arrays is proposed
for use with mobile robots in a factory environment. This vision
system was developed to perform the function of guidance,
navigation, and obstacle detection required by mobile robots. Using
a standard 16-bit microprocessor for all of the computing tasks, low
cost, small size, and low power consumption can be obtained.
Real-time operation is achieved by using line images, which provide
less data than 2-dimensional images. However, this results in the
need for image processing techniques that are tailored for use with
line images.
Both guidance and navigation are aided by a new technique called
the difference of medians, a robust method of detecting and locating
known features in line images. Obstacle detection is performed
using a technique that locates feature points in thresholded images,
and template matching for determining the distances to these feature
points. A distance map is then generated, from which obstacles can
be located. The algorithms used are evaluated and chosen for their
efficiency, with the result that a pair of images can be analyzed in
less than 0.5 sec. Finally, an analysis is made of measurement
errors as a function of camera misalignment.
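
The abstract does not define the difference-of-medians operator; one
plausible reading, sketched below with an invented window size and
threshold, compares the medians of two adjacent windows sliding along
the line image, which stays robust to isolated noisy samples:

    #include <stdio.h>
    #include <stdlib.h>

    #define W 5          /* samples per half-window (assumed)   */
    #define THRESH 30    /* grey-level jump counted as a feature */

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    static int median(const int *p) {
        int tmp[W], i;
        for (i = 0; i < W; i++) tmp[i] = p[i];
        qsort(tmp, W, sizeof tmp[0], cmp);
        return tmp[W / 2];
    }

    int main(void) {
        int line[20] = { 40,42,41,40,43, 41,40,42,41,40,   /* toy    */
                         95,96,94,97,95, 96,95,94,96,95 }; /* edge   */
        int i, d;
        for (i = W; i + W <= 20; i++) {
            d = median(line + i) - median(line + i - W);
            if (abs(d) > THRESH)
                printf("feature near sample %d (jump %d)\n", i, d);
        }
        return 0;
    }
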
AN University Microfilms Order Number ADG87-08554.
AU VALAVANIS, KIMON P.
IN Rensselaer Polytechnic Institute Ph.D. 1986, 308 pages.
TI A MATHEMATICAL FORMULATION FOR THE ANALYTICAL DESIGN OF INTELLIGENT
MACHINES.
SO DAI v47(12), SecB, pp5007.
DE Engineering, Electronics and Electrical.
AB A mathematical formulation for the analytical design of
Intelligent Machines operating in structured or unstructured
uncertain environments with minimum supervision and interaction with
a human operator has been developed. The structure of the
Intelligent Machine is defined to be the structure of a
Hierarchically Intelligent Control System, composed of three levels
of interactive controls ordered according to the principle of
Increasing Intelligence with Decreasing Precision, namely: the
organization level performing information processing tasks in
association with a long-term memory, the coordination level dealing
with specific information processing tasks with a short-term memory
only, and the execution level which performs the execution of
various tasks through hardware using feedback control methods. A
three level probabilistic model is derived for such a system and an
Entropy function is proposed as a common measure of the performance
of the three levels. Special simple architectures specifically
designed to implement the mathematical model are derived. A
Generalized Partition Law of Information Rates which considers the
internal control procedures, interaction between the three levels
and memory exchange procedures within the Intelligent Machines is
also derived to indicate the flow of knowledge (information) in such
machines. The intelligent control problem is cast as the
mathematical programming solution that minimizes the total Entropy
of Intelligent Machines.
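
The abstract does not reproduce the entropy function itself. If it
takes the standard Shannon form over the probabilistic task model (an
assumption here, not a quotation), the common measure for a level
executing tasks x_i with probabilities p(x_i), and the stated
minimization over the three levels, would read:

    % assumed standard Shannon form; the dissertation's exact
    % definition may differ
    H = -\sum_{i=1}^{n} p(x_i)\,\ln p(x_i),
    \qquad
    \min_{\text{design}} \left( H_{org} + H_{coord} + H_{exec} \right)
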
AN University Microfilms Order Number ADG87-06135.
AU WRIGHT, JAMES AUSTIN.
IN The University of Texas at Austin Ph.D. 1986, 128 pages.
TI QUANTITATIVE ANALYSIS OF HUMAN PERCEPTION AND JUDGMENT.
SO DAI v47(12), SecB, pp5008.
DE Engineering, Electronics and Electrical.
Psychology, Psychometrics.
AB Understanding human perception is becoming increasingly
important in the field of engineering as we develop computing and
robotic systems which attempt to model the psychophysics and
psychology of the human being. A large amount of effort has been
expended in modeling human decision processes but less is known
about exactly what information is used by people in making decisions.
This research is concerned with identifying, evaluating and
expanding a method for finding out what knowledge is used by people
in making decisions and quantifying its significance to the decision
making process.
The specific application covered in the dissertation is
analyzing print quality surveys to determine what physical
characteristics of printed matter people use to judge between
samples and how the physical characteristics affect judgments of
preference. We demonstrate the survey design and analysis technique
on two print quality surveys, one of which shows how the techniques
can be used to judge samples relative to a reference sample.
These same techniques can be applied in many other areas to
ascertain exactly what attributes of a particular object or
situation are perceivable to people and what effect they have on
decisions related to them--for example, one could study the
characteristics of strip-chart recordings from well logging
equipment to identify the attributes used by experts to judge such
recordings. This could then be incorporated in an expert system to
identify potentially promising wells by computer.
We also present a new multidimensional scaling technique based
on maximum entropy and compare it to the previously preferred method
for this type of survey analysis which was based on maximum
likelihood.
AN University Microfilms Order Number ADG87-07650.
AU CURRAN, ALLEN RICHARD.
IN Stanford University Ph.D. 1987, 120 pages.
TI AN INTELLIGENT CONTROL SYSTEM DESIGN AID.
SO DAI v47(12), SecB, pp5025.
DE Engineering, Mechanical.
AB The objective of this thesis has been to develop the framework
for an "intelligent" Computer Aided Control System Design (CACSD)
Aid. Such systems are intended to allow engineers who are not
control system specialists to effectively solve control system
problems. To achieve this objective, a model was developed to
explain the behavior of a human designer when solving routine
controls problems. The model reflects that the designer solves
control system problems by breaking the overall problem into
successively smaller parts, each of which falls into one of three
classes: (1) Analyzing the problem and/or deciding on a design
approach to take, i.e., diagnostic or deductive tasks which are
heuristic in nature; (2) Following a "recipe" for design until
either the solution is reached or "something goes wrong"; (3)
Calculating a result, number crunching, etc.
The Design Aid has the following "intelligent" features: (1)
Introspection. The Design Aid is able to explain decisions made en
route to a problem solution. (2) Modifiable Knowledge Base. The
user can create, modify, and delete rules according to his own
preference and experience. (3) Flexible Division of Labor. The
user is able to work on any part of the design at any time and have
the Design Aid "fill in the rest." (4) Expertise. The problem
solving performance of the prototype system is roughly comparable to
that of a student who has taken an introductory controls course.
The Design Aid is structured as a hierarchy of specialists that
solve parts of the design problem and/or pass parts of the problem
to sub-specialists. The design process proceeds forward from the
initial state to the final goal state, with each specialist
satisfying an intermediate goal. Three kinds of specialists are
utilized by the Design Aid, each of which corresponds to one type of
human design activity enumerated above.
The thesis addresses a number of issues relevant to design
expert systems including representation of design plans, failure
handling, passing advice to design procedures, and dealing with
interacting design goals.
AN University Microfilms Order Number ADG87-08097.
AU LI, PEIGEN.
IN The University of Wisconsin - Madison Ph.D. 1987, 202 pages.
TI DEVELOPMENT OF AN EXPERT SYSTEM FOR MULTIFACET DRILLS.
SO DAI V48(02), SecB, pp540.
DE Engineering, Mechanical.
AB Multifacet drills (MFDs) have shown great success in real
drilling operations. However, their applications have been limited
due to the complicated drill geometry design and the difficulty in
drill point grinding. An expert system called DRILLEX with
high-level expertise about MFDs and with learning algorithms is
proposed and developed for use in automated manufacturing
environments. The system is possessed of capabilities for (1)
design of MFD geometry and grinding parameters; (2) selection of
drilling conditions (speed, feedrate, drill material, coolant); (3)
trouble diagnosis; (4) recognition of drill wear states; and (5)
selection of optimal speed and feedrate by use of stochastic
automata.
The major elements of DRILLEX, i.e., the knowledge base and the
inference engine, are designed and analyzed. This thesis
investigates the details needed for the development of the knowledge
base by use of VAX-11 Datatrieve.
The fuzzy concept is adopted in this study to deal with some
phenomena which are vague in nature. The strategy of Fuzzy Logic
Control is used to select suitable MFD types during the geometry
design of Multifacet Drills.
The expert system DRILLEX is capable of recognizing the drill
wear states through the learning process known as the Fuzzy C-Means
algorithm. The thrust force and torque are considered as features
for clustering and classification. The feasibility of this
algorithm is shown by experiments and through simulations. This
fuzzy pattern recognition method shows good suitability for varying
environments and better classification accuracy than conventional
pattern recognition techniques.
The learning process for optimizing cutting speed and feedrate
is introduced based on stochastic automaton theory. Analysis of the
algorithm, formulated as a learning controller of cutting
parameters, is performed. Simulation studies have been carried out
to verify the effectiveness of the proposed method.
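
A minimal sketch of the Fuzzy C-Means iteration on (thrust, torque)
feature pairs, the two features the abstract names; the data, the
choice of c = 2 wear states, and the fuzzifier m = 2 are illustrative
assumptions, not the dissertation's settings:

    #include <stdio.h>
    #include <math.h>

    #define N  6     /* samples                        */
    #define NC 2     /* clusters, e.g. sharp vs. worn  */
    #define M  2.0   /* fuzzifier                      */

    static double x[N][2] = {          /* toy (thrust, torque) data */
        { 100, 1.0 }, { 110, 1.1 }, { 105, 1.0 },
        { 200, 2.2 }, { 210, 2.1 }, { 205, 2.3 }
    };

    int main(void) {
        double u[NC][N], v[NC][2] = { { 120, 1.2 }, { 190, 2.0 } };
        int i, j, k, it;
        for (it = 0; it < 25; it++) {
            for (k = 0; k < N; k++) {          /* membership update */
                double d[NC], s;
                for (i = 0; i < NC; i++)
                    d[i] = sqrt(pow(x[k][0] - v[i][0], 2.0)
                              + pow(x[k][1] - v[i][1], 2.0)) + 1e-9;
                for (i = 0; i < NC; i++) {
                    for (s = 0.0, j = 0; j < NC; j++)
                        s += pow(d[i] / d[j], 2.0 / (M - 1.0));
                    u[i][k] = 1.0 / s;
                }
            }
            for (i = 0; i < NC; i++) {         /* center update     */
                double sw = 0.0, s0 = 0.0, s1 = 0.0, w;
                for (k = 0; k < N; k++) {
                    w = pow(u[i][k], M);
                    sw += w;  s0 += w * x[k][0];  s1 += w * x[k][1];
                }
                v[i][0] = s0 / sw;  v[i][1] = s1 / sw;
            }
        }
        for (i = 0; i < NC; i++)
            printf("wear state %d center: thrust %.1f, torque %.2f\n",
                   i, v[i][0], v[i][1]);
        return 0;
    }
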
AN This item is not available from University Microfilms International
ADG03-74262.
AU SCHNEITER, JOHN LEWIS.
IN Massachusetts Institute of Technology Sc.D. 1986.
TI AUTOMATED TACTILE SENSING FOR OBJECT RECOGNITION AND LOCALIZATION.
SO ADD X1986.
DE Engineering, Mechanical.
(no abstract provided in the DAI database)
AN University Microfilms Order Number ADG87-11622.
AU ZAREFAR, HORMOZD.
IN The University of Texas at Arlington Ph.D. 1986, 190 pages.
TI AN APPROACH TO MECHANICAL DESIGN USING A NETWORK OF INTERACTIVE
HYBRID EXPERT SYSTEM.
SO DAI V48(02), SecB, pp544.
DE Engineering, Mechanical.
AB Artificial intelligence (AI) is the branch of computer science
that attempts to create "intelligent" behavior in computing machines.
Expert systems form an applied field in the realm of AI which deals
with emulating human expertise and problem solving capabilities in a
particular domain. Successful systems have been developed in a
number of disciplines such as medicine, geology, biochemistry and
computer configuration.
Mechanical design is a process of combining informal and common
sense knowledge about a design domain with the more formal and well
developed analysis and synthesis methods. Those aspects of the
design which deal with the intuitive human thought process are not
well defined for computer programming at any level. However, the
more routine redesign class can be formalized for expert systems
application.
The objective of this research is to develop a taxonomy for a
class of redesign processes within the framework of available expert
systems technology and mechanical design practice. The approach
follows the practical design procedure in an industrial environment.
It involves partitioning of the design space into subspaces and
developing interactive expert subsystems to address the design
process. A coordinator at each level of the design activity is
suggested to emulate the lead engineer in a design team and direct
the flow of the design process. An important aspect of this
research is integration of data processing routines with the
rule-based expert systems.
To illustrate the proposed approach, a prototype parallel axis
spur gear-drive system was developed. The program encompasses a
network of interacting rule-based and procedural modules to design
the components of a prototype speed reducer. Experience gained from
this research suggests that the proposed approach could be employed
for a variety of decomposable design tasks. Employing similar
systems in the real industrial environment should result in more
creativity and productivity from the design engineers.
AN University Microfilms Order Number ADG87-11623.
AU ATHEY, SUSAN.
IN The University of Arizona Ph.D. 1987, 167 pages.
TI A MENTOR SYSTEM INCORPORATING EXPERTISE TO GUIDE AND TEACH
STATISTICAL DECISION-MAKING.
SO DAI V48(02), SecA, pp238.
DE Information Science. Business Administration, Management.
Education, Business.
AB The statistical mentor system incorporates a knowledge base into
an educational tool for novices in statistical decision making to
use in choosing a statistical technique. The novices are students
in a business school curriculum who are expected to learn the basic
statistical processes in business applications. The purpose of the
system is to stimulate learning of the data analysis process on the
part of the novice, usually a difficult task. The system acts as a
consultant to the novice and approaches the task using a top-down
problem solving strategy rather than the traditional bottom-up
strategy used by novices.
The heart of the system is the rule base for differentiating
between statistics. These rules were built by gathering expertise
from two experts in statistical analysis. The rules are based on
five questions which the data can answer, as well as the type of
data, the number of variables, and any dependent/independent
relationships which exist between the variables. The knowledge base
consists of five rule sets and can be represented either by
condition/conclusion rules or by a set of multi-dimensional tables.
Twenty-nine statistics and the rules for choosing them are in the
rule sets. The knowledge base was used to define the logic
incorporated in the consultant system in order to aid the user in
selecting a correct technique. A dialogue mode is employed in the
consultant to determine which conditions are true for the problem
and data set. The rule sets are then checked to find the conclusion
satisfying the conditions.
The computer mentor was tested against the usual textbook mentor
method (search through a textbook until one finds a statistic that
looks promising) with two different groups of subjects, 25
undergraduates and 19 doctoral students. The results were that the
computer-assisted students in both samples correctly solved a larger
proportion of problems and had a higher average number of problems
correct than did the textbook-assisted groups.
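
A hypothetical fragment of a condition/conclusion rule set of the
kind described; the conditions shown (data type, number of variables,
dependence) follow the abstract, but the four rules are textbook
examples rather than the dissertation's 29-statistic rule base:

    #include <stdio.h>
    #include <string.h>

    struct rule {
        const char *data_type;   /* "nominal" or "interval"    */
        int nvars;               /* number of variables        */
        int dependent;           /* 1 if a DV/IV split exists  */
        const char *statistic;   /* conclusion                 */
    };

    static struct rule rules[] = {
        { "interval", 1, 0, "one-sample t test"        },
        { "interval", 2, 1, "simple linear regression" },
        { "interval", 2, 0, "Pearson correlation"      },
        { "nominal",  2, 0, "chi-square test"          },
    };

    /* dialogue answers in, first matching conclusion out        */
    static const char *consult(const char *t, int n, int dep) {
        int i;
        for (i = 0; i < (int)(sizeof rules / sizeof rules[0]); i++)
            if (!strcmp(rules[i].data_type, t) && rules[i].nvars == n
                && rules[i].dependent == dep)
                return rules[i].statistic;
        return "no rule fired; consult an expert";
    }

    int main(void) {
        printf("recommended: %s\n", consult("interval", 2, 1));
        return 0;
    }
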
AN University Microfilms Order Number ADG87-02901.
AU MULLIN, THERESA M.
IN Carnegie-Mellon University Ph.D. 1986, 196 pages.
TI UNDERSTANDING AND SUPPORTING THE PROCESS OF PROBABILISTIC ESTIMATION.
SO DAI v47(12), SecA, pp4217.
DE Information Science.
AB There is an increasing interest in and recognition of the need
to explicitly incorporate factors of uncertainty in public policy
analysis, often in the form of subjective probability distributions.
Many experimental studies involving the subjective judgments of
nonexperts have shown there to be widespread and systematic biases
towards overconfidence. These biases appear to result from
subjects' failure to use available data in a theoretically
appropriate way, and through the faulty use of a variety of
judgmental heuristics, such as anchoring and adjustment. The
implications of these laboratory studies for the quality of expert
judgments are not clear, however. Many decision analysts consider
that these potential shortcomings in judgments can be ameliorated through
structuring the judgment problem and through conditioning the
assessor to be more aware of and account for various sources of
uncertainty.
A study was conducted to investigate the potential benefits of
problem decomposition in the probabilistic estimation of almanac
quantities. It was found that assessments based on decomposition
models were no more accurate or better calibrated than estimates
made directly, and use of the simple multiplicative decomposition
models significantly altered the direction of the bias in subjects'
judgments, from systematic underestimation to overestimation of the
unknown quantities.
A second study involved the probabilistic estimation of almanac
quantities in a test of three suggested debiasing strategies thought
to encourage assessors to account for more uncertainties in their
estimates: these strategies involved telling the assessor (1) to
think of contradictory reasons; (2) to describe alternative
scenarios; and (3) about problems with anchoring and adjustment.
Only warnings about the use of anchoring and adjustment reduced
subjects' overconfidence significantly.
To study the underlying processes involved in probabilistic
estimation and determine whether there were any appreciable
differences between the estimation processes of experts and
nonexperts, estimation protocols for these two types of assessors
were examined. Results of this study indicate that there may be
significant differences in the approaches to probabilistic
estimation used by experts versus nonexperts. These include
differences in attitude, in the way the assessor works through the
estimation problem, in the use of the anchoring and adjustment
heuristic and in responses to debiasing questions. (Abstract
shortened with permission of author.)
AN University Microfilms Order Number ADG87-06126.
AU WIDMEYER, GEORGE ROBERT, III.
IN The University of Texas at Austin Ph.D. 1986, 150 pages.
TI PREFERENCE DIRECTED REASONING IN DECISION SUPPORT SYSTEMS.
SO DAI v47(12), SecA, pp4218.
DE Information Science.
AB This research compares two rule-based reasoning approaches and a
cardinal dominance method with an assumed multi-attribute value
function. The goal of these approaches is to reduce the cognitive
demands on the decision maker in terms of preference elicitation from
that required by conjoint analysis. This benefit is offset by the
error that can result when compared with a multiattribute value
function. A measure of effort is developed by deriving a value
function based on the number of pairwise comparisons along
attributes and the number of tradeoff comparisons across attributes.
The accuracy of each approach is measured by the number of
alternatives eliminated and the probability that the optimal
alternative is not eliminated. This is done using a Monte-Carlo
simulation for 10, 20 and 30 alternatives versus 2 through 10
attributes.
The results of the research are that rule-based methods are not
significantly less accurate than a normative model, and the small
loss in accuracy is more than offset by the reduced effort. Unlike
conjoint analysis, the
rule-based method studied is valid when only partial information is
available, and since it uses symbolic logic, its results can be
explained more directly. The significance of this research is that
it presents a theory for decision support systems that integrates
symbolic reasoning and preference technologies.
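
A sketch of the kind of Monte-Carlo comparison described, under
assumed details the abstract leaves open: uniform random attribute
scores, simple Pareto dominance as the screening rule, and an
equal-weight additive value function as the normative standard:

    #include <stdio.h>
    #include <stdlib.h>

    #define NA     20    /* alternatives */
    #define NATTR   5    /* attributes   */
    #define TRIALS 1000

    int main(void) {
        int t, i, j, k, elim = 0, kept = 0;
        srand(42);
        for (t = 0; t < TRIALS; t++) {
            double a[NA][NATTR], val[NA];
            int dom[NA] = {0}, best = 0;
            for (i = 0; i < NA; i++) {           /* random problem   */
                val[i] = 0.0;
                for (k = 0; k < NATTR; k++) {
                    a[i][k] = (double)rand() / RAND_MAX;
                    val[i] += a[i][k];           /* normative value  */
                }
                if (val[i] > val[best]) best = i;
            }
            for (i = 0; i < NA; i++)             /* Pareto screening */
                for (j = 0; j < NA; j++) {
                    int geq = 1, gt = 0;
                    if (i == j) continue;
                    for (k = 0; k < NATTR; k++) {
                        if (a[j][k] < a[i][k]) geq = 0;
                        if (a[j][k] > a[i][k]) gt = 1;
                    }
                    if (geq && gt) { dom[i] = 1; break; }
                }
            for (i = 0; i < NA; i++) elim += dom[i];
            kept += !dom[best];
        }
        printf("eliminated %.1f of %d on average; optimum kept %.1f%%\n",
               (double)elim / TRIALS, NA, 100.0 * kept / TRIALS);
        return 0;
    }

Under these assumptions pure dominance can never eliminate the
additive optimum, so its accuracy is perfect while its elimination
rate is modest; the dissertation's question is how close the cheaper
rule-based screens come to that standard.
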
AN University Microfilms Order Number ADG87-05945.
AU ALAM, YUKIKO SASAKI.
IN The University of Texas at Austin Ph.D. 1986, 283 pages.
TI A THEORETICAL APPROACH TO THE JAPANESE VERBAL SYSTEM WITH
COMPUTATIONAL IMPLICATIONS.
SO DAI v47(12), SecA, pp4373.
DE Language, Linguistics.
AB This study presents a set of phrase structure rules for Japanese
and an analysis of the verbal component, both of which are intended
to serve a unified treatment of case assignment. Analyses presented
here are expressed in X-bar schema as well as in the formalism of
Lexical Functional Grammar.
The set of phrase structure rules is formulated on the following
hypotheses: (a) Japanese phrase structure is strictly
binary-branching and head-final, (b) the surface linear order of
constituents is base-generated, (c) grammatical relations in
Japanese are not defined configurationally, but mainly through
cases, (d) Japanese differentiates between logical and constituent
sentences. Logical sentences presuppose the presence of a subject,
whereas constituent sentences do not. Finally, (e) the minimal
elements necessary to compose a Japanese constituent sentence are a
verb, a tense marker and an element indicating the use of the
sentence, whereas the minimal elements for a logical sentence are a
verb and a subject.
This study also postulates a new syntactic category U(se), which
covers not only 'function words' but also inflections. The
postulation of this category permits the claim that Japanese phrase
structure consists of the complement and the head at all levels,
thus making it possible for the set of phrase structure rules to be
compact.
Comparison is made between the present model of grammar and
other hypotheses: the flat structure hypothesis, the move-alpha
hypothesis and case-linking grammar. It is also shown that the
present model facilitates an attempt at an overall treatment of the
causative construction which has been problematic to other analyses.
In addition, a case theory is presented which is an extension of
the work by localist case theorists. Unlike theirs, however, the
aspect system supporting the proposed case
theory is an extension of the work by Vendler. Based on this
theory, lexical entries for verbs are postulated and designed so
that they, together with the proposed phrase structure rules and
lexical entries for elements in the category U, make possible a
unified treatment of case assignment.
AN University Microfilms Order Number ADG87-07094.
AU BEAUVAIS, PAUL JUDE.
IN Michigan State University Ph.D. 1986, 157 pages.
TI A SPEECH ACT THEORY OF METADISCOURSE.
SO DAI v47(12), SecA, pp4374.
DE Language, Linguistics. Education, Teacher Training.
AB Metadiscourse commonly is defined as "discourse about
discoursing." In its brief history, the term has appeared in several
models of text structure; however, theorists disagree concerning the
range of metadiscursive structures and the role of metadiscourse in
a larger theory of text linguistics. This dissertation provides a
detailed history and a critical analysis of the existing
metadiscourse theories, and it offers an alternative theory that
defines metadiscourse as a component of speech act theory.
The first chapter surveys the history of metadiscourse from
Zellig S. Harris' early use of the term to recent studies by Joseph
M. Williams, Avon Crismore, and William J. Vande Kopple.
The second chapter introduces four criteria for evaluating the
utility of theoretical models. The existing metadiscourse models
are analyzed in light of these criteria and are found to contain
imprecise definitions of key terms. The models also are found to be
collections of disparate structures instead of principled systems.
The third chapter provides an overview of important works on
speech act theory by J. L. Austin and John R. Searle. Particular
attention is devoted to the distinction between illocutionary acts
and propositions, the differences between explicit performative
structures and implicit expressions of illocutionary intent, and the
types of illocutionary acts that are possible.
In the fourth chapter, metadiscourse is defined as those
illocutionary force indicators that identify expositive
illocutionary acts. A taxonomy of metadiscourse types is provided,
and canonical forms using performative or near-performative
structures are identified for each type. Partially explicit forms
of metadiscourse that do not provide an attributive subject also are
identified.
The dissertation concludes with suggestions for experimental
studies using the proposed metadiscourse model.
AN University Microfilms Order Number ADG87-08276.
AU BLACK, EZRA WILLIAM.
IN City University of New York Ph.D. 1987, 181 pages.
TI TOWARDS COMPUTATIONAL DISCRIMINATION OF ENGLISH WORD SENSES.
SO DAI v47(12), SecA, pp4374.
DE Language, Linguistics.
AB An experiment is conducted which compares three different
methods of deciding which of three or four senses characterizes each
occurrence of a word for which a Key Word In Context concordance has
been constructed. The three methods consist of a dictionary-based
approach (DG) where categories intended to classify the words and
expressions occurring in each concordance line are simply the
subject codes of a major dictionary; an approach (DS1) in which
categories are obtained via a frequency analysis of words occurring
in the immediate neighborhood of the "node word"--the word in
focus--of the concordance, and of "content" words occurring anywhere
in a given line; and an approach (DS2) chiefly based on the
content-analytic categories obtained by closely reading the
concordances of a 100-type sampling of words occurring in the
20-25-million-token English text source, consisting of the official
proceedings of the Canadian House of Commons. Results are that DG
performs extremely poorly--in fact, near-randomly; DS1 and DS2 yield
better and substantially similar performances. The conclusion is
that domain-general, syntax-based approaches to automatic word sense
discrimination and domain-specific, content-analytic approaches need
and complement each other.
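
A toy version of the DS1 idea, classifying the node word's sense
from frequent content words in the concordance line; the node word,
senses, and cue lists are invented for illustration:

    #include <stdio.h>
    #include <string.h>

    #define NSENSE 2
    static const char *sense_name[NSENSE] = { "riverbank", "finance" };
    static const char *cues[NSENSE][3] = {  /* toy cue categories     */
        { "river", "water", "fishing" },
        { "money", "loan",  "deposit" }
    };

    /* score each sense by cue words found in the concordance line   */
    static int pick_sense(const char *line) {
        int s, c, best = 0, score[NSENSE] = {0};
        for (s = 0; s < NSENSE; s++)
            for (c = 0; c < 3; c++)
                if (strstr(line, cues[s][c]))
                    score[s]++;
        for (s = 1; s < NSENSE; s++)
            if (score[s] > score[best]) best = s;
        return best;
    }

    int main(void) {
        const char *kwic = "applied for a loan at the bank to deposit money";
        printf("node word 'bank' -> sense: %s\n",
               sense_name[pick_sense(kwic)]);
        return 0;
    }
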
AN University Microfilms Order Number ADG87-07013.
AU CAIN, EILEEN.
IN University of Hawaii Ph.D. 1986, 319 pages.
TI LEXICAL RETRIEVAL DISTURBANCES IN A CONDUCTION APHASIC.
SO DAI v47(12), SecA, pp4374.
DE Language, Linguistics. Health Sciences, Speech Pathology.
AB This dissertation is a case study of a conduction aphasic, C.G.
Chapter 1 presents the clinical features of conduction aphasia.
This is followed by a discussion of many of the linguistic
impairments manifested in this syndrome: an impairment affecting the
use of various word classes (including noun facilitation), anomia,
and lexical substitution errors, which include verbal paraphasias
(replacement of a target word by a word similar to it in meaning
and/or form in speech) and paralexias (lexical substitutions in
reading). Lexical substitution errors can affect both the open and
the closed class vocabularies. It will be my position in this
dissertation that all of these errors involve lexical retrieval
disturbances.
Chapter 2 discusses accounts of conduction aphasia in the
literature: the classical disconnection model of Wernicke
(1874/1969), Goldstein's (1948) central aphasia, the encoding
deficit approach, studies attributing the repetition deficit in
conduction aphasia to a defect of audio-verbal short-term memory,
and several phonological approaches. These accounts are described
and evaluated.
Chapter 3 describes the subject in this study and the
methodology used for data-gathering. The corpus includes samples of
spontaneous speech, repetition, naming, reading, and writing, as
well as metalinguistic tasks where the subject was asked to judge
sentences as to their grammaticality and to correct them if
possible.
The fourth chapter presents the results of the Boston Diagnostic
Aphasia Examination (Goodglass and Kaplan 1972) and the Token Test
(De Renzi and Vignolo 1962).
Chapter 5 discusses the results of the tests which I designed in
order to provide a more complete picture of C.G.'s language
abilities and impairments than can be obtained from the Boston
Diagnostic Aphasia Examination or the Token Test. Results from the
Wug Test (Berko 1958) are also included in Chapter 5. The data
includes instances of lexical omissions and substitutions in
repetition, reading, and spontaneous speech.
Chapter 6 presents accounts of the linguistic impairments
described in the previous chapters in terms of lexical selection
errors, relying on models by Merrill Garrett (1980, 1981, 1982) and
Katz and Fodor (1963), as well as network models of the lexicon.
AN University Microfilms Order Number ADG87-10433.
AU CHAO, WYNN.
IN University of Massachusetts Ph.D. 1987, 228 pages.
TI ON ELLIPSIS.
SO DAI V48(02), SecA, pp380.
DE Language, Linguistics.
AB This work is intended as an investigation into elliptical
phenomena in natural language. It is argued that at least two major
classes of elliptical constructions must be distinguished, on the
basis of the presence or absence of their major phrasal heads. The
recovery of the missing material in the class in which the relevant
heads are missing (H- class) is of a syntactic nature, and
this along with other 'characteristic properties' of H-
constructions follow as a direct consequence of the omission of
their syntactic heads. In contrast, constructions in the H+ class
are pronominal in nature, and their characteristic properties follow
from the fact that pronominals may be interpreted either in the
syntax (e.g., as bound variables) or in the discourse
representation.
An account of these constructions is proposed within the
Government-Binding framework, and consists of four main components:
(i) a 'defective' X-bar schema, which allows for the base-generation
of H- constructions, (ii) a licensing principle on
D-structure representations, which constrains the output of the
defective rule schema, and (iii) a process of elliptical
reconstruction at LF, which applies to both H- and H+
constructions, and (iv) the reintroduction of the more general
notion of 'recovery of content' to subsume the more narrowly defined
notion of 'syntactic identification.'
It is argued that this proposal has several desirable
consequences. In the first place, it provides a strictly syntactic
basis for the H+/H- classification, and derives the
distributional and interpretive properties of these constructions
from the interaction of this syntactic fact with existing principles
in the theory. It accounts for interesting similarities between
null arguments (pro) and H+ phenomena. It sheds light on various
aspects of the interpretation of both elliptical and overt
pronominal elements. And finally, it makes predictions about the
range of variation in the manifestations of H- and H+
constructions that one may expect to find in natural language.
The linguistic data in this work is drawn primarily from
English, but constructions from French, Brazilian Portuguese and
Chinese are considered as well.
AN University Microfilms Order Number ADG87-07343.
AU FINN, KATHLEEN ELLEN.
IN Georgetown University Ph.D. 1986, 261 pages.
TI AN INVESTIGATION OF VISIBLE LIP INFORMATION TO BE USED IN AUTOMATED
SPEECH RECOGNITION.
SO DAI v47(12), SecA, pp4376.
DE Language, Linguistics.
AB Although acoustically-based automatic speech recognition (ASR)
systems have witnessed enormous improvements over the past ten
years, they still experience difficulties in several areas,
including operation in noisy environments.
Research with human subjects has shown that visible information
about the talker provides extensive benefit to speech recognition in
difficult listening situations. The primary purpose of this study
was to demonstrate the feasibility of performing automatic speech
recognition based only on optical information. Other purposes
included characterizing the nature of the optical recognition
process, comparing it to human visual speech perception, and
estimating its potential value to acoustic-based ASR.
An optically-based algorithm was developed to recognize a set of
23 English consonant phonemes. Distance measurements were derived
from a set of 12 dots placed on or near the speaker's mouth. The
dots offered a computationally simple means of tracking lip
movements.
After data reduction and selective weighting of the variables,
five distance variables were found capable of identifying 74% of the
phonemes, with no acoustic information whatsoever. The same
variables correctly identified 87% of the phonemes by viseme groups.
The machine's viseme set was very similar to the human set.
The preponderance of the variables measured vertical distances,
suggesting that vertical opening, as opposed to horizontal movement
or area of mouth opening, is a critical cue to optical speech
recognition. The optical recognition process was subjected to the
effects of random visual noise at various levels, and was found to
be fairly robust.
The results of the optical recognition algorithm were compared
against the results of an acoustic algorithm operating over the same
speech tokens. The acoustic algorithm's performance, subjected to
signal-to-noise ratios ranging from +25 dB to +65 dB, measured 64%
correct phoneme recognition. It was estimated that an effective
combination of the optical and acoustic recognition systems could
result in 95% recognition.
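
A sketch of how a token might be classified from a handful of
lip-distance variables by nearest stored template; the phoneme
templates and measurements below are invented, and the dissertation's
actual variable weighting and selection are not reproduced:

    #include <stdio.h>
    #include <math.h>

    #define NV 5                   /* five distance variables */
    #define NP 3                   /* toy phoneme templates   */

    static const char *phoneme[NP] = { "p", "f", "w" };
    static double templ[NP][NV] = {    /* mean lip distances (mm) */
        {  2, 10, 11,  4,  4 },        /* lips pressed shut: /p/  */
        {  5,  9, 10,  6,  3 },        /* lip-teeth contact: /f/  */
        {  6,  6,  7,  9,  9 }         /* rounded lips:      /w/  */
    };

    int main(void) {
        double obs[NV] = { 5.5, 8.5, 10.0, 6.0, 3.5 }, d, best = 1e30;
        int p, v, arg = 0;
        for (p = 0; p < NP; p++) {     /* nearest template wins   */
            d = 0.0;
            for (v = 0; v < NV; v++)
                d += pow(obs[v] - templ[p][v], 2.0);
            if (d < best) { best = d; arg = p; }
        }
        printf("recognized phoneme: /%s/ (distance %.2f)\n",
               phoneme[arg], sqrt(best));
        return 0;
    }
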
AN University Microfilms Order Number ADG87-11796.
AU FUKADA, ATSUSHI.
IN University of Illinois at Urbana-Champaign Ph.D. 1987, 149
pages.
TI PRAGMATICS AND GRAMMATICAL DESCRIPTIONS (JAPANESE).
SO DAI V48(02), SecA, pp380.
DE Language, Linguistics.
AB The goal of the dissertation is to argue for the recent position
that strictly distinguishes between grammar and pragmatics and
discuss its consequences. By now it is clear that "raw" linguistic
data contain many pragmatic elements, whether they are speech act
properties, implicatures, beliefs and intentions of the speech
participants, or what not. In analyzing such data linguists, in my
view, are constantly faced with two problems: one is how to
distinguish pragmatic matters from purely grammatical aspects of the
data, and the other is what to do with such pragmatic elements. The
second problem has to do with a proper conceptualization of the
relationship between pragmatics and grammar. In particular,
linguists must have a clear conception of what the proper domain of
each field is, and what the exact nature of the mode of their
interaction is. This is, in my opinion, one of the outstandingly
important empirical issues in current theoretical linguistics.
The first problem concerns ways of determining, in a given
situation, what is pragmatic and what is grammatical. If one
decides to take the position that denies the heterogeneous nature of
raw linguistic data, this problem will not arise at all. I will
argue, however, that such a position cannot be seriously maintained.
These are the two major issues this study addresses. The
arguments in the body of the thesis will take the form of analyses
or reanalyses of some problematic phenomena in Japanese and English
where one's position on the above issues will have a serious effect
on resulting grammatical descriptions of the phenomena. Two highly
controversial areas of Japanese grammar, i.e. passives and
causatives, issues concerning honorifics and politeness in general,
and an analysis of the English complement-taking verb 'have' are some
of the major descriptive issues taken up in this study. In each
case, it will be shown that the position being argued for can
provide solutions to the controversies and/or lead to what seems to
be the optimal over-all descriptions.
AN University Microfilms Order Number ADG87-08406.
AU GIOTTA, FRANCISC ANTONIO.
IN University of California, Davis Ph.D. 1986, 887 pages.
TI THE ARTICLE SYSTEM OF FRENCH AND FUZZY SEMANTIC MODELS.
SO DAI V47(12), SecA, pp4376.
DE Language, Linguistics.
AB The study offers a comprehensive semantic representation of
definiteness, indefiniteness and partitivity in French article uses.
The crucial assumptions are: (1) natural languages use imprecise
concepts and (2) the article marks the precision/imprecision level
under which a nominal is used and becomes recoverable in the
'universe of discourse'. This suggests the application of a
possibilistic model in the true/false semantic 'continuum' to
account for subjective 'degrees of truth'.
Explicit definition of fuzzy-semantic notions and a considerable
amount of linguistic data supporting the theory make the present
work particularly useful in the areas of French linguistics,
universal grammar and artificial intelligence.
The semantic representation of definite and indefinite reference
in classical logic, pragmatics and discourse dynamics is described
and discussed in terms of adequacy and explanatory power to handle
linguistic evidence found in French article uses.
A tentative categorial representation of fuzzy nominalizations
with respect to verbal subcategorization yields three major types of
assertive content: 'features', 'kinds' and 'abstracts'. This model
is tested in special uses of the article: articled proper nouns,
definites in topicalization and specific and non-specific
indefinites 'a certain', 'such a', 'any'. Fuzzy set-theoretical
categories of functors, 'monomorphism', 'epimorphism',
'representative' and 'isomorphism' define, respectively, indefinite,
definite, demonstrative and articleless nominals. The consideration
of plurality demands a drastic reformulation of the notions of set
and combinatorics. The 'partitive' is analysed as a pairing between
nameability (presence in the universe of discourse) and
assertability (argument-value for the current predication) of a
nominal theme.
The last three chapters are extensions and applications of the
fuzzy semantic model to anaphora, genericness and pragmatic
presuppositions. Anaphora is defined as a behavior map in a
discourse linear-ordered dynamic system (dynamics is described by
standard information retrieval notions: reachability,
observability, minimal realization, free trajectory, memory and
forgetful functors). The incrementation law of discourse is held as
a generalized entailment structure: a 'crisp' conclusion is
inferable from fuzzy premises under some nicety conditions. The
pragmatic component makes use of fuzzy sentential connectors. The
ordinary 'truth-tables' are replaced by offer/challenge risk-values
distributed among speakers in a dialogue interpretation.
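The 'degrees of truth' the abstract invokes descend from fuzzy set
theory, whose standard connectives are easy to state. The sketch
below shows only the classical Zadeh operators, not the functor-based
categories ('monomorphism', 'epimorphism', etc.) that the
dissertation itself develops; the graded judgements are hypothetical:

    def f_and(a, b):
        # Fuzzy conjunction: the minimum of the two truth degrees.
        return min(a, b)

    def f_or(a, b):
        # Fuzzy disjunction: the maximum of the two truth degrees.
        return max(a, b)

    def f_not(a):
        # Fuzzy negation: the complement with respect to 1.
        return 1.0 - a

    # A hypothetical graded judgement: how strongly a nominal counts
    # as recoverable ('definite') in the universe of discourse.
    definite = 0.8
    partitive = 0.4
    print(f_and(definite, f_not(partitive)))   # prints 0.6

On this picture the article would mark where on the [0, 1] continuum
a nominal's recoverability lies, rather than a two-valued choice.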
AN University Microfilms Order Number ADG87-06620.
AU LOBECK, ANNE C.
IN University of Washington Ph.D. 1986, 300 pages.
TI SYNTACTIC CONSTRAINTS ON VP ELLIPSIS.
SO DAI V47(12), SecA, pp4378.
DE Language, Linguistics.
AB The theory of government proposed by Chomsky (1981) is argued to
include a set of principles which restrict the distribution of empty
categories, in particular those derived by movement. This study
investigates two types of empty categories which are not derived by
movement, and which have not been systematically analyzed in
government-binding theory. They include those base-generated empty
categories which arise from ellipsis in NP, S and S', and the set of
non-phrase nodes which Emonds (1985) argues are generated empty in
the base, and filled at a post-transformational level. The aim of
this study is to show that these two types of empty categories are
subject to a uniform set of restrictions, and interact with each
other in a way best expressed in terms of proper government.
To incorporate the restrictions on ellipted categories, a theory
of government is developed where both X('0) heads and specifiers are
potential governors, and where phrasal categories, but not heads,
are subject to the ECP. As a result, intermediate projections,
which are shown to be those which are systematically ellipted in
English, must be properly governed. Such projections are licensed
only by 'specifier' government, not by lexical or antecedent
government.
Ellipsis in S is discussed in detail, where the constraints on
the distribution of empty phrasal projections of V are shown to
follow from principles of specifier government and their interaction
with conditions on empty heads, the Empty Head Conditions and the
Generalized Head Movement Constraint. INFL is analyzed as SP(V),
which, aside from accounting for several properties INFL shares with
specifiers, allows ellipsis across categories to be uniformly
expressed as licensed by specifier government. The proposed
analysis obviates analyzing INFL as a lexical head when tensed but
not untensed, in contrast to previous proposals.
AN This item is not available from University Microfilms International
ADG05-60077.
AU ROBERGE, YVES.
IN The University of British Columbia (Canada) Ph.D. 1986.
TI THE SYNTACTIC RECOVERABILITY OF NULL ARGUMENTS.
SO DAI V48(01), SecA, pp119.
DE Language, Linguistics.
AB In most natural languages, a sentence may include a variety of
missing elements, the recoverability of which is made possible by
different processes. This thesis investigates the type of syntactic
recoverability found in null argument languages. It is supposed
that the mechanisms responsible for this type of recoverability are
deeply embedded in Universal Grammar and that this suggests that
there is no need for a parameter designed to allow empty arguments
per se.
The main goal pursued here is to present a systematic account of
the similarities between recoverability through verbal agreement and
recoverability through clitics. This results in the proposal that
languages with subject clitics and/or object clitics are the same as
languages with rich subject agreement and/or object agreement as far
as the licensing of the empty pronominal pro is concerned.
We then examine the relationship between clitics and overt NPs in
the so-called clitic doubling constructions. The hypothesis
defended here is that subject clitics and object clitics are surface
realizations of the same abstract element and that this can account
for the symmetry existing between various types of clitic regarding
the licensing of pro, the possibilities for doubling, and
extractions out of doubling constructions at S-structure and at LF.
AN University Microfilms Order Number ADG87-10499.
AU ROBERTS, CRAIGE.
IN University of Massachusetts Ph.D. 1987, 364 pages.
TI MODAL SUBORDINATION, ANAPHORA, AND DISTRIBUTIVITY.
SO DAI V48(02), SecA, pp381.
DE Language, Linguistics.
AB The analysis of pronominal anaphora provides us with tools to
explore linguistic structures involving the scope of operators. In
this dissertation, I develop a theory of anaphora, modifying and
extending existing proposals in the literature, and then use it to
explore distributivity and related phenomena.
I assume that pronouns are interpreted as variables, and base a
theory of anaphora on the claim that there are two kinds of
constraints on how these variables may be bound. One type of
constraint involves the relative positions of antecedents and
anaphors in the hierarchical structure of discourse. I propose an
extension of Discourse Representation Theory wherein a relation of
subordination between propositions is induced by their mood. Mood
is analyzed in terms of modality, and establishes the position of a
proposition in the Discourse Representation. The structure which
results constrains both inference and the potential for anaphora.
The other type of constraint on anaphoric binding is based on
the configurational notion of c-command in the Government and
Binding Theory. Recognizing that the Binding Theory and the theory
of discourse anaphora are both necessary in a comprehensive theory
of anaphora permits a clarification and simplification of each. It
is argued that the Binding Principles hold at S-Structure, and that
coindexation is only a guide to interpretation in discourse, and not
necessarily an indication of coreference.
This comprehensive theory of anaphora serves as a tool for the
exploration of the phenomenon of distributivity, including the
group/distributive ambiguity in examples such as 'four men lifted a
piano'. It is argued that distributivity arises in predication when
either the determiner in the subject is quantificational or there is
an implicit or explicit adverbial distributivity operator.
Anaphoric phenomena associated with distributivity are shown to be a
consequence of the scope of operators.
This theory of distributivity, implemented in the mapping from
S-Structures onto Discourse Representations, then provides further
arguments that coindexation is not to be interpreted as coreference,
and also illuminates the contribution of the number of a pronoun to
its interpretation.
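The discourse side of the proposal builds on the standard
accessibility relation of Discourse Representation Theory: a pronoun
may be bound by a referent introduced in its own representation
structure or in any superordinate one. A minimal sketch of that base
relation follows; the modal-subordination extension argued for in the
thesis is not modelled here:

    class DRS:
        # A box of discourse referents, nested under a parent box.
        def __init__(self, referents, parent=None):
            self.referents = set(referents)
            self.parent = parent

        def accessible(self):
            # Referents visible from this box: its own plus those
            # of every box above it in the hierarchy.
            box, out = self, set()
            while box is not None:
                out |= box.referents
                box = box.parent
            return out

    top = DRS({'x'})                    # a farmer(x) ...
    inner = DRS({'y'}, parent=top)      # ... owns a donkey(y)
    print('x' in inner.accessible())    # True: 'he' can be bound
    print('y' in top.accessible())      # False: y stays embedded

Mood-induced subordination, in the dissertation's terms, would decide
where in this hierarchy each proposition's box sits, and thereby
which bindings survive.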
AN University Microfilms Order Number ADG87-07465.
AU ROSS, GARRY.
IN Texas A&M University Ph.D. 1986, 174 pages.
TI COHERENCE THEORY: AN INTERDISCIPLINARY STUDY.
SO DAI V47(12), SecA, pp4378.
DE Language, Linguistics.
AB What writers of composition texts have written about coherence
is not useful to teachers or students. The valuative nature of the
language they use to describe coherence does not further instruction.
They confuse three aspects of language use--correct grammar, correct
usage, and cohesion--with coherence. Their failure to recognize the
separateness of these three has resulted in their writing about
coherence in a way that is uninstructive and confusing.
Those relevant aspects of cohesion which textbooks discuss are
reference, connection, and lexical cohesion. Reference is of two
kinds, exophoric and endophoric. Exophoric reference points outside
the text. Endophoric reference, which can be broken down into
anaphoric and cataphoric reference, functions within a text.
Connection is of four types: additive, adversative, causal, and
temporal. It can point out internal or external relationships.
Lexical cohesion depends on the lexicosemantic relationship between
two words.
The second chapter of Huckleberry Finn shows that standard
English is not what gives a text coherence. Twain's use of dialect
and non-standard English does not detract from the global structure
of the episode. Just as standard English is not a clear requirement
for coherence, cohesion is not a clear requirement for coherence.
Cohesion does not deal with global constraints. Coherence does.
Philosophy offers much to the discussion of coherence. Kant's
Categories and logical form offer a base from which to discern form
in discourse. The coherence theory of truth and the philosophy of
language-in-use suggest that there are realities which can function
as bases for determining the wholeness of a work.
Gestalt psychology and cognitive psychology show that structure
is important to perception. Cognitive structures function in the
process of knowing. These structures are discernible and can be
used as bases to identify textual coherence.
Linguistics offers the global structures of text grammars as
clues to the generation of coherent texts. Linguists working from
the base of language-in-use philosophy have identified structures
that make reference to context of situation to establish the
boundaries of a text. Both linguistic endeavors offer grounds from
which texts can be judged coherent.
AN This item is not available from University Microfilms International
ADG03-74350.
AU SCHEIN, BARRY JAY.
IN Massachusetts Institute of Technology Ph.D. 1986.
TI EVENT LOGIC AND THE INTERPRETATION OF PLURALS.
SO ADD X1986.
DE Language, Linguistics.
(no abstract provided in the DAI database)
AN This item is not available from University Microfilms International
ADG03-74644.
AU SPROAT, RICHARD WILLIAM.
IN Massachusetts Institute of Technology Ph.D. 1985.
TI ON DERIVING THE LEXICON.
SO ADD X1986.
DE Language, Linguistics.
(no abstract provided in the DAI database)
AN University Microfilms Order Number ADG87-06125.
AU WEIR, CARL EDWARD, JR.
IN The University of Texas at Austin Ph.D. 1986, 117 pages.
TI ENGLISH GERUNDIVE CONSTRUCTIONS.
SO DAI V47(12), SecA, pp4379.
DE Language, Linguistics.
AB English gerundives are nominalizations containing a verb phrase
constituent whose initial, non-adverbial element has the 'ing'
morpheme suffixed to it. Often gerundives also contain a possessive
NP constituent, and consequently they are sometimes referred to as
POSS-ing constructions.
Evidence is presented in this dissertation which demonstrates
that gerundives contribute to the formation of characteristic and
episodic readings in sentences in much the same way that bare
plurals do, and that they too may serve as complements of
kind-predicates. Additional evidence is presented to show that on
one reading gerundives denote events. Two fundamental points of
view on how to formally represent events are discussed, and an
analysis of gerundives proposed by Gennaro Chierchia is shown to
have serious flaws.
An analysis of gerundives is formulated in which stages in the
sense of Greg Carlson's work are abandoned--this is made possible
through the use of Davidsonian predicates, since the event argument
in such a predicate can be accessed to compute the relevant
spatio-temporal slice of an individual under consideration. Also
abandoned is Carlson's view that bare plurals unambiguously serve as
proper names for kinds of things. Instead, they are taken to denote
kinds of things or indefinite manifestations of kinds of things. By
asserting that bare plurals have an indefinite reading, it is
possible to explain their ability to arise in the object position of
existential 'there' constructions, a position in which definite NPs
have traditionally been observed to sound anomalous. Gerundives are
claimed to be similar to bare plurals, except that instead of
denoting indefinite manifestations of kinds on one of their
readings, they denote a definite event or state.
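The Davidsonian predicates the analysis relies on add an explicit
event argument that can then be located in space and time. A standard
textbook logical form of this kind (an illustration, not one of the
dissertation's own examples) is:

    % Davidsonian logical form with an explicit event argument e.
    \exists e\,[\mathit{sing}(e,\mathit{mary}) \wedge
                \mathit{in}(e,\mathit{paris}) \wedge \mathit{past}(e)]

The event variable e is what, in the abstract's words, "can be
accessed to compute the relevant spatio-temporal slice of an
individual under consideration".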
end of part 2 of 3
*************************