leff@smu.UUCP (Laurence Leff) (01/15/89)
Subject: AI-Related Dissertations from SIGART No. 104, part 2 of 3
The following is a list of dissertation titles and
abstracts related to Artificial Intelligence taken
from the Dissertation Abstracts International
(DAI) database. The list is assembled by Susanne
Humphrey and myself and is published in the SIGART
Newsletter (that list doesn't include the abstracts).
The dissertation titles and abstracts contained here
are published with the permission of University Microfilms
International, publishers of the DAI database. University
Microfilms has granted permission for this list to be
redistributed electronically and for extracts and
hardcopies to be made of it, provided that this notice
is included and provided that the list is not sold.
Copies of the dissertations may be obtained by
addressing your request to:
University Microfilms International
Dissertation Copies
Post Office Box 1764
Ann Arbor, Michigan 48106
or by telephoning (toll-free) 1-800-521-3042
(except for Michigan, Hawaii, and Alaska).
In Canada: 1-800-268-6090.
From SIGART Newsletter No. 104
File 2 of 3
Engineering to Linguistics
-----------------------------------------------------------------------
AN University Microfilms Order Number ADG87-17025.
AU PIETRANSKI, JOHN FRANK.
IN Louisiana Tech University D.E. 1987, 282 pages.
TI APPLICATION OF AN EXPERT FUZZY LOGIC CONTROLLER TO A ROTARY DRYING
PROCESS.
SO DAI v48(05), SecB, pp1440.
DE Engineering, Chemical.
AB An expert fuzzy logic controller has been developed that consists
of both precise and imprecise state descriptions and decision
rules. The expert fuzzy logic controller incorporates two
contrasting types of controllers under an intelligent supervisory
expert system which supervises control of a simulated industrial
drying process. The rotary drying operation contains multiple
process input and output variables. Some of the variables are
nonlinear and exhibit interaction effects. The proposed control
system concurrently uses deterministic "crisp logic" controllers
for typical, familiar control situations that are precise and well
known, and the more complex "fuzzy logic" algorithms for situations
that demand linguistically valued determinations. The control
system structure is hierarchical in
that the knowledge-based expert system determines the selection
and sequence of the control rules utilized by the fuzzy logic
controllers. The fuzzy logic controllers are themselves master
controllers in a cascade configuration and drive the setpoint for
their respective slave flowrate controllers. The slave controllers
are represented by the crisp logic flow controllers.
The control system configuration uses multiple predetermined rule
sets for the fuzzy logic controllers. The selection of the
appropriate rule set is determined on-line by the expert system
which is designated the expert fuzzy supervisor. Supervisory
control of the process is performed using a five task algorithm.
The tasks include interpretation of the process data, evaluation
of the current state attributes, comparison of the current state
and goal state attributes, determination of requisite process
setpoint changes, and selection and sequence of valid operations
which are to be executed.
A major advantage of the configuration used in this study is the
control system's ability to automatically drive the process to
selected steady-states as determined by the rules and facts within
the knowledge-base expert-system supervisor. In addition, this
work demonstrates that an overall objective for a typical
industrial operation, such as the rotary drying process, can be
designed and implemented using intelligent supervisory control in
the decision-making process. To realize the control objective, a
standard decision-making procedure was developed that incorporates
the desired results and anticipated problem situations into a
suitable knowledge-based supervisory
program. The expert fuzzy logic controller responded
well to changes in the requirements for the process attributes.
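The mixed crisp/fuzzy arrangement described above can be illustrated with a minimal Mamdani-style fuzzy rule evaluation (a sketch only: the membership functions, linguistic terms, and rule set below are hypothetical and are not taken from the dissertation):

```python
# Minimal Mamdani-style fuzzy controller sketch (illustrative only;
# the variable names and rule set are hypothetical).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for the moisture error (setpoint - measured), in %.
ERROR_TERMS = {
    "negative": lambda e: tri(e, -10.0, -5.0, 0.0),
    "zero":     lambda e: tri(e, -5.0, 0.0, 5.0),
    "positive": lambda e: tri(e, 0.0, 5.0, 10.0),
}

# Terms for the change to the slave flowrate controller's setpoint.
OUTPUT_TERMS = {
    "decrease": lambda u: tri(u, -2.0, -1.0, 0.0),
    "hold":     lambda u: tri(u, -1.0, 0.0, 1.0),
    "increase": lambda u: tri(u, 0.0, 1.0, 2.0),
}
RULES = [("negative", "decrease"), ("zero", "hold"),
         ("positive", "increase")]

def fuzzy_setpoint_change(error):
    """Min-inference, max-aggregation, centroid defuzzification."""
    us = [u / 50.0 for u in range(-100, 101)]   # discretized universe
    num = den = 0.0
    for u in us:
        mu = max(min(ERROR_TERMS[e](error), OUTPUT_TERMS[o](u))
                 for e, o in RULES)
        num += mu * u
        den += mu
    return num / den if den else 0.0
```

In the cascade described by the abstract, the returned value would drive the setpoint of a crisp slave flowrate controller.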
AN University Microfilms Order Number ADG87-12265.
AU SIRCAR, JAYANTA KUMAR.
IN University of Maryland Ph.D 1986, 534 pages.
TI COMPUTER AIDED WATERSHED SEGMENTATION FOR SPATIALLY DISTRIBUTED
HYDROLOGIC MODELING.
SO DAI v48(05), SecB, pp1457.
DE Engineering, Civil.
AB Physically based hydrologic models that simulate streamflow in
terms of watershed characteristics are important tools in the
water resource decision making process. The parameters for
physically based models in current use are defined as averages for
the watersheds involved. A logical step in improving the quality
of the simulated streamflows would be the development of
practical, spatially distributed models that are capable of
simulating the hydrologic consequences of physical variations
within watersheds. A key element in spatially distributed
hydrologic simulation is the interdependence between runoff and a
three-dimensional flow conveyance network defined by the
topography of the watershed. While remote sensing can provide an
adequate definition of land cover and, in some cases, even the
soil type and soil moisture, an efficient means of defining
elements of the topographic network and related catchment areas
must be available if practical versions of spatially distributed
hydrologic models are to be realized. If the models are to be
available for general use on larger watersheds, this topographic
analysis must be computer assisted using digital format data
extracted from hard copy maps.
The objective of the present dissertation is to use a binary image
to: (1) define the elevation, slope and aspect of any point on a
map surface; and (2) delineate those catchments that drain into
any user defined channel segment or point along the channel
system.
The technique developed is a hardware/software system that uses a
graph-theoretic approach to represent and manipulate raster
scanned digital contours in a computer. Segmentation of the
watersheds is accomplished through the development of a set of
"expert heuristics" that simulate the manual operations of
subbasin delineation on maps. The designed system includes
components that: (1) label the digitized contour traces output
from a scanner with an elevation attribute; (2) create a
corresponding digital matrix of elevations that describe the
spatial variation of topography; (3) register digitized drainage
networks to the elevation database using a geographic information
system framework; and (4) delineate the basin or subbasins that
contribute runoff to user defined points or reaches along the
channel network. (Abstract shortened with permission of author.).
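The catchment-delineation step can be illustrated with the common D8 steepest-descent scheme on a small elevation matrix (a toy sketch; the dissertation's system is graph-theoretic and contour-based, not this simple raster method):

```python
# D8-style catchment delineation sketch: from a small elevation
# matrix, find every cell whose steepest-descent path reaches a
# chosen outlet cell.

def d8_downhill(elev):
    """Map each cell to its steepest lower 8-neighbour (None at pits)."""
    rows, cols = len(elev), len(elev[0])
    flow = {}
    for r in range(rows):
        for c in range(cols):
            best, drop = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                        d = elev[r][c] - elev[rr][cc]
                        if d > drop:
                            best, drop = (rr, cc), d
            flow[(r, c)] = best
    return flow

def catchment(elev, outlet):
    """All cells draining (transitively) through the outlet cell."""
    flow = d8_downhill(elev)
    basin = {outlet}
    changed = True
    while changed:          # fixpoint: add cells whose downhill
        changed = False     # neighbour is already in the basin
        for cell, down in flow.items():
            if down in basin and cell not in basin:
                basin.add(cell)
                changed = True
    return basin
```

On a plane tilted toward one corner, every cell drains to that corner, so the catchment of the low corner is the whole grid.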
AN University Microfilms Order Number ADG87-23098.
AU THURSTON, HOWARD MICHAEL.
IN Stanford University Ph.D 1987, 251 pages.
TI AN EXPERT-KNOWLEDGE-BASED APPROACH TO BUILDING SEISMIC PERFORMANCE
EVALUATION.
SO DAI v48(07), SecB.
DE Engineering, Civil.
AB Evaluating the seismic performance of buildings often requires
substantial engineering expertise gained from technical training
and accumulated practical experience. The research work described
herein investigates the applicability of expert-knowledge-based
techniques in evaluating the seismic performance of buildings.
Expert-knowledge-based techniques are defined as those which are
critically dependent upon expertise, expert opinion, or
rules-of-thumb in order to achieve satisfactory results.
There is considerable data available on the seismic performance of
unreinforced masonry buildings. These buildings generally tend to
perform very poorly when subjected to seismic forces. However,
certain buildings in this category have characteristics which
greatly improve their seismic resistance.
By contrast, an important category of building which lacks
meaningful seismic performance data is tilt-up construction.
Furthermore, modern tilt-up buildings are being built for retail
and commercial applications. There is concern among some members
of the engineering community about the seismic performance of some
new types of tilt-up buildings.
Expert-knowledge-based damage evaluation methodologies are
presented for multistory unreinforced masonry and tilt-up
buildings. These methodologies are building-specific, and are
tailored to best represent the damage data and evaluative
expertise available for each building type. Finally, a general
framework is developed to aid in designing an
expert-knowledge-based methodology for evaluating any building.
The methodology for unreinforced masonry buildings is valuable in
making explicit the characteristics that are critical to the
performance of these structures. Although developed on the basis of
damage data from Mainland China, the methodology could be modified
to more closely reflect the damage data and construction practices
in other countries or regions. The tilt-up methodologies show good
agreement with United States data for such structures.
Expert-knowledge-based methodologies have the potential to be
useful tools in the hands of knowledgeable professionals in
understanding the damageability of buildings subjected to seismic
loads. In addition, these methodologies make the expertise used in
the evaluation process explicit, thereby aiding in understanding
how engineering experts make such evaluations.
AN University Microfilms Order Number ADG87-25105.
AU BABA, MUTASIM FUAD.
IN Virginia Polytechnic Institute and State University Ph.D 1987
179 pages.
TI INTELLIGENT AND INTEGRATED LOAD MANAGEMENT SYSTEM.
SO DAI v48(08), SecB.
DE Engineering, Electronics and Electrical.
AB The design, simulation and evaluation of an intelligent and
integrated load management system is presented in this
dissertation. The objective of this research was to apply modern
computer and communication technology to influence customer use of
electricity in ways that would produce desired changes in the
utility's load shape. Peak clipping (reduction of peak load) using
direct load control is the primary application of this research.
The prototype computerized communication and control package
developed during this work has demonstrated the feasibility of
this concept.
The load management system consists of a network of computers,
data and graphics terminals, controllers, modems and other
communication hardware, and the necessary software. The network of
interactive computers divides the responsibility for monitoring
meteorological data and electric load and for performing other
functions. These functions include data collection, processing
and archiving, load forecasting, load modeling, information
display, and alarm processing. Each of these functions requires a
certain amount of intelligence depending on the sophistication and
complexity of that function. Also, a high level of reliability
has been provided to each function to guarantee an uninterrupted
operation of the system. A full scale simulation of this concept
was carried out in the laboratory using five microcomputers and
the necessary communication hardware.
An important and integral part of the research effort is the
development of the short-term load forecast, load models and the
decision support system using rule-based algorithms and expert
systems. Each of these functions has shown the ability to produce
more accurate results compared to classical techniques while at
the same time requiring much less computing time and historical
data. Development of these functions has made the use of
microcomputers for constructing an integrated load management
system possible and practical. These functions can also be applied
to other applications in the electric utility industry while
retaining their importance and contribution. In addition, the use
of rule-based algorithms and expert systems promises to yield
significant benefits in using microcomputers in the load
management area.
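The peak-clipping decision can be illustrated with a toy rule-based direct-load-control routine (all load names, capacities, priorities, and thresholds below are hypothetical):

```python
# Toy rule-based peak-clipping sketch: shed controllable loads, in
# priority order, until the forecast peak falls under capacity.
# (Illustrative only; loads and numbers are hypothetical.)

CONTROLLABLE_LOADS = [        # (name, kW, priority: higher = shed first)
    ("water_heaters", 400.0, 3),
    ("ac_cycling",    600.0, 2),
    ("pool_pumps",    150.0, 1),
]

def peak_clip(forecast_kw, capacity_kw):
    """Return the names of loads to shed for the forecast peak."""
    excess = forecast_kw - capacity_kw
    shed = []
    for name, kw, _ in sorted(CONTROLLABLE_LOADS, key=lambda l: -l[2]):
        if excess <= 0:
            break
        shed.append(name)
        excess -= kw
    return shed
```

A real expert system would add rules for weather, time of day, and customer constraints; this shows only the basic rule-firing shape of direct load control.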
AN This item is not available from University Microfilms International
ADG05-61010.
AU DA MOTA TENORIO, MANUEL FERNANDO.
IN University of Southern California Ph.D 1987.
TI PARALLEL PROCESSING TECHNIQUES FOR PRODUCTION SYSTEMS.
SO DAI v48(07), SecB.
DE Engineering, Electronics and Electrical.
AB Production systems static and dynamic characteristics are modeled
with the use of graph grammar, in order to create means to
increase the processing efficiency and the use of parallel
computation through compile time analysis. The model is used to
explicate rule interaction, so that proofs of equivalence between
knowledge bases can be attempted. Solely relying on program static
characteristics shown by the model, a series of observations are
made to determine the system dynamic characteristics and
modifications to the original knowledge base are suggested as a
means of increasing efficiency and decreasing overall search and
computational effort. Dependences between the rules are analyzed
and different approaches for automatic detection are presented.
From rule dependences, tools for programming environments, logical
evaluation of search spaces and Petri net models of production
systems are shown. An algorithm for the allocation and
partitioning of a production system into a multiprocessor system
is also shown, and addresses the problems of communication and
execution of these systems in parallel. Finally, the results of a
simulator constructed to test several strategies, networks, and
algorithms are presented. (Copies available exclusively from
Micrographics Department, Doheny Library, USC, Los Angeles, CA
90089-0182.).
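The compile-time dependence analysis can be sketched as set intersection over working-memory patterns (a deliberately simplified, hypothetical fact representation; the dissertation uses graph grammars, not this flat encoding):

```python
# Sketch of compile-time rule-dependence detection for a production
# system: rule B depends on rule A if A's asserted or retracted
# facts overlap B's condition patterns.  Facts are represented here
# as flat predicate names, a simplification for illustration.

def depends_on(rule_a, rule_b):
    """True if firing rule_a can change whether rule_b matches."""
    touched = rule_a["assert"] | rule_a["retract"]
    return bool(touched & rule_b["conditions"])

def independent_pairs(rules):
    """Rule pairs that may safely fire in parallel."""
    names = sorted(rules)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if not depends_on(rules[a], rules[b])
            and not depends_on(rules[b], rules[a])]

rules = {
    "r1": {"conditions": {"goal"},    "assert": {"subgoal"},
           "retract": set()},
    "r2": {"conditions": {"subgoal"}, "assert": {"done"},
           "retract": set()},
    "r3": {"conditions": {"input"},   "assert": {"parsed"},
           "retract": set()},
}
```

Pairs reported as independent are candidates for allocation to different processors, since neither rule can enable or disable the other.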
AN University Microfilms Order Number ADG87-22633.
AU GRIER, JAMES THOMAS.
IN The Union for Experimenting Colleges and Universities Ph.D
1986, 76 pages.
TI A PROCEDURAL MODEL IN A KNOWLEDGE SYSTEM OF A GENERALIZED
INTELLIGENT DECISION SUPPORT SYSTEM WHICH EMPLOYS PSYCHOLOGICAL AND
BIOLOGICAL CHARACTERISTICS.
SO DAI v48(07), SecB.
DE Engineering, Electronics and Electrical.
AB In the past decade there has been a notable increase in the use of
computer-based information systems to support decision-making in
organizations. Much of the research on models of Decision Support
Systems (DSS) has been concerned with organizational
characteristics, computer systems (hardware), or information
structure. Very few models have been designed to integrate the
psychological and biological characteristics of the decision maker
into the decision-making process.
This dissertation creates a structured example which could contain
the above characteristics. The research describes a framework of a
knowledge-based Decision Support system which has a Management
Information System (MIS) as a subset. The knowledge system model
would contain individual behavioral characteristics of the user
(e.g., cognitive style, personality, attitudes, biorhythms
(emotional, physical, intellectual), values, and other variables
that might be predictors of the user's decision patterns).
These user characteristics might impinge upon the decision maker's
ability as an information processor and have the potential to
affect the quality of decisions.
The model is designed to integrate recent research done by
behaviorists on the behavioral aspects of the user and his or her
interface with the information system, and on the system's
interaction with the organizational context and decision
environment.
AN University Microfilms Order Number ADG87-18499.
AU HABERSTROH, RICHARD.
IN Polytechnic University Ph.D 1987, 183 pages.
TI GRAECO-LATIN SQUARES AND ASSOCIATED METHODS FOR THE PROBLEM OF LINE
DETECTION IN DIGITAL IMAGES.
SO DAI v48(05), SecB, pp1464.
DE Engineering, Electronics and Electrical.
AB In this dissertation the four-way experimental design known as the
Graeco-Latin square (GLS) is used as a basis for robust statistics
for the detection of lines of elevated grey level intensity in a
digital image with a noisy and/or structured background. All of
the methods that are developed use a 5 x 5 pixel mask analyzed as
a nonrandomized GLS. The statistical tests which comprise the
detector algorithms are unbiased for certain families of narrow,
straight lines of arbitrary orientation, although four natural
directions produce the highest power of test.
The fundamental problem of line detection by means of analysis of
variance (ANOVA) is developed using one-way ANOVA,
while two-way ANOVA illuminates the general advantages of
multiple-way ANOVA techniques when the background of the image
contains some unknown type of structure. These techniques are
extended to the GLS with the algorithms described in detail. The
basic GLS line detectors are improved upon by a "reduced" GLS
detector, in which the alternative subspace is restricted to more
closely correspond to the line targets of interest. Special
methods for the suppression of false alarms due to edges in the
image are proposed, which include an implied shape test of the
treatment means directly in the test statistic. An adaptive GLS
procedure is also discussed, in which the number of "ways" of the
final test is based upon the information in the basic statistics
coming out of the GLS.
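The one-way ANOVA idea can be sketched on a single mask (an illustration only: rows serve as the treatment groups and the example values are hypothetical; the actual detectors use the Graeco-Latin square design, not this one-way test):

```python
# One-way ANOVA line-detector sketch on a 5 x 5 mask: treat each row
# as a "treatment" group.  A bright horizontal line inflates the
# between-row variance relative to the within-row variance, giving a
# large F ratio; a flat or purely noisy mask gives a small one.

def anova_f(mask):
    """F statistic of a one-way ANOVA with rows as groups."""
    k = len(mask)                      # number of groups (rows)
    n = len(mask[0])                   # observations per group
    grand = sum(sum(row) for row in mask) / (k * n)
    means = [sum(row) / n for row in mask]
    ss_between = n * sum((m - grand) ** 2 for m in means)
    ss_within = sum((x - m) ** 2
                    for row, m in zip(mask, means) for x in row)
    msw = ss_within / (k * (n - 1))
    if msw == 0:                       # noiseless groups
        return float("inf")
    return (ss_between / (k - 1)) / msw
```

Rotating the grouping (columns, diagonals) tests the other natural line directions; the GLS design tests four directions at once from one layout.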
The problem of the estimation and removal of background structure
leads to the introduction of some modified analysis of covariance
(ANCOVA) models and their application to the line detection
problem. It is shown how such models can completely remove image
background structure of the form of an intensity plane with an
arbitrary gradient. The methods of edge suppression are adapted
for use with this model, and the ANCOVA model is also applied to
images containing correlated noise. It is demonstrated how some
elements of the structure induced by the correlation can be
estimated and removed by employing the ANCOVA model.
AN University Microfilms Order Number ADG87-21406.
AU HSYUNG, NIEY-BOR.
IN The University of Iowa Ph.D 1987, 239 pages.
TI MORPHOLOGICAL TEXTURE ANALYSIS.
SO DAI v48(07), SecB.
DE Engineering, Electronics and Electrical.
AB A new strategy has been developed for the analysis of texture
called morphological texture analysis. The morphological texture
is defined as "the pattern of the gray level distribution within
the boundary of the field of view". This new strategy was
developed from synthesizing three distinct disciplines:
statistical textural features, symmetry operations, and
mathematical methods including variational principles and fabric
tensors. This methodology is superior to currently favored methods
of texture analysis--which are structural and statistical
approaches--because it analyzes texture without assuming that the
image is periodic and because it is independent of neighborhood
gray levels.
In addition, this new strategy makes a major contribution in
the mathematical representation of morphological texture. This
representation is based on the gray level distribution
G(r,$\theta$) and boundary conditions of the image. These have
been derived using two different methods, each of which produces
the same result in the form of a Bessel-Fourier function.
Statistical textural features are calculated based on the
variations of the gray level from point to point along the r and
$\theta$ directions of the field of view. The symmetry operation,
which is used to characterize the picture, can be classified as
two different types: rotational symmetry and translational
symmetry. These two symmetries can be obtained directly from the
coefficients of the Bessel-Fourier function. Some training sets
(templates) were tested by employing the Shape Analyzer$\sp{\rm
TM}$, which is a 68000 UNIX-based system programmable in the C
language. A restoration technique was used in this research in
order to be assured that the scanning process is independent of
the illumination system.
Some applications have been reported to demonstrate the potential
and utility of morphological texture analysis; these include
classification of natural pictures, interpretation of images, and
differentiation of biological cells. An experimental comparison of
human vision and the computerized textural feature has been tested
in this research.
Classification of Brodatz's textures based on morphological
textural parameters has also been performed. Results
indicate that natural images can be recognized by this new method
with 90% accuracy.
AN University Microfilms Order Number ADG87-15777.
AU JOHNSON, KENNETH.
IN Clarkson University Ph.D 1987, 175 pages.
TI THE BIOLOGICAL IMPLEMENTATION OF NONLINEAR SPACE-TIME CODING IN THE
RETINA AND ITS APPLICATION TO THE LOW LEVEL REPRESENTATION OF VISUAL
INFORMATION IN MACHINE VISION SYSTEMS.
SO DAI v48(06), SecB, pp1771.
DE Engineering, Electronics and Electrical.
AB In this dissertation a model for the processing and representation
of early visual information is proposed. The results fall into
four categories. First, a new theory describing the functional
nature of neural coding in the retina is proposed. Second, the
surround delay is incorporated into the difference of Gaussian
model. Third, a discrete version of the model is developed.
Fourth, the implications of these findings on the structure and
function of simple cells in the visual cortex are considered.
A thorough review of the eye's projection characteristics and the
sampling of the retinal image by the cone mosaic is presented.
A theory of processing in the retina that relates the temporal
behavior of the bipolar responses to the opponent coding of
chromatic information in ganglion cells is introduced. The theory
indicates that what are interpreted as X and Y type responses are
in reality chromatically dependent responses. The proposed
mechanisms are used to explain how color is perceived in the
temporally modulated achromatic stimulus of Festinger et al.
(1971). The importance of stimulus timing in generating the
different colors is verified by the inner layer timing results of
Sugawara (1985).
The delayed surround of Werblin and Dowling (1969b) is
incorporated into Rodieck's (1965) difference of Gaussian (DOG)
model. The new model is analyzed in the image and frequency
domains. The image domain representation shows that the receptive
field is capable of representing both nonmoving and moving edges
in its output. Frequency domain analysis shows that the behavior
of the operator is identical to the recently documented
spatio-temporally coupled responses of Dawis et al. (1984).
A practical implementation of the receptive field model is
developed for use on a digital imaging system. The response of the
discrete operator to simple steps and ramps is demonstrated for
linear and homomorphic processing schemes. Its relationship to
optical flow and the responses of simple cells in the visual
cortex is considered. It is shown that the output of the
convolutional operator proposed here is more compatible with the
responses of the simple cells than the proposals made by Marr
(1982). (Abstract shortened with permission of author.).
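The delayed-surround DOG operator described above can be sketched in one dimension (an illustration under assumed parameters: the sigmas, delay handling, and border clamping below are choices of this sketch, not Johnson's model):

```python
# Difference-of-Gaussians receptive field with a delayed surround:
# the output at time t is the narrow centre Gaussian applied to the
# current frame minus the wide surround Gaussian applied to the
# frame `delay` steps earlier.  All parameters here are assumed.
import math

def gaussian(sigma, radius):
    k = [math.exp(-x * x / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]           # normalized kernel

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp edges
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_delayed(frames, t, delay, sc=1.0, ss=3.0, radius=6):
    """Centre on frame t minus surround on frame t - delay."""
    centre = convolve(frames[t], gaussian(sc, radius))
    surround = convolve(frames[max(t - delay, 0)], gaussian(ss, radius))
    return [c - s for c, s in zip(centre, surround)]
```

For a static edge the operator behaves like an ordinary DOG (zero in flat regions, biphasic at the edge); for a moving edge the delayed surround leaves a trailing residual, which is the coupling the abstract refers to.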
AN This item is not available from University Microfilms International
ADG05-60749.
AU LIU, ZHI-QIANG.
IN University of Alberta (Canada) Ph.D 1987.
TI ASPECTS OF IMAGE PROCESSING IN NOISY ENVIRONMENTS: RESTORATION,
DETECTION, AND PATTERN RECOGNITION.
SO DAI v48(05), SecB, pp1468.
DE Engineering, Electronics and Electrical.
AB In one sense, the aim of computer vision is to extract and
interpret the organization of images in ways which are useful in
specific problem domains. However, such images taken in realistic
environments inevitably suffer from various degradations including
noise, blurring, and geometrical distortion. These degradations
pose many problems for implementing computer vision techniques. An
extensive literature review has identified the interesting areas
requiring further study. Of these areas, nonstationary image
restoration, and image signal detection in noisy environments are
pursued extensively in this thesis. Four processing algorithms and
techniques have been developed in the framework of stochastic
estimation theory.
In order to obtain perceptually more satisfying restored images,
image fields are assumed to be nonstationary. Two restoration
algorithms are derived, namely, a sequential Kalman filter and an
adaptive iterative filter. The sequential filter is developed
based on a causal state-space image model. The adaptive filter
uses a modified received image model. Both algorithms include some
local spatial activity measurements in the filter gains such that
the restored images retain edge information. Simulations show that
the visual quality of the restored images is significantly
improved.
Matched filters have long been used for signal detection. They are
optimal only when the noise is stationary and white. In order to
effectively apply the matched filters to images embedded in
nonstationary, nonwhite noise backgrounds, a new adaptive
postfiltering technique is derived. Superiority of this technique
is shown by experiments.
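The matched-filter baseline mentioned above can be sketched directly (a standard correlation detector for stationary white noise, not the adaptive postfiltering technique developed in the thesis):

```python
# Matched-filter sketch: in stationary white noise the optimal
# linear detector correlates the signal with the known template;
# the correlation peak marks the most likely target position.

def matched_filter(signal, template):
    """Correlation score at every valid offset."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def detect(signal, template):
    """Offset with the largest correlation score."""
    scores = matched_filter(signal, template)
    return max(range(len(scores)), key=scores.__getitem__)
```

When the noise is nonstationary or nonwhite this detector is no longer optimal, which is what motivates the adaptive postfiltering the abstract describes.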
In the fourth algorithm a hierarchical approach to multi-object
detection is presented. In this approach, the detection of objects
is divided into three steps: prefiltering, pattern recognition,
and detection. The images are prefiltered to suppress noise which
would otherwise affect the entire detecting operation. In pattern
recognition, a linear least-squares mapping technique is applied
for classification. In the detection part, a known object of a
class is used to match the received image of the same class. It is
shown by simulations that with this approach, the computation time
is reduced by more than 50%.
Finally, suggestions for further research in the related areas are
also included.
AN University Microfilms Order Number ADG87-24311.
AU MEADOR, JACK LEWIS.
IN Washington State University Ph.D 1987, 144 pages.
TI PATTERN RECOGNITION AND INTERPRETATION BY A BINARY-TREE OF
PROCESSORS.
SO DAI v48(08), SecB.
DE Engineering, Electronics and Electrical.
AB High-level-language-architecture research is an area which focuses
upon narrowing the so-called "semantic gap" between high-level
programming languages and computer hardware. The goal is to
improve processing efficiency by bringing hardware and software
design closer together. This goal is typically achieved by
considering software and hardware aspects concurrently as part of
an overall design process.
A variety of approaches exist within this area, ranging from
machines having optimized instruction sets to direct execution
architectures where high-level tokens are fetched from program
memory like low-level instructions. A key aspect of any
high-level-language architecture is that the execution algorithm
can be modeled as language translation. Any high-level-language
architecture is effectively a direct implementation of an
interpreter.
A large number of multiprocessor organizations exist today. A
fundamental problem of multiprocessing is becoming less one of how
to physically organize the processor, and more one of how to
program it. The difficulty associated with programming
multiprocessors is characterized here as a "parallel semantic
gap".
The research described within is motivated by the direct
interpretation model used to narrow the sequential semantic gap.
The direct implementation of an interpreter on some multiprocessor
organization is proposed. The specific approach is to study
syntax-directed interpretation on a binary-tree multiprocessor
organization.
Any interpretation scheme must use some pattern recognition
algorithm to discern the actions that programs are to carry out.
This dissertation presents two new recognition algorithms for a
binary-tree multiprocessor and studies the application of these
algorithms to parallel interpretation.
Language interpretation is not the only application which these
algorithms have. Compelling research directions are suggested for
architectures supporting expert systems and complex pattern
analysis. Included among these are machines for information
retrieval from a semantic-network knowledge base and ones which
perform scene analysis by detecting graph isomorphisms.
AN University Microfilms Order Number ADG87-25190.
AU MIYAHARA, SHUNJI.
IN University of Pennsylvania Ph.D 1987, 198 pages.
TI AUTOMATED RADAR TARGET RECOGNITION BASED ON MODELS OF NEURAL NETS.
SO DAI v48(08), SecB.
DE Engineering, Electronics and Electrical.
AB This dissertation describes a new approach to target recognition,
using radar returns and parallel processing based on models of
neural networks. Target recognition usually consists of three
steps: data acquisition, data representation (generation of
feature vectors) and data classification. Two methods of target
recognition are proposed and several results from their study are
discussed. The methods are: (1) the use of sinogram
representations as a learning set in an associative memory, based
on models of neural nets, as the classifier (such memories are
known to be robust and fault tolerant); and (2) the use of
polarization representations in a neural-net-based associative
memory as the classifier.
The advantages of these methods are: (1) they represent a new
approach to signal processing and target recognition, (2) they
have the potential to identify targets from small fractions of the
target data (robustness), (3) they are insensitive to slight
degradation in system hardware (fault-tolerance), (4) they can
identify targets automatically by generating identifying labels or
symbols, and (5) they can be implemented efficiently using optical
hardware, since optics provides the parallelism and massive
interconnectivity required in neural net implementations of
associative memory employing parallel processing.
Using microwave scattering data of scaled model targets, the
concepts for the target recognition were demonstrated by computer
simulation of a 1024 (32 $\times$ 32) element neural net
associative memory based on the so-called outer product model. The
simulations show that partial input, consisting of less than 10%
of the total information, can identify the targets. This result
illustrates the robustness of associative memory and the potential
usefulness of the approach.
2-D optical implementations of a neural net of 8 $\times$ 8 (= 64)
binary neurons were studied. Fault tolerance and robustness are
examined, using a four dimensional (8 $\times$ 8 $\times$ 8
$\times$ 8) clipped outer product ternary $T\sb{ijkl}$ mask to
establish the weighted interconnections of the net, and electronic
feedback based on a closed-loop TV system. The performance was
found to be in agreement with that of computer simulation, even
though aberrations of the lenses and defects of the system were
present.
These results confirm the practical suitability of the
opto-electronic approach to the neural net implementation and pave
the way for the implementation of larger networks.
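The outer-product associative memory underlying these simulations can be sketched with bipolar neurons (a simplified illustration: the thesis's clipped ternary mask and optical feedback are not modeled, and the patterns below are arbitrary):

```python
# Outer-product (Hopfield-style) associative memory sketch with
# bipolar (+1/-1) neurons: the weight matrix is the sum of pattern
# outer products with zero diagonal, and recall iterates thresholded
# synchronous updates from a corrupted or partial cue.

def train(patterns):
    n = len(patterns[0])
    return [[0 if i == j else
             sum(p[i] * p[j] for p in patterns)
             for j in range(n)]
            for i in range(n)]

def recall(w, cue, steps=5):
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(wij * sj for wij, sj in zip(row, s)) >= 0 else -1
             for row in w]
    return s
```

Starting from a cue with flipped bits, the net settles onto the nearest stored pattern, which is the robustness-to-partial-input property the abstract demonstrates at much larger scale.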
AN University Microfilms Order Number ADG87-19949.
AU REBER, WILLIAM LOUIS, JR.
IN University of California, Los Angeles Ph.D 1987, 195 pages.
TI ARTIFICIAL NEURAL SYSTEM DESIGN: ROTATION AND SCALE INVARIANT
PATTERN RECOGNITION.
SO DAI v48(06), SecB, pp1774.
DE Engineering, Electronics and Electrical.
AB The design of an artificial neural system (ANS) for generalizing
the recognition of two dimensional patterns having variations in
rotation and scale is demonstrated using alpha-numeric characters.
An ANS is a multi-layer network of parallel computational
structures. These structures are based upon large numbers of
artificial neurons or processing elements organized into disjoint
sets or slabs having the interconnections and behavior appropriate
for performing specific computational tasks. They provide the ANS
with a parallel processing capability for performing signal
processing, image processing, and various adaptive learning
algorithms.
An ANS design methodology is defined and used for developing the
experimental ANS design. It consists of the functional
specification of the ANS, the design of the ANS and its parallel
computational structures, and the verification of the ANS. The
experimental ANS design consists of a hierarchy of slabs for
display of input patterns, for pattern transformation, for pattern
classification, and for display of the ANS output.
A sequence of pattern transformations, including the log, polar,
and 1-dimensional Fourier transforms, is developed for producing
a generalized pattern representation for rotation and scale
changes. The pattern transformations are embedded within the ANS,
and implemented by designing appropriate parallel computational
structures for each transformation. All of the necessary
computations are performed in parallel by the artificial neural
system. Mapping functions and behavior equations are defined for
specifying the necessary processing element interconnections and
behavior. The generalized pattern representations for a typical
set of alpha-numeric characters are used for training a
nearest-neighbor classifier. Computer simulations are used to
verify the performance of the ANS. Results demonstrate rotation,
scale, and both rotation and scale invariant pattern
classification.
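The transform chain described above can be illustrated outside the neural architecture. On a log-polar grid, a rotation of the input becomes a cyclic shift along the angular axis and a scale change becomes a shift along the log-radius axis, and the 1-D FFT magnitude is invariant to such cyclic shifts. This sketch (the grid sizes and nearest-neighbour resampling are assumptions for illustration) shows the rotation half of that invariance:

```python
import numpy as np

# Resample an image on a log-polar grid, then take 1-D FFT magnitudes
# along the angular axis: a rotation of the input becomes a cyclic shift
# in theta, to which the magnitude spectrum is invariant.
def log_polar_features(img, n_r=32, n_theta=64):
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r_max = min(cy, cx)
    radii = np.exp(np.linspace(0, np.log(r_max), n_r))        # log-spaced
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.round(cy + np.outer(radii, np.sin(thetas))).astype(int)
    xs = np.round(cx + np.outer(radii, np.cos(thetas))).astype(int)
    lp = img[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]    # (n_r, n_theta)
    return np.abs(np.fft.fft(lp, axis=1))
```

A 90-degree rotation of an odd-sized image maps the sampling grid onto itself, so the features of `img` and `np.rot90(img)` agree exactly; arbitrary rotations agree up to resampling error, and tolerance to scale changes follows the same way from shifts along the log-radius axis.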
AN This item is not available from University Microfilms International
ADG05-61056.
AU WEBER, ALLAN GUY.
IN University of Southern California Ph.D 1987.
TI HIERARCHICAL TEXTURE SEGMENTATION USING MULTIPLE RESOLUTION
OPERATORS.
SO DAI v48(07), SecB.
DE Engineering, Electronics and Electrical.
AB Texture information can be used as a basis for segmenting an image
into regions. Image texture is inherently a local-area phenomenon,
sensitive to the size of the area examined. What appears as a
non-textured area at one resolution level can appear as a region
with distinctive texture at a different resolution. The
performance of texture segmentation schemes is often highly
dependent on the size of the local area operator used to generate
the classification features. The size of the operators has a major
impact on the performance near the boundaries between texture
regions. Features based on large operators perform better overall
but are highly affected by the mixing of class statistics when the
operator overlaps more than one texture, such as near the texture
boundaries. Features based on small operators show poorer overall
performance but are more likely to maintain an acceptable
performance level in the boundary areas. The trade-off is between
statistical accuracy of the classification and the final
accuracy of the texture region boundaries. The problem being
studied here is how to combine information from texture
classifiers operating at different resolutions into a segmentation
process that gives acceptable performance in all areas of an
image.
In this study, the nature of the mixing problem is examined and a
solution is proposed based on using multiple resolution features
in a hierarchical decision process. A key component of the
solution is an analysis of the image data prior to performing any
classification. From this analysis, we determine the expected
location in the feature space of the mixture points. Three
different methods of isolating the mixture points in the feature
space are proposed and tested. During the initial classification
phase, image points that are within the mixture areas are left
unclassified. Spatial information is incorporated into the
segmentation process by the use of a local area cohesion
operation. The final segmentation is based on a hierarchical
decision process that uses the classification choice at both
resolutions and the spatial cohesion data. Several decision
processes are tested that use the information in different ways.
(Copies available exclusively from Micrographics Department,
Doheny Library, USC, Los Angeles, CA 90089-0182.).
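The two-stage idea, classifying with large-window features, withholding decisions in the statistical mixture region, and resolving those pixels with small-window features, can be sketched for a simple two-class, single-feature case. The window sizes, the mixture band, and the use of the window mean as the feature are assumptions for illustration; the dissertation's features and decision processes are richer.

```python
import numpy as np

def box_mean(img, k):
    # mean over a k x k neighbourhood, computed with 2-D cumulative sums
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    c = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def segment(img, mu0, mu1, k_big=15, k_small=3, band=0.25):
    thr = (mu0 + mu1) / 2
    coarse = box_mean(img, k_big)            # reliable away from boundaries
    fine = box_mean(img, k_small)            # noisier, but localized
    labels = (coarse > thr).astype(int)
    # pixels whose coarse feature sits in the mixture band are left
    # unclassified at first, then resolved by the fine feature
    mixed = np.abs(coarse - thr) < band * abs(mu1 - mu0)
    labels[mixed] = (fine[mixed] > thr).astype(int)
    return labels
```

On a synthetic image made of two flat textures with means `mu0` and `mu1`, the coarse classifier alone mislabels a band up to `k_big // 2` pixels wide around the boundary, while the combined rule recovers the boundary to within about `k_small // 2` pixels.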
AN University Microfilms Order Number ADG87-22405.
AU HENNINGSEN, JACQUELINE R.
IN The University of Nebraska - Lincoln Ph.D 1987, 190 pages.
TI A CONCEPTUAL FRAMEWORK FOR DISTRIBUTED EXPERT SYSTEM USE IN TIME
SENSITIVE HIERARCHICAL CONTROL.
SO DAI v48(07), SecB.
DE Engineering, Industrial.
AB There are many problems faced by decision makers involved in
complex, time sensitive hierarchical control systems. These may
include maintaining knowledge of the functional status of the
system components, forecasting the impact of past and future
events, transferring information to a distant or poorly connected
location, changing the requirements for an operation according to
resources available, or creating an independent course of action
when system connectivity fails. These problems are
transdisciplinary in nature, so decision makers in a variety of
organizations face them.
This research develops a framework for the use of distributed
expert systems in support of time sensitive hierarchical control
systems. Attention is focused on determining ways to enhance the
likelihood that a system will remain functional during a crisis in
which one or more of the system nodes fail. Options in the use of
distributed expert systems for this purpose are developed
following investigation of related research in the areas of
cooperative and distributed systems.
A prototype of a generic system model called DES (distributed
expert systems), currently under development, is described. DES is
a "trimular"
form of support structure, where a trimule is defined to be a
combination of a human decision agent, a component system model
and an expert system. This concept is an extension of the domular
theory of Tenney and Sandell (1981).
AN University Microfilms Order Number ADG87-20509.
AU KHAJENOORI, SOHEIL.
IN University of Central Florida Ph.D 1986, 219 pages.
TI COMPUTER-ASSISTED DESIGN AND ASSEMBLY OF STANDARDIZED MODULES USING
A ROBOT MANIPULATOR.
SO DAI v48(06), SecB, pp1777.
DE Engineering, Industrial.
AB Graphical Motion to Robot Motion Translator (GMRMT), a new system
for the automatic generation of robot motion control programs, is
presented. The GMRMT system is an interactive, menu-driven
software package which allows the design and subsequent assembly
of three-dimensional structures built from standardized component
modules to be accomplished using a common database. The
standardized component modules used as example building blocks in
the project are rectangular solids of several sizes.
Presented are an object placement-sequencer algorithm, a height
specification and interference checking algorithm, and a
balance-checking algorithm. To avoid the creation of dynamic
obstacles in the assembly robot's motion paths, and possible
collision and interference of the robot arm with these obstacles,
the proper sequencing of the blocks in the design database is
essential. The object placement-sequencer algorithm is responsible
for proper sequencing of the blocks in the design database prior
to the building of the designed structure by the robot. The height
specification and interference checking algorithm automatically
generates the proper positioning of a block in the design by
performing a sequential search over the accumulated design
structure. The stacking feasibility of the blocks in the design is
verified by the balance-checking algorithm, prior to the
acceptance of the block as a permanent part of the design.
When the structure design has been completed, it may be visualized
using interactive computer graphics techniques. Then, upon the
user's request, the system will produce the "VAL" robot motion
control language program necessary to construct the designed
structure and download the program to the robot controller for
execution.
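The balance-checking step can be illustrated with a minimal one-dimensional sketch. The `Block` fields and the centre-of-mass test below are assumptions for illustration; the dissertation works with three-dimensional rectangular solids. A placement is accepted only if the combined centre of mass of the blocks resting on a support projects over the support's footprint.

```python
from dataclasses import dataclass

@dataclass
class Block:
    x: float        # left edge of the block
    width: float
    mass: float

def com_x(blocks):
    # horizontal centre of mass of a group of blocks
    total = sum(b.mass for b in blocks)
    return sum(b.mass * (b.x + b.width / 2) for b in blocks) / total

def is_balanced(support, stacked):
    # `stacked`: blocks resting, directly or indirectly, on `support`
    return support.x <= com_x(stacked) <= support.x + support.width

base = Block(x=0.0, width=4.0, mass=1.0)
ok = is_balanced(base, [Block(x=1.0, width=2.0, mass=1.0)])        # COM at 2.0
overhang = is_balanced(base, [Block(x=3.5, width=2.0, mass=1.0)])  # COM at 4.5
```

In the full algorithm this test runs before each block is accepted as a permanent part of the design; a placement that fails is rejected or re-sequenced.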
AN University Microfilms Order Number ADG87-24899.
AU CRANE, CARL DAVID, III.
IN The University of Florida Ph.D 1987, 193 pages.
TI MOTION PLANNING AND CONTROL OF ROBOT MANIPULATORS VIA APPLICATION OF
A COMPUTER GRAPHICS ANIMATED DISPLAY.
SO DAI v48(08), SecB.
DE Engineering, Mechanical.
AB It is often necessary in a hazardous environment for an operator
to effectively control the motion of a robot manipulator which
cannot be observed directly. The manipulator may be either
directly guided via use of a joystick or similar device, or it may
be autonomously controlled, in which case it is desirable to
preview and monitor robot motions. A computer graphics based
system has been developed which provides an operator with an
improved method of planning, evaluating, and directly controlling
robot motions.
During the direct control of a remote manipulator with a joystick
device, the operator requires considerable sensory information in
order to perform complex tasks. Visual feedback which shows the
manipulator and surrounding workspace is clearly most important. A
graphics program which operates on a Silicon Graphics IRIS
workstation has been developed which provides this visual imagery.
The graphics system is capable of generating a solid color
representation of the manipulator at refresh rates in excess of 10
Hz. This rapid image generation rate is important in that it
allows the user to zoom in, change the vantage point, or translate
the image in real time. Each image of the manipulator is formed
from joint angle data that are supplied continuously to the
graphics system. In addition, obstacle location data are
communicated to the graphics system so that the surrounding
workspace can be accurately displayed.
A unique obstacle collision warning feature has also been
incorporated into the system. Obstacles are monitored to determine
whether any part of the manipulator comes close to or strikes the
object. The collision warning algorithm utilizes custom graphics
hardware in order to change the color of the obstacle and produce
an audible sound as a warning if any part of the manipulator
approaches closer than some established criterion. The obstacle
warning calculations are performed continuously and in real time.
The graphics system which has been developed has advanced
man-machine interaction in that improved operator efficiency and
confidence have resulted. Continued technological developments and
system integration will result in much more advanced interface
systems in the future.
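The collision-warning criterion can be sketched independently of the graphics hardware. Point-sampled links, spherical obstacles, and the clearance value are assumptions for illustration: a warning is raised whenever any sampled point on the manipulator comes within a clearance threshold of an obstacle surface.

```python
import numpy as np

def collision_warnings(link_points, obstacles, clearance=0.05):
    # link_points: (N, 3) array of points sampled along the manipulator;
    # obstacles: list of (center, radius) spheres.
    warnings = []
    for center, radius in obstacles:
        # signed distance from each sampled point to the sphere surface
        d = np.linalg.norm(link_points - np.asarray(center), axis=1) - radius
        if (d < clearance).any():
            warnings.append((tuple(center), float(d.min())))
    return warnings

points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
obstacles = [((2.0, 0.0, 0.0), 0.5),    # clears the threshold: no warning
             ((1.04, 0.0, 0.0), 0.02)]  # 0.02 away: inside the clearance
alerts = collision_warnings(points, obstacles)
```

In the system described above, the same kind of proximity test drives the colour change of the obstacle and the audible alarm.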
AN University Microfilms Order Number ADG87-19113.
AU LI, CHING.
IN The University of Wisconsin - Madison Ph.D 1987, 170 pages.
TI ON-LINE BEARING CONDITION MONITORING BY PATTERN RECOGNITION AND
MATCHED FILTERING.
SO DAI v48(08), SecB.
DE Engineering, Mechanical.
AB This thesis presents an on-line bearing condition monitoring
system which incorporates the following three newly developed
signal processing algorithms into a digital computer: a bearing
localized defect detection/diagnosis scheme, an in-process
defect-induced resonance self-learning scheme, and a matched
filter. Thus, the
capability of automatically detecting/diagnosing the presence of
localized defects in a given bearing system as well as evaluating
the extent of bearing damage may be built into a digital computer.
For automatic detection/diagnosis of bearing localized defects, a
pattern recognition analysis technique has been developed for
analyzing vibration signatures carrying the necessary information
about the defect induced vibration bursts. By extracting
normalized and dimensionless features with short-time signal
processing techniques and applying linear discriminant functions,
the discriminatory information provided is 38% more accurate than
the best analysis procedure presently available.
In order to identify on-line the resonances excited by the impulse
produced by a damaged bearing for a given bearing system without
relying on its historical records and human intelligence, an
in-process defect induced resonance self-learning scheme was
developed. By pinpointing every instant at which a roller strikes a
localized defect, two segments of subsequences may be formed from
just one piece of bearing vibration signature. A template learning
=========================================================================
Date: 10 March 1988, 21:30:13 CST