leff@smu.CSNET (Laurence Leff) (06/09/87)
TECHNICAL REPORT LISTING
Artificial Intelligence Laboratory
University of Texas at Austin
Taylor Hall 2.124
Attn: Chrissie Sawyer
Austin, TX 78712
(512) 471-9562
ai.chrissie@r20.utexas.edu
May 1987
ALL REPORTS FURNISHED FREE OF CHARGE
-----------------------------------------------------------------
AI87-55 Expert Systems for Monitoring and Control, D. Dvorak,
May 1987.
Many large-scale industrial processes and services are centrally
monitored and controlled under the supervision of trained
operators. Common examples are electrical power plants, chemical
refineries, air-traffic control, and telephone networks --- all
impressively complex systems that are challenging to understand
and operate correctly. The task of the operator is one of
continuous, real-time monitoring and control, with feedback. The
job can be difficult when the physical system is complex (tight
coupling and complex interactions). Also, there may be faults not
only in the system but also in its sensors and controls. Deciding
the correct control action during a crisis can be difficult; a bad
decision can be disastrous.
This paper surveys existing work in the field of knowledge-based
systems that assist plant/process operators in the task of
monitoring and control. The goal here is to better define the
information processing problems and identify key requirements for
an automated operator assistant. A high-level design of an
``expert system for operators'' is presented. The design relies
on a functional/causal model of the physical system as a source of
deep knowledge (in addition to several sources of shallow
knowledge). The major processes described in this design are
focusing, model-building, tracking, envisioning and advising.
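As a rough illustration of how these five processes might fit
together, the sketch below strings them into a single monitoring
cycle. Everything in it -- the data formats, the stub bodies, and
all names other than the five stage names taken from this abstract
-- is an assumption made purely for illustration, not the design
described in the report.

    # Schematic monitoring cycle; the stage names come from the abstract,
    # all bodies and data formats are illustrative assumptions.

    def focus(readings):
        """Focusing: restrict attention to parameters whose alarms are active."""
        return {name: value for name, (value, alarm) in readings.items() if alarm}

    def build_model(region):
        """Model-building: assemble a (here, trivial) model of the focused region."""
        return {"parameters": sorted(region)}

    def track(model, region):
        """Tracking: pair the model's parameters with their observed values."""
        return {p: region[p] for p in model["parameters"]}

    def envision(state):
        """Envisioning: enumerate possible future behaviors (placeholder)."""
        return ["%s remains abnormal" % p for p in state]

    def advise(futures):
        """Advising: turn envisioned behaviors into operator messages."""
        return ["Investigate: " + f for f in futures]

    readings = {"drum-level": (0.2, True), "feed-flow": (1.0, False)}
    region = focus(readings)
    print(advise(envision(track(build_model(region), region))))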
-----------------------------------------------------------------
AI87-54 Coda: An Extended Debugger for PROLOG, D. Plummer, April
1987.
In this paper I describe Coda, an extension of the de facto
standard debugger which presents more information about the
execution of the program to the user as the program is debugged.
Coda extends the standard debugger in a number of ways. First,
Coda allows the user to interact with the pattern matching
computation step. Thus the reason for the failure of a particular
goal may be more precisely determined by the programmer. Second,
Coda displays the program trace in terms of the clauses of the
program rather than the goals that are executed. Thus, the
program trace is directly related to the program that was written,
and is at a level more appropriate to the programmer than that of
the standard debugger. Finally, Coda allows finer control over
the information that is displayed by the debugger, by an extended
command set and a more powerful language for describing ``spy
points''.
-----------------------------------------------------------------
AI87-53 Protos: An Exemplar-Based Learning Apprentice,
E. R. Bareiss, B. W. Porter and C. C. Weir, April 1987.
Building Protos, a learning apprentice system for heuristic
classification, has forced us to scrutinize the usefulness of
inductive learning and deductive problem solving. While these
inference methods have been widely studied in machine learning,
their seductive elegance in artificial domains (e.g., mathematics)
does not carry over to natural domains (e.g., medicine). This
paper briefly describes our rationale in the Protos system for
relegating inductive learning and deductive problem solving to
minor roles in support of retaining, indexing, and matching
exemplars. The problems that arise from "lazy generalization" are
described along with their solutions in Protos. Finally, an
example of Protos in the domain of clinical audiology is
discussed. (to appear in the Proceedings of the Fourth
International Workshop on Machine Learning, University of
California at Irvine, Morgan Kaufmann Publishers, 1987.)
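The retain/index/match cycle mentioned above can be pictured with a
bare-bones sketch. The encoding below (cases as feature sets,
similarity as feature overlap, hypothetical audiological features)
is only an assumed caricature; Protos itself relies on
knowledge-based matching and explanation rather than this simple
measure.

    # Minimal exemplar-based classification; cases are sets of features.

    def similarity(case, exemplar):
        """Fraction of the exemplar's features also present in the case."""
        return len(case & exemplar) / len(exemplar) if exemplar else 0.0

    def classify(case, exemplars_by_category):
        """Return the category whose retained exemplar best matches the case."""
        score, category = max(((similarity(case, ex), cat)
                               for cat, exs in exemplars_by_category.items()
                               for ex in exs),
                              default=(0.0, None))
        return category, score

    def retain(case, category, exemplars_by_category):
        """'Lazy generalization': keep the new case as just another exemplar."""
        exemplars_by_category.setdefault(category, []).append(case)

    memory = {}
    retain({"ear-pain", "low-frequency-loss"}, "conductive-loss", memory)
    retain({"tinnitus", "high-frequency-loss"}, "sensorineural-loss", memory)
    print(classify({"ear-pain", "tinnitus", "low-frequency-loss"}, memory))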
-----------------------------------------------------------------
AI87-52 TMYCIN Expert System Tool, G. Novak, April 1987.
TMYCIN ("Tiny EMYCIN") is a simple expert system tool patterned
after the EMYCIN tool developed at Stanford. TMYCIN does not
attempt to provide all of the features of EMYCIN; it is intended
to provide the most commonly used features in a package that is
small and simple. The internal implementation of TMYCIN has been
written from scratch and is therefore different from EMYCIN.
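For orientation, the fragment below sketches the certainty-factor
style of rule evaluation popularized by MYCIN and EMYCIN, which is
the kind of mechanism a tool in this family supplies. The rule
format and function names are illustrative assumptions, not
TMYCIN's actual Lisp interface.

    # EMYCIN-style certainty factors (illustration only, not TMYCIN code).

    def combine_cf(cf1, cf2):
        """Combine two positive certainty factors for the same conclusion."""
        return cf1 + cf2 * (1.0 - cf1)

    def apply_rule(premises, conclusion, rule_cf, known):
        """If all premises are known, conclude with min(premise CFs) * rule CF."""
        if all(p in known for p in premises):
            cf = min(known[p] for p in premises) * rule_cf
            known[conclusion] = combine_cf(known.get(conclusion, 0.0), cf)

    facts = {"fever": 0.8, "cough": 0.9}
    apply_rule(["fever", "cough"], "flu", 0.7, facts)   # contributes 0.8 * 0.7 = 0.56
    apply_rule(["fever"], "flu", 0.4, facts)            # contributes 0.8 * 0.4 = 0.32
    print(facts["flu"])                                 # combined CF, approximately 0.70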
-----------------------------------------------------------------
AI87-51 The UTIPS Image Processing System, V. Hwang, April 1987.
This document describes the UTIPS image processing software for
the SYMBOLICS 3600 series Lisp machine, with an optional 8-bit
color system, running the Genera 7.0 operating system. UTIPS
contains more than 40 image manipulation and image processing
functions and is implemented in Common Lisp.
-----------------------------------------------------------------
AI87-50 A Survey of Psychological Models of Concept
Representation, E. R. Bareiss and B. W. Porter, March
1987.
This paper surveys the psychological literature on human
representation of object concepts. There are three major views of
how people represent concepts: the Classical, Probabilistic, and
Exemplar Views. The Classical View holds that a concept is defined
by a collection of features which are singly necessary and jointly
sufficient for classifying an object as an instance of the
concept. The Probabilistic View relaxes the notion of a definition
to make classification a probabilistic function of the subset of
concept members' possible features which are present in an object.
The Exemplar View holds that a concept is defined extensionally;
past instances of a concept are retained for use in
classification. Each of these views is discussed in turn. Its
strengths and weaknesses are pointed out, and major research in
the spirit of that view is identified. The paper also provides an
appendix which defines some of the terms used in this area of the
psychological literature and discusses some important supporting
ideas.
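The contrast among the three views can be made concrete with a toy
sketch. The encodings below (instances and features as sets, an
assumed weighting and threshold) are for illustration only and are
not drawn from the paper.

    # The three views of concept membership, in caricature.

    def classical_member(instance, defining_features):
        """Classical View: every defining feature must be present."""
        return defining_features <= instance

    def probabilistic_member(instance, feature_weights, threshold):
        """Probabilistic View: weighted sum of exhibited features vs. a threshold."""
        return sum(w for f, w in feature_weights.items() if f in instance) >= threshold

    def exemplar_member(instance, stored_exemplars, min_overlap):
        """Exemplar View: sufficient similarity to some retained past instance."""
        return any(len(instance & ex) >= min_overlap for ex in stored_exemplars)

    bird = {"flies", "has-feathers", "lays-eggs"}
    print(classical_member(bird, {"has-feathers", "lays-eggs"}))            # True
    print(probabilistic_member(bird, {"flies": 0.5, "sings": 0.5}, 0.5))    # True
    print(exemplar_member(bird, [{"flies", "sings", "has-feathers"}], 2))   # True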
-----------------------------------------------------------------
AI87-49 A Prototype Natural Language Understanding Legal
Consultant, L. K. Branting, February 1987.
Because of the intimate connection between legal reasoning and
natural language understanding, a robust natural language
interface is important for a useful automated legal reasoning
system. This project attempts to demonstrate how existing
techniques of natural language understanding and discourse
analysis could be used to construct a system able to interpret and
resolve legal problems and express their solutions. The program
uses frame instantiation both in text understanding and in legal
analysis.
-----------------------------------------------------------------
AI87-48 A Program for Translating English Sentences into Lisp
Programs, A. Flatau, February 1987.
-----------------------------------------------------------------
AI87-47 Interpreting Narratives with Evaluative Statements,
A. Bhattacharjee, January 1987. (Master's Thesis)
Discourse structure is one of the foremost areas of study in
natural language. Discourse is concluded to be governed by
structures that go beyond the sentence.
-----------------------------------------------------------------
AI87-46 A Parallel Implementation of Iterative-Deepening-A*,
V. Nageshwara Rao, Vipin Kumar and K. Ramesh, February
1987.
This paper presents a parallel version of the Iterative-deepening
A* (IDA*) algorithm. Iterative-Deepening-A* is an important
admissible algorithm for state-space search which has been shown
to be optimal both in time and space for a wide variety of
state-space search problems. Our parallel version retains all the
nice properties of sequential IDA* and yet does not appear to be
limited in the amount of parallelism it can exploit. To test its
effectiveness, we have implemented this algorithm on a Sequent
Balance 8000 parallel processor to solve the 15-puzzle problem,
and have been able to obtain almost linear speedups on the 9
processors available on our machine. On machines with a larger
number of processors, we expect the speedup to continue to grow
linearly. The parallel version seems suitable even for
loosely coupled architectures such as the Hypercube.
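For reference, a compact sketch of the sequential IDA* loop that
the parallel version builds on is given below; the contribution of
the report is to distribute the subtrees searched under each cost
bound across processors. The sketch is not the authors'
implementation, and the successor and heuristic functions are
parameters the caller must supply.

    # Sequential Iterative-Deepening-A* (illustrative sketch).
    import math

    def ida_star(start, is_goal, successors, h):
        """successors(s) yields (next_state, step_cost); h must be admissible."""
        bound = h(start)
        path = [start]

        def search(g, bound):
            state = path[-1]
            f = g + h(state)
            if f > bound:
                return f                      # smallest f exceeding the bound
            if is_goal(state):
                return True
            minimum = math.inf
            for nxt, cost in successors(state):
                if nxt in path:               # avoid trivial cycles
                    continue
                path.append(nxt)
                result = search(g + cost, bound)
                if result is True:
                    return True
                minimum = min(minimum, result)
                path.pop()
            return minimum

        while True:
            result = search(0, bound)
            if result is True:
                return path                   # solution found within current bound
            if result is math.inf:
                return None                   # search space exhausted
            bound = result                    # deepen to the next larger bound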
-----------------------------------------------------------------
AI87-45 Parallel Heuristic Search on a Shared Memory
Multiprocessor, V. Nageshwara Rao, V. Kumar, and
K. Ramesh, February 1987.
In this paper we discuss two different ways of parallelizing the
A* state space search algorithm for shared memory multiprocessors.
We implement these parallel algorithms to solve the 15-puzzle
problem and the traveling salesman problem (TSP) on the Sequent
Balance 8000, a shared-memory multiprocessor, and present
performance results. The results are very encouraging: we are able
to obtain speedups of as much as 7 to 8 on the available
processors. Since A* is a very commonly used
heuristic search algorithm, we expect these parallel versions to
be effective for a variety of problems.
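One natural shared-memory organization is a single open list from
which all processors draw work; the sketch below illustrates that
idea with a lock-protected priority queue. It is schematic only:
termination is simplified (workers stop at the first goal expanded,
which no longer guarantees an optimal answer under concurrency),
and it should not be read as either of the two schemes actually
evaluated in the report.

    # Parallel A* with a shared, lock-protected open list (schematic).
    import heapq, itertools, threading

    def parallel_astar(start, is_goal, successors, h, num_workers=4):
        counter = itertools.count()          # tie-breaker for heap entries
        open_list = [(h(start), next(counter), start, 0, [start])]
        best_g = {start: 0}
        lock = threading.Lock()
        found = []

        def worker():
            while True:
                with lock:
                    if found or not open_list:
                        return
                    _, _, state, g, path = heapq.heappop(open_list)
                if is_goal(state):
                    with lock:
                        found.append(path)
                    return
                for nxt, cost in successors(state):
                    ng = g + cost
                    with lock:
                        if ng < best_g.get(nxt, float("inf")):
                            best_g[nxt] = ng
                            heapq.heappush(open_list, (ng + h(nxt), next(counter),
                                                       nxt, ng, path + [nxt]))

        threads = [threading.Thread(target=worker) for _ in range(num_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return found[0] if found else None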
-----------------------------------------------------------------
AI87-44 Developments Towards Constraining Qualitative
Simulation, W. W. Lee, C. Chiu and B. J. Kuipers,
January 1987.
Qualitative simulation is a useful and powerful tool for causal
reasoning with physical mechanisms. However, qualitative
simulation algorithms can predict impossible behaviors. In
working with Kuipers' QSIM algorithm, an approach to identifying
and eliminating spurious predictions was developed. This approach
addresses underlying shortcomings of qualitative simulation, which
are due to a combination of locality and qualitative description.
The differential equation describing a mechanism is analyzed to
produce constraints which eliminate spurious predictions. This
paper describes the application of this approach to the simulation
of a damped spring. One local and two global constraints
eliminated all spurious behaviors in simulating one period of
oscillation. Some implications are discussed.
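As a worked illustration of the kind of global constraint involved
(assuming the standard formulation of the damped spring; the
report's actual constraints may differ in form):

    m \ddot{x} + b \dot{x} + k x = 0, \qquad m, b, k > 0, \qquad
    E = \tfrac{1}{2} m \dot{x}^{2} + \tfrac{1}{2} k x^{2}, \qquad
    \dot{E} = \dot{x}\,(m \ddot{x} + k x) = -\,b \dot{x}^{2} \le 0 .

Since the energy E can never increase along a real trajectory, any
predicted behavior whose oscillation amplitude grows violates this
condition and can be discarded; a non-increasing energy function is
one plausible instance of a global constraint of the kind referred
to above.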
-----------------------------------------------------------------
AI87-43 A Survey of Specialized Parallel Architectures Designed
to Support Knowledge Representation, Daniel P. Miranker,
January 1987.
While modern data processing systems have evolved from arithmetic
calculating machines, knowledge representation schemes, developed
for artificial intelligence programs, have evolved from cognitive
models of intelligent behavior. Instead of the basic arithmetic
operations used in data processing, knowledge representation
schemes require extensive set manipulation, pattern matching and
graph theoretic abilities. Thus, the implementation of artificial
intelligence programs on conventional computers has suffered from
a mismatch of basic abilities.
This paper surveys computer architectures that attempt to overcome
this mismatch by developing computer organizations that
intrinsically support the structure and basic operations required
to manipulate artificial intelligence knowledge bases. Common to
all these proposals is the introduction of large scale parallelism
in the form of many simple processing elements.
-----------------------------------------------------------------
AI86-42 Extracting Data Base Knowledge from Medical Text,
Hae-Chang Rim and Robert F. Simmons, December 1986.
Relational database knowledge can be extracted automatically from
appropriate sublanguage texts. This paper shows the feasibility
of translating a particular sublanguage text concerning cold
symptoms and treatments into semantically uniform propositional
forms, and thus suggests a possible method for reducing the
number of inference rules needed to relate a text to a question.
-----------------------------------------------------------------
AI86-41 An Intelligent Backtracking Scheme for Prolog, Vipin
Kumar and Yow-Jian Lin, December 1986.
This paper presents a scheme for intelligent backtracking in
Prolog programs. This scheme incurs a small overhead and yet can
eliminate a great deal of redundant backtracking. The main features
of our scheme are as follows. Overhead due to intelligent
backtracking occurs only when a goal fails: upon failure of a goal,
our scheme performs some extra work to select a backtrack point,
but unlike the overhead in many other approaches, this overhead is
not very large. To demonstrate the usefulness of our scheme, we
have modified the PLM level I simulator to incorporate our
intelligent backtracking scheme, and have investigated its
performance on a number of problems.
-----------------------------------------------------------------
AI86-40 Question Answering with Rhetorical Relations, Wanying
Jin and Robert F. Simmons, December 1986.
A method is presented for computing discourse relations among text
sentences and using the resulting data to generate new sentences
to enrich a text knowledge base for the purpose of improving
question answering. The coherence relations defined by Hobbs and
Lockman and Klappholz, and the event coherence relations developed
by Alterman are employed as a theoretical basis. Rhetorical
sentences, which enrich the knowledge base, are derived from the
discourse relations. The resulting text knowledge base is able to
answer questions that would otherwise fail to match the text.
-----------------------------------------------------------------
AI86-39 An Intelligent Remote File Server, Kim Korner, December
1986. (PhD Dissertation)
Limitations of current disk block caching strategies are
discussed. A new model for providing remote file service using
knowledge-based caching algorithms is proposed. The knowledge-based
algorithms generate expectations of user process behavior which are
used to provide hints to the file server. Surplus resources of the
remote file server permit incorporation of these hints into caching
algorithms. The research involves gathering trace data from a
modified Unix kernel and trace-driven simulation of remote file
server models. Comparisons are made between conventional,
knowledge-based, and optimal models. Further applications of
knowledge-based strategies in operating systems are discussed.
-----------------------------------------------------------------
AI86-38 Intuitionistic Modelling and Simulation of Mechanical
Devices, Chin Yang Chee, December 1986. (Master's
Thesis)
The objective of this thesis is to explore and develop
representational schemes for both the textual and pictorial
descriptions of a mechanical device (e.g., an engine), so that an
explanation of the device's operation can be produced by generating
an animation of the device while highlighting the part of the text
being simulated. For this purpose we have built a prototype which
includes a highly interactive, hierarchical 2-D graphics editor for
constructing and editing devices, and a semantic parser which parses
the text into events of the device's operation; these events are
then used as a "script" for animating the device.
-----------------------------------------------------------------
AI86-37 A Review of the First International Meeting on Advances
in Learning, Bruce W. Porter, November 1986. (To appear
in Machine Learning, Kluwer Academic Publishers, 1987)
This report is a review of the First International Meeting on
Advances in Learning (IMAL) held the week of July 28, 1986 in Les
Arcs, France.
-----------------------------------------------------------------
AI86-36 A Framework for Intelligent Backtracking in Logic
Programs, Vipin Kumar and Yow-Jian Lin, November 1986.
This paper presents a scheme for intelligent backtracking in
Horn-clause logic programs. The scheme is simple and yet
effective for backtracking within a clause. We also present a
framework for using extra analysis to make within-clause
backtracking even more intelligent and also to perform
across-the-clause backtracking intelligently. The primary
strength of our scheme over other schemes is that it incurs very
small overhead and yet can eliminate a lot of redundant
backtracking. Our backtracking scheme can also be used when
AND-parallelism is exploited in logic programs (i.e., when
multiple literals of a clause are executed simultaneously).
-----------------------------------------------------------------
AI86-35 PROTOS: An Experiment in Knowledge Acquisition for
Heuristic Classification Tasks, Bruce W. Porter and
E. Ray Bareiss, August 1986.
PROTOS is an experiment in acquiring, applying and revising expert
knowledge for heuristic classification. Learned knowledge
includes exemplar-based categories and domain knowledge to support
explanation and pattern matching. Research on PROTOS has revealed
fundamental issues in concept formation and classification
concerning representation of ill-defined, ``fuzzy'' categories,
multiple applications of learned knowledge, and the context in
which learning takes place. This paper highlights these issues,
describes the PROTOS approach to learning heuristic
classification, and surveys related research.
-----------------------------------------------------------------
AI86-34 An Execution Model for Exploiting AND-Parallelism in
Logic Programs, Yow-Jian Lin and Vipin Kumar, September
1986.
This paper presents a parallel execution model for exploiting
AND-parallelism in pure Horn Clause logic programs. The model is
based upon the execution model of Conery and Kibler, and compares
favorably with various related execution schemes (including Conery
and Kibler's). We also present two implementation schemes to
realize our model: one which has a coordinator to control the
activities of processes solving different literals; and the other
one which achieves synchronization by letting processes pass
messages to each other in a distributed fashion. Trade-offs
between these two schemes are then discussed.
-----------------------------------------------------------------
AI86-33 New Algorithms for Dependency-Directed Backtracking,
Charles J. Petrie, September, 1986. (Master's thesis)
The problem of providing dependency-directed backtracking in a
Doyle-style Truth Maintenance System (TMS) is solved with three
algorithms. The first modifies Doyle's basic algorithm by
eliminating the necessity for conditional proof justifications.
Unlike other attempts to do so, this method maintains the
capability of the TMS to provide complete explanations from
justifications. The second algorithm, which eliminates the
generation of a maximal assumption set, provides a novel
description of backtracking as a simple search. Finally, an
extension of this algorithm allows contradiction resolution to be
extended as an inference technique for expert systems. The
dependency-directed backtracking algorithms presented are also the
first to correctly ensure that an assertion is not justified
unnecessarily and that unsatisfiable circularities are not
created.
-----------------------------------------------------------------
AI86-32 Talus: Automatic Program Debugging for Intelligent
Tutoring Systems, William R. Murray, August 1986.
This report summarizes the author's dissertation (AI86-27).
-----------------------------------------------------------------
AI86-31 A Rule Language for the GLISP Programming System,
Christopher A. Rath, August 1986. (Master's thesis)
GLISP is a programming language written by Dr. Gordon Novak at the
University of Texas at Austin. It is an object-oriented language
based on LISP, a list processing language used in artificial
intelligence programming. GLISP is often used in the university
environment for the construction of expert systems programs, yet
it has no formal rule language.
This thesis presents the design and implementation of a general
rule language incorporated into GLISP. The project includes an
investigation into currently implemented rule languages, an
implementation of the rule language compiler in InterLisp, and
sample applications. The resulting system is expected to be
efficient, general purpose, and easy to learn for those
programmers already familiar with GLISP.
-----------------------------------------------------------------
AI86-30 Metaphorical Shift and The Induction of Similarities,
Phillipe M. Alcouffe, July 1986. (Master's thesis)
This thesis presents a view of metaphor as the syntactic
realization of an underlying cognitive process: analogical
reasoning. The semantic changes resulting from the 'anomalous'
juxtaposition of the metaphor's referents can be represented in a
conceptual semantic space by rules of construal. These rules
describe the semantic changes induced by the comparison component
of analogical reasoning. The meaning of metaphor is then a
set of valid transferred similarities. Some similarities are
pre-encoded as part of our general knowledge and are `discovered'
through the transfer process, whereas other similarities are
`created'. A taxonomy of conventional metaphorical themes is
presented that supports the creativity of metaphor by the means of
similarities induced by metaphorical shifts and inherited through
this taxonomy.
-----------------------------------------------------------------
AI86-29 A Theory of Argument Coherence, Wing-Kwong C. Wong, July
1986.
Previous research has suggested different approaches to analyzing
the coherence of discourses, but these approaches are not suitable
for representing the structure of logical arguments. We
develop a notion of reasoning relations and argue that it is more
appropriate for characterizing argument coherence. Reasoning
relations are naturally classified into two types, deductive and
rhetorical, based on theories of classical logics and
argumentation rhetorics. A set of commonly used relations is
described. Ways of employing these relations to restructure and
evaluate arguments are discussed. Possible connections between
discourse coherence and Discourse Representation Theory are also
explored.
-----------------------------------------------------------------
AI86-28 The Role of Inversion, Clefting and PP-Fronting in
Relating Discourse Elements, Mark V. Lapolla, July 1986.
Languages use various types of syntactic constructions to convey
more information than what is expressed by the sum of words in a
sentence. This paper will explore and discuss the less obvious
ways syntactic structure is used to convey information and how
this information could be used by a natural language database
system to organize and search a discourse space.
-----------------------------------------------------------------
AI86-27 Automatic Program Debugging for Intelligent Tutoring
Systems, William R. Murray, June, 1986. (PhD
dissertation)
Program debugging is an important part of the domain expertise
required for intelligent tutoring systems that teach programming
languages. This dissertation explores the process by which
student programs can be automatically debugged in order to
increase the instructional capabilities of these systems. This
research presents a methodology and implementation for the
diagnosis and correction of nontrivial recursive programs. In
this approach, recursive programs are debugged by repairing
induction proofs in the Boyer-Moore Logic.
The potential of a program debugger to automatically debug widely
varying novice programs in a nontrivial domain is proportional to
its capabilities to reason about computational semantics. By
increasing these reasoning capabilities a more powerful and robust
system can result. This thesis supports these claims by examining
related work in automated program debugging and by discussing the
design, implementation, and evaluation of Talus, an automatic
debugger for LISP programs.
Talus relies on its abilities to reason about computational
semantics to perform algorithm recognition, infer code teleology
and to automatically detect and correct nonsyntactic errors in
student programs written in a restricted, but nontrivial, subset
of LISP. Solutions can vary significantly in algorithm,
functional decomposition, role of variables, data flow, control
flow, values returned by functions, LISP primitives used, and
identifiers used. Solutions can consist of multiple functions,
each containing multiple bugs. Empirical evaluation demonstrates
that Talus achieves high performance in debugging widely varying
student solutions to challenging tasks.
-----------------------------------------------------------------
AI86-26 Symmetric Rules for Translation of English and Chinese,
Wanying Jin and Robert F. Simmons, May 1986.
A system of grammars using symmetric phrase structure and
translation rules in a Lisp version of Prolog is shown to provide
symmetric bidirectional translation between English and Chinese
for a fragment of the two languages. It is argued that symmetric
grammars and translation rules significantly reduce the total
grammar writing requirement for translation systems, and that
research on symmetric translation systems deserves further study.
-----------------------------------------------------------------
AI86-25 Fault Diagnosis Using Qualitative Simulation, Ray
Bareiss and Adam Farquhar, April 1986.
This paper presents a new approach to using qualitative simulation
to diagnose faults in mechanisms. A mechanism is described by
three models. A component model associates a set of measurement
parameters with aspects of its physical structure. A constraint
model defines relationships which must hold between parameters
during the mechanism's normal behavior. A fault model embodies
relevant aspects of a domain theory for a class of mechanisms.
The Diagnose algorithm evaluates an observed state of a mechanism
in terms of its constraint model. If violated constraints exist,
Findfaults uses the component model to localize the problem and
then uses the fault model to interpret it. These algorithms have
been implemented in Prolog and have successfully identified faults
in a variety of simple mechanisms.
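The division of labor between the two algorithms can be pictured
with a small sketch. The representations below (constraints as
named predicates, a table from constraints to components, a table
from components to fault interpretations) are assumptions made for
illustration; only the names Diagnose and Findfaults come from the
abstract, and the Prolog implementation reported is of course
structured differently.

    # Constraint-based diagnosis with three models (illustrative sketch).

    def diagnose(observed_state, constraint_model):
        """Return the names of constraints violated by the observed state."""
        return [name for name, holds in constraint_model if not holds(observed_state)]

    def findfaults(violations, component_model, fault_model):
        """Localize violations to components, then interpret them as faults."""
        suspects = {c for v in violations for c in component_model.get(v, [])}
        return {c: fault_model.get(c, "unknown fault") for c in suspects}

    # Example: a tank whose outflow should be positive whenever its level is.
    constraints = [("outflow-tracks-level",
                    lambda s: (s["outflow"] > 0) == (s["level"] > 0))]
    components  = {"outflow-tracks-level": ["drain-valve"]}
    faults      = {"drain-valve": "valve blocked or stuck closed"}

    state = {"level": 5.0, "outflow": 0.0}
    print(findfaults(diagnose(state, constraints), components, faults))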
-----------------------------------------------------------------
AI86-24 Qualitative Simulation as Causal Explanation, Benjamin
J. Kuipers, April 1986.
"To give a causal explanation of an event means to deduce a
statement which describes it, using as premises of the deduction
one or more universal laws, together with certain singular
statements, the initial conditions." (Karl Popper) Traditional
diagnostic systems do not explain their findings in this sense.
Using the QSIM qualitative simulation representation and
algorithm, we demonstrate that we can construct models of
physiological mechanisms, capable of explaining the observed
behavior of the healthy mechanism, of the disease state, and of
the response to therapy. We also present a new type of
abstraction relation for linking simple equilibrium mechanisms
into a hierarchical description of a complex mechanism such as
human physiology.
-----------------------------------------------------------------
AI86-23 A Parallel Execution Scheme for Exploiting AND-
parallelism of Logic Programs, Yow-Jian Lin and Vipin
Kumar, March 1986.
In this paper we present a parallel execution scheme for
exploiting AND-parallelism of logic programs. This scheme follows
the generator-consumer approach of the AND/OR process model.
Using problem-specific knowledge, literals of a clause are
linearly ordered at compile time. This ordering and run-time
binding conditions are then used to dynamically select generators
and consumers for different variables at run time. The scheme can
exploit all the AND-parallelism available within the framework of
the generator-consumer approach. Compared with other execution
schemes based on the same approach, our scheme is simpler and
potentially more efficient.
-----------------------------------------------------------------
AI86-21 GT: A Conjecture Generator for Graph Theory, Wing-Kwong
Wong, January 1986. (Master's thesis)
The process of knowledge acquisition can be automated with
programs that learn from observation and discovery. A better
understanding of this learning strategy would facilitate the
construction of expert systems and the exploration of scientific
domains. GT, a frame-based program written for this thesis,
learns relationships among classes of graphs by observing the
examples and non-examples of these classes. The relationships
considered are subclass, superclass, exclusion, and complement
exclusion. GT's knowledge is partly represented as frames and
partly as formulas of predicate calculus. The learned
relationships are stored hierarchically. GT has found
non-trivial, well-known theorems of graph theory.
-----------------------------------------------------------------
AI86-22 An Intelligent Backtracking Algorithm for Parallel
Execution of Logic Programs, Yow-Jian Lin, Vipin Kumar
and Clement Leung, March 1986.
In this paper we present a simple but efficient backtracking
scheme which works when AND-parallelism is exploited in a logic
program. The scheme is well suited for implementation on parallel
hardware. We show that the backtracking scheme presented by Conery
and Kibler in the context of the AND/OR process model is
incorrect, i.e., in some cases it may miss solutions while
performing backtracking. Even if no AND-parallelism is exploited
(i.e., all literals are solved sequentially), our scheme is more
efficient than the "naive" depth-first backtracking strategy used
by Prolog because our scheme makes use of the dependencies between
literals in a clause. Chang and Despain have recently presented a
backtracking scheme which also makes use of the dependencies
between literals. We show that our scheme is more efficient than
their scheme in the sense that our scheme does less backtracking.
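The difference from naive depth-first backtracking can be seen in a
small sketch of backtrack-point selection. The dependency
bookkeeping below is a deliberate simplification assumed for
illustration; the scheme in the report records and uses
dependencies in a more refined way.

    # Choosing a backtrack point from variable dependencies (illustrative).

    def backtrack_point(failed_index, literal_vars, binder_of):
        """literal_vars[i]: variables of literal i; binder_of[v]: index of the
        literal whose solution bound v (absent if unbound).  Return the most
        recent earlier literal the failure can depend on, or None."""
        deps = {binder_of[v] for v in literal_vars[failed_index]
                if v in binder_of and binder_of[v] < failed_index}
        return max(deps) if deps else None

    # Example clause body: p(X), q(Y), r(X).  If r(X) fails, retry p (which
    # bound X) and skip q, whose alternatives cannot help; naive depth-first
    # backtracking would retry q first.
    literal_vars = [{"X"}, {"Y"}, {"X"}]
    binder_of = {"X": 0, "Y": 1}
    print(backtrack_point(2, literal_vars, binder_of))   # -> 0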
-----------------------------------------------------------------
AI86-20 Experimental Goal Regression: A Method for Learning
Problem Solving Heuristics, Bruce W. Porter and Dennis
Kibler, January 1986.
This research examines the process of learning problem solving
with minimal requirements for a priori knowledge and teacher
involvement. Experience indicates that knowledge about the
problem solving task can be used to improve problem solving
performance. This research addresses the issue of what knowledge
is useful, how it is applied during problem solving, and how it
can be acquired. For each operator used in the problem solving
domain, knowledge is incrementally learned concerning why it is
useful, when it is applicable, and what transformation it
performs. The method of experimental goal regression is
introduced for improving the learning rate by approximating the
results of analytic learning. The ideas are formalized in an
algorithm for learning and problem solving and demonstrated with
examples from the domains of simultaneous linear equations and
symbolic integration.
-----------------------------------------------------------------
AI85-19 Explanation of Mechanical Systems Through Qualitative
Simulation, Stuart Laughton, December 1985. (Master's
thesis)
This thesis documents an investigation into the feasibility of
Qualitative Simulation as the basis of a computer program that can
represent and explain the function and structure of mechanical
systems. First, the general principles of qualitative reasoning
and the theories of several researchers in this field are
reviewed. Next, the process of modelling a specific simple
mechanical device, a piston-driven pump, and the resulting
model's simulation output are discussed in detail. Finally, some
techniques are presented for generating explanatory animation and
natural language from a qualitative simulation model and its
output.
-----------------------------------------------------------------
AI85-18 Menu-Based Creation of Procedures for Display of Data,
Man-Lee Wan, December 1985. (Master's thesis)
We have investigated methods of creating interface procedures
that extract the data needed by a library display procedure from
user data of arbitrary structure and present those data to the
display procedure, which generates a display for presentation to
the user. A procedure is created by the GLISP compiler from the
following pieces of information:
1. the knowledge about how to produce the display, which
is stored as a library procedure.
2. description of the user's data in the GLISP structure
description language.
3. selection of the data from the user object that are to
be displayed.
4. specification of the data needed by the display
procedure.
A system called LINK has been written that creates and runs
interface procedures automatically using the information outlined
above. The LINK system has been integrated with the GEV data
inspector of the GLISP system.
-----------------------------------------------------------------
AI85-17 The Map-Learning Critter, Benjamin J. Kuipers, December
1985.
The Critter is an artificial creature which learns, not only the
structure of its (simulated) environment, but also the
interpretation of the actions and senses that give it access to
that environment. The Map-Learning Critter embodies a strong a
priori hypothesis: that its environment is, at least in part,
structured as a large-scale space consisting of places connected
by paths. The Critter's learning strategy begins by diagnosing
its actions and senses. By performing experiments and examining
the periodicity of its sense-impressions, it classifies actions as
``turn-like,'' ``travel-like,'' and ``others.'' After the actions
are classified, it becomes possible to aggregate sets of
sense-impressions to define places; then places are linked into
paths. An exploration strategy assures that the entire
environment will be explored and assimilated into this model. The
Map-Learning Critter has been implemented and has experienced a
variety of reasonable and unreasonable environments. Some
implications of the results are discussed, and directions for
future research are outlined.
-----------------------------------------------------------------
AI85-16 Negotiated Interfaces for Software Reusability, Rick
Hill, December 1985. (Master's thesis)
This thesis presents an algorithm which constructs an interface
between a user's structure and a generic program, allowing the
GLISP compiler to specialize the generic program to work with the
user's data structure. The interface between the user's data and
the data structure expected by the generic program is "negotiated"
by the user through menu selection.
-----------------------------------------------------------------
AI85-15 The Knower's Paradox and the Logics of Attitudes,
Nicholas Asher and Hans Kamp, August 1985.
As Montague and Kaplan (Kaplan and Montague, 1960; Montague, 1963)
pointed out long ago, a syntactic treatment of propositional
attitudes has a fundamental weakness; virtually all of epistemic
logic must be sacrificed, if it, like ordinary logic, is to be
applicable to arbitrary subject matter. Thomason (Thomason, 1980)
extended these results to include a syntactic treatment of the
logic of belief, and showed how they also apply to analyses of the
attitudes that, while not strictly "syntactic", use
representations or structured propositions as the objects of the
attitudes. In what follows, we shall call all such treatments
"representational". The Hangman's Paradox as represented by
Montague and Kaplan is another example of how even a minimal
amount of epistemic logic in conjunction with a self-referential
attitude can lead to disaster. A more abstract formulation of the
same self-referential attitude is exemplified in the "Knower's
Paradox" and is perhaps best expressed in colloquial English by
'my negation is known' (we shall call this sentence the "Knower
Sentence").
The usual moral drawn from these results is that a formally
serious analysis of the attitudes should treat expressions of
propositional attitudes, like modality, as predicates of
propositions qua intensions, not as predicates of sentences or
sentence-like entities. It is well-known how to provide a
coherent logic of propositional attitudes with this approach. One
of our main points, however, is that this "solution" is bought at
far too dear a price: it leads neither to a satisfactory analysis
of attitude reports nor to a general theory of attitudes. Since
we think that these are desiderata of any theory of attitudes, we
think that the standard "solution" is no solution at all, but
rather a refusal to take a theory of the attitudes seriously. A
real solution to the Knower Paradox, one developed within the
framework that leads to the desiderata, demands a thorough
reanalysis of the logic of attitudes. We have both been
developing such frameworks in other papers using discourse
representation theory (Asher, 1984a, Asher, 1984b, Kamp, 1985a,
Kamp 1985b). In this paper, however, we shall only make use of
very general features of those frameworks that are relevant to the
discussion of the Knower Paradox.
-----------------------------------------------------------------
AI85-14 Technologies for Machine Translation, Robert F. Simmons,
August 1985.
Advances in hardware have made available micro-coded LISP and
PROLOG workstations, supported by text editing and formatting
software. Some of these have been augmented with linguistic
technology including large bilingual dictionaries, parsers,
generators, and translators to make them powerful tools for
research and development of automated translation. Some
techniques of linguistic engineering for accomplishing translation
are described, and it is suggested that the present barely
satisfactory approach involving sentence-by-sentence translation
will eventually be improved by incorporating the results of
research on analyzing discourse.
-----------------------------------------------------------------
AI85-13 Computer Science and Medical Information Retrieval,
Robert F. Simmons, August 1985.
Presented at the Conference on the Medical Information Sciences,
University of Health Science Center, San Antonio, Texas, July 1,
1985.
-----------------------------------------------------------------
AI85-12 Computational Treatment of Metaphor in Text
Understanding: A First Approach, Olivier Winghart,
August 1985. (Master's thesis)
This thesis starts with a general presentation of the phenomenon
of metaphor, necessary to define our object of study. It also
contains an ordered presentation of the current research
approaches to deal computationally with metaphors in text
analysis. As an illustration, a case study system that analyses a
mundane text (containing metaphors) is included, with a
description of its functioning. Finally, more general issues are
raised that require elucidation to enlarge the field of
application of such systems.
-----------------------------------------------------------------
AI85-11 Branch-And-Bound Search, Vipin Kumar, August 1985.
-----------------------------------------------------------------
AI85-10 Parallel Processing for Artificial Intelligence, Vipin
Kumar, August 1985.
-----------------------------------------------------------------
AI85-09 A General Paradigm for AND/OR Graph and Game Tree
Search, Vipin Kumar, Dana S. Nau and Laveen N. Kanal,
August, 1985.
This paper presents a general procedure for finding an optimal
solution tree of an acyclic AND/OR graph with monotone cost
functions. Due to the relationship between AND/OR graphs and game
trees, it can also be used as a game tree search procedure.
Seemingly disparate procedures like AO*, SSS*, alpha-beta, and B* are
instantiations of this general procedure. This sheds new light on
their interrelationships and nature, and simplifies their
correctness proofs. Furthermore, the procedure is applicable to a
very large class of problems, and thus provides a way of
synthesizing algorithms for new applications. The procedure
searches an AND/OR graph in a top-down manner (by selectively
developing various potential solutions) and can be viewed as a
general branch-and-bound (B&B) procedure.
-----------------------------------------------------------------
AI85-08 A General Heuristic Bottom-up Procedure for Searching
AND/OR Graphs, Vipin Kumar, August 1985.
This paper presents a general heuristic bottom-up procedure for
finding a least-cost solution tree of an AND/OR graph when the
cost functions associated with the arcs are monotone. Since
monotone cost functions are very general, the procedure is
applicable to a very large number of problems. The procedure
works for both cyclic and acyclic AND/OR graphs, and subsumes most
of the known bottom-up procedures for searching AND/OR graphs.
Many state-space search procedures and dynamic programming procedures are also
special cases of our procedure.
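To fix the idea of a bottom-up computation over an AND/OR graph,
the fragment below computes a least solution cost using additive
arc costs as a concrete stand-in for the general monotone cost
functions treated in the report, and it handles only acyclic
graphs, unlike the procedure described above.

    # Bottom-up least-cost computation on an acyclic AND/OR graph
    # (additive costs only; an illustrative special case).
    import functools

    def least_cost(root, graph, leaf_cost):
        """graph[n] = ("AND" | "OR", [(child, arc_cost), ...]);
        nodes absent from graph are leaves with cost leaf_cost[n]."""
        @functools.lru_cache(maxsize=None)
        def cost(n):
            if n not in graph:
                return leaf_cost[n]
            kind, arcs = graph[n]
            child_costs = [arc + cost(c) for c, arc in arcs]
            return sum(child_costs) if kind == "AND" else min(child_costs)
        return cost(root)

    # Example: G is an AND of A and B; A can be solved in two ways.
    graph = {"G": ("AND", [("A", 1), ("B", 1)]),
             "A": ("OR",  [("a1", 0), ("a2", 0)])}
    print(least_cost("G", graph, {"a1": 4, "a2": 2, "B": 3}))   # (1+2) + (1+3) = 7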
-----------------------------------------------------------------
AI85-07 Heuristic and Formal Methods in Automatic Program
Debugging, William R. Murray, June 1985. (Appears in
IJCAI-85 proceedings.)
TALUS is an automatic program debugging system that both detects
and corrects nonsyntactic bugs in student programs written to
solve small but nontrivial tasks in pure Lisp. TALUS permits
significant variability in student solutions by using heuristic
methods to recognize different algorithms and formal methods to
reason about computational equivalence of program fragments. A
theorem prover intensionally represents an infinite database of
rewrite rules, thus allowing for unanticipated implementations.
TALUS detects bugs using formal methods in both symbolic
evaluation and case analysis. Heuristic methods conjecture the
exact location of bugs and alterations necessary to correct the
bugs. Finally, formal methods establish or disprove these
heuristic conjectures, reflecting a generate and test methodology.
-----------------------------------------------------------------
AI85-06 Lisp Programming Lecture Notes, Gordon S. Novak, July
1985.
This tutorial is intended as an introduction to Lisp programming
for persons who already have experience programming in some
language, e.g. FORTRAN. This course presents a set of basic
system functions that are frequently used and are present in
virtually every Lisp implementation. The material follows the
conventions of Common Lisp.
Five programming assignments are included.
-----------------------------------------------------------------
AI85-05 A Self-Organizing Retrieval System for Graphs, Robert
A. Levinson, May 1985. (PhD dissertation)
This report describes the theory, design and implementation of a
graph-based, self-organizing database retrieval system. The
system is designed to support the expert problem solving tasks of
recall, design and recovery. The fundamental design principle is
the production of a partial ordering by the relation subgraph-of.
This relation is considered to be equivalent to more-general-than.
This document discusses this design from three different levels:
an abstract level in which the nodes in the partial ordering are
concepts, the implementation level described above (the nodes are
graphs), and an application level in which the nodes are domain
specific objects such as molecules or reactions.
The primary problem domain explored is organic chemistry. A large
database of organic reactions and starting materials can be
queried to extract reactions or molecules that match, either
exactly or approximately, desired structures. The system may also
suggest precursors to a desired target molecule. The queries are
answered by exploiting a set of concepts that are commonly
subgraphs of molecule or reaction graphs. Concepts serve multiple
purposes: They constrain the search involved in the matching
process so that the time required to answer a query grows
sub-linearly in the size of the database. Concepts define the
notion of "similarity" that is crucial if approximate match is
desired. They also may be useful generalizations of reactions or
molecular structures. The concepts can be "discovered" (i.e.,
constructed) by the system itself using largely syntactic criteria
based on the topology of the database. A variety of performance
tests are performed, including a comparison of the system's
precursor recommendation capability with that of graduate students
in organic chemistry.
The system is also applied to the retrieval and generalization of
chess positions.
-----------------------------------------------------------------
AI85-04 Using and Revising Learned Concept Models: A Research
Proposal, Bruce W. Porter, May 1985.
People improve their learning performance with experience, yet most
machine learning systems do not. The premise of this research is
that a learning system can use learned knowledge to guide the
acquisition of further knowledge. Furthermore, learned knowledge
can guide the interpretation of training examples that are not
strictly prototypical of the (potentially fuzzy) concept being
learned. The learning system can revise this learned bias when
new training violates expectations. This research combines
results from learning from examples, learning with background
knowledge, and learning with a domain model to study the
progression from knowledge-poor to knowledge-rich learning. This
research is directed toward development of a computational model
of concept acquisition which learns, uses, and revises domain
knowledge.
-----------------------------------------------------------------
AI85-03 Learning Problem Solving: A Proposal for Continued
Research, Bruce W. Porter, March 1985.
Many of the tasks which people perform involve problem solving:
applying a sequence of operators to solve a problem. This
research explores how efficient problem solutions are discovered
from the myriad of less efficient alternatives. Results in
machine learning are applied both to explain findings from
psychological experimentation and to expand the utility of
computers.
Learning to problem solve requires acquiring multiple forms of
knowledge. Problem solving is viewed as a search of a state-space
formulation of a problem. With this formalism, operators are
applied to states to transit from the initial state to the goal
state. The learning task is to acquire knowledge of the
state-space to guide search. In particular, three forms of
knowledge are required: why each operator is useful, when to
apply each operator, and what each operator does. This research
builds on an existing PROLOG system that learns problem solving in
the domains of simultaneous linear equations and symbolic
integration. First the current learning system is described.
Then new research directions are proposed. These include:
- critically comparing machine learning techniques
demonstrated in a variety of problem solving domains.
- using learned knowledge to guide the acquisition of
further knowledge.
- dynamically re-defining the concept description language
by discovering useful descriptors from the training.
-----------------------------------------------------------------
AI85-02 Knowledge Based Contextual Reference Resolution for Text
Understanding, Michael Kavanaugh Smith, January 1985.
(PhD dissertation)
This report extends the concept of reference resolution in a
discourse context to cover a broad range of connective inference
required for text understanding. Access to all conceptual
relations is restricted or facilitated by the context established
by preceding text. This contextual filter greatly simplifies the
establishment of connections between the surface text and
previously instantiated discourse representation.
The reference procedure requires a taxonomically organized
knowledge base of structured concepts, in the sense of frames and
scripts. The procedure selects lexical senses and generates
reference candidates, which may be either explicit or implicit in
the discourse context. These are matched against constraints
imposed by the surface text and a conceptual representation is
constructed and integrated with the accumulated discourse
structure.
-----------------------------------------------------------------
AI84-05 A Text Knowledge Base for the AI Handbook, Robert
F. Simmons, December 1983.
This research aims at defining a consistent set of text
representation conventions for organizing fifty pages of the AI
handbook as an inferential knowledge base founded on a procedural
logic system of general inference schemas for answering questions
from it. After a year of research on the AI handbook project, we
have developed a prototype natural-language text knowledge system
that includes a data base manager to compile the text
knowledge and to make it available to navigational commands. The
text is represented as logical propositions which form a set of
text axioms to model its content. English questions and commands
are translated to corresponding logical formulae and treated as
theorems to be proved with respect to the text model. The logical
form is that of semantic relations (SR) -- logical predicates with
varying numbers and orders of arguments. To compute effectively
with such a free form, a relaxed unification procedure was defined
as the basis of the SR theorem prover. The use of procedural
logic augmented with fast, compiled Lisp functions has shown that
questions can be answered in times ranging from a few tenths of a
second to minutes of CPU time on a DEC2060 system. The
navigational capabilities of the data base manager make available
larger contexts surrounding the text and offer the user complete
freedom to explore the text and to extract any desired information
from it.
-----------------------------------------------------------------
AI84-04 From Menus to Intentions in Man-Machine Dialogue, Robert
F. Simmons, November 1984.
Operating systems are designed to achieve goals, not to recognize
user intentions. But the use of Help systems and Menu-selection
makes them more useful and friendly. Natural Language interfaces
must go farther by guessing what the user wants and what is meant
but not specified. Natural language programming systems -- still
in their infancy -- promise the capability for a user to define
his/her intentions explicitly. Text Knowledge systems -- at the
research frontier -- bewilder us with the complexity of intentions
expressed and implied. Current techniques for recognizing
intentions and computing appropriate representations and responses
are discussed in this paper.
-----------------------------------------------------------------
AI84-03 Translating Horn Clauses From English, Yeong-Ho Yu,
August 1984. (Master's thesis)
This work introduces a small grammar which translates a subset of
English into Horn Clauses. The parallel between the syntactic
structures of sentences and corresponding literals of their
Procedural Logic representation facilitates the translation. An
interpreter is also described which accepts English descriptions
of problem solving methods and facts about an example domain, the
Blocks World. The interpreter translates them into Horn Clauses
by using the grammar, and interprets them as programs and data.
It also carries out commands on data by using the programs, and
answers queries about data. In addition, it provides a mechanism
called "Consistency Rules" which maintains integrity of the
database. This experiment shows that Procedural Logic provides a
unified system to accomplish a wide variety of computations, and
that a small grammar without any semantic or domain-specific
knowledge can translate a small subset of English sentences into
Horn Clauses. Further research on several other domains is
suggested to evaluate the usefulness of this translation.
-----------------------------------------------------------------
AI84-02 Computing Discourse Conceptual Coherence: A Means to
Contextual Reference Resolution, Ezat Karimi, August
1984. (Master's thesis)
This thesis discusses the problem central to the interpretation of
the discourse of a text: contextual reference resolution. The
problem itself devolves to problems about the nature of the
inferencing mechanism, knowledge base organization and discourse
representation. A framework for an inferencing mechanism based on
a theory of discourse coherence and focusing is discussed. A
framework is described for a knowledge base which is composed of
the knowledge about entities (through frame structures) and the
knowledge about the coherence relations between different
event/states. A model for discourse representation based on a
small set of intersentential (coherence) relations and the
relations between the conceptual structures for the entities
discussed in the text is also presented.
-----------------------------------------------------------------
AI84-01 Artificial Intelligence Project at The University of
Texas at Austin, Gordon S. Novak and Robert L. Causey,
et al., 1984.
This report is the technical part of a proposal to the U.S. Army
Research Office (Electronics Division, Dr. Jimmie R. Suttle,
Director) in late 1983 in response to their call for proposals on
"Artificial Intelligence Research and Education." The University
of Texas at Austin (along with The University of Pennsylvania) was
selected for substantial funding over a five-year period out of a
total of thirty-five proposals submitted. This report also
contains a subsequent proposal by Dr. Bruce Porter.
The individual project proposals in this document illustrate the
breadth of research in Artificial Intelligence at the University
of Texas, though not all of the proposed projects were selected
for funding. The research reports that are currently supported by
the Army grant are those of Novak, Simmons, Kumar, and Porter.
-----------------------------------------------------------------