[mod.techreports] wisconsin2 tech reports

E1AR0002@SMUVM1.BITNET.UUCP (07/18/86)

Bibliography of Technical Reports
Computer Science Department
University of Wisconsin
January 1986 - June 1986

For copies, send requests to:

Technical Reports Librarian
Computer Science Department
University of Wisconsin
1210 W. Dayton St.
Madison, WI  53706

Note that many reports are not free of charge.  Make checks payable to
the University of Wisconsin.  We cannot accept purchase orders.


%A Jiawei Han
%T Pattern-Based and Knowledge-Directed Query Compilation for
Recursive Data Bases
%D January 1986
%R TR 629
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $5.70
%X Abstract:  Expert database systems (EDS's) comprise an interesting class of
computer systems which represent a confluence of research in artificial
intelligence, logic, and database management systems.  They involve
knowledge-directed processing of large volumes of shared information and
constitute a new generation of knowledge management systems.
Our research is on the deductive augmentation of relational database
systems, especially on the efficient realization of recursion.  We study
the compilation and processing of recursive rules in relational database
systems, investigating two related approaches:  pattern-based recursive rule
compilation and knowledge-directed recursive rule compilation and planning.
Pattern-based recursive rule compilation is a method of compiling and processing
recursive rules based on their recursion patterns.  We classify recursive rules
according to their processing complexity and develop three kinds of algorithms
for compiling and processing different classes of recursive rules: transitive
closure algorithms, SLSR wavefront algorithms, and stack-directed compilation
algorithms.  These algorithms, though distinct, are closely related.  The more
complex algorithms are generalizations of the simpler ones, and all apply the
heuristics of performing selection first and utilizing previous processing
results (wavefronts) to reduce query processing costs.  The algorithms are
formally described and verified, and important aspects of their behavior are
analyzed and experimentally tested.
To further improve search efficiency, a knowledge-directed recursive rule
compilation and planning technique is introduced.  We analyze the issues raised
in the compilation of recursive rules and propose to deal with them by
incorporating functional definitions, domain-specific knowledge, query
constants, and a planning technique.  A prototype knowledge-directed relational
planner, RELPLAN, which maintains a high-level user view and query interface,
has been designed and implemented, and experiments with the prototype are
reported and illustrated.
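
To make the wavefront idea concrete, here is a minimal sketch of
semi-naive transitive closure in Python (our illustration, not the
report's algorithms): each pass joins only the newly derived tuples
(the wavefront) against the base relation, so earlier results are
never reprocessed.

    def transitive_closure(edges):
        """edges: set of (x, y) pairs; returns their transitive closure."""
        closure = set(edges)
        wavefront = set(edges)
        while wavefront:
            new = {(x, z) for (x, y) in wavefront
                          for (y2, z) in edges if y == y2}
            wavefront = new - closure   # keep only genuinely new tuples
            closure |= wavefront
        return closure

    # three base edges close to six tuples
    assert len(transitive_closure({(1, 2), (2, 3), (3, 4)})) == 6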

%A Raphael Finkel
%A A. P. Anantharaman
%A Sandip Dasgupta
%A Tarak S. Goradia
%A Prasanna Kaikini
%A Chun-Pui Ng
%A Murali Subbarao
%A G. A. Venkatesh
%A Sudhanshu Verma
%A Kumar A. Vora
%T Experience With Crystal, Charlotte and Lynx
%D February 1986
%R TR 630
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $1.44
%X Abstract:  This paper describes the most recent implementations of
distributed algorithms at Wisconsin that use the Crystal multicomputer, the
Charlotte operating system, and the Lynx language.  This environment is an
experimental testbed for design of such algorithms.  Our report is meant to
show the range of applications that we have found reasonable in such an
environment and to give some of the flavor of the algorithms that have been
developed.  We do not claim that the algorithms are the best possible for
these problems, although they have been designed with some care.  In
several cases they are completely new or represent significant
modifications of existing algorithms.  We present distributed
implementations of B trees, systolic arrays, Prolog tree search, the
travelling salesman problem, incremental spanning trees, nearest-neighbor
search in k-d trees, and the Waltz constraint-propagation algorithm.  Our
conclusion is that the environment, although only recently available, is
already a valuable resource and will continue to grow in importance in
developing new algorithms.

%A O. L. Mangasarian
%A R. De Leone
%T Error Bounds for Strongly Convex Programs and (Super)Linearly
Convergent Iterative Schemes for the Least 2-Norm Solution of Linear
Programs
%D February 1986
%R TR 631
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  Given an arbitrary point (x,u) in &R sup n times R sub + sup m&, we
give bounds on the Euclidean distance between x and the unique solution of
a strongly convex program in terms of the violations of the Karush-Kuhn-
Tucker conditions by the arbitrary point (x,u).  These bounds are then used to
derive linearly and superlinearly convergent iterative schemes for obtaining
the unique least 2-norm solution of a linear program.  These schemes can be
used effectively in conjunction with the successive overrelaxation (SOR)
methods for solving very large sparse linear programs.
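
The least 2-norm machinery rests on a standard perturbation device:
adding a small quadratic term makes the program strongly convex, and
for every sufficiently small epsilon > 0 the perturbed solution is the
least 2-norm solution of the original linear program.  In LaTeX
notation (our paraphrase of the standard result, not the report's
statement):

    % Regularize the LP  min c^T x  s.t.  Ax >= b  into the strongly
    % convex program below; for all sufficiently small \epsilon > 0 its
    % unique solution is the least 2-norm solution of the LP.
    \min_{x}\; \epsilon\, c^{T} x + \tfrac{1}{2}\,\|x\|_{2}^{2}
    \quad \text{subject to} \quad A x \ge b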

%A Yeshayahu Artsy
%A Hung-Yang Chang
%A Raphael Finkel
%T Interprocess Communication in Charlotte
%D February 1986
%R TR 632
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  Charlotte is a distributed operating system that provides a
powerful interprocess communication mechanism.  This mechanism differs from
most others in that it employs duplex links, does not buffer messages in
the kernel, and does not block on communication requests.  A link is a bound
communication channel between two processes upon which messages can be sent,
received, awaited, or canceled.  Processes may acquire new links, destroy their
end of the link, or enclose their end as part of an outgoing message.  A link
is thus a dynamic capability-like entity that permits only the holder access to
its communication resource.
We first introduce the Charlotte interprocess communication structure and
discuss our motivation for choosing the specific operations.  We then
describe distributed computing environments in which Charlotte is
applicable.  Next, we discuss how we implemented the
interprocess-communication facility.  We close with a comparison between
Charlotte and related research and justify our design and its
implementation cost.
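
The flavor of the link abstraction can be conveyed by a toy model
(ours, not Charlotte code): a link is a duplex channel with exactly two
ends, and an end is itself enclosable in a message, transferring the
capability to the receiver.  Unlike Charlotte, the toy buffers at the
receiving end, purely for brevity.

    from collections import deque

    class LinkEnd:
        """One end of a duplex link; holding it is the capability."""
        def __init__(self):
            self.peer = None      # the opposite end of the link
            self.inbox = deque()  # messages awaiting receipt here

        def send(self, body, enclosed=None):
            """Nonblocking send; may enclose one of our own link ends."""
            self.peer.inbox.append((body, enclosed))

        def receive(self):
            """Nonblocking receive: (body, enclosed end) or None."""
            return self.inbox.popleft() if self.inbox else None

    def make_link():
        a, b = LinkEnd(), LinkEnd()
        a.peer, b.peer = b, a
        return a, b

    a, b = make_link()   # link between processes P and Q
    c, d = make_link()   # second link, both ends held by P at first
    a.send("request", enclosed=d)     # P passes end d to Q ...
    body, d_now_at_q = b.receive()    # ... so c's traffic now reaches Q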

%A Hongjun Lu
%A Michael J. Carey
%T Load-Balanced Task Allocation in Locally Distributed Computer
Systems
%D February 1986
%R TR 633
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  A task force is a distributed program which consists of a set of
communicating tasks.  This paper presents an algorithm for allocating the
tasks in a task force to a set of execution sites in a locally distributed
computer system.  The main objective of this dynamic task allocation
algorithm is to balance the load of the system, and the algorithm takes the
current load status of the system into account when making its allocation
decisions.  In addition to balancing the system's load, the algorithm also
tries to minimize the communications costs between the tasks to the extent
possible within the constraints imposed by load balancing.  The paper
begins by quantitatively defining the load unbalance factor and then
states the problem of load-balanced task allocation.  A heuristic
algorithm for finding solutions to the problem is presented, and allocations
generated by the heuristic algorithm are compared to those obtained via
exhaustive search.  The results of this comparison indicate that the heuristic
algorithm finds the optimal allocation in most cases.  The execution time and
the complexity of the heuristic task allocation algorithm are also addressed.
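
A greedy rendering of the idea, as a hedged sketch (our construction;
the report defines the load unbalance factor and its own algorithm):
place tasks one at a time on the site that stays least loaded, and use
inter-task traffic only as a secondary criterion.

    def allocate(tasks, sites, load, comm):
        """tasks: {task: demand}; sites: list of site ids;
        load: {site: current load}; comm: {(t1, t2): traffic}."""
        placement = {}
        for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
            def cost(site):
                # remote traffic this choice would create so far
                remote = sum(w for (a, b), w in comm.items()
                             if (a == task and placement.get(b, site) != site)
                             or (b == task and placement.get(a, site) != site))
                return (load[site] + demand, remote)  # load first
            best = min(sites, key=cost)
            placement[task] = best
            load[best] += demand
        return placement

    plan = allocate({'a': 3, 'b': 2, 'c': 2}, [0, 1],
                    {0: 0.0, 1: 0.0}, {('b', 'c'): 5})
    # 'a' runs alone; 'b' and 'c' share the other site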

%A David J. DeWitt
%A Robert H. Gerber
%A Goetz Graefe
%A Michael L. Heytens
%A Krishna B. Kumar
%A M. Muralikrishna
%T GAMMA: A High Performance Dataflow Database Machine
%D March 1986
%R TR 635
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $1.80
%X Abstract:  In this paper, we present the design, implementation techniques,
and initial performance evaluation of Gamma.  Gamma is a new relational
database machine that exploits dataflow query processing techniques.  Gamma
is a fully operational prototype consisting of 20 VAX 11/750 computers.
The design of Gamma is based on what we learned from building our earlier
multiprocessor database machine prototype (DIRECT) and several years of
subsequent research on the problems raised by the DIRECT prototype.
In addition to demonstrating that parallelism can really be made to work in a
database machine context, the Gamma prototype shows how parallelism can be
controlled with minimal control overhead through a combination of the use
of algorithms based on hashing and the pipelining of data between
processes.  Except for 2 messages to initiate each operator of a query tree
and 1 message when the operator terminates, the execution of a query is
entirely self-scheduling.
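
The control idea is concrete even in miniature: hash-partition both
relations on the join attribute so that each processor joins only its
own buckets, with no coordination beyond the startup and termination
messages.  A single-process Python sketch of that partitioning (ours,
not Gamma's code):

    def partition(relation, key, n):
        """Route each tuple to one of n 'processors' by hashing key."""
        parts = [[] for _ in range(n)]
        for tup in relation:
            parts[hash(tup[key]) % n].append(tup)
        return parts

    def local_join(r_part, s_part, rkey, skey):
        """Hash join within one partition, independent of the others."""
        table = {}
        for t in r_part:
            table.setdefault(t[rkey], []).append(t)
        return [(t, u) for u in s_part for t in table.get(u[skey], [])]

    R = [(1, 'a'), (2, 'b'), (3, 'c')]
    S = [(1, 'x'), (3, 'y')]
    pr, ps = partition(R, 0, 4), partition(S, 0, 4)
    result = [p for i in range(4) for p in local_join(pr[i], ps[i], 0, 0)]
    # joins on column 0: ((1,'a'),(1,'x')) and ((3,'c'),(3,'y'))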

%A Eric Norman
%T Tracking the Elusive Eureka
%D March 1986
%R TR 636
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  The algebraic approach to computation embodied in FP is used to transform
recursive definitions of the Fibonacci and factorial functions into
iterative code.  The approach is similar to the one used to solve many
differential equations and also similar to what many call "higher order" or
"semantic" unification.  An algorithmic procedure is not obtained; the
intent is to demonstrate the usefulness of the algebraic approach.
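
The target of such a transformation is familiar.  Rendered in Python
rather than FP (our illustration), the doubly recursive Fibonacci
definition and the iterative program one hopes to derive from it
algebraically are:

    def fib_rec(n):
        """The recursive definition: exponentially many calls."""
        return n if n < 2 else fib_rec(n - 1) + fib_rec(n - 2)

    def fib_iter(n):
        """The derived iterative form: one pass, two accumulators."""
        a, b = 0, 1                # invariant: a = fib(i), b = fib(i+1)
        for _ in range(n):
            a, b = b, a + b
        return a

    assert all(fib_rec(n) == fib_iter(n) for n in range(15))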

%A Tobin J. Lehman
%A Michael J. Carey
%T Query Processing in Main Memory Database Management Systems
%D March 1986
%R TR 637
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract: Most previous work in the area of main memory database systems
has focused on the problem of developing query processing techniques that
work well with a very large buffer pool.  In this paper, we address query
processing issues for memory resident relational databases, an environment
with a very different set of costs and priorities.  We present an architecture
for a main memory DBMS, discussing the ways in which a memory resident database
differs from a disk-based database.  We then address the problem of processing
relational queries in this architecture, considering alternative algorithms for
selection, projection, and join operations and studying their performance.  We
show that a new index structure, the T Tree, works well for selection and join
processing in memory resident databases.  We also show that hashing methods
work well for processing projections and joins, and that an old join method,
sort-merge, still has a place in main memory.
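
A rough sketch of why the T Tree suits memory-resident data (our
reconstruction from the abstract, not the report's code): it is an
AVL-style binary tree whose nodes each hold a sorted array, so a lookup
is a short pointer chase to the one bounding node followed by a binary
search within it.

    import bisect

    class TNode:
        def __init__(self, items, left=None, right=None):
            self.items = items           # sorted; min/max bound the node
            self.left, self.right = left, right

    def t_search(node, key):
        while node is not None:
            if key < node.items[0]:
                node = node.left         # key below this node's range
            elif key > node.items[-1]:
                node = node.right        # key above this node's range
            else:                        # bounding node: search inside
                i = bisect.bisect_left(node.items, key)
                return node.items[i] == key
        return False

    tree = TNode([40, 50, 60],
                 left=TNode([10, 20, 30]),
                 right=TNode([70, 80, 90]))
    assert t_search(tree, 20) and not t_search(tree, 35)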

%A Michael J. Carey
%A David J. DeWitt
%A Joel E. Richardson
%A Eugene J. Shekita
%T Object and File Management in the EXODUS Extensible Database
System
%D March 1986
%R TR 638
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  This paper describes the design of the object-oriented storage
component of EXODUS, an extensible database management system currently
under development at the University of Wisconsin.  The basic abstraction in
the EXODUS storage system is the storage object, an uninterpreted
variable-length record of arbitrary size; higher level abstractions such as
records and indices are supported via the storage object abstraction.  One
of the key design features described here is a scheme for managing large
dynamic objects, as storage objects can occupy many disk pages and can grow
or shrink at arbitrary points.  The data structure and algorithms used to
support such objects are described, and performance results from a
preliminary prototype of the EXODUS large-object management scheme are
presented.  A scheme for maintaining versions of large objects is also
described.  We then describe the file structure used in the EXODUS storage
system, which provides a mechanism for grouping and sequencing through a
set of related storage objects.  In addition to object and file management,
we discuss the EXODUS approach to buffer management, concurrency control,
and recovery, both for small and large objects.
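
One way to picture the large-object machinery (a schematic of ours, not
the EXODUS data structure itself): index an object's pages by byte
position, with each internal node recording the byte count beneath each
child, so that reading at an arbitrary offset is a short descent and
growth at an arbitrary point updates counts along a single path.

    class Leaf:
        """A 'disk page' of bytes."""
        def __init__(self, data):
            self.data = bytearray(data)
        def size(self):
            return len(self.data)
        def read(self, off, n):
            return bytes(self.data[off:off + n])

    class Node:
        """Internal node: per-child byte counts locate any offset."""
        def __init__(self, children):
            self.children = children
            self.counts = [c.size() for c in children]
        def size(self):
            return sum(self.counts)
        def read(self, off, n):
            out = b""
            for child, cnt in zip(self.children, self.counts):
                if off < cnt and n > len(out):
                    out += child.read(off, min(n - len(out), cnt - off))
                off = max(0, off - cnt)
            return out

    obj = Node([Leaf(b"hello "), Leaf(b"world")])
    assert obj.read(3, 5) == b"lo wo"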

%A Mark A. Holliday
%A Mary K. Vernon
%T The GTPN Analyzer:  Numerical Methods and User Interface
%D April 1986
%R TR 639
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $2.28
%X Abstract:  The GTPN (Generalized Timed Petri Net) is a performance model
based on Petri Nets.  The state space of a GTPN model of a system is
automatically constructed and analyzed using results from Markov chain
theory.  We address some of the key computational issues involved in the
Markov chain theory approach.  In particular, we discuss two types of
algorithms.  The first type computes the absorption probabilities and mean
time to absorption.  The second type computes the steady state probability
distribution for a single, possibly periodic, recurrent Markov chain class.
We also describe the GTPN's user interface for input of the model description,
execution of the tool, and the output of the performance results.
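
For the first type of algorithm, a standard route (our numerical
sketch, not necessarily the GTPN analyzer's method) writes the chain in
canonical form with transient states first; the fundamental matrix then
yields both quantities at once.

    import numpy as np

    # Canonical form: transient states (rows 0..1), then absorbing ones.
    P = np.array([[0.5, 0.2, 0.3, 0.0],
                  [0.1, 0.4, 0.0, 0.5],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    t = 2                                 # number of transient states
    Q, R = P[:t, :t], P[:t, t:]
    N = np.linalg.inv(np.eye(t) - Q)      # fundamental matrix
    mean_time = N @ np.ones(t)            # mean steps to absorption
    absorb_prob = N @ R                   # [i, j]: absorbed at state j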

%A Klaus Hollig
%T Box Splines
%D April 1986
%R TR 640
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  This report gives an introduction to multivariate cardinal spline
theory.  It is based on joint work with Carl de Boor and Sherman
Riemenschneider and the particular topics discussed are:  recurrence
relations for box splines, approximation order, interpolation, convergence
to functions of exponential type and subdivision algorithms.

%A Richard A. Brualdi
%A Rachel Manber
%T On Strong Digraphs With A Unique Minimally Strong Subdigraph
%D April 1986
%R TR 641
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  In this paper we determine the maximum number of edges that a
strong digraph can have if it has a unique minimally strong subdigraph.  We
show that this number equals n (n + 1)/2 + 1, a surprisingly large number.
Furthermore, we show that there is, up to isomorphism, a unique strong
digraph which attains this maximum.

%A Michael D. Chang
%A Michael Engquist
%A Raphael Finkel
%A Robert Meyer
%T A Parallel Algorithm for Generalized Networks
%D June 1986
%R TR 642
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  A parallel algorithm for the solution of linear generalized
network optimization problems is presented.  This method decomposes the
original network into initial collections of quasi-trees, which are
distributed among a set of processors; each processor optimizes its
corresponding "local" problem in parallel.  Periodically, the processors
exchange information on their local problems, and one or more quasi-trees
may be moved between processors, either when desirable links between
quasi-trees on different processors are identified or for load-balancing
purposes.  This
procedure has been implemented on the CRYSTAL multicomputer, and
computational results have been obtained on problems with up to 28,000
variables.  For problems whose optimal solutions contain numerous
quasi-trees, the efficiency of this parallel implementation is about 50%.

%A Charles V. Stewart
%A Charles R. Dyer
%T Convolution Algorithms on the Pipelined Image-Processing Engine
%D May 1986
%R TR 643
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  In this paper we present two algorithms for computing an N by N
convolution on the Pipelined Image-Processing Engine (PIPE).  PIPE is a
special-purpose machine for low-level vision consisting of eight processing
stages connected in a pipelined fashion.  Each stage can compute a number
of different basic image functions.  Each of the algorithms decomposes the
solution into a sequence of 3 by 3 neighborhood operations, shifts and
additions.  The first algorithm divides the results of the 3 by 3 partial
convolutions into groups of concentric rings of images and successively
collapses the images on the outer ring onto the next ring, merging the
results until a single result image is computed.  The second algorithm
divides the partial convolution images into eight sectors and computes each
sector independently.  For a 27 by 27 convolution of a 256 by 256 image, the
first algorithm requires 0.633 seconds, while the second requires 0.683
seconds.  These algorithms for PIPE are also shown to compare favorably
with algorithms for arbitrary sized kernels on fast general purpose
hardware, the MPP and Warp.
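
Both algorithms rest on the same decomposition, which is easy to check
in miniature (our sketch, with NumPy standing in for PIPE's stages):
split the large kernel into 3 by 3 tiles, apply each tile as a 3 by 3
neighborhood operation, then shift each partial result by its tile's
offset and add.  Away from the image border the tiled result matches
the direct one.

    import numpy as np

    def correlate(img, k):
        """Zero-padded 'same' neighborhood operation (one PIPE stage)."""
        kh, kw = k.shape
        pad = np.pad(img, ((kh // 2,), (kw // 2,))).astype(float)
        out = np.zeros(img.shape)
        for u in range(kh):
            for v in range(kw):
                out += k[u, v] * pad[u:u + img.shape[0], v:v + img.shape[1]]
        return out

    def tiled_correlate(img, k):
        """Same operation built from 3x3 pieces, shifts, and additions."""
        n = k.shape[0]                 # assumes n odd and divisible by 3
        c = n // 2
        total = np.zeros(img.shape)
        for a in range(0, n, 3):
            for b in range(0, n, 3):
                part = correlate(img, k[a:a + 3, b:b + 3])
                dy, dx = a + 1 - c, b + 1 - c   # tile-centre offset
                total += np.roll(part, (-dy, -dx), axis=(0, 1))
        return total

    img, k = np.random.rand(32, 32), np.random.rand(9, 9)
    full, tiled = correlate(img, k), tiled_correlate(img, k)
    assert np.allclose(full[8:-8, 8:-8], tiled[8:-8, 8:-8])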

%A Michael J. Carey
%A David J. DeWitt
%A Daniel Frank
%A Goetz Graefe
%A Joel E. Richardson
%A Eugene J. Shekita
%A M. Muralikrishna
%T The Architecture of the EXODUS Extensible DBMS: A Preliminary Report
%D May 1986
%R TR 644
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  With non-traditional application areas such as engineering
design, image/voice data management, scientific/statistical applications,
and artificial intelligence systems all clamoring for ways to store and
efficiently process larger and larger volumes of data, it is clear that
traditional database technology has been pushed to its limits.  It also
seems clear that no single database system will be capable of
simultaneously meeting the functionality and performance requirements of
such a diverse set of applications.  In this paper we describe the
preliminary design of EXODUS, an extensible database system that will
facilitate the fast development of high-performance, application-specific
database systems.  EXODUS provides certain kernel facilities, including a
versatile storage manager and a type manager.  In addition, it provides an
architectural framework for building application-specific database systems,
tools to partially automate the generation of such systems, and libraries of
software components (e.g., access methods) that are likely to be useful for
many application domains.

%A Klaus Hollig
%T Geometric Continuity of Spline Curves and Surfaces
%D June 1986
%R TR 645
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  We review &beta&-spline theory for curves and show how some of
the concepts can be extended to surfaces.  Our approach is based on the
Bezier form for piecewise polynomials which yields simple geometric
characterizations of smoothness constraints and shape parameters.  For
curves most of the standard "spline calculus" has been developed.  We
discuss in particular the construction of B-splines, the conversion from
B-spline to Bezier representation, and interpolation algorithms.  A
comparable theory for spline surfaces on general meshes does not yet
exist.  We merely describe how to join triangular and rectangular
patches and discuss the corresponding &beta&-spline constraints in terms of
the Bezier representation.

%A Leonard Uhr
%T Parallel, Hierarchical Software/Hardware Pyramid Architectures
%D June 1986
%R TR 646
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  This paper examines pyramid-structured software/hardware
systems, both programs and multi-computers.  It explores how efficient
parallel computers can be designed.  It poses the extremely difficult
problem of perception, and briefly surveys the major alternative
approaches.  It examines the basic structures with which pyramids can be
built, and describes pyramid multi-computers.  It sketches out how pyramid
hardware can be used to execute programs, with special emphasis on the
"recognition cone" structures being developed to model living visual
systems.  It briefly suggests how pyramids can be augmented and embedded
into larger software/hardware structures.  In addition, it gives a short
history of massively parallel hierarchical pyramid structures.

%A O. L. Mangasarian
%A R. De Leone
%T Parallel Successive Overrelaxation Methods for Symmetric Linear
Complementarity Problems and Linear Programs
%D June 1986
%R TR 647
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%O $0.00
%X Abstract:  A parallel successive overrelaxation (SOR) method is proposed for
the solution of the fundamental symmetric linear complementarity problem.
Convergence is established under a relaxation factor which approaches the
classical value of 2 for a loosely coupled problem.  The parallel SOR
approach is then applied to solve the symmetric linear complementarity
problem associated with the least norm solution of a linear program.
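
For reference, the classical serial iteration being parallelized is
projected SOR for the symmetric linear complementarity problem; in our
notation (a textbook statement, not quoted from the report), with
relaxation factor omega:

    % Symmetric LCP:  z >= 0,  Mz + q >= 0,  z^T (Mz + q) = 0.
    % One projected SOR sweep, component by component:
    z_i^{k+1} = \max\Bigl\{\,0,\;
        z_i^{k} - \frac{\omega}{M_{ii}}\Bigl(q_i
          + \sum_{j<i} M_{ij}\, z_j^{k+1}
          + \sum_{j\ge i} M_{ij}\, z_j^{k}\Bigr)\Bigr\},
    \qquad 0 < \omega < 2 .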