[comp.doc.techreports] tr-input/wisc.89

leff@smu.UUCP (Laurence Leff) (02/26/89)

Bibliography of Technical Reports
Computer Sciences Department
University of Wisconsin
November 1988 - January 1989

For copies, send requests to:

Technical Report Librarian
Computer Sciences Department
University of Wisconsin
1210 W. Dayton Street
Madison, WI 53706 USA


%A R. E. Kessler
%A Richard Jooss
%A Alvin Lebeck
%A Mark D. Hill
%T Inexpensive Implementations of Set-Associativity
%D November 1988
%R TR 803
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X The traditional approach to implementing wide set-associativity is
expensive, requiring a wide tag memory (directory) and many comparators.
Here we examine alternative implementations of associativity that use
hardware similar to that used to implement a direct-mapped cache.  One
approach scans tags serially from most-recently used to least-recently
used.  An alternative approach uses a partial compare of a few bits from
each tag to reduce the number of tags that must be examined serially.  The
drawback of
both approaches is that they increase cache access time by a factor of two or
more over the traditional implementation of set-associativity, making them
inappropriate for cache designs in which a fast access time is crucial 
(e.g., level one caches, caches directly servicing processor requests).
These approaches are useful, however, if (1) the low miss ratio of wide
set-associative caches is desired, (2) the low cost of a direct-mapped
implementation is preferred, and (3) the slower access time of these 
approaches can be tolerated.  We expect these conditions to be true for 
caches in multiprocessors designed to reduce memory interconnection traffic,
caches implemented with large, narrow memory chips, and level two (or higher)
caches in a cache hierarchy.
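
The two lookup strategies can be sketched in a few lines of Python (an
illustrative model only, not code from the report; the 4-way set, the 8-bit
partial tags, and the MRU bookkeeping are assumptions made for the example):

    # Illustrative model of one cache set probed with direct-mapped-style
    # hardware.  Assumed parameters: 4 ways, 8-bit partial tags, an MRU list.
    class SetAssocSet:
        def __init__(self, ways=4, partial_bits=8):
            self.ways = ways
            self.partial_bits = partial_bits
            self.tags = [None] * ways        # full tag stored in each way
            self.mru = []                    # way indices, most-recent first

        def _touch(self, way):
            if way in self.mru:
                self.mru.remove(way)
            self.mru.insert(0, way)

        def probe_serial(self, tag):
            """Scan the ways one at a time, from MRU to LRU."""
            probes = 0
            for way in self.mru:
                probes += 1
                if self.tags[way] == tag:
                    self._touch(way)
                    return way, probes       # hit after 'probes' serial probes
            return None, probes              # miss after scanning all used ways

        def probe_partial(self, tag):
            """Compare a few tag bits first, then serially verify the matches."""
            mask = (1 << self.partial_bits) - 1
            candidates = [w for w in self.mru
                          if (self.tags[w] & mask) == (tag & mask)]
            probes = 0
            for way in candidates:           # usually far fewer than all ways
                probes += 1
                if self.tags[way] == tag:
                    self._touch(way)
                    return way, probes
            return None, probes

        def fill(self, tag):
            """On a miss, place the tag in an empty way or the LRU way."""
            way = (self.tags.index(None) if None in self.tags
                   else self.mru[-1])
            self.tags[way] = tag
            self._touch(way)
            return way

In hardware both probes would reuse the single tag memory and comparator of
a direct-mapped design; the sketch simply counts how many sequential probes
each strategy needs.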

%A Yannis E. Ioannidis
%T Commutativity and its Role in the Processing of Linear Recursion
%D November 1988
%R TR 804
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We investigate the role of commutativity in query processing of linear
recursion.  For a restricted class of recursions we give a necessary
and sufficient condition for two rules to commute. The condition depends
on the form of the rules themselves.  Using the algebraic structure of
such rules, we study the relationship of commutativity with several other
properties of linear recursive rules.  We show that commutativity is at the
center of several important special classes of linear recursion, i.e.,
separable recursion and recursion with recursively redundant predicates.
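
What it means for two rules to commute can be seen with a small experiment
(purely illustrative; the relation names and facts below are invented, and
the check says nothing about the report's syntactic condition):

    # Two linear recursive rules over a binary predicate a, with base
    # relations b and c (all names are illustrative):
    #   rule1:  a(X,Y) :- b(X,Z), a(Z,Y)
    #   rule2:  a(X,Y) :- c(X,Z), a(Z,Y)
    def apply_rule(base, a):
        """One application of a linear rule: compose 'base' with 'a'."""
        return a | {(x, y) for (x, z1) in base for (z2, y) in a if z1 == z2}

    b = {(1, 2)}
    c = {(10, 20)}
    a0 = {(2, 3), (20, 30)}          # initial facts for the recursive predicate

    one_two = apply_rule(c, apply_rule(b, a0))
    two_one = apply_rule(b, apply_rule(c, a0))
    print("commute on this database:", one_two == two_one)   # True here

    # The rules commute only if the two orders agree for every database; the
    # report characterizes when that holds in terms of the form of the rules.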

%A Vasant Honavar
%A Leonard Uhr
%D November 1988
%R TR 805
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X This paper presents and compares results for three types of 
connectionist networks:  (A) Multi-layered converging networks of neuron-like
units, with each unit connected to a small randomly chosen subset of units in
the adjacent layers, that learn by re-weighting of their links;  (B) Networks
of neuron-like units structured into successively larger modules under
brain-like topological constraints (such as layered, converging-diverging
heterarchies and local receptive fields) that learn by re-weighting of their
links;  (C) Networks with brain-like structures that learn by generation-
discovery, which involves the growth of links and recruiting of units in 
addition to re-weighting of links.  
Preliminary empirical results from simulation of these networks for
perceptual recognition tasks show large improvements in learning from
using brain-like structures (e.g., local receptive fields, global convergence)
over networks that lack such structure; further substantial improvements in
learning result from the use of generation in addition to reweighting of links.
We examine some of the implications of these results for perceptual learning
in connectionist networks.
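
The structural difference between network types (A) and (B) can be made
concrete with a small numpy sketch (layer sizes, fan-in, and stride are
arbitrary assumptions for the illustration, not values from the report):

    import numpy as np

    # Connection masks from a 16x16 input layer to an 8x8 layer; both masks
    # give every unit the same fan-in, but differ in where the links go.
    rng = np.random.default_rng(0)
    in_side, out_side, rf = 16, 8, 4        # input side, output side, field size
    n_in, n_out = in_side * in_side, out_side * out_side
    fan_in = rf * rf

    # (A) each unit connects to a random subset of the previous layer
    random_mask = np.zeros((n_out, n_in), dtype=bool)
    for u in range(n_out):
        random_mask[u, rng.choice(n_in, size=fan_in, replace=False)] = True

    # (B) each unit connects to a local rf x rf patch (a receptive field)
    local_mask = np.zeros((n_out, n_in), dtype=bool)
    for r in range(out_side):
        for c in range(out_side):
            rows = np.arange(r * 2, r * 2 + rf) % in_side
            cols = np.arange(c * 2, c * 2 + rf) % in_side
            idx = (rows[:, None] * in_side + cols[None, :]).ravel()
            local_mask[r * out_side + c, idx] = True

    # Learning by re-weighting of links would update weights only where the
    # chosen mask is True, e.g.  w -= lr * grad * mask.
    print(random_mask.sum(axis=1).mean(), local_mask.sum(axis=1).mean())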

%A Gilbert Verghese
%A Charles R. Dyer
%T Real-Time, Model-Based Tracking of Three-Dimensional Objects
%D November 1988
%R TR 806
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X In this paper we present new algorithms for real-time, model-based 
tracking of three-dimensional (3D) objects which are represented using 
polyhedral models.  Given a previous viewpoint solution, this is the problem
of continuously revising the camera viewpoint parameters based on the steadily
changing image of a 3D object.  We do not consider the problems of initially
identifying the object or calculating the initial viewpoint; we only consider
the dynamic aspects of 3D tracking.  We assume a single object is moving 
arbitrarily in space, permitting changes in all 6 degrees of freedom.  The 
object can be moving at an arbitrary velocity, but we assume a dense 
sequence of images so that the object has both temporal and spatial continuity.
We do not assume any knowledge of the motion parameters, however.
Two algorithms are described in detail.  Each algorithm is implemented as a
parallel program running on two tightly-coupled multiprocessors - a Sequent
Symmetry for high-level processing (HLP) and an Aspex Pipe for low-level 
processing (LLP).  The two algorithms differ in that one uses a camera 
parameter hypothesize-and-test approach and the other uses dynamic edge
tracking for camera parameter adjustment.  The hypothesize-and-test algorithm
uses the HLP to hypothesize projections from slightly different viewpoints
and the LLP cross-correlates these hypotheses with successive images.  For
each set of hypotheses, the HLP computes a revised set of viewpoint 
parameters based on the best cross-correlation results.  In the second 
algorithm the current projected-view of the model is sent to the LLP by the
HLP; the HLP incrementally readjusts the viewpoint based on results of
continual local shifting of the projected points with respect to the edge
points detected in the images.  This dynamic edge tracking is performed by 
the LLP.  Predictive methods which assume bounded 3D translational and 
angular accelerations are used whenever speeds exceed the bounds determined
by the camera parameter update rate.
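
A stripped-down, two-dimensional analogue of the hypothesize-and-test loop
is sketched below; the toy render function, the parameter perturbations, and
the scoring are illustrative assumptions and do not reproduce the
Sequent/PIPE implementation:

    import numpy as np

    def render(params, size=32):
        """Toy 'projection': a bright square positioned by params = (x, y)."""
        img = np.zeros((size, size))
        x, y = int(round(params[0])) % size, int(round(params[1])) % size
        img[y:y + 4, x:x + 4] = 1.0
        return img

    def score(a, b):
        """Normalized cross-correlation of two images (the LLP's role)."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def track_step(estimate, image, step=1.0):
        """Hypothesize viewpoints near the estimate, keep the best match (HLP)."""
        hypotheses = [estimate + np.array([dx, dy]) * step
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        return max(hypotheses, key=lambda h: score(render(h), image))

    # A dense image sequence: the object moves a fraction of a pixel per frame,
    # so temporal continuity keeps the true pose within the hypothesis set.
    estimate = np.array([5.0, 5.0])
    truth = np.array([5.0, 5.0])
    for frame in range(10):
        truth = truth + np.array([0.7, 0.4])
        estimate = track_step(estimate, render(truth))
    print("final error:", np.abs(estimate - truth))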

%A Steven Scott
%A Gurindar Sohi
%T Using Feedback to Control Tree Saturation in Multistage 
Interconnection Networks
%D November 1988
%R TR 807
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X In this paper, we propose the use of feedback schemes in 
multiprocessors that use an interconnection network with distributed
routing control.  We show that by altering system behavior so as to
minimize the occurrence of a performance-degrading situation, the
overall performance of the system can be improved.
As an example, we have considered the problem of tree saturation caused by hot 
spots in multistage interconnection networks.  Tree saturation causes 
degradation to all processors in a system, including those not 
participating in the hot spot activity.  We see that feedback schemes can be
used to control tree saturation, reducing degradation to other memory requests
and thereby increasing total system bandwidth.  As a companion to feedback
schemes, damping schemes are also considered.  Simulation studies presented
in this paper show that feedback schemes can improve overall system 
performance significantly in many cases.
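
The feedback idea can be caricatured in a few lines; the congestion
estimate, decay factor, and threshold below are made-up parameters and do
not reflect the detection or damping schemes studied in the report:

    # Each processor keeps a per-module congestion estimate, raises it when the
    # network reports backup toward that module, and holds back new requests to
    # modules whose estimate exceeds a threshold.
    class FeedbackThrottle:
        def __init__(self, n_modules, threshold=4.0, decay=0.9):
            self.congestion = [0.0] * n_modules
            self.threshold = threshold
            self.decay = decay

        def feedback(self, module, backed_up):
            """Record one round of feedback from the network about a module."""
            self.congestion[module] *= self.decay
            if backed_up:
                self.congestion[module] += 1.0

        def may_issue(self, module):
            """Consulted before sending a request to 'module'."""
            return self.congestion[module] < self.threshold

    throttle = FeedbackThrottle(n_modules=16)
    for _ in range(8):                      # repeated backup toward hot module 3
        throttle.feedback(3, backed_up=True)
    print(throttle.may_issue(3), throttle.may_issue(7))   # False True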

%A M. J. Carey
%A D. J. DeWitt
%A G. Graefe
%A D. M. Haight
%A J. E. Richardson
%A D. T. Schuh
%A E. J. Shekita
%A S. L. Vandenberg
%T The EXODUS Extensible DBMS Project: An Overview  
%D November 1988
%R TR 808
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X This paper presents an overview of EXODUS, an extensible database
system project that is addressing data management problems posed by a variety
of challenging new applications.  The goal of the project is to facilitate
the fast development of high-performance, application-specific database
systems.  EXODUS provides certain kernel facilities, including a versatile
storage
manager.  In addition, it provides an architectural framework for building
application-specific database systems; powerful tools to help automate the
generation of such systems, including a rule-based query optimizer generator
and a persistent programming language; and libraries of generic software 
components (e.g., access methods) that are likely to be useful for many
application domains.  We briefly describe each of the components of EXODUS 
in this paper, and we also describe a next-generation DBMS that we are now
building using the EXODUS tools.

%A Gurindar S. Sohi
%A Sriram Vajapeyam
%T Tradeoffs in Instruction Format Design for Horizontal Architectures
%D December 1988
%R TR 810
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X With recent improvements in software techniques and the enhanced
level of fine grain parallelism made available by such techniques, there has
been an increased interest in horizontal architectures and large instruction
words that are capable of issuing more than one operation per instruction.
This paper investigates some issues in the design of such instruction formats.
We study how the choice of an instruction format is influenced by factors 
such as the degree of pipelining and the instruction's view of the register
file.  Our results suggest that very large instruction words capable of
issuing one operation to each functional unit resource in a horizontal
architecture may be overkill.  Restricted instruction formats with limited
operation issuing capabilities are capable of providing similar performance
(measured by the total number of time steps) with significantly less
hardware in many cases.
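
The 'total number of time steps' metric can be made concrete with a toy
list-scheduling comparison; the dependence graph, unit latencies, and issue
widths below are invented for the illustration:

    # Schedule a small dependence DAG under an issue-width limit, with every
    # operation taking one time step.
    def schedule(deps, n_ops, width):
        """deps[i] is the set of ops that must finish before op i."""
        done, steps = set(), 0
        while len(done) < n_ops:
            ready = [i for i in range(n_ops) if i not in done and deps[i] <= done]
            if not ready:
                raise ValueError("cyclic dependences")
            done |= set(ready[:width])      # at most 'width' ops per instruction
            steps += 1
        return steps

    # Eight operations: two independent three-operation chains that merge into
    # a final two-operation chain.
    deps = {0: set(), 1: {0}, 2: {1}, 3: set(), 4: {3}, 5: {4}, 6: {2, 5}, 7: {6}}

    print("wide word, issue 8:", schedule(deps, 8, width=8))    # 5 time steps
    print("restricted, issue 2:", schedule(deps, 8, width=2))   # also 5 time steps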

%A G A Venkatesh
%A Charles N. Fischer
%T Construction of Program Analysis Techniques for Use in Program Development 
Environments
%D January 1989
%R TR 811
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Program analysis techniques have been used in the past to aid
in translation of programs.  Recently, techniques have been developed to
aid in the construction of programs.  Use of such techniques in 
interactive program synthesizers results in effective program development
environments.  The growing sophistication of these analysis techniques
necessitates a structured approach to their design to ease their
development as well as to ensure their correctness.
This report provides an overview of a framework that has been designed to
facilitate the construction of program analysis techniques for use in
programming environments generated by a tool such as the Synthesizer 
Generator.  The framework supports high-level specifications of analysis
techniques in a denotational fashion where the implementation details
can be ignored.  Many of the features commonly required in program
analysis techniques are provided as primitives in the framework, resulting
in clear and concise specifications that aid in the understanding of the
corresponding analysis techniques.  The framework is exemplified by
specifications for dependence analysis and scalar range analysis for
imperative programs.
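
As a tiny concrete instance of one of the analyses mentioned above, the
sketch below performs scalar range analysis on a straight-line program; the
three-statement mini-language is an assumption made for the example and is
not the framework's specification notation:

    # Interval (scalar range) analysis for a tiny straight-line language.
    # Statements: ('input', x, lo, hi)  -- x is read with a known range
    #             ('const', x, c)       -- x = c
    #             ('add',   x, y, z)    -- x = y + z
    def range_analysis(program):
        env = {}                               # variable -> (low, high)
        for stmt in program:
            if stmt[0] == 'input':
                _, x, lo, hi = stmt
                env[x] = (lo, hi)
            elif stmt[0] == 'const':
                _, x, c = stmt
                env[x] = (c, c)
            elif stmt[0] == 'add':
                _, x, y, z = stmt
                (ylo, yhi), (zlo, zhi) = env[y], env[z]
                env[x] = (ylo + zlo, yhi + zhi)   # interval addition
        return env

    prog = [('input', 'i', 0, 9),
            ('const', 'one', 1),
            ('add', 'j', 'i', 'one'),          # j in [1, 10]
            ('add', 'k', 'j', 'j')]            # k in [2, 20]
    print(range_analysis(prog))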

%A Charles K. Chui
%A Amos Ron
%T On The Convolution of a Box Spline with a Compactly Supported 
Distribution: Linear Independence for the Integer Translates
%D January 1989
%R TR 812
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X The problem of linear independence of the integer translates of
\(*m * \fIB\fR, where \(*m is a compactly supported distribution and 
\fIB\fR is an exponential box spline, is considered in this paper.  The main
result relates the linear independence issue with the distribution of the
zeros of the Fourier-Laplace transform @mu hat@ of \(*m on certain linear
manifolds associated with \fIB\fR.  The proof of our result makes essential
use of the necessary and sufficient condition derived in (11).
Several applications to specific situations are discussed.  Particularly,
it is shown that if the support of \(*m is small enough then linear
independence is guaranteed provided that @mu hat@ does not vanish at a
certain finite set of critical points associated with \fIB\fR.  Also, the
results here provide a new proof of the linear independence condition for
the translates of \fIB\fR itself.
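
In the notation above, \(*m is a compactly supported distribution @mu@,
\fIB\fR is an exponential box spline, and @mu hat@ is the Fourier-Laplace
transform of @mu@.  Two standard facts behind the statement (background
only, not the report's theorem) read, in LaTeX:

    % Linear independence of the integer translates of f = mu * B means that
    % the only coefficient sequence annihilating them is the zero sequence:
    \sum_{j \in \mathbb{Z}^s} c(j)\, f(\cdot - j) = 0
        \quad\Longrightarrow\quad c \equiv 0 .
    % The Fourier-Laplace transform turns the convolution into a product,
    % which is why the zeros of \hat{\mu} enter the analysis:
    \widehat{\mu * B}(\theta) = \hat{\mu}(\theta)\, \hat{B}(\theta),
        \qquad \theta \in \mathbb{C}^s .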

%A Amos Ron
%T On the Convolution of a Box Spline with a Compactly Supported
Distribution: The Exponential-Polynomials in the Linear Span
%D January 1989
%R TR 813
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X For a given box spline \fIB\fR and a compactly supported 
distribution @mu@, we examine in this note the convolution \fIB\fR *
@mu@ and the space \fIH(B *\fR @mu@) of all exponential-polynomials spanned
by its integer translates.  The main result here provides a necessary and
sufficient condition for the equality \fIH(B * \fR @mu@)  = \fIH(B)\fR.  This
condition
is given in terms of the distribution of the zeros of the Fourier-Laplace
transform of @B  * mu@ and allows us to reduce the above equality to 
much simpler settings.

This result is important for determining the approximation properties of
the space spanned by the integer translates of @B  * mu@.
A typical example is discussed.

%A James R. Goodman
%A Mary K. Vernon
%A Philip J. Woest
%T Efficient Synchronization Primitives for Large-scale Cache-coherent
Multiprocessors
%D January 1989
%R TR 814
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X This paper proposes a set of efficient primitives for process
synchronization in multiprocessors.  The only assumptions made in
developing the set of primitives are that hardware combining is not 
implemented in the interconnect, and (in one case) that the interconnect
supports broadcast.
The primitives make use of synchronization bits (syncbits) to provide a
simple mechanism for mutual exclusion.  The proposed implementation of
the primitives includes efficient (i.e., local) busy-waiting for syncbits.
In addition, a hardware-supported mechanism for maintaining a first-come 
first-serve queue of requests for a syncbit is proposed.  This queueing
mechanism allows for a very efficient implementation of, as well as 
fair access to, binary semaphores.  We also propose to implement 
Fetch_and_Add with combining in software rather than hardware.  This
allows an architecture to scale to a large number of processors while
avoiding the cost of hardware combining.
Scenarios for common synchronization events such as work queues and 
barriers are presented to demonstrate the generality and ease of use
of the proposed primitives.  The efficient implementation of the
primitives is simpler if the multiprocessor has a hardware
cache-consistency protocol.  To illustrate this point, we outline how
the primitives would be implemented in the Multicube multiprocessor.
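
To fix ideas, here is a software caricature of a syncbit used for mutual
exclusion and of Fetch_and_Add built on top of it; Python threads stand in
for processors, and neither local busy-waiting in the cache, the queueing
mechanism, nor software combining is modeled:

    import threading

    class Syncbit:
        """A one-bit lock; the guard lock stands in for the atomic hardware op."""
        def __init__(self):
            self._bit = False
            self._guard = threading.Lock()

        def test_and_set(self):
            with self._guard:
                old, self._bit = self._bit, True
                return old

        def clear(self):
            with self._guard:
                self._bit = False

    class Counter:
        """Fetch_and_Add realized in software, protected by a syncbit."""
        def __init__(self):
            self.value = 0
            self.bit = Syncbit()

        def fetch_and_add(self, delta):
            while self.bit.test_and_set():    # busy-wait until the syncbit is free
                pass
            old = self.value
            self.value = old + delta
            self.bit.clear()
            return old

    counter = Counter()

    def worker():
        for _ in range(1000):
            counter.fetch_and_add(1)

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter.value)                      # 4000: no increments were lost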

%A Judy Goldsmith
%T Polynomial Isomorphisms and Near-Testable Sets
%D January 1989
%R TR 816
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X This thesis investigates two areas in structural complexity, as 
listed below.
   \(bu  \fIThe Berman-Hartmanis Conjecture\fR
The Berman-Hartmanis conjecture states that all @<= sub m sup P@ 
-complete sets for \fIN P\fR are polynomially isomorphic.  We construct an
oracle \fIA\fR such that all @<= sub m sup P@ -complete sets for
@N~P sup A@ are @p sup A@ -isomorphic, and an oracle \fIB\fR
such that there are @<= sub 1tt sup P@ -complete sets for @N~P
sup B@ that are not @p sup B@ -isomorphic.
   \(bu   \fINear-Testable Sets\fR
Near-testability is a form of self-reducibility based on the lexicographic 
order.  In terms of complexity, the near-testable sets lie somewhere between
\fIP\fR and \fIE\fR \(ca \fIP S P A C E\fR.  We show that the existence of
one-way functions implies that there are near-testable sets that are not
in \fIP\fR.  In {GJY-87}, (2 for 1) p-cheatable sets were suggested as a
possible generalization of near-testable sets.  Here, we demonstrate that
\fIP\fR can be characterized as the intersection of the near-testable
sets with the (2 for 1) p-cheatable sets.  
                 
%A Eugene J. Shekita
%A Michael J. Carey
%T Performance Enhancement Through Replication in an Object-Oriented
DBMS
%R TR 817
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X In this paper we describe how replicated data can be used to
speed up query processing in an object-oriented database system.  The
general idea is to use replicated data to eliminate some of the functional
joins that would otherwise be required for query processing.  We refer to
our technique for replicating data as \fIfield replication\fR because it
allows individual data fields to be selectively replicated.  In the paper
we describe how field replication is specified at the data model level
and we present storage-level mechanisms to efficiently support it.  We
also present an analytical cost model to give some feel for how beneficial
field replication can be and the circumstances under which it breaks
down.  While field replication is a relatively simple notion, the analysis
shows that it can provide significant performance gains in many 
situations.  Much of the discussion is based on the EXTRA data model, but
the ideas presented here can also be extended to other data models that
support reference attributes or referential integrity facilities.
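
A toy rendition of the idea follows; the Employee/Department objects and
their fields are invented for the illustration and are not EXTRA syntax:

    # Replicating dept.name into each Employee lets a scan of employees avoid
    # the functional join (following the reference), at the cost of keeping
    # every copy up to date when the department is renamed.
    class Department:
        def __init__(self, name):
            self.name = name
            self._replicas = []               # employees holding a copy of name

        def rename(self, new_name):
            self.name = new_name
            for emp in self._replicas:        # the update cost of replication
                emp.dept_name = new_name

    class Employee:
        def __init__(self, emp_name, dept):
            self.emp_name = emp_name
            self.dept = dept                  # reference attribute
            self.dept_name = dept.name        # replicated field
            dept._replicas.append(self)

    def names_via_join(employees):
        return [(e.emp_name, e.dept.name) for e in employees]   # follows reference

    def names_via_replica(employees):
        return [(e.emp_name, e.dept_name) for e in employees]   # local read only

    toolroom = Department("Toolroom")
    staff = [Employee("pat", toolroom), Employee("kim", toolroom)]
    toolroom.rename("Fabrication")
    print(names_via_replica(staff) == names_via_join(staff))    # True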

%A Vasant Honavar
%T Perceptual Development and Learning: From Behavioral, Neurophysiological,
and Morphological Evidence to Computational Models  
%D January 1989
%R TR 818
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X An intelligent system has to be capable of adapting to a constantly
changing environment.  It therefore ought to be capable of learning from its
perceptual interactions with its surroundings.  This requires a certain amount
of plasticity in its structure.  Any attempt to model the perceptual 
capabilities of a living system or, for that matter, to construct a synthetic
system of comparable abilities, must therefore account for such plasticity
through a variety of developmental and learning mechanisms.  This paper
examines some results from neuroanatomical, morphological, as well as 
behavioral studies of the development of visual perception; integrates them
into a computational framework; and suggests several interesting experiments
with computational models that can yield insights into the development of
visual perception.

%A Thomas Reps
%T Demonstration of a Prototype Tool for Program Integration 
%D January 1989
%R TR 819
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X This paper illustrates a sample session with a preliminary 
implementation of a program-integration tool.  The tool has been embedded in a
program editor created using the Synthesizer Generator, a meta-system for
creating interactive, language-based program development systems.  Data-flow
analysis of programs is carried out according to the editor's defining
attribute grammar and used to construct dependence graphs.  An integration
command added to the editor invokes the integration algorithm on the
dependence graphs, reports whether the variant programs interfere, and, if
there is no interference, builds the integrated program.
It should be noted that the integration capabilities of the tool are severely
limited; in particular, the tool can only handle programs written in a 
simple language in which expressions contain scalar variables and constants, 
and the only statements are assignment statements, conditional statements,
and while-loops.

%A Carl de Boor
%A Amos Ron
%T On Polynomial Ideals of Finite Codimension with Applications to Box Spline
Theory 
%D January 1989
%R TR 821
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We investigate here the relations between an ideal \fI I \fR of
finite codimension in the space \(*p of multivariate polynomials and 
various ideals which are generated by lower order perturbations of the
generators of \fI I \fR.  Special emphasis is given to the question of the
codimension of \fI I \fR and its perturbed counterpart and the local
approximation order of their kernels.
The discussion, stimulated by certain results in approximation theory, allows
us to provide a simple analysis of the polynomial and exponential spaces
associated with box splines.  This includes their structure, dimension,
local approximation order and an algorithm for their construction.  The
resulting theory is extended to subspaces of the above exponential/polynomial
spaces.