[comp.doc.techreports] tr-input/wisc89.octnov

leff@smu.UUCP (Laurence Leff) (12/29/89)

Bibliography of Technical Reports
Computer Sciences Department
University of Wisconsin
October 1989 - November 1989

For copies, send requests to:

Technical Report Librarian
Computer Sciences Department
University of Wisconsin
1210 W. Dayton Street
Madison, WI 53706 USA


%A Anne Condon
%A Richard J. Lipton
%T Upper Bounds on the Complexity of Space Bounded Interactive Proofs 
%D April 1989
%R TR 841
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We show that the class \fIIP(2pfa)\fR of
languages accepted by interactive proof systems with finite state
verifiers is contained in \fIATIME\fR $( 2 sup {2 sup {O(n)}} )$.
We also show that 2-prover interactive proof systems with
finite state verifiers accept exactly the recursive languages.
Our results generalize to other space bounds.
We also obtain some results of independent interest 
on the rate of convergence of time-varying Markov chains and of
non-Markov chains, called feedback chains, to their halting states.

%A Michael J. Carey
%A Sanjay Krishnamurthi 
%A Miron Livny
%T Load Control for Locking:  The 'Half-and-Half' Approach 
%D October 1989
%R TR 880
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X  A number of concurrency control performance studies have shown that, under
high levels of data contention, concurrency control algorithms can exhibit
thrashing behavior which is detrimental to overall system performance.
In this paper, we present an approach to eliminating thrashing in the case
of two-phase locking, a widely used concurrency control algorithm.  Our
solution, which we call the `Half-and-Half' Algorithm, involves monitoring
the state of the DBMS in order to dynamically control the multiprogramming
level of the system.  Results from a performance study indicate that the
Half-and-Half algorithm can be very effective at preventing thrashing under
a wide range of operating conditions and workloads.
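
As a minimal sketch of the kind of dynamic load control the abstract
describes, consider the Python admission test below. The abstract does not
give the algorithm's actual rule, so the test (refusing new transactions
while too large a fraction of the active transactions is blocked on locks)
and its 0.5 threshold are hypothetical stand-ins, not the TR's
Half-and-Half algorithm itself.

    class LoadController:
        """Admission control that adapts the effective multiprogramming
        level to observed lock contention (illustrative only)."""

        def __init__(self, max_mpl, blocked_threshold=0.5):
            self.max_mpl = max_mpl                      # hard MPL limit
            self.blocked_threshold = blocked_threshold  # hypothetical trigger
            self.active = 0    # transactions currently in the system
            self.blocked = 0   # of those, how many are waiting on a lock

        def can_admit(self):
            if self.active >= self.max_mpl:
                return False
            if self.active and self.blocked / self.active >= self.blocked_threshold:
                return False   # looks like thrashing; hold new work back
            return True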

%A Mark Allmen 
%A Charles R. Dyer
%T Cyclic Motion Detection Using Spatiotemporal Surfaces and Curves 
%D October 1989
%R TR 881
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X The problem of detecting cyclic motion, while recognized by the psychophysical
community, has received very little attention in the computer vision
community. In this paper cyclic motion is formally 
defined as repeating curvature values along a path of motion. A procedure
is presented for cyclic motion detection using spatiotemporal (ST) 
surfaces and ST-curves. The projected movement of an object 
generates ST-surfaces. ST-curves are detected on the ST-surfaces, providing
an accurate, compact, qualitative description of the ST-surfaces. Curvature 
scale-space of the ST-curves is then used to detect intervals of repeating
curvature values. The successful detection of cyclic motion in two data 
sets is presented.
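
As a rough illustration of "repeating curvature values along a path of
motion", the Python sketch below looks for repetition in a 1-D curvature
signal by normalized autocorrelation. The TR's actual procedure works on
ST-surfaces, ST-curves, and curvature scale-space; this function and its
0.75 acceptance threshold are simplified assumptions.

    import numpy as np

    def repeating_curvature_lag(curvature, min_lag=2, threshold=0.75):
        """Return the lag at which the curvature signal best repeats,
        or None if no repetition scores above the threshold."""
        k = np.asarray(curvature, dtype=float)
        k -= k.mean()
        denom = float(np.dot(k, k))
        if denom == 0.0:
            return None
        best_lag, best_score = None, threshold
        for lag in range(min_lag, len(k) // 2):
            score = float(np.dot(k[:-lag], k[lag:])) / denom
            if score > best_score:
                best_lag, best_score = lag, score
        return best_lag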

%A Eric Bach
%A James Driscoll 
%A Jeffrey Shallit
%T Factor Refinement
%D October 1989
%R TR 883
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Suppose we have obtained a partial factorization of an integer $m$, say
$m = m sub 1 m sub 2 ... m sub j$.
Can we efficiently ``refine'' this factorization of $m$
to a more complete factorization
$m = prod from {1 <= i <= k} {n sub i} sup {e sub i}$,
where all the $n sub i >= 2$ are pairwise relatively prime, and $k ~>=~ 2$?
A procedure to find such refinements can be used
to convert a method for splitting integers into one that produces
complete factorizations,
to combine independently generated factorizations of a composite number,
and to parallelize the generalized Chinese remainder algorithm.
We apply Sleator and Tarjan's formulation of amortized analysis to prove
the surprising fact that our factor refinement algorithm takes
$O(( log m ) sup 2 )$ bit operations, the same as
required for a single gcd.  This is our main result, and appears
to be the first application
of amortized techniques to the analysis of a number-theoretic algorithm.
We also characterize the output of our factor refinement algorithm,
showing that the result of factor refinement
is actually a natural generalization of the greatest common divisor.
Finally, we also show how similar results can be obtained for
polynomials.  As an application, we give algorithms to produce
relatively prime squarefree factorizations and normal bases.
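
The refinement step itself is easy to state in code. Below is a naive
quadratic-time Python version that repeatedly splits a pair of factors by
their gcd until all bases are pairwise relatively prime; it preserves the
product at every step but makes no attempt at the TR's amortized
$O(( log m ) sup 2 )$ bound.

    from math import gcd

    def refine(factors):
        """Turn a list of integers >= 2 with product m into a list
        [(n_i, e_i)] of pairwise relatively prime bases with
        prod(n_i**e_i) == m."""
        work = [(f, 1) for f in factors if f > 1]
        progress = True
        while progress:
            progress = False
            for i in range(len(work)):
                for j in range(i + 1, len(work)):
                    (a, ea), (b, eb) = work[i], work[j]
                    g = gcd(a, b)
                    if g > 1:
                        # (a/g)**ea * (b/g)**eb * g**(ea+eb) == a**ea * b**eb
                        work[i] = (a // g, ea)
                        work[j] = (b // g, eb)
                        work.append((g, ea + eb))
                        work = [(n, e) for (n, e) in work if n > 1]
                        progress = True
                        break
                if progress:
                    break
        merged = {}
        for n, e in work:
            merged[n] = merged.get(n, 0) + e
        return sorted(merged.items())

For example, refine([12, 18]) yields [(2, 3), (3, 3)], i.e.
216 = 2**3 * 3**3.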

%A Kenneth Kunen
%T Logic for Logic Programmers 
%D October 1989
%R TR 884
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X The goal of logic programming is that the program, or database,
can be understood by logic alone, independently of any execution model.
Attempts to realize this goal have made it
clear that the logic involved must go beyond ordinary first-order logic.
This survey will explore several topics of current interest
in the logical meaning of logic programs, with particular attention paid to:
(1) The meaning of negation; this still remains problematical,
although many partial results are known.  (2) The meaning
of recursions; these imply a least fixed-point computation in
deductive databases, and something else in Prolog.  (3) The meaning of
the Prolog builtin predicates, such as the
evaluation of numeric terms.  (4) The semantic meaning
of the order in which a stream of answers is returned.
 
%A Amos Ron
%T A Characterization of the Approximation Order of Multivariate Spline Spaces 
%D November 1989
%R TR 885
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We analyze the approximation order associated with a directed set of spaces,
$ "{" S sub h "}" sub {h>0}$, each of which is spanned by the $italic hZZ sup s $-translates of one
compactly supported function $phi sub h ~ : ~ italic IR sup s ~ -> ~ C$. Under a
regularity condition on the sequence $"{"phi sub h"}" sub h$, we show that the optimal
approximation order (in the \(if-norm) is always realized by
quasi-interpolants, hence in a linear way. These quasi-interpolants provide 
the best approximation rates from $"{"S sub h "}" sub h$ to an exponential space of good 
approximation order at the origin. 
As for the case when each $S sub h$ is obtained by scaling $S sub 1$, 
under the assumption 
$sum from {alpha epsilon ZZ sup s} phi sub 1 ~ ( cdot - alpha ) ~ == ~ 0,$  (*)
the results here provide an unconditional characterization of the best
approximation order in terms of the polynomials in $S sub 1$. The necessity
of (*) in this characterization is demonstrated by a counterexample.

%A N. Dyn
%A I.R.H. Jackson
%A D. Levin 
%A A. Ron
%T On Multivariate Approximation by Integer Translates of a Basis Function
%D November 1989
%R TR 886
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Approximation properties of the dilations of the integer translates
of a smooth function, with some derivatives vanishing at
infinity, are studied.  The results apply to fundamental solutions
of homogeneous elliptic operators and to \*(lqshifted\*(rq 
fundamental solutions of the iterated Laplacian.
Following the approach from spline theory, the
question of polynomial reproduction by quasi-interpolation is
addressed first. The analysis makes essential use of the structure of
the generalized Fourier transform of the basis function.
In contrast with spline theory, polynomial reproduction is not sufficient
for the derivation of exact order of convergence by dilated
quasi-interpolants. These convergence orders are established by a careful and
quite involved examination of the decay rates of the basis function.
Furthermore, it is shown that the same approximation
orders are obtained with quasi-interpolants defined on a
bounded domain.

%A Carl de Boor 
%A Amos Ron
%T The Exponentials in the Span of the Integer Translates of a Compactly Supported Function
%D November 1989
%R TR 887
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Given a compactly supported function $phi ~~:~~ bold IR sup s
->~ C$ and the space \fIS\fR
spanned by its integer translates, we study quasiinterpolants which reproduce
(entirely or in part) the space \fI H\fR of all exponentials in \fI S\fR. We do this
by imitating the action on \fI H\fR of the associated
semi-discrete convolution operator $phi "*" prime$ by a 
convolution $lambda * , ~lambda$ being a compactly supported distribution,
and $lambda "*" sub {|H}$ by another local convolution operator $mu "*"$.
The discussion provides a unified theory for quasiinterpolants on regular
grids, showing that each specific construction now in the literature
corresponds to a special choice of $lambda$ and $mu$.
The natural choice $lambda ~=~ phi$ is singled out, and
the interrelation between $phi "*" prime$ and $phi "*"$ is analyzed in detail.
We use these observations in the conversion of the local approximation order
of an exponential space \fIH\fR into approximation rates from any space which
contains \fIH\fR and is spanned by the $italic hZZ sup s$-translates of a single compactly
supported function $phi$. The bounds obtained are desirable in the sense
that they rely only on \fIH\fR and the basic quantities diam supp $phi$
and $italic h sup {-s} "||" phi "||" inf "/" phi hat (0).$

%A Renato De Leone
%T Partially and Totally Asynchronous Algorithms for Linear Complementarity
Problems 
%D November 1989
%R TR 888
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X A unified treatment is given for partially and  totally
asynchronous parallel successive overrelaxation (SOR) algorithms for
the linear complementarity problem. Convergence conditions  are
obtained and compared to previous results.
This unified framework leads to a new convergence result for the
symmetric linear complementarity problem using a partially asynchronous method.
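
For reference, the classical sequential projected SOR iteration for the
linear complementarity problem (find x >= 0 with Mx + q >= 0 and
x'(Mx + q) = 0) looks as follows in Python; the TR concerns asynchronous
parallel variants of this scheme, which this sketch does not attempt.

    import numpy as np

    def projected_sor(M, q, omega=1.0, tol=1e-10, max_iter=10000):
        """Gauss-Seidel-style sweep with projection onto x >= 0;
        converges for symmetric positive definite M and 0 < omega < 2."""
        n = len(q)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_prev = x.copy()
            for i in range(n):
                r = q[i] + M[i] @ x          # uses newest values for j < i
                x[i] = max(0.0, x[i] - omega * r / M[i, i])
            if np.max(np.abs(x - x_prev)) < tol:
                break
        return x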

%A Jeffrey F. Naughton
%A Raghu Ramakrishnan
%T A Unified Approach to Logic Program Evaluation 
%D November 1989
%R TR 889
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X The Prolog evaluation algorithm has become the standard for
logic program evaluation, and bottom-up methods have long been
considered impractical because they compute irrelevant facts.
Recently, however, bottom-up evaluation algorithms that retain the
focusing property of top-down evaluation have been proposed, and in
view of these algorithms the choice between top-down and bottom-up
methods needs to be re-examined.
In order to motivate a closer look at bottom-up methods, we
identify certain classes of logic programs for which
bottom-up evaluation provides polynomial time evaluation algorithms
where Prolog takes exponential time. We also demonstrate that
techniques such as \fIpredicate factoring\fR
can provide a further \fIO(n)\fR improvement, when they are applicable.
We argue that no one evaluation method 
is uniformly preferable, and suggest that a choice of
the appropriate method must be made by the compiler based
on the given program.  We present
several results that shed light on this choice.
The bottom-up approach can be refined in a number of ways, and we
show how various results in the literature can be combined to
provide a coherent evaluation framework. Further, we indicate how
ideas in the \fItabulation\fR literature for functional programs
can be adapted to improve the memory utilization of bottom-up
methods. We also consider the program transformation techniques pioneered by
Burstall and Darlington, and study their relationship to deductive
database program transformations such as Magic Templates. 
The comparison indicates that the bottom-up approach, with the
refinements discussed in the paper, often achieves the same gains
as these sophisticated transformation systems, and thus illustrates
the power of the approach.
To keep this paper self-contained, we have included brief surveys of
all the techniques that we discuss.  We have not attempted to be
comprehensive in our survey; it is our hope that the reader will be
motivated to pursue these ideas in greater detail by following up on
the references. 
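
The exponential gap mentioned above is easy to reproduce on a toy
reachability program. In the Python sketch below (an illustration, not an
example from the TR), a graph with two parallel edges between consecutive
nodes has 2**n distinct paths; a failing Prolog-style top-down query
explores all of them, while a bottom-up fixpoint derives each reachable
node once.

    def edges(n):
        # two parallel edges i -> i+1 give 2**n distinct derivations
        return [(i, i + 1) for i in range(n) for _ in range(2)]

    def reach_top_down(graph, src, dst):
        """Prolog-style search: re-explores shared subgoals."""
        if src == dst:
            return True
        return any(reach_top_down(graph, v, dst)
                   for (u, v) in graph if u == src)

    def reach_bottom_up(graph, src, dst):
        """Naive bottom-up fixpoint: each fact is derived once."""
        reached = {src}
        changed = True
        while changed:
            changed = False
            for (u, v) in graph:
                if u in reached and v not in reached:
                    reached.add(v)
                    changed = True
        return dst in reached

    # reach_top_down(edges(25), 0, -1) makes ~2**25 calls before failing;
    # reach_bottom_up(edges(25), 0, -1) scans the edge list a few times.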

%A David Binkley
%A Susan Horwitz
%A Thomas Reps
%T The Multi-procedure Equivalence Theorem 
%D November 1989
%R TR 890
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Program dependence graphs have been used in program optimization,
vectorization, and parallelization.
They have also been used as the internal representation for programs in
programming environments, as well as for automatically merging program
variants.
This paper concerns the question of whether program dependence graphs are
\*(lqadequate\*(rq as a program representation.
Previous results on the adequacy of program dependence graphs have been limited
to a language without procedures and procedure calls.
Our main result is a theorem that extends previous results to a language with 
procedures and procedure calls.
The theorem shows that if the program dependence graphs of two programs are
isomorphic then the programs are strongly equivalent.

%A Mark D. Hill
%A James R. Larus
%T Cache Considerations for Programmers of Multiprocessors 
%D November 1989
%R TR 891
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Although caches in computers are invisible to programmers, they
significantly affect programs' performance.  This is particularly true
for multiprocessors.  This paper presents results from recent computer
architecture research about the performance of parallel programs and
discusses their implications for programmers who may know little or
nothing about caches.

%A Raghu Ramakrishnan 
%T Parallelism in Logic Programs 
%D November 1989
%R TR 892
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X There is a tension between the objectives of avoiding irrelevant computation 
and extracting parallelism, in that
a computational step used to restrict another must precede the latter.
Our thesis, following [BeR87], is that evaluation 
methods can be viewed as implementing a choice of 
\fIsideways information propagation graphs\fR, or sips, which
determines the set of goals and facts that must be evaluated.
Two evaluation methods that implement the same sips can then
be compared to see which obtains a greater degree of parallelism,
and we provide a formal measure of parallelism to make this comparison.
Using this measure, we prove that transforming a program using the Magic Templates
algorithm and then evaluating the fixpoint bottom-up provides
a ``most parallel'' implementation for a given choice of sips,
without taking resource constraints into account. This result, taken in
conjunction with earlier results from [BeR87,Ra88], which show that 
bottom-up evaluation performs no irrelevant computation and is sound
and complete, suggests that a bottom-up approach to parallel evaluation
of logic programs is very promising. A more careful analysis of the relative
overheads in the top-down and bottom-up evaluation paradigms is needed,
however, and we discuss some of the issues.
The abstract model allows us to
establish several results comparing other proposed
parallel evaluation methods in the logic programming and
deductive database literature, thereby showing some natural,
and sometimes surprising, connections.
We consider the limitations of the abstract model and of the proposed
bottom-up evaluation method, including the inability of sips to
describe certain evaluation methods, and the effect of resource 
constraints.  Our results shed light on the limits of the sip 
paradigm of computation, which we extend in the process. 

%A Michael J. Maher
%A Raghu Ramakrishnan 
%T Deja Vu in Fixpoints of Logic Programs 
%D November 1989
%R TR 893
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We investigate properties of logic programs that permit
refinements in their fixpoint evaluation and shed light
on the choice of control strategy. A fundamental aspect of
a bottom-up computation is that we must constantly check to see
if the fixpoint has been reached. If the computation
iteratively applies all rules, bottom-up, until the
fixpoint is reached, this amounts to checking if any new
facts were produced after each iteration. Such a check also
enhances efficiency in that duplicate facts need not be re-used in subsequent
iterations, if we use the Seminaive fixpoint evaluation strategy.
However, the cost of this check is a significant component of the cost
of bottom-up fixpoint evaluation, and
for many programs the full check is unnecessary.
We identify properties of
programs that enable us to infer that a much simpler check
(namely, whether any fact was produced in the previous iteration) suffices.
While it is in general undecidable whether
a given program has these properties, we develop techniques
to test sufficient conditions, and we illustrate these techniques on
some simple programs that have these properties.
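
The checks being compared can be seen in a standard semi-naive loop. The
toy Python transitive closure below (not from the TR) performs the full
check: the delta is compared against all known facts before the next
iteration. The TR asks when the cheaper test, whether the last iteration
produced any facts at all, can replace this duplicate-elimination step.

    def seminaive_tc(edges):
        """Semi-naive transitive closure of a set of (x, y) edge facts."""
        total = set(edges)
        delta = set(edges)
        while delta:                       # terminate when nothing is new
            derived = {(x, w) for (x, y) in delta
                              for (z, w) in edges if y == z}
            delta = derived - total        # the full duplicate check
            total |= delta
        return total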

%A Robert Netzer
%A Barton P. Miller
%T Detecting Data Races in Parallel Program Executions
%D November 1989
%R TR 894
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X This paper presents results that show how data races in a parallel
program execution can be detected
when only partial information about the
execution is recorded.
We present a formal model of a program execution, based on Lamport's
model of concurrent systems, and
define data races in terms of this model.
Executions of shared-memory parallel programs,
on sequentially consistent processors,
that use fork/join and semaphores are considered.
We distinguish between
an \fIactual data race\fP, which is a data race exhibited by an actual
program execution, and a \fIfeasible data race\fP, which is a data race
that could have been exhibited due to timing variations.
To accurately locate all actual data races in a program execution potentially
requires recording the
temporal ordering among all events in the execution.
Recording this information can incur unacceptable overhead.
To avoid this overhead,
we outline a scheme for recording only an approximation of the
temporal ordering,
and show how data races can be detected
using this approximation.
Our scheme for recording an approximation of the temporal ordering
is based on ordering only some of the events in the program execution.
We show how to detect data races
using this approximation,
and prove that all actual data races are detected.
Unlike
previous methods, which sometimes report
``data races'' that could never be exhibited by the program,
we also show how to characterize each detected data race as either being
feasible, or as belonging to a set of data races such that at least
one data race in the set is feasible.
We make this determination by examining how the
data dependences among the shared data
constrain alternate temporal orderings that could
potentially have been exhibited by the
program execution.
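
As a small illustration of race detection over a recorded partial order
(of the general idea only, not the TR's feasibility analysis), the Python
sketch below reports apparent races: same-location access pairs, at least
one a write, that the recorded happens-before edges leave unordered.

    import itertools

    def apparent_races(events, hb_edges):
        """events: {id: (location, 'r' or 'w')}; hb_edges: recorded
        (before, after) pairs. Suitable only for tiny executions."""
        hb = set(hb_edges)
        changed = True
        while changed:                  # transitive closure of the order
            changed = False
            for (a, b), (c, d) in itertools.product(list(hb), repeat=2):
                if b == c and (a, d) not in hb:
                    hb.add((a, d))
                    changed = True
        races = []
        for e1, e2 in itertools.combinations(events, 2):
            loc1, k1 = events[e1]
            loc2, k2 = events[e2]
            if loc1 == loc2 and 'w' in (k1, k2) \
               and (e1, e2) not in hb and (e2, e1) not in hb:
                races.append((e1, e2))
        return races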

%A Susan Horwitz
%T Identifying the Semantic and Textual Differences Between Two Versions of
a Program 
%D November 1989
%R TR 895
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Text-based file comparators (\fIe.g.\fP, the Unix utility \fIdiff\fP)
are very general tools that can be applied to arbitrary files.
However, using such tools to compare \fIprograms\fP
can be unsatisfactory because their \fIonly\fP notion of change
is based on program \fItext\fP rather than program \fIbehavior\fP.
This paper describes a technique for comparing two versions of a program,
determining which program components represent changes, and classifying
each changed component as representing either a \fIsemantic\fP or a
\fItextual\fP change.
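
The limitation that motivates the work is easy to demonstrate with a
text-based comparator. In the Python snippet below (an illustration, not
the TR's technique), two behaviorally identical versions differ only in
the order of two independent assignments, yet a textual diff reports
changes:

    import difflib

    old = ["a = 1", "b = 2", "print(a + b)"]
    new = ["b = 2", "a = 1", "print(a + b)"]  # same behavior, new text

    for line in difflib.unified_diff(old, new, lineterm=""):
        print(line)   # reports changed lines despite identical behavior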