[comp.doc.techreports] tr-input/wisc.88

leff@smu.UUCP (Laurence Leff) (11/18/88)

Bibliography of Technical Reports
Computer Science Department
University of Wisconsin
July 1988 - October 1988

For copies, send requests to:

Technical Report Librarian
Computer Science Department
University of Wisconsin
1210 W. Dayton Street
Madison, WI 53706



%A Victor Shoup
%T On The Deterministic Complexity of Factoring Polynomials Over Finite Fields
%D July 1988
%R TR 782
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We present some new deterministic algorithms 
for factoring polynomials over finite fields
that are asymptotically faster than many commonly
known deterministic factoring algorithms.
Let @p@ be a prime number.
Our main result is a deterministic algorithm that factors
polynomials in @\fBZ\fR sub p@[X] of degree @n@ using
\fIO\fR(@p sup {1/2+\(*e}@ @n sup {2+\(*e}@)
operations in @\fBZ\fR sub p@.
This improves upon the running time,
with respect to both @n@ and @p@, of many previously known 
deterministic factoring algorithms.

%A Barton P. Miller
%A Morgan Clark
%A Steven Kierstead
%A Sek-See Lim
%A Timothy Torzewski
%T IPS-2: The Second Generation of a Parallel Program Measurement System  
%D August 1988
%R TR 783
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X IPS is a performance measurement system for parallel and distributed
programs.  IPS's model of parallel programs uses knowledge about the semantics
of a program's structure to provide two important features.  First, IPS 
provides a large amount of performance data about the execution of a parallel
program, 
and this information is organized so that access to it is easy and intuitive.
Second, IPS provides performance analysis techniques that help to
automatically guide the programmer to the location of program
bottlenecks.

IPS is currently in its second implementation.  The
first implementation was a testbed for the basic design concepts,
providing experience with a hierarchical program and measurement
model, interactive program analysis, and automatic guidance 
techniques.  This implementation was built on the Charlotte Distributed
Operating System.  The second implementation, IPS-2, extends the
basic system with new instrumentation techniques, an interactive
and graphical user interface, and new automatic guidance analysis
techniques.  This implementation runs on 4.3BSD UNIX systems:
the VAX, the Sun-4, and the Sequent Symmetry multiprocessor.

%A William Harry Plantinga
%T The Asp: A Continuous, Viewer-Centered Object Representation for Computer
Vision
%D August 1988 
%R TR 784
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X In this thesis a new continuous, viewer-centered representation for 
polyhedral objects or scenes is introduced, called the \fIaspect  
representation\fR or \fIasp\fR.  The asp is viewer-centered in the sense that
it represents the appearance of a polyhedron to a viewer as a two-dimensional
line drawing, rather than the volume of space that it fills.  It is 
continuous in the sense that it represents appearance for all viewpoints 
rather than for a discrete set of viewpoints.  In effect, it is a
representation of appearance as a function of viewpoint.  Analyses of the size
of asps and algorithms for their construction are given under orthographic and 
perspective viewing models, for convex and non-convex polyhedra.  The 
worst-case size of an asp is \(*H(n) in the convex case and \(*H(@n sup 4@)
in the non-convex case.

The asp is the first representation of appearance as a function of viewpoint.
As such it is useful for problems that depend on knowing the appearance of
an object from many viewpoints, finding ranges of viewpoints with particular
characteristics of appearance, and determining viewpoints at which particular
visual events happen.  Many problems in computer vision and graphics fall into 
one of these categories.  In general, the asp is useful for problems that make
repeated use of appearance characteristics, justifying the representation of 
all visual events.

In this thesis the asp is applied to three problems, in each case yielding
improvements or advantages over previous results.  The first problem is the
construction of the aspect graph, a graph defined to have a vertex for each
topologically-distinct view of a polyhedron and edges connecting adjacent
views.  Using the asp, we present the first algorithm for constructing the
aspect graph for general polyhedra.  The second application is object 
recognition.  In this application we show how to use the asp to represent
and use occlusion information in recognizing three-dimensional objects from
an arbitrary viewpoint.  The third application is animating rotation, or
showing views of a polyhedral scene from many closely-spaced viewpoints fast
enough to give the appearance of the scene as the viewer moves around it.


%A Jong-Deok Choi
%A Barton P. Miller
%A Robert Netzer
%T Techniques for Debugging Parallel Programs with Flowback Analysis
%D August 1988
%R TR 786
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Flowback analysis is a powerful technique for debugging programs.
It allows the programmer to examine dynamic dependences in a program's
execution history without having to re-execute the program.  The goal is to
present to the programmer a graphical view of the dynamic program
dependences.  We are
designing a system, called PPD, that performs flowback analysis while keeping
the execution time overhead low.  We also extend the semantics of flowback 
analysis to parallel programs.  This paper describes details of the graphs and
algorithms needed to implement efficient flowback analysis for parallel 
programs.

Execution time overhead is kept low by recording only a small amount of trace 
during a program's execution.  We use semantic analysis and a technique called
\fIincremental tracing\fR to keep the time and space overhead low.  As part of
the semantic analysis, PPD uses a static program dependence graph structure 
that reduces the amount of work done at compile time and takes advantage of 
the dynamic information produced during execution time.

Parallel programs have been accommodated in two ways.  First, the flowback
dependences can span process boundaries; i.e., the most recent modification to
a variable might be traced to a different process than the one that contains
the current reference.  The static and dynamic program dependence graphs of
the individual processes are tied together with synchronization and data 
dependence information to form complete graphs that represent the entire 
program.  Second, our algorithms will detect potential race conditions in the
access to variables.  The programmer can be directed to the cause of the race
condition.
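
To make the last point concrete, the following hypothetical C fragment
exhibits the kind of unsynchronized shared access such race-detection
algorithms are aimed at; the names are illustrative only and do not come
from PPD:

    #include <pthread.h>
    #include <stdio.h>

    static int balance = 0;                /* shared, unprotected variable */

    static void *deposit(void *arg)
    {
        (void)arg;
        /* Unsynchronized read-modify-write: two threads interleaving here
           can lose an update.  Flowback analysis would trace the value of
           balance back to a write in a different thread of control. */
        balance = balance + 1;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, deposit, NULL);
        pthread_create(&t2, NULL, deposit, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("balance = %d\n", balance); /* 2 expected, 1 possible */
        return 0;
    }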

PPD is currently being implemented for the C programming language on a Sequent
Symmetry shared-memory multiprocessor.

%A O. L. Mangasarian
%T Error Bounds for Nondegenerate Monotone Linear Complementarity Problems
%D August 1988
%R TR 787
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Error bounds and upper Lipschitz continuity results are given for
monotone linear complementarity problems with a nondegenerate solution.  The
existence of a nondegenerate solution considerably simplifies the error bounds
compared with problems for which all solutions are degenerate.  Thus when a 
point satisfies the linear inequalities of a nondegenerate complementarity 
problem, the residual that bounds the distance from a solution point consists
of the complementarity condition alone, whereas for degenerate problems this
residual cannot bound the distance to a solution without adding the square root
of the complementarity condition to it.  This and other simplified results are
a consequence of the polyhedral characterization of the solution set as the
intersection of the feasible region
{\fIz\fR \(or \fIM z + q\fR \(>= 0, \fIz\fR \(>= 0}
with a single linear affine inequality constraint.
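
Schematically, the contrast just described can be written as follows (a
paraphrase of the abstract in LaTeX notation; the constant $\tau$, the norms,
and the exact form of the bounds are as in the report, and $\bar S$ denotes
the solution set):

    \[ \mathrm{dist}(z,\bar S) \le \tau\, z^{\mathsf T}(Mz+q)
       \quad \text{(nondegenerate case)}, \]
    \[ \mathrm{dist}(z,\bar S) \le \tau \bigl( z^{\mathsf T}(Mz+q)
       + (z^{\mathsf T}(Mz+q))^{1/2} \bigr)
       \quad \text{(degenerate case)}, \]

for any $z$ satisfying $Mz+q \ge 0$ and $z \ge 0$.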

%A Mary K. Vernon
%A Scott T. Leutenegger
%T Fairness Analysis of Multiprocessor Bus Arbitration Protocols
%D September 1988
%R TR 744
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Efficiency and fairness are two important properties of the bus
arbiter in a bus-based multiprocessor system.  The \fIparallel contention 
arbiter\fR is a highly efficient arbiter that requires very few control lines 
on the bus.  The popularity of this arbiter is demonstrated by its 
adoption in nearly all multiprocessor bus standards, including the Fastbus, 
Futurebus, NuBus, and Multibus II standards.  We have used Generalized Timed
Petri Nets to analyze the fairness of the protocols used in these arbiters.  We
also compare the efficiency and fairness of these protocols with the central
round-robin, central priority, and simple daisy-chain arbiters.

The "fairness protocols" that have been implemented in the parallel contention
arbiter are widely regarded as being perfectly "fair", but no quantitative 
studies of their fairness have been reported.  Our results show that these
protocols have some undesirable fairness properties at high bus load.  We 
define a useful fairness measure to be the ratio of the bandwidths allocated to
the devices that are treated most favorably and least favorably by the
protocols.  Under an important set of assumptions about the system workload,
this ratio
is as high as 2.0 for the existing fairness protocols.  We propose a small
modification to one of the protocols that dramatically improves the fairness 
of the algorithm.  In the modified protocol, the ratio of the bandwidths 
asymptotically approaches 1.0 as the load on the bus increases.  However, the
bandwidths allocated to devices by the improved protocol still have a spread
of up to 10-15% in a range of offered loads where the bus is just becoming 
saturated.

The relative bandwidth allocated to each processor translates directly to the
relative speeds at which application processes run on the processor.  Our
analytical results do not argue strongly in favor of changing existing 
standards or implementations, since the unfairness only arises at peak bus
loads.  However, we also present a new protocol for the parallel contention 
arbiter that implements the round-robin algorithm of central round-robin 
arbiters.  The new protocol is as efficient as the existing protocols, is
simpler to implement, and is perfectly fair.  It would thus be the preferred
protocol when designing a new standard or a new system bus. 
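
For reference, the grant decision of a central round-robin arbiter is simple
to state in code.  A C sketch (device count and names are assumed for
illustration, not taken from the report):

    #include <stdint.h>

    #define NDEV 32                       /* devices on the bus (assumed) */

    /* Grant the bus to the next pending requester after last_granted. */
    int rr_grant(uint32_t requests, int last_granted)
    {
        for (int i = 1; i <= NDEV; i++) {
            int d = (last_granted + i) % NDEV;
            if (requests & ((uint32_t)1 << d))
                return d;
        }
        return -1;                        /* no device is requesting */
    }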

%A Giri Narasimhan
%A Rachel Manber
%T Stability Number and Chromatic Number of Tolerance Graphs
%D September 1988
%R TR 788
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X An undirected graph \fIG(V, E)\fR is a \fItolerance graph\fR if there
exists a collection \fII\fR = {@v bar@ \(or \fIv\fR \(mo \fIV\fR} of closed
intervals on a line and a multiset \fIT\fR = {@t sub v@ \(or \fIv\fR \(mo
\fIV\fR} such that (\fIx, y\fR) \(mo \fIE\fR if and only if
\(or @x bar@ \(ca @y bar@ \(or \(>= min {@t sub x@, @t sub y@}.
Here \(or @x bar@ \(or denotes the length of interval @x bar@.  We present
algorithms to compute the chromatic number, the stability number, the clique
number, and the clique cover number of tolerance graphs.
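
The defining condition translates directly into code.  A small C sketch, for
illustration only:

    typedef struct { double lo, hi; } interval;

    /* Length of the overlap of two closed intervals (0 if disjoint). */
    static double overlap(interval a, interval b)
    {
        double lo = a.lo > b.lo ? a.lo : b.lo;
        double hi = a.hi < b.hi ? a.hi : b.hi;
        return hi > lo ? hi - lo : 0.0;
    }

    /* Edge test: x and y are adjacent iff |xbar ^ ybar| >= min(tx, ty). */
    int tolerance_edge(interval x, interval y, double tx, double ty)
    {
        double t = tx < ty ? tx : ty;
        return overlap(x, y) >= t;
    }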

%A Amarnath Mukherjee
%A Lawrence H. Landweber
%A John C. Strikwerda
%T Evaluation of Retransmission Strategies in a Local Area Network Environment
%D September 1988
%R TR 789
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We present an evaluation of retransmission strategies over local 
area networks.  Expressions are derived for the expectation and the variance
of the transmission time of the go-back-n and the selective repeat protocols
in the presence of errors.  These are compared to the expressions for blast
with full retransmission on error (BFRE) derived by Zwaenepoel {Zwa 85}.  We
conclude that go-back-n performs almost as well as selective repeat and is 
very much simpler to implement, while BFRE is stable only for a limited range
of message sizes and error rates.  We also present a variant of BFRE that
optimally checkpoints the transmission of a large message.  This is shown to
overcome the instability of ordinary BFRE.  It has a simple state machine and
seems to take full advantage of the low error rates of local area networks.
We further investigate go-back-n by generalizing the analysis to an upper-layer
transport protocol, which is likely to encounter, among other things,
\fIvariable\fR delays due to protocol overhead, multiple connections, process
switches, and operating system scheduling priorities.
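
As a rough illustration of the first conclusion, the standard textbook
first-order estimates (not the exact expressions derived in the report) for
the expected number of transmissions per delivered packet already show
go-back-n close to selective repeat at LAN error rates:

    #include <stdio.h>

    /* Expected transmissions per delivered packet, error probability p,
       go-back-n window size n (textbook approximations). */
    double selective_repeat(double p)  { return 1.0 / (1.0 - p); }
    double go_back_n(double p, int n)  { return (1.0 + p * (n - 1)) / (1.0 - p); }

    int main(void)
    {
        double p = 1e-4;               /* LAN-like packet error rate */
        printf("SR: %.6f  GBN(8): %.6f\n",
               selective_repeat(p), go_back_n(p, 8));
        return 0;
    }

At p = 10^-4 the two estimates differ by less than 0.1%, consistent with the
conclusion above.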

%A G. S. Sohi
%T High-bandwidth Interleaved Memories for Vector Processors - A Simulation Study 
%D September 1988
%R TR 790
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Sustained memory bandwidth for a range of access patterns is a key
to high-performance vector processing.  Interleaving is a popular way of 
constructing a high-bandwidth memory system.  Unfortunately, for some access
patterns, conflicts reduce the bandwidth of a standard, low-order interleaved
memory.  To improve memory bandwidth for a wide range of access patterns, 
alternate interleaving schemes must be considered.  This paper studies a family
of alternate interleaving schemes called permutation-based interleaving 
schemes.  Permutation-based interleaving schemes can be implemented with a 
small amount of additional hardware and with minimal time overhead.  This
paper carries out a detailed simulation analysis of these schemes.  The
simulation analysis suggests that, with adequate buffering, permutation-based
interleaving schemes similar to those studied in this paper can be used to
implement a high-bandwidth memory system for vector processors.  The
resulting memory system sustains its bandwidth for a wide variety of access
patterns and for large bank busy times far better than a memory system with
standard interleaving.
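
One common permutation-based scheme XORs two bit fields of the address to
form the bank number, so that power-of-two strides no longer map to a single
bank.  A C sketch under assumed parameters (64 banks); the schemes simulated
in the report may differ in detail:

    #include <stdint.h>

    #define BANK_BITS 6
    #define NBANKS (1u << BANK_BITS)      /* 64 banks (assumed) */

    unsigned bank_standard(uint32_t addr) /* low-order interleaving */
    {
        return addr & (NBANKS - 1);
    }

    unsigned bank_permuted(uint32_t addr) /* XOR two bit fields */
    {
        uint32_t low  = addr & (NBANKS - 1);
        uint32_t high = (addr >> BANK_BITS) & (NBANKS - 1);
        return low ^ high;
    }

For a stride of 64, bank_standard always returns bank 0, while bank_permuted
cycles through all 64 banks.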

%A Joel E. Richardson
%A Michael J. Carey
%T Persistence in the E Language: Issues and Implementation
%D September 1988
%R TR 791
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X E is an extension of C++ providing, among other features, database
types and persistent objects.  In a language offering persistence, there are
many important design and implementation issues that must be resolved.  This
paper discusses some of these issues, comparing the approach taken in the
E programming language with other persistent systems.  The basis of 
persistence in E is a new storage class for variables, and physical I/O is
based on a load/store model of the long-term storage layer.  In addition to
discussing the issues and E's general approach, this paper also details the
current implementation.

%A R. De Leone
%A O. L. Mangasarian
%A T.-H. Shiau
%T Multi-sweep Asynchronous Parallel Successive Overrelaxation for the 
Nonsymmetric Linear Complementarity Problem
%D September 1988
%R TR 792
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Convergence is established for the \fBmulti-sweep\fR asynchronous
parallel successive overrelaxation (SOR) algorithm for the \fBnonsymmetric\fR
linear complementarity problem.  The algorithm was originally introduced in 
[4] for the symmetric linear complementarity problem.  Computational tests
show the superiority of the multi-sweep asynchronous SOR algorithm over its
single-sweep counterpart on both symmetric and nonsymmetric linear 
complementarity problems.   
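
For reference, the single-sweep, sequential projected SOR iteration for the
LCP (find @z@ \(>= 0 with @Mz + q@ \(>= 0 and @z sup T (Mz + q)@ = 0) can be
sketched in C as follows; this is the baseline that the multi-sweep
asynchronous algorithm parallelizes, not the parallel algorithm itself, and
convergence requires suitable @M@ and relaxation factor omega:

    void sor_lcp(int n, const double *M, const double *q,
                 double *z, double omega, int sweeps)
    {
        for (int s = 0; s < sweeps; s++)
            for (int i = 0; i < n; i++) {
                double r = q[i];
                for (int j = 0; j < n; j++)
                    r += M[i*n + j] * z[j];        /* r = (Mz + q)_i */
                double zi = z[i] - omega * r / M[i*n + i];
                z[i] = zi > 0.0 ? zi : 0.0;        /* project onto z >= 0 */
            }
    }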
 
%A Vasant Honavar
%A Leonard Uhr
%T A Network of Neuron-like Units that Learns to Perceive by Generation as 
Well as Reweighting of its Links
%D September 1988
%R TR 793
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X Learning in connectionist models typically involves the modification
of weights associated with the links between neuron-like units; but the
topology of the network does not change.  This paper describes a new
connectionist learning mechanism for \fIgeneration\fR in a network of 
neuron-like elements that enables the network to modify its own topology by
growing links and recruiting units as needed (possibly from a pool of 
available units).  A combination of generation and reweighting of links,
appropriate brain-like constraints on network topology, and regulatory
mechanisms and neuronal structures that monitor the network's performance
and enable the network to decide when to generate is shown to be capable of
discovering, through feedback-aided learning, substantially more powerful, and
potentially more practical, networks for perceptual recognition than those
obtained through reweighting alone.

The \fIrecognition cones\fR model of perception (Uhr1972, Honavar1987, 
Uhr1987) is used to demonstrate the feasibility of the approach.  Results of
simulations of carefully pre-designed recognition cones illustrate the 
usefulness of brain-like topological constraints such as near-neighbor 
connectivity and converging-diverging heterarchies for the perception of
complex objects (such as houses) from digitized TV images.  In addition,
preliminary results indicate that brain-structured recognition cone networks
can successfully learn to recognize simple patterns (such as letters of the 
alphabet, drawings of objects like cups and apples), using
generation-discovery as well as reweighting, whereas systems that rely on
reweighting alone fail to learn them at all.

%A Michael Ferris
%T Iterative Linear Programming Solution of Convex Programs
%D October 1988
%R TR 794
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X An iterative linear programming algorithm for the solution of the
convex programming problem is proposed.  The algorithm partially
solves a sequence of linear programming subproblems whose solution 
is shown to converge quadratically, superlinearly, or linearly
to the solution of the convex program, depending on the accuracy to
which the subproblems are solved.  The given algorithm is related
to inexact Newton methods for the nonlinear complementarity problem.
Preliminary results for an implementation of the algorithm are
given.

%A Eric Bach
%T A Note on Square Roots in Finite Fields
%D October 1988
%R TR 795
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X A simple method is presented whereby the quadratic character in a
finite field of odd order \fIq\fR can be computed in @\fIO\fR( ( log \fIq\fR )
sup 2 )@ steps.  It is also shown how sequences generated deterministically 
from a random seed can be used reliably in a recent randomized algorithm of
Peralta for computing square roots in finite fields.
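
For a prime \fIq\fR, the quadratic character can always be computed by
Euler's criterion with square-and-multiply, as in the C sketch below; this
is the classical baseline, not the report's method, and it assumes a
compiler with the unsigned __int128 extension:

    #include <stdint.h>

    static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
    {
        return (uint64_t)((unsigned __int128)a * b % m);
    }

    /* chi(a) = a^((q-1)/2) mod q: 1 for residues, -1 for nonresidues. */
    int quadratic_character(uint64_t a, uint64_t q)   /* q an odd prime */
    {
        uint64_t e = (q - 1) / 2, r = 1;
        a %= q;
        if (a == 0) return 0;
        while (e) {
            if (e & 1) r = mulmod(r, a, q);
            a = mulmod(a, a, q);
            e >>= 1;
        }
        return r == 1 ? 1 : -1;
    }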

%A Michael J. Carey
%A Rajiv Jauhari
%A Miron Livny
%T Transaction Boundaries in Active Databases: A Performance Perspective
%D October 1988
%R TR 796
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X The workload of an active DBMS 
consists of two types of activities:  externally generated tasks 
submitted by users, and rule management tasks caused by the triggering 
of rules stored in the knowledge component of the system.
Most design proposals for active DBMSs assume that an external task should
be combined with all the resulting rule management tasks into a single
transaction.
There is no compelling reason for this assumption, however;  the 
semantics of rules can be used to divide the workload into transactions
in a number of different ways.
In this paper, we describe a performance model of an active database, and 
present the results of
simulation experiments that study system performance as a function
of transaction boundary semantics for varying levels of data contention,
rule complexity, and data sharing between 
externally submitted tasks and rule management tasks.
Our results demonstrate that the way in which transaction boundaries
are imposed can have a major impact on the performance of an active 
DBMS, and that this aspect of rule semantics must therefore be
carefully considered at the time that rules are specified.

This work was performed under subcontract to the Advanced Information
Technology Division of Xerox, which was supported by the Defense Advanced
Research Projects Agency and by the Rome Air Development Center under Contract
No. F30602-87-C-0029.  Additional support was provided by the National
Science Foundation under grant IRI-8657323 and by the Digital Equipment
Corporation through its Initiatives for Excellence program.
The views and conclusions contained in this report are those of the authors 
and do not necessarily represent the official policies of the Defense 
Advanced Research Projects Agency, the Rome Air Development Center, or the 
U.S. Government.

%A Judy Goldsmith
%A Lane Hemachandra
%A Deborah Joseph
%A Paul Young
%T Near-Testable Sets
%D October 1988
%R TR 797
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X In this paper we introduce a new property of sets which we call
\fInear-testability\fR.  A set \fIS\fR is \fInear-testable\fR (\fIS\fR \(mo
\fINT\fR) if the membership relation for all immediate neighbors is
polynomially computable; i.e., if the function
\fIt(x)\fR = @\(*x sub S@(\fIx\fR) + @\(*x sub S@(\fIx\fR \- 1) (mod 2)
is polynomially computable.  The near-testable sets form
a subclass of the class, \(a+ \fIP (parity-P), \fRintroduced by Papadimitriou
and Zachos.  \(a+P has a complete set \(a+\fISAT\fR which has recently been
shown by Valiant and Vazirani to be hard for \fINP\fR under randomized 
polynomial time reductions.  We prove that there is a uniform polynomial 
one-one reduction which takes every set in \(a+\fIP\fR to a near-testable set,
and we show that the image of \(a+\fISAT\fR under this reduction (which we 
call \fINTSAT\fR) is polynomially isomorphic to \(a+\fISAT\fR.  As 
corollaries we have that \fINTSAT\fR is complete for both \fINT\fR and for
\(a+\fIP,\fR that \fINTSAT\fR is hard for \fINP\fR under randomized 
polynomial time reductions, and that the existence of one-way functions  
implies the existence of sets that are near-testable but not polynomially 
decidable.  We then ask whether near-testability is preserved under 
\fIp\fR-isomorphisms.  This leads to a generalization, \fINT*\fR of 
\fINT\fR similar to those introduced by Meyer and Paterson and by Ko for 
self-reducible sets.  With this more general definition, \fINT*\fR is shown 
to be closed under polynomial time isomorphisms while remaining a subclass of
\(a+\fIP\fR.  We conjecture that it is a proper subclass.  In fact we show
that, relative to a random oracle, the containments \fIP \(ib NT \(ib NT* 
\(ib \(a+P\fR are proper with probability one.  We also show that, relative
to a random oracle, with probability one \fINT\fR and \fINT*\fR are
incomparable with both \fINP\fR and \fIcoNP\fR.  Finally, we consider
the effects that the distribution and density of elements have on the
complexity of near-testable sets.
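
The definition is easy to render in code, and doing so shows why
near-testability sits close to, yet possibly apart from, P: \fIt\fR is cheap
to evaluate, but recovering membership from \fIt\fR alone requires walking
from a known point.  A C sketch with an arbitrary toy set (multiples of 3),
for illustration only:

    static int chi(unsigned long x)   /* membership in the toy set */
    {
        return x % 3 == 0;
    }

    int t(unsigned long x)            /* t(x) = chi(x)+chi(x-1) mod 2, x >= 1 */
    {
        return (chi(x) + chi(x - 1)) & 1;
    }

    /* Membership can be recovered from chi(0) and t alone, but only by
       walking x steps -- exponential in the length of x, which is why a
       near-testable set need not be polynomially decidable. */
    int chi_from_t(unsigned long x)
    {
        int m = chi(0);
        for (unsigned long i = 1; i <= x; i++)
            m ^= t(i);
        return m;
    }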

%A Deborah Joseph
%A Paul Young
%T Self-Reducibility: The Effects of Structure on Complexity 
%D October 1988
%R TR 798
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X In this column we will discuss the effect that various 
\fIself-reducibility\fR 
properties have on the analysis of complexity classes.  We will be primarily
interested in reviewing some of the more elementary results for readers
unfamiliar with the field and then discussing some recent results and
directions where self-reducibilities have been useful.  Throughout, we will
focus on the question of when self-reducibilities can be used to force sets,
or classes of sets, to have lower complexity than might otherwise be 
expected.  

%A Eric Bach
%A Joachim von zur Gathen
%T Deterministic Factorization of Polynomials over Special Finite Fields
%D October 1988
%R TR 799
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI 
%X Let \fIf\fR denote a polynomial of degree \fIn\fR whose coefficients
lie in a finite field with \fIq\fR elements and characteristic \fIp\fR. We
give a deterministic algorithm to factor such polynomials; assuming the
Extended Riemann Hypothesis, its running time is bounded by a polynomial in
\fIn\fR, log \fIq\fR, and the smallest prime factor of \fIp\fR + 1.  
A similar result holds if we choose positive integers \fIe\fR and \fIk\fR 
and replace \fIp\fR + 1 by @p sup ek@ + ... + @p sup e@ + 1.
Independently of any hypotheses, there is a deterministic polynomial time 
algorithm to factor polynomials mod \fIp\fR, when \fIp\fR is a Mersenne prime.

%A Giri Narasimhan
%A Rachel Manber
%T A Generalization of Lovasz's Sandwich Theorem
%D October 1988
%R TR 800
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X For a graph \fIG(V, E)\fR with \fIn\fR vertices, let @\(*w sub k@(G)
be the size of the largest induced subgraph that can be covered with 
\fIk\fR cliques.  Define a \fIk\fR-multicoloring of G to be a coloring of its
vertices such that each vertex is assigned a set of \fIk\fR colors, and two
adjacent vertices are assigned disjoint sets of colors.  Let @\(*x sub k@(G)
be the minimum number of colors needed for a valid \fIk\fR-multicoloring
of the graph G.

We present a polynomial-time computable function @\(*n sub k@(G) that
satisfies the following inequality:

.ce
@\(*w sub k@(G) \(<= @\(*n sub k@(G) \(<= @\(*x sub k@(G).

Thus @\(*n sub k@(G) is sandwiched between two NP-hard parameters, namely
@\(*w sub k@(G) and @\(*x sub k@(G).  This generalizes Lovasz's
\fBSandwich Theorem\fR {Lov86}, which demonstrates a polynomial-time
computable function \(*n(G) satisfying the following inequality:

.ce
\(*w(G) \(<= \(*n(G) \(<= \(*x(G),

where \(*w(G) is the size of the largest clique of G, and \(*x(G) is the 
chromatic number of G.

%A Yannis E. Ioannidis
%A Eugene Wong
%T Towards an Algebraic Theory of Recursion
%D October 1988
%R TR 801
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X We have developed an algebraic framework for the study of recursion.
For immediate linear recursion, a Horn clause is represented by a relational
algebra operator.  We show that the set of all such operators forms a closed
semiring.  In this formalism, query answering corresponds to solving a linear
equation.  For the first time, we are able to have the query answer expressed
in an explicit algebraic form within an algebraic structure.  The
manipulative power thus afforded has significant implications for the
implementation of recursive query processing algorithms.  In addition, we show
that mutual linear recursion can also be studied within a closed semiring, by
using relation vectors and operator matrices.  Regarding nonlinear recursion,
we first show that Horn clauses always give rise to multilinear recursion, 
which can always be reduced to vector bilinear recursion.  We then show that
bilinear recursion forms a nonassociative closed semiring.  Finally, we give
several sufficient conditions for bilinear recursion to be equivalent to a
linear one.  All these conditions are related to the properties of 
alternativeness and power-associativity.  Because the property of 
power-associativity does not have a finite test in closed semirings, we also
embed bilinear recursion in a linear algebra, where power-associativity, and
therefore linear equivalence, can be tested efficiently.
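
The smallest instance of this framework is the boolean closed semiring,
where solving the linear equation x = Ax + b amounts to graph reachability:
the answer is x = A*b, with A* computed by Warshall-style closure.  A C
sketch of that instance, for illustration only (matrix size assumed), not
the paper's algorithms:

    #define N 4                        /* number of nodes (assumed) */

    /* a := a*, the reflexive-transitive closure, by Warshall's algorithm. */
    void closure(int a[N][N])
    {
        for (int i = 0; i < N; i++) a[i][i] = 1;
        for (int k = 0; k < N; k++)
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    a[i][j] |= a[i][k] & a[k][j];
    }

    /* Solve x = Ax + b over the boolean semiring: x = a* b. */
    void solve(int a[N][N], const int b[N], int x[N])
    {
        closure(a);
        for (int i = 0; i < N; i++) {
            x[i] = 0;
            for (int j = 0; j < N; j++)
                x[i] |= a[i][j] & b[j];
        }
    }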

%A M. C. Ferris
%A O. L. Mangasarian
%T Finite Perturbation of Convex Programs
%D October 1988
%R TR 802
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X This paper concerns a characterization of the finite perturbation
property of a convex program.  When this property holds, finite perturbation
of the objective function of a convex program leads to a solution of the
original problem which minimizes the perturbation function over the set of
solutions of the convex program.  This generalizes an important property of
least norm solutions of linear programs which plays a key role in powerful
algorithms for solving large sparse linear programs.  It also generalizes a 
finite termination property of the proximal point algorithm.

service needs and implement particular semantics.  In a fully open system any
application can choose, add, replace, and extend services as well as resources.
However, in a multiuser environment protection considerations impose 
constraints on openness.  Enforcing protection may reduce sharing, entail
execution overhead, or increase programming complexity.  This dissertation 
studies how a fully open computing system can be constructed for a
multiuser environment and explores the aspects of openness.

We investigate these issues at three levels of abstraction: model, design,
and implementation.  The dissertation defines a model of a fully open
multiprocessor computing system, describes a specific design based on the
model, and examines various techniques by which that design can be  
implemented.  Our study provides insights into the extent of openness and into
the interplay between openness, protection, efficiency, and complexity.

The model, called the FOCS model, identifies a minimal set of mandatory
services required by considerations of protection and sharing.  The role of
the operating system is then defined as providing only these services.  All
other services can be provided by any application.  The model offers a novel
view of resource management, based on the notion of \fIresource ownership\fR.
Applications own private and shared resources and provide the policies as 
well as the mechanisms for using them.  The protection mechanism of the model
employs the concepts of \fIencapsulation, ownership, and light-weight
capabilities\fR.  The model introduces the notion of \fIactivities\fR, which 
are \fIthreads of control\fR that represent computations across multiple
address spaces.

At the design level we explore the mechanisms needed to support protected
openness and examine their efficiency and complexity.  The design elaborates
on the issues of processor and memory management, accounting, naming, and
language support.  Based on the examination of implementation techniques, we
conclude that a fully open computing system can be implemented with 
contemporary technology.  Our initial analysis suggests that the execution
overhead of such a system would not be large and that programming complexity
would be comparable to that in conventional systems.