[comp.doc.techreports] University of Rochester, NY TR List 1/91

MFLLL002@VE.BOGECN.EDU (03/26/91)

To:   mflll002
From: peg@cs.rochester.edu

Subj: 1/91 TR List



The University of Rochester
Computer Science Department
January 1991 Technical Report Abstract List

Listed below are the abstracts of our most recent technical reports.
There is a nominal charge for each report.  The fee is waived for
libraries in non-profit, educational institutions and for institutions
with which we have exchange agreements.  To order, write to the
Computer Science Dept. TR Librarian, 734 Computer Studies Building,
University of Rochester, Rochester, NY, 14627, USA (enclosing a check
payable to the University of Rochester), or send electronic mail to
tr@cs.rochester.edu (and we will send you a bill).


%A Miller, B.W.
%T The Rhetorical Knowledge Representation System Reference Manual (for Rhet version 17.9)
%R TR 326
%I Univ. of Rochester Computer Science Dept.
%D November 1990
%Z $5.00, 112 pages
%X Rhetorical (Rhet) is a programming/knowledge representation system
that offers a set of tools for building automated reasoning systems.
Its emphasis is on flexibility of representation, allowing the user to
decide if the system will basically operate as a theorem prover, a
frame-like system, or an associative network.  Rhet may be used as the
back-end to a user's programming system and handle the knowledge
representation chores, or it may be used as a full-blown programming
language.  Rhet offers two major modes of inference: a horn clause
theorem prover (backwards chaining mechanism), and a forward chaining
mechanism.  Both modes use a common representation of facts, namely
horn clauses with universally quantified, potentially type restricted,
variables, and use the unification algorithm.  Additionally, they both
share the following specialized reasoning capabilities: (1) variables
may be typed with a fairly general type theory that allows a limited
calculus of types including intersection and subtraction; (2) full
reasoning about equality between ground terms; (3) reasoning within a
context space, with access to axioms and terms in parent contexts; and
(4) escapes into Lisp for use as necessary.
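
As a concrete illustration of the backward-chaining mode, here is a
minimal horn-clause prover with unification, sketched in Python (Rhet
itself is a Lisp system; the rules, names, and representation below
are invented for illustration and are not Rhet's API):

    # Minimal horn-clause backward chainer -- an illustration of the
    # general technique only, not Rhet's actual interface.  Terms are
    # tuples; variables are strings beginning with '?'.

    def is_var(t):
        return isinstance(t, str) and t.startswith('?')

    def walk(t, s):
        # Resolve a variable through the substitution s.
        while is_var(t) and t in s:
            t = s[t]
        return t

    def unify(a, b, s):
        a, b = walk(a, s), walk(b, s)
        if a == b:
            return s
        if is_var(a):
            return {**s, a: b}
        if is_var(b):
            return {**s, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                s = unify(x, y, s)
                if s is None:
                    return None
            return s
        return None

    # Each rule is (head, body); facts have an empty body.
    RULES = [
        (('parent', 'tom', 'bob'), []),
        (('parent', 'bob', 'ann'), []),
        (('ancestor', '?x', '?y'), [('parent', '?x', '?y')]),
        (('ancestor', '?x', '?y'), [('parent', '?x', '?z'),
                                    ('ancestor', '?z', '?y')]),
    ]

    def rename(t, n):
        # Give each rule application fresh variables.
        if is_var(t):
            return t + str(n)
        if isinstance(t, tuple):
            return tuple(rename(x, n) for x in t)
        return t

    def prove(goals, s, depth=0):
        # Backward chaining: reduce the first goal via each matching rule.
        if not goals:
            yield s
            return
        goal, rest = goals[0], goals[1:]
        for head, body in RULES:
            s2 = unify(goal, rename(head, depth), s)
            if s2 is not None:
                yield from prove([rename(g, depth) for g in body] + rest,
                                 s2, depth + 1)

    for sol in prove([('ancestor', '?a', 'ann')], {}):
        print(walk('?a', sol))      # prints bob, then tom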

%A Swain, M.J.
%T Parameter learning for Markov random fields with highest confidence first estimation
%R TR 350
%I Univ. of Rochester Computer Science Dept.
%D August 1990
%Z $2.00, 17 pages
%X We study the problem of learning parameters of a Markov Random
Field (MRF) from observations and propose two new approaches
suitable for use with Highest Confidence First (HCF) estimation.
Both approaches involve estimating local joint probabilities from
experience.  In one approach the joint probabilities are converted
to clique parameters of the Gibbs distribution so that the
traditional HCF algorithm can be used.  In the other approach
the HCF algorithm is modified to run directly with the
local probabilities of the MRF instead of the Gibbs distribution.
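
A minimal sketch of the conversion step in the first approach, with
invented labels and counts (this shows only the general negative-log
relation between local probabilities and Gibbs clique potentials, not
the paper's algorithm):

    # Estimate local joint probabilities from observed label
    # co-occurrence counts, then derive pairwise clique potentials of
    # a Gibbs distribution as V(a, b) = -log p(a, b).  Labels and
    # data are invented for illustration.
    import math
    from collections import Counter

    observed_pairs = [('edge', 'edge'), ('edge', 'flat'), ('flat', 'flat'),
                      ('flat', 'flat'), ('edge', 'edge'), ('flat', 'edge')]

    counts = Counter(observed_pairs)
    total = sum(counts.values())

    # Lower energy (potential) for label pairs seen more often.
    potentials = {pair: -math.log(n / total) for pair, n in counts.items()}
    for pair, v in sorted(potentials.items()):
        print(pair, round(v, 3))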

%A Quiroz, C.A.
%T Systematic detection of parallelism in ordinary programs (Ph.D. Thesis)
%R TR 351
%I Univ. of Rochester Computer Science Dept.
%D February 1991
%Z $5.00, 120 pages
%X This dissertation discusses a general model for compilers that take
imperative code written for sequential machines (ordinary code) and
detect the parallelism in that code that is compatible with the
semantics of the underlying programming language.  This model is based
on the idea of separating the concerns of parallelism detection and
parallelism exploitation.  This separation is made possible by having
the detection component provide an explicit representation of the
parallelism available in the original code.  This explicitly parallel
representation is based on a formalization of the notion of permissible
execution sequences for a given mass of code.  The model discussed here
prescribes the structure of the parallelism detector.  This structure
depends on (1) recognizing a hierarchical structure on a graph
representation of the program, and (2) separately encoding
parallelization conditions and effects.  Opportunities for
parallelization can then be discovered by traversing the hierarchical
structure from the bottom up.  During this traversal, progressively
larger parts of the program are compared against the independently
encoded conditions, and transformed when the conditions are satisfied.
The hierarchy guarantees that the results of transforming a piece
of a program propagate in time to affect the possible parallelization
of larger pieces. Although some of the algorithms used have exponential
worst cases for general graphs, their observed behavior on real flow
graphs is no worse than quadratic on the size of the original program.
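
The traversal itself can be pictured with a small sketch (hypothetical
names; the dependence test below merely stands in for the separately
encoded parallelization conditions):

    # Bottom-up traversal of a hierarchy of program regions.  A region
    # is marked parallel when an (assumed) independence test on its
    # children succeeds; results propagate upward to larger regions.

    class Region:
        def __init__(self, name, children=(), deps=()):
            self.name = name
            self.children = list(children)
            self.deps = set(deps)   # names of siblings this region reads from
            self.parallel = False

    def mark_parallelism(region):
        for child in region.children:       # children first: bottom up
            mark_parallelism(child)
        siblings = {c.name for c in region.children}
        # Separately encoded condition: no child depends on a sibling.
        independent = all(not (c.deps & siblings) for c in region.children)
        if region.children and independent:
            region.parallel = True          # effect: children may run in parallel
        return region

    loop = Region('loop', [Region('iter1'), Region('iter2')])
    print(mark_parallelism(loop).parallel)  # True: iterations are independent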

%A Kyburg, H.E., Jr.
%T Giving up certainties
%R TR 352
%I Univ. of Rochester Computer Science Dept.
%D September 1990
%Z $2.00, 25 pages
%X One of the serious motivations for the development of non-monotonic
logics is the fact that, however sure we may be of some set of facts,
there can come a time at which at least some of them must be given up.
A number of philosophical approaches have stemmed from the study of
scientific inference, in which a law or theory, accepted on good
evidence at one time, comes to be rejected on the basis of more
evidence. These approaches are reviewed, and an alternative approach,
whose key idea is the control of observational error for the purpose of
predictive adequacy, is developed.

%A Dietz, P.F.
%T Finding ancestors in trees
%R TR 354
%I Univ. of Rochester Computer Science Dept.
%D October 1990
%Z $2.00, 10 pages
%X We examine the problem of answering ancestor queries on trees; that
is, questions of the form "what is the ith ancestor of vertex v?"  We
solve this problem when the tree is static with linear preprocessing
time and constant time per query, and when the tree is growing by the
addition of leaves (the addleaf operation), where updates and queries
take constant amortized time.  In both cases, the machine model is the
RAM with logarithmic word size.  The related problem of finding nearest
common ancestors on growing trees has been addressed by Harel and
Tarjan.  They present an algorithm for the static case in which, after
#O(n)# preprocessing, nca queries take constant time, and also give a
linear time algorithm for trees that are linked at the roots.  Gabow
gave a linear time algorithm for the case of trees growing by addition
of leaves, and an #O(m alpha(m,n) + n)# amortized time algorithm (#m#
the total number of operations, #n# the size of the forest) for
arbitrary link operations (which cause the root of one tree to be a
child of a vertex in another tree). The results in this paper make
extensive use of the techniques presented by Gabow.
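
For comparison, the standard jump-pointer (binary lifting) scheme
answers the same queries with O(n log n) preprocessing and O(log n)
time per query; it is only a baseline sketch, not the constant-time
structure of this report:

    # Jump pointers: up[k][v] is the 2^k-th ancestor of v (-1 above root).
    import math

    def preprocess(parent):           # parent[v] = parent of v; root has -1
        n = len(parent)
        levels = max(1, math.ceil(math.log2(n)) + 1)
        up = [parent[:]]
        for k in range(1, levels):
            prev = up[k - 1]
            up.append([prev[prev[v]] if prev[v] != -1 else -1
                       for v in range(n)])
        return up

    def ancestor(up, v, i):           # the i-th ancestor of v, or -1
        k = 0
        while i and v != -1:
            if k >= len(up):          # asked for an ancestor above the root
                return -1
            if i & 1:
                v = up[k][v]
            i >>= 1
            k += 1
        return v

    # Path 0 <- 1 <- 2 <- 3 <- 4, rooted at 0.
    up = preprocess([-1, 0, 1, 2, 3])
    print(ancestor(up, 4, 3))         # 1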

%A Hartman, L.
%T Decision theory and the cost of planning (Ph.D. Thesis)
%R TR 355
%I Univ. of Rochester Computer Science Dept.
%D September 1990
%Z $2.75, 72 pages
%X This thesis shows how it is possible for a planner to make
decisions that are sensitive to the computational resources that it
expends. The main feature of the approach is to ignore the distinction
between planning and executing and interpret planning procedures as
actions with uncertain outcomes.  A planner can then use standard
decision theoretic techniques to select which among a set of
alternative procedures is a better gamble.  The hypotheses that
underlie this work are that an autonomous agent must monitor its
resource expenditure in order to be successful and that computation is
an important and valuable resource. The main points of this thesis
are: (1) it is possible for planners to make local inexpensive
decisions that account for essentially their entire resource
expenditure; (2) there are limitations to what a planner is able to
infer about optimal strategies; and (3) it is possible for planners to
be sensitive to the statistical properties of their problem-solving
environments.
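
The selection step can be pictured with a toy calculation (all
probabilities, values, and costs are invented): each planning
procedure is a gamble, scored by expected value net of the
computation it consumes.

    # Toy decision-theoretic choice among planning procedures: treat
    # each procedure as an action with an uncertain outcome and
    # charge its computation time as a cost.  All numbers are invented.

    procedures = {
        # name: (probability of success, value of success, planning cost)
        'quick_heuristic':   (0.60, 10.0, 1.0),
        'exhaustive_search': (0.95, 10.0, 6.0),
    }

    def expected_net_utility(p_success, value, cost):
        return p_success * value - cost

    best = max(procedures, key=lambda n: expected_net_utility(*procedures[n]))
    print(best)   # quick_heuristic: 5.0 expected, versus 3.5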

%A Cox, A.L.
%A Fowler, R.J.
%A Veenstra, J.E.
%T Interprocessor invocation on a NUMA multiprocessor
%R TR 356
%I Univ. of Rochester Computer Science Dept.
%D October 1990
%Z $2.00, 12 pages
%X In a distributed shared memory machine, the problem of minimizing
accesses to remote memory modules is crucial for obtaining high
performance.  We describe an object-based, parallel programming system
called OSMIUM to support experiments with mechanisms for performing
invocations on remote objects.  The mechanisms we have studied
include: non-cached access to remote memory, data migration, and
function-shipping using an interprocessor invocation protocol (IIP).
Our analyses and experiments indicate that IIP competes well with the
alternatives, especially when the structure of user programs requires
synchronized access to data structures.  While these results are
obtained on a NUMA multiprocessor, they are also applicable to systems
that use hardware cache coherency techniques.
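
The function-shipping idea can be sketched with a queue and a thread
standing in for processors (illustrative only; this is not OSMIUM's
actual protocol):

    # Function shipping: instead of reading or writing a remote
    # object's memory, ship the operation to the processor that owns
    # the object.
    import queue
    import threading

    class Owner(threading.Thread):
        """Stand-in for the processor that owns an object."""
        def __init__(self):
            super().__init__(daemon=True)
            self.inbox = queue.Queue()
            self.counter = 0            # the "remote" object's local data

        def run(self):
            while True:
                op, reply = self.inbox.get()
                reply.put(op(self))     # execute locally, return the result

    def invoke(owner, op):
        """Client side: ship op to the owner and wait for the reply."""
        reply = queue.Queue()
        owner.inbox.put((op, reply))
        return reply.get()

    owner = Owner()
    owner.start()

    def increment(o):
        o.counter += 1
        return o.counter

    # All mutation happens on the owning side, so accesses to the data
    # structure are serialized by the inbox -- no remote memory writes.
    print(invoke(owner, increment))     # 1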

%A Jain, S.
%T Learning in the presence of additional information and inaccurate information (Ph.D. Thesis)
%R TR 357
%I Univ. of Rochester Computer Science Dept.
%D September 1990
%Z $4.00, 113 pages
%X Inductive inference machines (IIMs) model language and scientific
learning.  In the classical model, a machine attempts to construct an
explanation about a phenomenon as it is receiving data about that
phenomenon.  The machine is said to be successful if it ultimately
succeeds in explaining the phenomenon. This is a naive model of
science.  For one thing, a scientist has more information available
than just the result of experiments.  For example, a scientist may have
some knowledge about the complexity of the phenomenon he or she is
trying to learn.  For another, the result of the scientist's
investigation need not be the final theory. Finally, a scientist may
already have some approximate explanation of the phenomenon.  The study
of such additional information constitutes the first part of this
thesis.  In the real world our input is rarely free of error.  Inputs
usually suffer from noise and missing data.  The study of different
notions of such inaccuracies in the input is the focus of the second
part of this thesis.

%A Allender, E.
%A Hemachandra, L.A.
%A Ogiwara, M.
%A Watanabe, O.
%T Relating equivalence and reducibility to sparse sets
%R TR 358
%I Univ. of Rochester Computer Science Dept.
%D October 1990
%Z $2.00, 28 pages
%X For various polynomial-time reducibilities #r#, this paper asks
whether being #r#-reducible to a sparse set is a broader notion than
being #r#-equivalent to a sparse set.  Although distinguishing
equivalence and reducibility to sparse sets, for many-one or
1-truth-table reductions, would imply that #P != NP#, this paper shows
that for #k#-truth-table reductions, #k >= 2#, equivalence and
reducibility to sparse sets provably differ.  Though Gavalda and
Watanabe have shown that, for any polynomial-time computable unbounded
function #f(cdot)#, some sets #f(n)#-truth-table reducible to sparse
sets are not even Turing equivalent to sparse sets, this paper shows
that extending their result to the 2-truth-table case would provide a
proof that #P != NP#.  Additionally, this paper studies the relative
power of different notions of reducibility, and proves that
disjunctive and conjunctive truth-table reductions to sparse sets are
surprisingly powerful, refuting a conjecture of Ko.
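
For readers outside structural complexity, the two notions being
separated are, in standard form (these definitions are not quoted
from the report):

    \[ S \text{ is sparse} \iff \exists\,\text{polynomial } p\
       \forall n:\ |S \cap \Sigma^{\le n}| \le p(n) \]
    \[ A \text{ is $r$-reducible to a sparse set} \iff
       \exists\,\text{sparse } S:\ A \le_r S \]
    \[ A \text{ is $r$-equivalent to a sparse set} \iff
       \exists\,\text{sparse } S:\ A \le_r S \ \text{and}\ S \le_r A \]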

%A Crowl, L.A.
%A LeBlanc, T.J.
%T Architectural adaptability in parallel programming via control abstraction
%R TR 359
%I Univ. of Rochester Computer Science Dept.
%D January 1991
%Z $2.00, 38 pages
%X Parallel programming involves finding the potential parallelism in
an application, choosing an algorithm, and mapping it to the
architecture at hand. Since a typical algorithm has much more
potential parallelism than any single architecture can effectively
exploit, we usually program the parallelism that the available control
constructs easily express and that the given architecture efficiently
exploits. This approach produces programs that exhibit much less
parallelism than the original algorithm and whose performance depends
entirely on the underlying architecture.  To port such a program to a
new architecture, we must rewrite the program to remove any
ineffective parallelism and to recover any lost parallelism
appropriate for the new machine.  In this paper we show how to adapt a
parallel program to different architectures using control abstraction.
With control abstraction we can define and use a rich variety of
control constructs to represent an algorithm's potential parallelism.
Since control abstraction separates the definition of a construct
from its implementation, a construct may have several different
implementations, each exploiting a different subset of the
parallelism admitted by the construct.  By selecting an implementation
for each control construct using annotations, we can vary the
parallelism we choose to exploit without otherwise changing the
source code. This approach produces programs that exhibit most of,
if not all, the potential parallelism in an algorithm, and whose
performance can be tuned for a specific architecture simply by
choosing among the various implementations for the control constructs
in use.
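
A small sketch of the idea with hypothetical names (not the paper's
constructs or annotation syntax): the construct par_for is defined
once, and choosing an implementation, the analogue of an annotation,
changes how much of its parallelism is exploited without touching the
loop body.

    # Control abstraction: one construct, several implementations.
    from concurrent.futures import ThreadPoolExecutor

    def par_for_sequential(n, body):
        for i in range(n):          # exploits none of the parallelism
            body(i)

    def par_for_threaded(n, body):  # may run iterations concurrently
        with ThreadPoolExecutor() as pool:
            list(pool.map(body, range(n)))

    # The "annotation": pick an implementation for the target machine.
    par_for = par_for_threaded

    results = [0] * 8
    def body(i):
        results[i] = i * i          # iterations are independent

    par_for(8, body)
    print(results)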

%A Swain, M.J.
%T Color indexing (Ph.D. Thesis)
%R TR 360
%I Univ. of Rochester Computer Science Dept.
%D November 1990
%Z $8.00, 89 pages
%X Computer vision is embracing a new research focus in which the aim
is to develop visual skills for robots that allow them to interact with
a dynamic, realistic environment.  To achieve this aim, new kinds of
vision algorithms need to be developed that run in real time and
subserve the robot's goals.  Two fundamental goals are determining the
identity of an object with a known location and determining the
location of a known object.  Color can be successfully used for both
tasks.  This dissertation demonstrates that color histograms of
multicolored objects provide a robust, efficient cue for indexing into
a large database of models.  It shows that color histograms are stable
object representations in the presence of occlusion and over change in
view, and that they can differentiate among a large number of objects.
For solving the identification problem, it introduces a technique
called histogram intersection, which matches model and image
histograms, and a fast incremental version of histogram intersection
which allows real-time indexing into a large database of stored
models.  It demonstrates techniques for dealing with crowded scenes
and with models with similar color signatures. For solving the
location problem it introduces an algorithm called histogram
backprojection, which performs this task efficiently in
crowded scenes.
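
Histogram intersection itself has a simple closed form,
match(I, M) = sum_j min(I_j, M_j) / sum_j M_j; a small sketch with
invented bins and counts:

    # Histogram intersection: score an image histogram I against a
    # model histogram M as sum_j min(I_j, M_j) / sum_j M_j.
    def histogram_intersection(image_hist, model_hist):
        matched = sum(min(i, m) for i, m in zip(image_hist, model_hist))
        return matched / sum(model_hist)

    image = [40, 10, 25, 5]                       # counts per color bin
    models = {'cup': [38, 12, 20, 8], 'book': [5, 60, 5, 10]}

    # Index into the model database by best match score.
    scores = {name: histogram_intersection(image, m)
              for name, m in models.items()}
    print(max(scores, key=scores.get), scores)    # cup wins, ~0.94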

%A Hemachandra, L.A.
%A Hoene, A.
%T Collapsing degrees via strong computation
%R TR 361
%I Univ. of Rochester Computer Science Dept.
%D November 1990
%Z $2.00, 18 pages
%X Though Berman and others have provided powerful techniques to
collapse nondeterministic degrees at and above nondeterministic linear
space, and Immerman and Szelepcsenyi have provided techniques that
collapse even sublinear nondeterministic space classes, it has
remained an open problem whether any collapses could be proven for
sublinear nondeterministic space degrees.  This paper provides the
first such collapses.  For nondeterministic space classes #C# above
#NL#, we show that all #<= sup {1-L} sub m#-complete sets for #C#
collapse to a single #<= sup {1-L} sub {1li}# degree (i.e., all
#<= sup {1-L} sub m#-complete sets for #C# are #<= sup {1-L} sub
{1li}#-equivalent), and that all #<= sup {1-NL} sub m#-complete sets
for #C# are #NL#-isomorphic (and thus #P#-isomorphic).  Our techniques
sharply improve previous results for PSPACE.

%A Krizanc, D.
%T A note on off-line permutation routing on a mesh-connected processor array
%R TR 362
%I Univ. of Rochester Computer Science Dept.
%D November 1990
%Z $2.00, 5 pages
%X We show how to off-line route any permutation of an #n times n#
mesh-connected processor array in #2.5n-3# steps with at most 6
packets per processor per time step and in #2.25n+3# steps with at
most 8 packets per processor per time step.

%A Miller, B.W.
%T The Rhet Programmer's Guide (for Rhet Version 17.9)
%R TR 363 (revision of TR 239)
%I Univ. of Rochester Computer Science Dept.
%D December 1990
%Z $4.50, 91 pages
%X Rhetorical (Rhet) is a programming/knowledge representation system
that offers a set of tools for building an automated reasoning system.
Its emphasis is on flexibility of representation.  This document
extends Technical Report 326 with more information about the internals
of the Rhet system.  In addition, it provides the information needed
for users to write their own built-in functions or, better, lispfns
(which use internally provided library functions).

%A Krizanc, D.
%A Narayanan, L.
%T Randomized selection on the mesh-connected processor array
%R TR 364
%I Univ. of Rochester Computer Science Dept.
%D November 1990
%Z $2.00, 12 pages
%X We show that selection on an input of size #N = n sup 2# can be
performed by an #N# node mesh-connected processor array in #2n + o(n)#
steps, with high probability.  The best previously known algorithm for
this involved sorting, and required #3n + o(n)# steps.

%A Rimey, R.D.
%A Brown, C.M.
%T HMMs and vision: Representing structure and sequences for active vision using hidden Markov models
%R TR 366
%I Univ. of Rochester Computer Science Dept.
%D January 1991
%Z $2.00, 25 pages
%X Hidden Markov models (HMMs) have rarely been used in
computer vision, although they are a key part of modern speech
recognition systems.  We show how HMMs can be used in a variety of
vision related tasks.  In all cases, the HMM learns structured
knowledge about the world from a set of observation sequences. The HMM
can also be used to generate behavior.  We present three examples.  (1)
An aspect graph is learned from foveation sequences.  (2) A finite
state machine to control visual operations to build a stable
representation of an object is learned from sequences of visual
(action,result) pairs.  (3) Augmented HMMs are used to learn and
generate camera motion sequences that are indexed on "what" and "where"
information in a scene.  By themselves, these sequences are analogous
to skilled camera movements, and the AHMM formalism allows the fusion
of the learned "what" and "where" representations to create a hybrid
skill.

%A Allen, J.F.
%T Natural language, knowledge representation, and logical form
%R TR 367
%I Univ. of Rochester Computer Science Dept.
%D January 1991
%Z $2.00, 30 pages
%X Current natural language understanding systems generally maintain a
strict division between the parsing processes and the representation
that supports general reasoning about the world. This paper examines
why these two forms of processing are separated, and identifies both
the current advantages and the inherent limitations of the
approach.  I will point out some
fundamental problems with the models as they are defined today and
suggest some important directions of research in natural language and
knowledge representation.  In particular, I will argue that one of the
crucial issues facing future natural language systems is the
development of knowledge representation formalisms that can
effectively handle ambiguity.

%A Chaves, Jr., E.M.
%A LeBlanc, T.J.
%A Marsh, B.D.
%A Scott, M.L.
%T Kernel-kernel communication in a shared-memory multiprocessor
%R TR 368
%I Univ. of Rochester Computer Science Dept.
%D January 1991
%Z $2.00, 12 pages
%X In the standard kernel organization on a shared-memory
multiprocessor all processors share the code and data of the operating
system; explicit synchronization is used to control access to kernel
data structures. Distributed-memory multicomputers use an alternative
approach, in which each instance of the kernel performs local
operations directly and uses remote invocation to perform remote
operations. Either approach to inter-kernel communication can be used
in a NonUniform Memory Access (NUMA) multiprocessor, although the
performance tradeoffs may not be apparent in advance.  In this paper
we compare the use of remote access and remote invocation in the
kernel of a NUMA multiprocessor operating system.  We discuss the
issues and architectural features that must be considered when
choosing an inter-kernel communication scheme, and describe a series
of experiments on the BBN Butterfly designed to empirically evaluate
the tradeoffs between remote invocation and remote memory access.  We
conclude that the Butterfly architecture is biased towards the use of
remote invocation for most kernel operations, but that a small set of
frequently executed operations can benefit from the use of remote
access.
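
The tradeoff has a simple back-of-the-envelope form: remote access
pays a per-word penalty, remote invocation a fixed overhead, so
invocation wins once an operation touches enough remote data.  A
sketch with invented constants (not Butterfly measurements):

    # Invented cost model: per-word remote penalty vs. fixed
    # invocation overhead.  The crossover, not the constants, is the point.
    REMOTE_WORD = 12        # cycles per remote memory reference (assumed)
    LOCAL_WORD = 1          # cycles per local reference (assumed)
    INVOKE_OVERHEAD = 400   # cycles to ship, dispatch, and reply (assumed)

    def remote_access_cost(words):
        return words * REMOTE_WORD

    def remote_invocation_cost(words):
        return INVOKE_OVERHEAD + words * LOCAL_WORD

    for words in (10, 36, 100):
        print(words, remote_access_cost(words), remote_invocation_cost(words))
    # Remote access wins for small operations; invocation wins past
    # the crossover, here near 36 words.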

%A Raman, R.
%T Generating random graphs efficiently
%R TR 369
%I Univ. of Rochester Computer Science Dept.
%D January 1991
%Z $2.00, 16 pages
%X We consider the algorithmic complexity of generating labeled
(directed and undirected) graphs under various distributions.  We
describe three natural optimality criteria for graph generating
algorithms, and show algorithms that are optimal for many
distributions.
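
One standard sense of "efficient" here is generating G(n, p) in time
proportional to the number of edges actually produced, by jumping
over runs of absent edges geometrically rather than flipping a coin
for every pair.  A sketch of that well-known technique (which need
not match the report's algorithms):

    # Sample an undirected labeled graph G(n, p) in O(n + m) expected
    # time: the gap to the next present edge is geometrically
    # distributed, so absent edges are skipped in one jump.
    import math
    import random

    def gnp_edges(n, p):                # 0 < p < 1
        edges = []
        log_q = math.log(1.0 - p)
        v, w = 1, -1
        while v < n:
            # Geometric skip over absent edges.
            w += 1 + int(math.log(1.0 - random.random()) / log_q)
            while w >= v and v < n:     # carry over into the next row
                w -= v
                v += 1
            if v < n:
                edges.append((v, w))
        return edges

    print(len(gnp_edges(1000, 0.01)))   # about p*n*(n-1)/2, i.e. ~4995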