[mod.techreports] mitai10 tech reports

E1AR0002@SMUVM1.BITNET (03/04/86)

:aim 752
:author A. Yuille
:asort Yuille, A.L.
:title A Method for Computing Spectral Reflectance
:date December 1984
:cost $1.50
:pages 12
:adnum AD-A150172
:keywords color, material edges, basis functions, mondrians
:abstract
Psychophysical experiments show that the perceived color of an object
is relatively independent of the spectrum of the incident illumination
and depends only on the surface reflectance. We demonstrate a possible
solution to this underdetermined problem by expanding the illumination
and surface reflectance in terms of a finite number of basis
functions. This yields a number of nonlinear equations for each color
patch. We show that given a sufficient number of surface patches with
the same illumination it is possible to solve these equations up to an
overall scaling factor. Generalizations to the spatially dependent
situation are discussed. We define a method for detecting material
changes and illustrate a way of detecting the color of a material at
its boundaries and propagating it inwards.
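
As a hedged illustration of the expansion described above (the
notation is ours, not the memo's): write the illumination and the
surface reflectance at patch $x$ as finite sums of basis functions,

$$E(\lambda) = \sum_{i=1}^{m} a_i E_i(\lambda), \qquad
S^x(\lambda) = \sum_{j=1}^{n} b_j^x S_j(\lambda).$$

The response of the $k$-th sensor class at patch $x$ is then

$$I_k^x = \int E(\lambda) S^x(\lambda) R_k(\lambda)\,d\lambda
       = \sum_{i,j} a_i b_j^x \int E_i(\lambda) S_j(\lambda)
         R_k(\lambda)\,d\lambda,$$

a system bilinear in the unknowns $a_i$ and $b_j^x$, since the
integrals can be precomputed once; with enough patches under a common
illuminant the system is solvable up to the overall scaling factor
mentioned above.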

:aim 755
:title The Copycat Project: An Experiment in Nondeterminism and
Creative Analogies
:author Douglas Hofstadter
:asort Hofstadter, D.
:date January 1984
:cost $2.75
:pages 47
:adnum AD-A142744
:keywords analogy, nondeterminism, parallelism, randomness,
statistically emergent mentality, semanticity, slippability,
computational temperature
:abstract
A microworld is described, in which many analogies involving
strikingly different concepts and levels of subtlety can be made.  The
question "What differentiates the good ones from the bad ones?" is
discussed, and then the problem of how to implement a computational
model of the human ability to come up with such analogies (and to have
a sense for their quality) is considered.  A key part of the proposed
system, now under development, is its dependence on statistically
emergent properties of stochastically interacting "codelets" (small
pieces of ready-to-run code created by the system, and selected at
random to run with probability proportional to heuristically assigned
"urgencies"). Another key element is a network of linked concepts of
varying levels of "semanticity", in which activation spreads and
indirectly controls the urgencies of new codelets.  There is pressure
in the system toward maximizing the degree of "semanticity" or
"intensionality" of descriptions of structures, but many such
pressures, often conflicting, must interact with one another, and
compromises must be made.  The shifting of (1) perceived boundaries
inside structures, (2) descriptive concepts chosen to apply to
structures, and (3) features perceived as "salient" or not, is called
"slippage".  What can slip, and how, are emergent consequences of the
interaction of (1) the temporary ("cytoplasmic") structures involved
in the analogy with (2) the permanent ("Platonic") concepts and links
in the conceptual proximity network, or "slippability network". The
architecture of this system is postulated as a general architecture
suitable for dealing not only with fluid analogies, but also with
other types of abstract perception and categorization tasks, such as
musical perception, scientific theorizing, Bongard problems and others.
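
A minimal Python sketch of the urgency-weighted codelet selection
described above (ours, for illustration; the memo predates Python and
the data layout is invented):

   import random

   def run_coderack(coderack, steps):
       # coderack: list of (urgency, codelet) pairs; a codelet is a
       # zero-argument callable that may append new pairs, which is how
       # spreading activation indirectly controls new urgencies
       for _ in range(steps):
           if not coderack:
               break
           weights = [urgency for urgency, _ in coderack]
           i = random.choices(range(len(coderack)), weights=weights)[0]
           _, codelet = coderack.pop(i)
           codelet()

Selecting with probability proportional to urgency, rather than always
running the most urgent codelet, is what makes the behavior stochastic
rather than deterministic.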

:aim 756
:title Artificial Intelligence and Robotics
:author Michael Brady
:asort Brady, M.
:date February 1984
:cost $2.75
:pages 44
:adnum AD-A142488
:keywords robotics, artificial intelligence
:abstract
Since Robotics is the field concerned with the connection of
perception to action, Artificial Intelligence must have a central role
in Robotics if the connection is to be {\it intelligent}. Artificial
Intelligence addresses the crucial questions of what knowledge is
required in any aspect of thinking; how that knowledge should be
represented; and how that knowledge should be used.  Robotics
challenges AI by forcing it to deal with real objects in the real
world.  Techniques and representations developed for purely cognitive
problems, often in toy domains, do not necessarily extend to meet the
challenge.  Robots combine mechanical effectors, sensors, and
computers.  AI has made significant contributions to each component.
We review AI contributions to perception and object-oriented
reasoning. Object-oriented reasoning includes reasoning about space,
path-planning, uncertainty, and compliance.  We conclude with three
examples that illustrate the kinds of reasoning or problem solving
abilities we would endow robots with and that we believe are worthy
goals of both Robotics and Artificial Intelligence, being within reach
of both.

:aim 757
:title Smoothed Local Symmetries and Their Implementation
:author Michael Brady and Haruo Asada
:asort Brady, M.; Asada, H.
:date February 1984
:cost $2.75
:pages 44
:adnum AD-A142489
:abstract
We introduce a novel representation of two-dimensional shape that we
call {\it smoothed local symmetries} (SLS). Smoothed local symmetries
represent both the bounding contour of a shape fragment and the region
that it occupies.  In this paper we develop the main features of the
SLS representation and describe an implemented algorithm that computes
it.  The performance of the algorithm is illustrated for a set of
tools. We conclude by sketching a method for determining the
articulation of a shape into subshapes.

:aim 758
:title The Curvature Primal Sketch
:author Haruo Asada and Michael Brady
:asort Asada, H.; Brady, M.
:date February 1984
:cost $2.25
:pages 22
:adnum AD-A142460
:keywords image understanding, vision, shape
:abstract
In this paper we introduce a novel representation of the significant
changes in curvature along the bounding contour of a planar shape.  We
call the representation the {\it curvature primal sketch} and
illustrate its performance on a set of tool shapes.  The curvature
primal sketch derives its name from the close analogy to the primal
sketch representation advocated by Marr for describing significant
intensity changes.  We define a set of primitive parameterized
curvature discontinuities, and derive expressions for their
convolutions with the first and second derivatives of a Gaussian.  The
convolved primitives, sorted according to the scale at which they are
detected, provide us with a multi-scaled interpretation of the contour
of a shape.
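
A Python sketch of the filtering stage only (our reconstruction of the
general idea; the matching of the parameterized discontinuity
primitives against these responses is not shown):

   import numpy as np

   def gaussian_derivatives(sigma):
       # sampled first and second derivatives of a Gaussian
       x = np.arange(-4 * sigma, 4 * sigma + 1.0)
       g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
       return -x / sigma**2 * g, (x**2 / sigma**4 - 1 / sigma**2) * g

   def curvature_responses(kappa, sigmas=(2.0, 4.0, 8.0)):
       # kappa: curvature sampled along the bounding contour.  Peaks in
       # the first-derivative response paired with zero crossings in
       # the second mark candidate curvature changes at each scale.
       out = {}
       for s in sigmas:
           g1, g2 = gaussian_derivatives(s)
           out[s] = (np.convolve(kappa, g1, mode="same"),
                     np.convolve(kappa, g2, mode="same"))
       return out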

:aim 759
:title Automatic Synthesis of Fine-Motion Strategies for Robots
:author Tomas Lozano-Perez, Matthew T. Mason, and Russell H. Taylor
:asort Lozano-Perez, T.; Mason, M.T.; Taylor, R. H.
:date December 1983
:cost $2.25
:pages 34
:adnum AD-A139532
:keywords robotics, compliance, task planning, automatic programming
:abstract
The use of active compliance enables robots to carry out tasks in the
presence of significant sensing and control errors.  Compliant motions
are quite difficult for humans to specify, however.  Furthermore,
robot programs are quite sensitive to details of geometry and to error
characteristics and must, therefore, be constructed anew for each
task.  These factors motivate the need for automatic synthesis tools
for robot programming, especially for compliant motion.  This paper
describes a formal approach to the synthesis of compliant motion
strategies from geometric descriptions of assembly operations and
explicit estimates of errors in sensing and control.  A key aspect of
the approach is that it provides correctness criteria for compliant
motion strategies.

:aim 760
:title The Find-Path Problem in the Plane
:author Van-Duc Nguyen
:asort Nguyen, V.
:date February 1984
:cost $3.00
:pages 70
:adnum AD-A142549
:abstract
This paper presents a fast heuristic algorithm for planning
collision-free paths of a moving robot in a cluttered planar
workspace.  The algorithm is based on describing the free space
between the obstacles as a {\it network of linked cones}. Cones
capture the {\it freeways} and the {\it bottlenecks} between the
obstacles.  Links capture the {\it connectivity} of the free space.
Paths are computed by intersecting the valid {\it configuration volumes}
of the moving robot inside these cones and inside the regions
described by the links.

:aim 761
:title Computations Underlying the Measurement of Visual Motion
:author Ellen C. Hildreth
:asort Hildreth, E.
:date March 1984
:cost $2.75
:pages 53
:abstract
The organization of movement in a changing image provides a valuable
source of information for analyzing the environment in terms of
objects, their motion in space, and their three-dimensional structure.
This movement may be represented by a two-dimensional velocity field
that assigns a direction and magnitude of velocity to elements in the
image.  This paper presents a method for computing the velocity field,
with three main components.  First, initial measurements of motion in
the image take place at the location of significant intensity
changes, which give rise to zero-crossings in the output of the
convolution of the image with a $\nabla^2 G$ operator.  The initial
motion measurements provide the component of velocity in the direction
perpendicular to the local orientation of the zero-crossing contours.
Second, these initial measurements are integrated along contours to
compute the two-dimensional velocity field.  Third, an additional
constraint of smoothness of the velocity field, based on the physical
constraint that surfaces are generally smooth, allows the computation
of a unique velocity field.  The details of an algorithm are
presented, with results of the algorithm applied to artificial and
natural image sequences.
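
In one standard way of writing this formulation (notation ours): along
a zero-crossing contour parameterized by arclength $s$, with measured
perpendicular components $v^\perp(s)$ and unit normals $\hat{n}(s)$,
the velocity field $V(s)$ is chosen to minimize

$$\int \Big\|\frac{\partial V}{\partial s}\Big\|^2 ds
  \;+\; \beta \int \big(V \cdot \hat{n} - v^\perp\big)^2 ds,$$

where the first term is the smoothness constraint and $\beta$ weights
fidelity to the initial measurements.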

:aim 762
:title Computational Experiments with a Feature Based Stereo Algorithm
:author W. Eric L. Grimson
:asort Grimson, W.E.L.
:date January 1984
:cost $2.25
:adnum AD-A142549
:pages 39
:abstract
Computational models of the human stereo system can provide insight
into general information processing constraints that apply to any
stereo system, either artificial or biological.  In 1977, Marr and
Poggio proposed one such computational model, which was characterized
as matching certain feature points in difference-of-Gaussian filtered
images, and using the information obtained by matching coarser
resolution representations to restrict the search space for matching
finer resolution representations.  An implementation of the algorithm
and its testing on a range of images were reported in 1980.  Since
then, a number of psychophysical experiments have suggested possible
refinements to the model and modifications to the algorithm.  In addition,
recent computational experiments applying the algorithm to a variety of
natural images, especially aerial photographs, have led to a number of
modifications.  In this article, we present a version of the
Marr-Poggio-Grimson algorithm that embodies these modifications and
illustrate its performance on a series of natural images.
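
A toy Python sketch of the coarse-to-fine control structure only
(ours: pixelwise matching of difference-of-Gaussian values stands in
for the algorithm's zero-crossing matching, and in the real algorithm
the search range scales with the filter size):

   import numpy as np
   from scipy.ndimage import gaussian_filter

   def dog(image, sigma):
       # difference-of-Gaussians band-pass filtering
       return gaussian_filter(image, sigma) - gaussian_filter(image, 1.6 * sigma)

   def coarse_to_fine_disparity(left, right, sigmas=(8.0, 4.0, 2.0), search=4):
       h, w = left.shape
       d = np.zeros((h, w), dtype=int)     # disparity estimates
       for sigma in sigmas:                # coarsest filter first
           L, R = dog(left, sigma), dog(right, sigma)
           for y in range(h):
               for x in range(w):
                   # search only near the coarser scale's estimate
                   cands = [c for c in range(d[y, x] - search,
                                             d[y, x] + search + 1)
                            if 0 <= x - c < w]
                   d[y, x] = min(cands, key=lambda c: abs(L[y, x] - R[y, x - c]))
       return d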

:aim 763
:author W. Eric L. Grimson
:asort Grimson, W.E.L.
:title The Combinatorics of Local Constraints in Model-Based
Recognition and Localization From Sparse Data
:date April 1984
:cost $2.25
:adnum AD-A148338
:pages 38
:keywords object recognition, model-based recognition, constraint
propagation, constrained relaxation, combinatorial analysis
:abstract
The problem of recognizing what objects are where in the
workspace of a robot can be cast as one of searching for a consistent
matching between sensory data elements and equivalent model elements.
In principle, this search space is enormous, and to contain the
potential explosion, constraints between the data and model elements
are needed. We derive a set of constraints for sparse sensory data
that are applicable to a wide variety of sensors and examine their
completeness and exhaustiveness. We then derive general theoretical
bounds on the number of interpretations expected to be consistent with
the data under the effects of local constraints. These bounds are
applicable to many types of local constraints, other than the specific
examples used here. For the case of sparse, noisy three-dimensional
sensory data, explicit values for the bounds are computed and are
shown to be consistent with empirical results obtained earlier in
[Grimson and Lozano-Perez 1984]. The results are used to demonstrate
the graceful degradation of the recognition technique with the
presence of noise in the data, and to predict the number of data
points needed in general to uniquely determine the object being sensed.

:aim 764
:author John M. Rubin and W.A. Richards
:asort Rubin, J.M.; Richards, W.A.
:title Color Vision: Representing Material Categories
:date May 1984
:cost $2.25
:pages 37
:keywords
:abstract
We argue that one of the early goals of color vision is to distinguish
one kind of material from another. Accordingly, we show that when a
pair of image regions is such that one region has greater intensity at
one wavelength than at another wavelength, and the second region has
the opposite property, then the two regions are likely to have arisen
from distinct materials in the scene. We call this material change
circumstance the "opposite slope sign condition."  With this criterion
as a foundation, we construct a representation of spectral information
that facilitates the recognition of material changes.  Our theory has
implications for both psychology and neurophysiology. In particular,
Hering's notion of opponent colors and psychologically unique
primaries, and Land's results in two-color projection can be
interpreted as different aspects of the visual system's goal of
categorizing materials. Also, the theory provides two basic
interpretations of the function of double-opponent color cells
described by neurophysiologists.
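
The criterion is simple enough to state as code. A sketch (ours;
intensities at two sampled wavelengths stand in for full spectral
measurements):

   def opposite_slope_sign(region_a, region_b):
       # each region is (intensity at lambda_1, intensity at lambda_2);
       # opposite spectral slopes suggest two distinct materials
       slope_a = region_a[1] - region_a[0]
       slope_b = region_b[1] - region_b[0]
       return slope_a * slope_b < 0

   # opposite_slope_sign((0.8, 0.3), (0.2, 0.6)) -> True: material change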

:aim 768
:author V. Torre and T. Poggio
:asort Torre, V.; Poggio, T.
:title On Edge Detection
:date August 1984
:cost $2.25
:adnum AD-A148573
:pages 41
:keywords numerical differentiation, zero crossings, regularization
:abstract
Edge detection is the process that attempts to characterize the
intensity changes in the image in terms of the physical processes that
have originated them. A critical, intermediate goal of edge detection
is the detection and characterization of significant intensity
changes. This paper discusses this part of the edge detection problem.
To characterize the types of intensity changes, derivatives of
different types, and possibly different scales, are needed.  Thus, we
consider this part of edge detection as a problem in numerical
differentiation. We show that numerical differentiation of images is
an ill-posed problem in the sense of Hadamard.  Differentiation needs
to be {\it regularized} by a preliminary filtering operation.  This
shows that this part of edge detection consists
of two steps, a {\it filtering} step and a {\it differentiation} step.
Following this perspective, the paper discusses in detail the
following theoretical aspects of edge detection: (1) The properties of
different types of filters are derived. (2) Relationships among
several 2-D differential operators are established. (3) Geometrical
and topological properties of the zero crossings of differential
operators are studied in terms of transversality and Morse theory. We
discuss recent results on the behavior and the information content of
zero crossings obtained with filters of different sizes. Finally, some
of the existing local edge detector schemes are briefly outlined in
the perspective of our theoretical results.
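
A minimal Python sketch of the two-step structure, filtering followed
by differentiation (ours; the Gaussian is one admissible regularizing
filter and the parameter values are illustrative):

   import numpy as np
   from scipy.ndimage import gaussian_filter1d

   def regularized_derivative(signal, sigma=2.0):
       # the filtering and differentiation steps in one call:
       # convolution with the first derivative of a Gaussian
       return gaussian_filter1d(signal, sigma, order=1)

   def edges(signal, sigma=2.0):
       # zero crossings of the regularized second derivative
       d2 = gaussian_filter1d(signal, sigma, order=2)
       return np.nonzero(np.diff(np.sign(d2)))[0]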

:aim 769
:author Whitman Richards and Donald D. Hoffman
:asort Richards, W.A.; Hoffman, D.D.
:title Codon Constraints on Closed 2D Shapes
:date May 1984
:cost $2.25
:pages 24
:keywords vision, recognition, transversality, visual representation,
object perception, figure-ground
:abstract
Codons are simple primitives for describing plane curves.  They thus
are primarily image-based descriptors. Yet they have the power to
capture important information about the 3D world, such as making part
boundaries explicit. The codon description is highly redundant
(useful for error-correction).  This redundancy can be viewed as a
constraint on the number of possible codon strings.  For smooth closed
strings that represent the bounding contour (silhouette) of many
smooth 3D objects, the constraints are so strong that sequences
containing 6 elements yield only 33 generic shapes, as compared with a
possible number of $5^6 = 15,625$ combinations of the five codon types.

:aim 770
:author Christof Koch and Shimon Ullman
:asort Koch, C.; Ullman, S.
:title Selecting One Among the Many: A Simple Network Implementing
Shifts in Selective Visual Attention
:date January 1984
:cost $1.50
:adnum AD-A148989
:pages 19
:reference C.B.I.P. Paper 003
:keywords attention, lateral inhibition, selective visual attention,
winner-take-all network, visual perception, lateral geniculate
nucleus, hierarchical networks, cortical anatomy/physiology
:abstract
This study addresses the question of how simple networks can account
for a variety of phenomena associated with the shift of a specialized
processing focus across the visual scene. We address in particular
aspects of the dichotomy between the preattentive-parallel and the
attentive-serial modes of visual perception and their hypothetical
neuronal implementations.  Specifically, we propose the following:
(1) A number of elementary features, such as color, orientation,
direction of movement, disparity etc. are represented in parallel in
different topographical maps, called the early representation. (2)
There exists a selective mapping from this early representation into a
more central representation, such that at any instant the central
representation contains the properties of only a single location in
the visual scene, the {\it selected} location. (3) We discuss some
selection rules that determine which location will be mapped into the
central representation.  The major rule, using the saliency or
conspicuity of locations in the early representation, is implemented
using a so-called Winner-Take-All network. A hierarchical pyramid-like
architecture is proposed for this network. We suggest possible
implementations in neuronal hardware, including a possible role for
the extensive back-projection from the cortex to the LGN.
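
A toy discrete-time Winner-Take-All in Python (ours, not the memo's
network; the gains are arbitrary illustrative values):

   import numpy as np

   def winner_take_all(saliency, steps=100, excite=1.05, inhibit=0.2):
       # each unit is self-excited and inhibited by the summed activity
       # of the others; only the most salient location survives
       a = np.asarray(saliency, dtype=float).copy()
       for _ in range(steps):
           a = np.maximum(0.0, excite * a - inhibit * (a.sum() - a))
       return int(np.argmax(a))

   # winner_take_all([0.2, 0.9, 0.4]) -> 1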

:aim 771
:author Ronald S. Fearing and John M. Hollerbach
:asort Fearing, R.S.; Hollerbach, J.M.
:title Basic Solid Mechanics for Tactile Sensing
:date March 1984
:cost $2.25
:pages 23
:keywords robotics, tactile sensing, force sensing, contact sensing,
end effectors, robot hands, feature extraction
:abstract
In order to grasp objects stably without using object models, tactile
feedback from the fingers is sometimes necessary.  This feedback can
be used to adjust grasping forces to prevent a part from slipping from
a hand.  If the angle of force at the object-finger contact can be
determined, slip can be prevented by the proper adjustment of finger
forces. Another important tactile sensing task is finding the edges
and corners of an object, since they are usually feasible grasping
locations. This paper describes how this information can be extracted
from the finger-object contact using strain sensors beneath a compliant
skin.  For determining contact forces, strain measurements are easier
to use than the surface deformation profile.  The finger is modelled
as an infinite linear elastic half plane to predict the measured
strain for several contact types and forces.  The number of sensors
required is less than has been proposed for other tactile recognition
tasks. A rough upper bound on sensor density requirements for a
specific depth is presented that is based on the frequency response of
the elastic medium.  The effects of different sensor stiffnesses on
sensor performance are discussed.

:aim 772
:author Katsushi Ikeuchi, H. Keith Nishihara, Berthold K.P. Horn,
Patrick Sobalvarro, and Shigemi Nagata
:asort Ikeuchi, K.; Nishihara, H.K.; Horn, B.K.P.; Sobalvarro, P.;
Nagata, S.
:title Determining Grasp Points Using Photometric Stereo and the PRISM
Binocular Stereo System
:date August 1984
:cost $2.25
:adnum AD-A147782
:pages 38
:abstract
This paper describes a system which locates and grasps doughnut-shaped
parts from a pile.  The system uses photometric stereo and binocular
stereo as vision input tools. Photometric stereo is used to make
surface orientation measurements. With this information the camera
field is segmented into isolated regions of continuous smooth surface.
One of these regions is then selected as the target region.  The
attitude of the physical object associated with the target region is
determined by histogramming surface orientations over that region and
comparing with stored histograms obtained from prototypical objects.
Range information, not available from photometric stereo, is obtained
by the PRISM binocular stereo system. A collision-free grasp
configuration and approach trajectory are computed and executed using
the attitude and range data.
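
A Python sketch of the attitude-determination step (our
simplification: orientation histograms compared by histogram
intersection; the bin counts and angle ranges are illustrative):

   import numpy as np

   def orientation_histogram(normals, bins=8):
       # normals: (N, 2) array of (azimuth, elevation) samples from
       # photometric stereo over one segmented region
       h, _, _ = np.histogram2d(normals[:, 0], normals[:, 1], bins=bins,
                                range=[[0.0, 2 * np.pi], [0.0, np.pi / 2]])
       return h / h.sum()

   def best_attitude(region_hist, prototype_hists):
       # prototype_hists: {attitude: histogram} from prototypical objects
       return max(prototype_hists,
                  key=lambda att: np.minimum(region_hist,
                                             prototype_hists[att]).sum())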

:aim 773
:author Tomaso Poggio and Vincent Torre
:asort Poggio, T.; Torre, V.
:title Ill-Posed Problems and Regularization Analysis in Early Vision
:date April 1984
:cost $1.50
:adnum AD-A147753
:pages 14
:reference Also, C.B.I.P. Paper 001
:keywords early vision, regularization theory, edge detection,
ill-posed problems, motion analysis, variational problems
:abstract
One of the best definitions of early vision is that it is inverse
optics -- a set of computational problems that both machines and
biological organisms have to solve.  While in classical optics the
problem is to determine the images of physical objects, vision is
confronted with the inverse problem of recovering three-dimensional
shape from the light distribution in the image. Most processes of
early vision such as stereomatching, computation of motion and all the
"structure from" processes can be regarded as solutions to inverse
problems. This common characteristic of early vision can be
formalized -- {\it most early vision problems are "ill-posed problems"
in the sense of Hadamard}. We will show that a mathematical theory
developed for regularizing ill-posed problems leads in a natural way
to the solution of early vision problems in terms of variational
principles of a certain class.  This is a new theoretical framework
for some of the variational solutions already obtained in the analysis
of early vision processes. It also shows how several other problems in
early vision can be approached and solved.
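
The regularization recipe alluded to here has a standard form (our
notation): if the data $y$ determine the unknown $z$ through an
operator $A$, with $Az = y$ ill-posed, one instead minimizes

$$\|Az - y\|^2 + \lambda \|Pz\|^2,$$

where $P$ is a stabilizing operator (typically a derivative,
expressing a smoothness assumption) and $\lambda > 0$ trades fidelity
to the data against the a priori constraint; the variational
principles of AIM 761 and AIM 800 in this list are of this general class.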

:aim 774
:author Gerald Roylance
:asort Roylance, G.
:title Some Scientific Subroutines in LISP
:date September 1984
:cost $1.50
:adnum AD-A147889
:pages 12
:abstract
Here is a LISP library of mathematical functions that calculate
hyperbolic and inverse hyperbolic functions, Bessel functions,
elliptic integrals, the gamma and beta functions, and the incomplete
gamma and beta functions.  There are probability density functions,
cumulative distributions, and random number generators for the normal,
Poisson, chi-square, Student's t, and Snedecor's F distributions.
There are also routines for multiple linear regression, Fletcher-Powell
unconstrained minimization, numerical integration, root finding, and
convergence testing, as well as code to factor numbers and to run the
Solovay-Strassen probabilistic prime test.

:aim 776
:author Tomaso Poggio
:asort Poggio, T.
:title Vision by Man and Machine: How the brain processes visual
information may be suggested by studies in computer vision (and vice
versa)
:date March 1984
:cost $1.50
:adnum AD-A147890
:pages 12
:keywords computer vision, human vision, stereo, computational approach
:abstract
The development of increasingly sophisticated and powerful computers
in the last few decades has frequently stimulated comparisons between
them and the human brain. Such comparisons will become more earnest as
computers are applied more and more to tasks formerly associated with
essentially human activities and capabilities. The expectation of a
coming generation of 'intelligent' computers and robots with sensory,
motor and even 'intellectual' skills comparable in quality to (and
quantitatively surpassing) our own is becoming more widespread and is,
I believe, leading to a new and potentially productive analytical
science of 'information processing'. In no field has this new approach
been so precisely formulated and so thoroughly exemplified as in the
field of vision. As the dominant sensory modality of man, vision is
one of the major keys to our mastery of the environment, to our
understanding and control of the objects which surround us. If we wish
to create robots capable of performing complex manipulative tasks in a
changing environment, we must surely endow them with (among other
things) adequate visual powers.  How can we set about designing such
flexible and adaptive robots? In designing them, can we make use of
our rapidly growing knowledge of the human brain, and if so, how?  At
the same time, can our experience in designing artificial vision
systems help us to understand how the brain analyzes visual information?

:aim 777
:author A.L. Yuille and T. Poggio
:asort Yuille, A.L.; Poggio, T.
:title A Generalized Ordering Constraint for Stereo Correspondence
:date May 1984
:cost $2.25
:adnum AD-A149182
:pages 25
:reference C.B.I.P. Paper 005
:abstract
The ordering constraint along epipolar lines is a powerful constraint
that has been exploited by some recent stereomatching algorithms. We
formulate a {\it generalized ordering constraint}, not restricted to
epipolar lines. We prove several properties of the generalized ordering
constraint and of the "forbidden zone", the set of matches that would
violate the constraint. We consider both the orthographic and the
perspective projection case, the latter for a simplified but standard
stereo geometry. The disparity gradient limit found in the human
stereo system may be related to a form of the ordering constraint. To
illustrate our analysis we outline a simple algorithm that exploits the
generalized ordering constraint for matching contours of wireframe
objects. We also show that the use of the generalized ordering
constraint implies several other stereo matching constraints: a) the
ordering constraint along epipolar lines, b) figural continuity, c)
Binford's cross-product constraint, d) Mayhew and Frisby's figural
continuity constraint. We finally discuss ways of extending the
algorithm to arbitrary 3-D objects.
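
For the epipolar-line case the constraint is easy to state (notation
ours): two matches $(x_l, x_r)$ and $(x_l', x_r')$, with coordinates
taken along corresponding epipolar lines, are mutually compatible only
if

$$(x_l - x_l')(x_r - x_r') \ge 0,$$

i.e. left-to-right order is preserved; a match violating this lies in
the other's forbidden zone. The generalized constraint extends this
compatibility test beyond the epipolar lines.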

:aim 779
:author Hugh Robinson and Christof Koch
:asort Robinson, H.; Koch, C.
:title An Information Storage Mechanism: Calcium and Spines
:date April 1984
:cost $1.50
:pages 15
:reference C.B.I.P. Paper 004
:keywords information storage, short-term memory, biological hardware,
dendritic spines, calcium, calmodulin, actin
:abstract
This proposal addresses some of the biophysical events possibly
underlying fast activity-dependent changes in synaptic efficiency.
Dendritic spines in the cortex have attracted increased attention in
recent years as a possible locus of cellular plasticity, given the
large number of studies reporting a close correlation between
presynaptic activity (or lack thereof) and changes in spine shape.
This is highlighted by recent reports showing that the spine
cytoplasm contains high levels of actin. Moreover, it has been
demonstrated that a high level of intracellular free calcium,
$Ca^{2+}$, is a prerequisite for various forms of synaptic
potentiation. We propose a series of plausible steps, linking
presynaptic electrical activity at dendritic spines with a
short-lasting change in spine geometry. Specifically, we conjecture
that the spike-induced excitatory postsynaptic potential triggers an
influx of $Ca^{2+}$ into the spine, where it will rapidly bind to
intracellular calcium buffers such as calmodulin and calcineurin.
However, for prolonged or intense presynaptic electrical activity,
these buffers will saturate. The free $Ca^{2+}$ will then activate the
actin/myosin network in the spine neck, reversibly shortening the
length of the neck and increasing its diameter. This change in the
geometry of the spine will lead to an increase in the synaptic
efficiency of the synapse. We will discuss the implications of our
proposal for the control of cellular plasticity and its relation to
generalized attention and arousal.

:aim 780
:author H.K. Nishihara
:asort Nishihara, H.K.
:title PRISM: A Practical Real-Time Imaging Stereo Matcher
:date May 1984
:cost $2.25
:pages 32
:adnum AD-A142532
:keywords binocular stereo, noise tolerance, zero crossings, computer
vision, stereo matching, correlation, binarization, robotics, obstacle
avoidance, proximity sensors, structured light
:abstract
A binocular-stereo-matching algorithm for making rapid range
measurements in noisy images is described. This technique is developed
for applications to problems in robotics where noise tolerance,
reliability, and speed are predominant issues. A high speed pipelined
convolver for preprocessing images and an {\it unstructured light}
technique for improving signal quality are introduced to help enhance
performance to meet the demands of this task domain. These
optimizations, however, are not sufficient. A closer examination of
the problems encountered suggests that broader interpretations of both
the objective of binocular stereo and of the zero-crossing theory of
Marr and Poggio are required.  In this paper, we restrict ourselves to
the problem of making a single primitive surface measurement. For
example, to determine whether or not a specified volume of space is
occupied, to measure the range to a surface at an indicated image
location, or to determine the elevation gradient at that position. In
this framework we make a subtle but important shift from the explicit
use of zero-crossing contours (in band-pass filtered images) as the
elements matched between left and right images, to the use of the
signs between zero-crossings.  With this change, we obtain a simpler
algorithm with a reduced sensitivity to noise and a more predictable
behavior. The PRISM system incorporated this algorithm with the
unstructured light technique and a high speed digital convolver. It
has been used successfully by others as a sensor in a path planning
system and a bin picking system.
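
A toy Python rendering of the shift from zero-crossing contours to the
signs between them (ours; the window size and filter widths are
illustrative, and scipy stands in for the pipelined convolver):

   import numpy as np
   from scipy.ndimage import gaussian_filter

   def sign_bits(image, sigma=2.0):
       # binarize a band-pass (difference-of-Gaussian) image by sign
       dog = gaussian_filter(image, sigma) - gaussian_filter(image, 1.6 * sigma)
       return dog > 0

   def match_score(left_bits, right_bits, y, x, d, half=8):
       # fraction of agreeing sign bits between a left patch and the
       # right patch shifted by candidate disparity d (indices assumed
       # in range); the best d over a search window is the range reading
       L = left_bits[y - half:y + half, x - half:x + half]
       R = right_bits[y - half:y + half, x - d - half:x - d + half]
       return (L == R).mean()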

:aim 781
:author Carl Hewitt, Tom Reinhardt, Gul Agha, Giuseppe Attardi
:asort Hewitt, C.; Reinhardt, T.; Agha, G.; Attardi, G.
:title Linguistic Support of Receptionists for Shared Resources
:date
:cost $2.25
:pages 30
:keywords parallel problem solving, guardian, actors, message passing,
guarantee of service, serializers, transaction marker, concurrent
system
:abstract
This paper addresses linguistic issues that arise in providing support
for shared resources in large scale concurrent systems. Our work is
based on the Actor Model of computation which unifies the lambda
calculus, the sequential stored-program and the object-oriented models
of computation.  We show how {\it receptionists} can be used to
regulate the use of shared resources by scheduling their access and
providing protection against unauthorized or accidental access. A
shared financial account is an example of the kind of resource that
needs a receptionist. Issues involved in the implementation of
scheduling policies for shared resources are also addressed. The
modularity problems involved in implementing servers which multiplex
the use of physical devices illustrate how delegation aids in the
implementation of parallel problem solving systems for communities of actors.
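
A minimal sketch of a receptionist for a shared account, with Python
threads and queues standing in for actors and messages (the protocol
shown is invented for illustration):

   import queue, threading

   def account_receptionist(balance, mailbox):
       # requests are served strictly one at a time, so concurrent
       # clients cannot interleave updates to the shared balance
       while True:
           op, amount, reply = mailbox.get()
           if op == "deposit":
               balance += amount
               reply.put(balance)
           elif op == "withdraw" and amount <= balance:
               balance -= amount
               reply.put(balance)
           else:
               reply.put("refused")

   mailbox = queue.Queue()
   threading.Thread(target=account_receptionist, args=(100, mailbox),
                    daemon=True).start()
   reply = queue.Queue()
   mailbox.put(("withdraw", 30, reply))
   print(reply.get())   # -> 70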

:aim 783
:author Tomaso Poggio and Christof Koch
:asort Poggio, T.; Koch, C.
:title An Analog Model of Computation for the Ill-Posed Problems of
Early Vision
:date May 1984
:cost $1.50
:adnum AD-A147726
:pages 16
:reference Also C.B.I.P. Paper 002
:keywords early vision, parallel processing,
elect./chem./neuronal networks, regularization analysis,
neural hardware, analog computation, motion analysis, variational
problems
:abstract
A large gap exists at present between computational theories of vision
and their possible implementation in neural hardware. The model of
computation provided by the digital computer is clearly unsatisfactory
for the neurobiologist, given the increasing evidence that neurons are
complex devices, very different from simple digital switches. It is
especially difficult to imagine how networks of neurons may solve the
equations involved in vision algorithms in a way similar to digital
computers. In this paper, we suggest an analog model of computation in
electrical or chemical networks for a large class of vision problems,
that maps more easily into biologically plausible mechanisms.  Poggio
and Torre (1984) have recently recognized that early vision problems
such as motion analysis (Horn and Schunck, 1981; Hildreth, 1984a,b),
edge detection (Torre and Poggio, 1984), surface interpolation
(Grimson, 1981; Terzopoulos, 1984), shape-from-shading (Ikeuchi and
Horn, 1981) and stereomatching can be characterized as mathematically
ill-posed problems in the sense of Hadamard (1923). Ill-posed problems
can be "solved", according to regularization theories, by variational
principles of a specific type. A natural way of implementing
variational problems are electrical, chemical or neuronal networks. We
present specific networks for solving several low-level vision
problems, such as the computation of visual motion and edge detection.
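
The underlying idea can be stated compactly (notation ours, not the
memo's): a quadratic regularization functional $E(v) = \frac{1}{2} v^T
A v - b^T v$, with $A$ symmetric positive definite, is minimized by
any dynamics that follow the negative gradient,

$$C \frac{dv}{dt} = -\nabla E(v) = b - Av,$$

which settle to the unique solution of $Av = b$. A linear resistive
network with capacitances at its nodes obeys an equation of exactly
this form, with $v$ the vector of node voltages, so the network
"solves" the variational problem simply by relaxing to its stationary
state.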

:aim 786
:author Tamar Flash
:asort Flash, T.
:title The Coordination of Arm Movements: An Experimentally Confirmed
 Mathematical Model
:date November 1984
:cost $2.25
:pages 32
:keywords Cartesian trajectory planning, jerk minimization, human
multi-joint movements, end-effector trajectory planning, obstacle
avoidance
:abstract
This paper presents studies of the coordination of voluntary human arm
movements. A mathematical model is formulated which is shown to
predict both the qualitative features and the quantitative details
observed experimentally in planar, multi-joint arm movements.
Coordination is modelled mathematically by defining an objective
function, a measure of performance for any possible movement.  The
unique trajectory which yields the best performance is determined
using dynamic optimization theory. In the work presented here the
objective function is the square of the magnitude of jerk (rate of
change of acceleration) of the hand integrated over the entire
movement. This is equivalent to assuming that a major goal of motor
coordination is the production of the smoothest possible movement of
the hand. The theoretical analysis is based solely on the kinematics
of movement independent of the dynamics of the musculoskeletal system,
and is successful only when formulated in terms of the motion of the
hand in extracorporal space. The implications with respect to movement
organization are discussed.
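
In the form this model is usually quoted (notation ours): the hand
path $(x(t), y(t))$ minimizes

$$C = \frac{1}{2} \int_0^T \left[\left(\frac{d^3x}{dt^3}\right)^2 +
\left(\frac{d^3y}{dt^3}\right)^2\right] dt.$$

For a point-to-point movement starting and ending at rest, each
coordinate of the minimizer is a fifth-order polynomial; with $\tau =
t/T$,

$$x(t) = x_0 + (x_f - x_0)(10\tau^3 - 15\tau^4 + 6\tau^5),$$

which produces the smooth, bell-shaped tangential velocity profiles
seen in the experimental data.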

:aim 787
:author Christof Koch
:asort Koch, C.
:title A Theoretical Analysis of the Electrical Properties of an X-cell
in the Cat's LGN: Does the Interneuron Gate the Visual Input to the X-System?
:date March 1984
:cost $2.75
:pages 45
:reference C.B.I.P. Paper 006
:keywords information processing, visual system, local circuits,
geniculate nucleus, AND-NOT logic
:abstract
Electron-microscopic studies of relay cells in the lateral geniculate
nucleus of the cat have shown that the retinal input of X-cells is
associated with a special synaptic circuitry, termed the spine-triad
complex. The retinal afferents make an asymmetrical synapse with both
a dendritic appendage of the X-cell and a geniculate interneuron. The
interneuron contacts in turn the same dendritic appendage with a
symmetrical synaptic profile. The retinal input to geniculate Y-cells
is predominantly found on dendritic shafts without any triadic
arrangement. We explore the integrative properties of X- and Y-cells
resulting from this striking dichotomy in synaptic architecture. The
basis of our analysis is the solution of the cable equation for a
branched dendritic tree with a known somatic input resistance. Under
the assumption that the geniculate interneuron mediates a shunting
inhibition, activation of the interneuron reduces very efficiently the
excitatory post-synaptic potential induced by the retinal afferent {\it
without} affecting the electrical activity in the rest of the cell.
Therefore, the spine-triad circuit implements the analog of an AND-NOT
gate, unique to the X-system. Functionally, this corresponds to a
presynaptic, feed-forward type of inhibition of the optic tract
terminal. Since Y-cells lack this structure, inhibition acts globally,
reducing the general electrical activity of the cell. We propose that
geniculate interneurons gate the flow of visual information into the
X-system as a function of the behavioral state of the animal,
enhancing the center-surround antagonism and possibly mediating
reciprocal lateral inhibition, eye-movement related suppression and
selective visual attention.
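
The AND-NOT analogy can be made concrete with a single-compartment
caricature (ours; the memo's analysis is for a full branched cable).
With excitatory, inhibitory and leak conductances $g_e$, $g_i$, $g_l$
and reversal potentials $E_e > 0$ and $E_i \approx E_l = 0$ (a
shunting, i.e. electrically silent, inhibition), the steady-state
depolarization of the appendage is

$$V = \frac{g_e E_e}{g_l + g_e + g_i},$$

so a large $g_i$ divides the excitatory signal down toward zero
without injecting current of its own: the output approximates "retinal
input AND NOT interneuron active".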

:aim 788
:author G. Edward Barton, Jr.
:asort Barton, G.E., Jr.
:title Toward A Principle-Based Parser
:date July 1984
:cost $2.75
:adnum AD-A147637
:pages 47
:keywords natural language, parsing, syntax, linguistics, generative
grammar, GB-theory, metarules, modularity
:abstract
Parser design lags behind linguistic theory. While modern
transformational grammar has largely abandoned complex,
language-specific rule systems in favor of modular subsystems of
principles and parameters, the rule systems that underlie existing
natural-language parsers are still large, detailed, and complicated.
The shift to modular theories in linguistics took place because of the
scientific disadvantages of such rule systems.  Those scientific ills
translate into engineering maladies that make building
natural-language systems difficult. The cure for these problems should
be the same in parser design as it was in linguistic theory. The shift
to modular theories of syntax should be replicated in parsing
practice; a parser should base its actions on interacting modules of
principles and parameters rather than a complex, monolithic rule
system. If it can be successfully carried out, the shift will make it
easier to build natural-language systems because it will shorten and
simplify the language descriptions that are needed for parsing.  It
will also allow parser design to track new developments in linguistic theory.

:aim 790
:author Christopher G. Atkeson and John M. Hollerbach
:asort Atkeson, C.G.; Hollerbach, J.M.
:title Kinematic Features of Unrestrained Arm Movements
:date July 1984
:cost $2.25
:pages 26
:keywords control of limb movement, human motor control, dynamics of
limb movement, kinematics of limb movement, 3-D movement
monitoring, selspot system
:reference C.B.I.P. Paper 007
:abstract
Unrestrained human arm trajectories between point targets have been
investigated using a three-dimensional tracking apparatus, the Selspot
system. Movements were executed between different points in a vertical
plane under varying conditions of speed and hand-held load. In
contrast to past results which emphasized the straightness of hand
paths, movement regions were discovered in which the hand paths were
curved. All movements, whether curved or straight, showed an invariant
tangential velocity profile when normalized for speed and distance.
The velocity profile invariance with speed and load is interpreted in
terms of simplification of the underlying arm dynamics, extending the
results of Hollerbach and Flash (1982).

:aim 792
:author J.L. Marroquin
:asort Marroquin, J.L.
:title Surface Reconstruction Preserving Discontinuities
:date August 1984
:cost $2.25
:adnum AD-A146741
:pages 25
:keywords surface reconstruction, discontinuity detection, Markov
random fields, Bayesian estimation
:abstract
This paper presents some experimental results that indicate the
plausibility of using non-convex variational principles to reconstruct
piecewise smooth surfaces from sparse and noisy data.  This method
uses prior generic knowledge about the geometry of the discontinuities
to prevent the blurring of the boundaries between continuous
subregions. We include examples of the application of this approach to
the reconstruction of synthetic surfaces, and to the interpolation of
disparity data from the stereo processing of real images.
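
A representative non-convex energy of the kind studied here (notation
ours, in the "weak membrane" style): reconstruct $f$ from data $d$ by
minimizing

$$E(f, l) = \sum_i (f_i - d_i)^2
 + \lambda \sum_{\langle i,j \rangle} (f_i - f_j)^2 (1 - l_{ij})
 + \alpha \sum_{\langle i,j \rangle} l_{ij},$$

where the binary line process $l_{ij} \in \{0, 1\}$ pays a fixed price
$\alpha$ to switch off the smoothness coupling across a discontinuity.
The product term couples $f$ and $l$ and makes the principle
non-convex, which is what allows boundaries between continuous
subregions to survive unblurred.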

:aim 795
:author Christof Koch and Tomaso Poggio
:asort Koch, C.; Poggio, T.
:title Biophysics of Computation: Neurons, Synapses and Membranes
:date October 1984
:cost $3.00
:pages 73
:reference C.B.I.P. Paper 008
:keywords computational systems, biophysics, information processing,
biol. implementation of logical gates, AND-NOT, spikes, spines,
active processes
:abstract
Synapses, membranes and neurotransmitters play an important role in
processing information in the nervous system. We do not know, however,
what biophysical mechanisms are critical for neuronal computations,
what elementary information processing operations they implement, and
which sensory or motor computations they underlie. In this paper, we
outline an approach to these problems. We review a number of different
biophysical mechanisms, such as synaptic interactions between
excitation and inhibition, dendritic spines, non-impulse generating
membrane nonlinearities and transmitter-regulated voltage channels.
For each one, we discuss the information processing operations that
may be implemented. All of these mechanisms act either within a few
milliseconds, such as the action potential or synaptic transmission,
or over several hundred milliseconds or even seconds, modulating some
property of the circuit. In some cases we will suggest specific
examples where a biophysical mechanism underlies a given computation.
In particular, we will discuss the neuronal operations, and their
implementation, underlying direction selectivity in the vertebrate retina.

:aim 796
:author Alan Bawden and Philip E. Agre
:asort Bawden, A.; Agre, P.E.
:title What a parallel programming language has to let you say
:date September 1984
:cost $2.25
:adnum AD-A147854
:pages 26
:keywords Connection Machine, programming languages, parallel
computers, compiler theory, message passing
:abstract
We have implemented in simulation a prototype language for the
Connection Machine called CL1. CL1 is an extrapolation of serial
machine programming language technology: in CL1 one programs the
individual processors to perform local computations and talk to the
communications network. We present details of the largest of our
experiments with CL1, an interpreter for Scheme (a dialect of LISP)
that allows a large number of different Scheme programs to be run in
parallel on the otherwise SIMD Connection Machine. Our aim was not to
propose Scheme as a language for Connection Machine programming, but
to gain experience using CL1 to implement an interesting and familiar
algorithm. Consideration of the difficulties we encountered led us to
the conclusion that CL1 programs do not capture enough of the causal
structure of the processes they describe. Starting from this
observation, we have designed a successor language called CGL (for
Connection Graph Language).

:aim 800
:author Demetri Terzopoulos
:asort Terzopoulos, D.
:title Computing Visible-Surface Representations
:date March 1985
:cost $2.75
:pages 61
:adnum AD-A160602
:keywords vision, multiresolution reconstruction, finite elements,
discontinuities, surface representation, variational principles,
generalized splines, regularization
:abstract
The low-level interpretation of images provides constraints on 3-D
surface shape at multiple resolutions, but typically only at scattered
locations over the visual field. Subsequent visual processing can be
facilitated substantially if the scattered shape constraints are
immediately transformed into visible-surface representations that
unambiguously specify surface shape at every image point. The required
transformation is shown to lead to an ill-posed surface reconstruction
problem. A well-posed variational principle formulation is obtained by
invoking "controlled continuity," a physically nonrestrictive
(generic) assumption about surfaces which is nonetheless strong enough
to guarantee unique solutions.  The variational principle, which
admits an appealing physical interpretation, is locally discretized by
applying the finite element method to a piecewise, finite element
representation of surfaces. This forms the mathematical basis of a
unified and general framework for computing visible-surface
representations.  An efficient surface reconstruction algorithm is developed.
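
One common way to write a controlled-continuity stabilizer (notation
ours; the exact form and coefficients vary):

$$S(f) = \frac{1}{2} \iint \rho \big[\tau (f_{xx}^2 + 2 f_{xy}^2 +
f_{yy}^2) + (1 - \tau)(f_x^2 + f_y^2)\big]\, dx\, dy,$$

a blend of thin-plate and membrane penalties in which the
continuity-control functions $\rho(x,y)$ and $\tau(x,y)$ can be driven
toward zero across depth and orientation discontinuities respectively,
so the generic smoothness assumption never forces continuity where the
data contradict it.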

:aim 801
:author Kent Pitman
:asort Pitman, K.
:title The Description of Large Systems
:date September 1984
:cost $2.25
:adnum AD-A148072
:pages 32
:keywords compilation, large systems, LISP, system maintenance
:abstract
In this paper, we discuss the problems associated with the description
and manipulation of large systems when their sources are not
maintained as single files.  We show why and how tools that address
these issues, such as Unix MAKE and Lisp Machine DEFSYSTEM, have
evolved.

:aim 803
:author Demetri Terzopoulos
:asort Terzopoulos, D.
:title Multigrid Relaxation Methods and the Analysis of Lightness,
Shading and Flow
:date October 1984
:cost $2.25
:adnum AD-A158173
:pages 23
:keywords computer vision, lightness, optical flow, partial
differential equations, multigrid relaxation, shape from shading,
variational principles, parallel algorithms
:abstract
Image analysis problems, posed mathematically as variational principles
or as partial differential equations, are amenable to numerical
solution by relaxation algorithms that are local, iterative, and often
parallel. Although they are well suited structurally for
implementation on massively parallel, locally-interconnected
computational architectures, such distributed algorithms are seriously
handicapped by an inherent inefficiency at propagating constraints
between widely separated processing elements. Hence, they converge
extremely slowly when confronted by the large representation necessary
for low-level vision. Application of multigrid methods can overcome
this drawback, as we established in previous work on 3-D surface
reconstruction. In this paper, we develop efficient multiresolution
iterative algorithms for computing lightness, shape-from-shading, and
optical flow, and we evaluate the performance of these algorithms on
synthetic images. The multigrid methodology that we describe is
broadly applicable in low-level vision. Notably, it is an appealing
strategy to use in conjunction with regularization analysis for the
efficient solution of a wide range of ill-posed visual reconstruction problems.
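
A two-grid Python sketch for the one-dimensional model problem $-u'' =
f$ shows the mechanism (ours; real multigrid recurses over many levels
and uses better transfer operators):

   import numpy as np

   def relax(u, f, sweeps=3):
       # Gauss-Seidel relaxation for -u'' = f with unit grid spacing
       for _ in range(sweeps):
           for i in range(1, len(u) - 1):
               u[i] = 0.5 * (u[i - 1] + u[i + 1] + f[i])
       return u

   def two_grid(u, f):
       # relaxation removes high-frequency error; the remaining smooth
       # error is solved for on a coarser grid, where constraints
       # propagate twice as fast per sweep, then interpolated back
       u = relax(u, f)
       r = np.zeros_like(u)
       r[1:-1] = f[1:-1] + u[:-2] - 2 * u[1:-1] + u[2:]    # residual
       rc = r[::2].copy()                  # restrict to the coarse grid
       ec = relax(np.zeros_like(rc), 4 * rc, sweeps=50)    # h doubles
       e = np.zeros_like(u)
       e[::2] = ec                         # interpolate the correction
       e[1::2] = 0.5 * (ec[:-1] + ec[1:])
       return relax(u + e, f)

   # usage: u = two_grid(np.zeros(65), f) with len(f) == 65 == 2**6 + 1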

:aim 804
:author Gideon Sahar and John M. Hollerbach
:asort Sahar, G.; Hollerbach, J.M.
:title Planning of Minimum-Time Trajectories for Robot Arms
:date November 1984
:cost $2.25
:adnum AD-A148956
:pages 25
:keywords robotics, manipulators, optimal paths, minimum-time paths,
trajectory planning, path planning
:abstract
The minimum-time path for a robot arm has been a long-standing and
unsolved problem of considerable interest. We present a general
solution to this problem that involves joint-space tessellation, a
dynamic time-scaling algorithm, and graph search. The solution
incorporates full dynamics of movement and actuator constraints, and
can be easily extended for joint limits and workspace obstacles, but
is subject to the particular tessellation scheme used. The results
presented show that, in general, the optimal paths are not straight
lines, but rather curves in joint-space that utilize the dynamics of
the arm and gravity to help in moving the arm faster to its
destination. Implementation difficulties due to the tessellation and to
combinatorial proliferation of paths are discussed.

:aim 805
:author Michael A. Gennert
:asort Gennert, M.A.
:title Any Dimensional Reconstruction from Hyperplanar Projections
:date October 1984
:cost $1.50
:pages 18
:keywords tomography, nuclear magnetic resonance, medical imaging,
density reconstruction
:abstract
In this paper we examine the reconstruction of functions of any
dimension from hyperplanar projections. This is a generalization of a
problem that has generated much interest recently, especially in the
field of medical imaging. Computed Axial Tomography (CAT) and Nuclear
Magnetic Resonance (NMR) are two medical techniques that fall in this
framework. NMR, for instance, measures the hydrogen density along planes
through the body.  Here we will examine reconstruction methods that involve
backprojecting the projection data and summing this over the entire
region of interest. There are two methods for doing this. One method
is to filter the projection data first, and then backproject this
filtered data and sum over all projection directions. The other method
is to backproject and sum the projection data first, and then filter.
The two methods are mathematically equivalent, producing very similar
equations. We will derive the reconstruction formulas for both methods
for any number of dimensions. We will examine the cases of two and
three dimensions, since these are the only ones encountered in
practice. The equations turn out to be very different for these two
cases; in general, they differ between even and odd dimensionality.
We will discuss why this is so, and show that the
equations for even and odd dimensionality are related by the Hilbert
Transform.
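
In two dimensions the filter-then-backproject version is the familiar
filtered backprojection of tomography (notation ours, constants
omitted): with $p_\theta(s)$ the projection onto direction $\theta$,

$$f(x, y) = \int_0^{\pi} (p_\theta * h)(x \cos\theta + y \sin\theta)\,
d\theta, \qquad \hat{h}(\omega) = |\omega|,$$

while backprojecting first gives $b(x, y) = \int_0^{\pi} p_\theta(x
\cos\theta + y \sin\theta)\, d\theta$, a copy of $f$ blurred by $1/r$
that is then deblurred by a single two-dimensional filter. The
even/odd difference reflects the factor $|\omega| = \omega\,
\mathrm{sgn}(\omega)$: in even dimensions the sign factor is the
non-local Hilbert transform, whereas in odd dimensions the
reconstruction filter is a pure derivative and the inversion is local.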