[comp.doc.techreports] tr-input/mit.pt3

leff@smu.UUCP (Laurence Leff) (08/29/89)

relevant to layout and planning tasks.

:aim 876
:author Shahriar Negahdaripour and Alan Yuille
:asort Negahdaripour, S.; Yuille, A.L.
:title Direct Passive Navigation: Analytical Solution for Quadratic
Patches 
:date March 1986
:pages 30
:cost $3.25
:keywords passive navigation, motion field, optical flow, structure
and motion, quadratic patches, Taylor series expansion,
least-square-errors, essential parameters
:abstract In this paper, we solve the problem of recovering the motion
of an observer relative to a surface which can be locally approximated
by a quadratic patch directly from image brightness values. We do not
compute the optical flow as an intermediate step. We use the
coefficients of the Taylor series expansion of the intensity function
in two frames to determine 15 intermediate parameters, termed the {\it
essential parameters}, from a set of linear equations. We then solve
analytically for the motion and structure parameters from a set of
nonlinear equations in terms of these intermediate parameters. We show
that the solution is always unique, unlike some earlier results that
reported two-fold ambiguities in some special cases.
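
[Illustrative sketch, not from the memo: the first, linear stage described above reduces to an ordinary least-square-errors solve for the 15 essential parameters. The coefficient matrix A and right-hand side b below are hypothetical placeholders for the constraints built from the Taylor-series coefficients of the brightness in the two frames; their construction follows the memo and is not reproduced here.]

    # Linear stage only: estimate the 15 "essential parameters" in a
    # least-square-errors sense from stacked linear constraints.
    import numpy as np

    def estimate_essential_parameters(A: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Return the 15-vector e minimizing ||A e - b||^2."""
        e, *_ = np.linalg.lstsq(A, b, rcond=None)
        return e

    # Toy usage with random data, just to show the shapes involved.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(200, 15))
    b = rng.normal(size=200)
    print(estimate_essential_parameters(A, b).shape)   # (15,)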

:aim 877
:author James H. Applegate, Michael R. Douglas, Yekta Gursel, Gerald
Jay Sussman, Jack Wisdom
:asort Applegate, J.H.; Douglas, M.R.; Gursel, Y.; Sussman, G.J.;
Wisdom, J.
:title The Outer Solar System for 210 Million Years
:date February 1986
:pages 43
:cost $3.75
:reference {{\it The Astronomical Journal}, Vol. 92, No. 1, July, 1986.}
:keywords dynamics, simulation, numerical methods
:abstract We have used a special purpose computer to integrate the
orbits of the outer five planets for more than 100 Myr into the future
and more than 100 Myr into the past. The strongest features in the
Fourier transforms of the orbital elements are identified with the
fundamental frequencies of secular perturbation theory. Many of the
weaker features in the Fourier spectra are identified as linear
combinations of the basic frequencies. We note serious differences
between our measurements and the predictions of Bretagnon (1974). The
amplitude of the 3.796 Myr period libration of Pluto's longitude of
perihelion is modulated with a period of 34 Myr. Very long periods, on
the order of 137 Myr, are also seen. The orbit of Pluto is stable for
the duration of our integration; the maximum Lyapunov characteristic
exponent is less than $10^{-6.8}\,{\rm yr}^{-1}$.
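
[Illustrative sketch, not the authors' special-purpose computer or data: the kind of spectral analysis reported above amounts to reading dominant lines off the Fourier transform of an orbital-element time series. The sampling interval and synthetic signal below are assumptions chosen only to show the mechanics.]

    # Pick out the strongest spectral lines of a sampled orbital element.
    import numpy as np

    dt_years = 500.0                       # hypothetical sampling interval
    t = np.arange(0, 2.0e7, dt_years)      # 20 Myr of samples
    # Synthetic "orbital element" with two secular lines plus noise.
    signal = (0.01 * np.sin(2 * np.pi * t / 1.0e6)
              + 0.004 * np.sin(2 * np.pi * t / 3.796e6)
              + 0.0005 * np.random.default_rng(1).normal(size=t.size))

    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(t.size, d=dt_years)        # cycles per year
    strongest = freqs[np.argsort(spectrum)[-2:]]
    print("periods of the two strongest lines (yr):", np.sort(1.0 / strongest))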

:aim 879
:author Aaron Bobick and Whitman Richards
:asort Bobick, A.; Richards, W.A.
:title Classifying Objects from Visual Information
:date June 1986
:pages 41
:cost $3.75
:keywords object recognition, image understanding, cluster analysis,
vision, unsupervised classification
:abstract 
Consider a world of ``objects.''  Our goal is to place these objects
into categories that are useful to the observer using sensory data.
One criterion for utility is that the categories allow the observer to
infer the object's potential behaviors, which are often
non-observable. Under what conditions can such useful categories be
created? We propose a solution which requires 1) that the world be
structured into modes or clusters of natural structures, and 2) that
these modes be reflected in the sensory data used by the observer for
classification. Given these two constraints, we
explore the type of additional knowledge sufficient for the observer
to generate an internal representation that makes explicit the natural
modes. Finally we develop a formal expression of the object
classification problem.

:aim 882
:author John M. Hollerbach, Ki C. Suh
:asort Hollerbach, J.M.; Suh, K.C.
:title Redundancy Resolution of Manipulators Through Torque
Optimization
:date January 1986
:pages 16 
:cost $2.50
:adnum AD-A167851
:reference Also published in {\it IEEE Journal of Robotics and
Automation}, RA-3, 1987.
:keywords robotics, manipulator control, redundant manipulators,
manipulator dynamics
:abstract Methods for resolving kinematic redundancies of manipulators
by the effect on joint torque are examined. When the generalized
inverse is formulated in terms of accelerations and incorporated into
the dynamics, the effect of redundancy resolution on joint torque can
be directly reflected. One method chooses the joint acceleration
null-space vector to minimize joint torque in a least squares sense;
when the least squares is weighted by allowable torque range, the
joint torques tend to be kept within their limits. Contrasting methods
employing only the pseudoinverse with and without weighting by the
inertia matrix are presented. The results show an unexpected stability
problem during long trajectories for the null-space methods and for
the inertia-weighted pseudoinverse method, but rarely for the
unweighted pseudoinverse method. Evidently a whiplash action develops
over time that thrusts the endpoint off the intended path, and
extremely high torques are required to overcome these natural movement
dynamics.
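
[Illustrative sketch of the pseudoinverse family of resolutions contrasted in the memo, written at the level of generic joint rates rather than the memo's acceleration/torque formulation. The Jacobian, task velocity, and weighting matrix below are made-up examples.]

    # Redundancy resolution: unweighted pseudoinverse and a weighted
    # pseudoinverse (e.g. weighting by an inertia-like matrix M).
    import numpy as np

    def pinv_resolution(J, xdot):
        """Minimum-norm joint rates qdot with J qdot = xdot."""
        return np.linalg.pinv(J) @ xdot

    def weighted_pinv_resolution(J, xdot, W):
        """Joint rates minimizing qdot^T W qdot subject to J qdot = xdot."""
        W_inv = np.linalg.inv(W)
        return W_inv @ J.T @ np.linalg.inv(J @ W_inv @ J.T) @ xdot

    # 3-joint planar arm tracking a 2-D task velocity (one redundant DOF).
    J = np.array([[1.0, 0.8, 0.3],
                  [0.0, 0.6, 0.9]])
    xdot = np.array([0.2, -0.1])
    M = np.diag([2.0, 1.0, 0.5])           # hypothetical inertia weighting
    print(pinv_resolution(J, xdot))
    print(weighted_pinv_resolution(J, xdot, M))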

:aim 883
:author Michael Erdmann and Tom\'as Lozano-P\'erez
:asort Erdmann, M.A.; Lozano-P\'erez, T.
:title On Multiple Moving Objects
:date May 1986
:pages 47
:cost $3.75
:adnum AD-A196213
:keywords robotics, motion planning, collision avoidance,
configuration space, coordinated motion, autonomous robots
:abstract
This paper explores the motion planning problem for multiple moving
objects. The approach taken consists of assigning priorities to the
objects, then planning motions one object at a time. For each moving
object, the planner constructs a configuration space-time that
represents the time-varying constraints imposed on the moving object
by the other moving and stationary objects. The planner represents the
space-time approximately, using two-dimensional slices. The space-time
is then searched for a collision-free path. The paper demonstrates
this approach in two domains. One domain consists of translating
planar objects; the other domain consists of two-link planar
articulated arms.
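
[Toy sketch of the slice idea for one prioritized object, not the memo's planner: the time-varying constraints are represented as a stack of occupancy slices (one grid per time step), and a path is searched in space-time, with waiting allowed. Grid sizes and the moving obstacle are invented for illustration.]

    # Search configuration space-time slices for a collision-free path.
    from collections import deque

    def plan_in_space_time(slices, start, goal):
        """slices[t][y][x] is True where the configuration is blocked at time t."""
        T, H, W = len(slices), len(slices[0]), len(slices[0][0])
        moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]   # wait or step
        frontier = deque([(start, 0, [start])])
        seen = {(start, 0)}
        while frontier:
            (x, y), t, path = frontier.popleft()
            if (x, y) == goal:
                return path
            if t + 1 >= T:
                continue
            for dx, dy in moves:
                nx, ny = x + dx, y + dy
                if 0 <= nx < W and 0 <= ny < H and not slices[t + 1][ny][nx]:
                    if ((nx, ny), t + 1) not in seen:
                        seen.add(((nx, ny), t + 1))
                        frontier.append(((nx, ny), t + 1, path + [(nx, ny)]))
        return None   # no collision-free path within the time horizon

    # A one-cell obstacle sweeping across a 5x5 grid over 10 time steps.
    slices = [[[(x == min(t, 4) and y == 2) for x in range(5)] for y in range(5)]
              for t in range(10)]
    print(plan_in_space_time(slices, start=(0, 0), goal=(4, 4)))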

:aim 885
:author A.L. Yuille
:asort Yuille, A.L.
:title Shape from Shading, Occlusion and Texture
:date May 1987
:pages 55
:cost $3.75
:adnum AD-A184840
:keywords shape from shading, shape from occlusion, texture
:abstract Shape from Shading, Occlusion and Texture are three
important sources of depth information. We review and summarize work
done on these modules.

:aim 887
:author Chae H. An, Christopher G. Atkeson, John M. Hollerbach
:asort An, C.H.; Atkeson, C.G.; Hollerbach, J.M.
:title Estimation of Inertial Parameters of Rigid Body Links of
Manipulators
:date February 1986
:pages 18
:cost $2.50
:adnum AD-A167930
:reference Also published in {\it International Journal of Robotics
Research,} vol. 5, no. 3, 1986; and in {\it Robotics Research: the
Third International Symposium}.
:keywords robotics, manipulator dynamics, link identification,
manipulator control
:abstract A method of estimating the mass, the location of center of
mass, and the moments of inertia of each rigid body link of a robot
during general manipulator movement is presented. The algorithm is
derived from the Newton-Euler equations, and uses measurements of the
joint torques as well as the measurement and calculation of the
kinematics of the manipulator while it is moving. The identification
equations are linear in the desired unknown parameters, and a modified
least squares algorithm is used to obtain estimates of these
parameters. Some of the parameters, however, are not identifiable due
to the restricted motion of proximal links and the lack of full
force/torque sensing. The algorithm was implemented on the MIT Serial
Direct Drive Arm. A good match was obtained between joint torques
predicted from the estimated parameters and the joint torques computed
from motor currents.
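
[Illustrative sketch, assuming the regressor matrix A has already been built from the Newton-Euler formulation (not reproduced here): because the model is linear in the unknowns (e.g. 10 parameters per link: mass, first moments, inertia tensor entries), estimation reduces to least squares, and unidentifiable parameter combinations show up as negligible singular values of A. Sizes and the ridge term are invented for the example.]

    # Least-squares estimation of inertial parameters phi from joint torques,
    # given the linear model tau = A(q, qdot, qddot) @ phi.
    import numpy as np

    def estimate_parameters(A, tau, ridge=0.0):
        """Ridge-regularized least squares; ridge=0 gives plain least squares."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + ridge * np.eye(n), A.T @ tau)

    def identifiable_directions(A, tol=1e-8):
        """Right singular vectors of A with non-negligible singular values."""
        _, s, Vt = np.linalg.svd(A, full_matrices=False)
        return Vt[s > tol * s.max()]

    rng = np.random.default_rng(2)
    A = rng.normal(size=(500, 10))
    A[:, 9] = A[:, 0]                 # make one parameter combination unidentifiable
    tau = A @ rng.normal(size=10) + 0.01 * rng.normal(size=500)
    print(estimate_parameters(A, tau, ridge=1e-6).shape)
    print(len(identifiable_directions(A)), "identifiable directions out of 10")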

:aim 888
:author Norberto Grzywacz and Alan Yuille
:asort Grzywacz, N.M.; Yuille, A.L.
:title Massively Parallel Implementations of Theories for Apparent
Motion
:date June 1987
:pages 38
:cost $3.25
:adnum AD-A185841
:keywords analog networks, rigidity, 3-D structure, vision
:abstract
We investigate two ways of solving the correspondence problem for
motion using the assumptions of minimal mapping and rigidity.
Massively parallel analog networks are designed to implement these
theories.  Their effectiveness is demonstrated with mathematical
proofs and computer simulations.  We discuss relevant psychophysical
experiments.  

:aim 890
:author Gary L. Drescher
:asort Drescher, G.L. 
:title Genetic AI: Translating Piaget into LISP
:date February 1986
:pages 17
:cost $2.50
:adnum AD-A174603
:keywords machine learning, constructivism, Piaget
:abstract
This report presents a constructivist model of human cognitive development
during infancy.  According to constructivism, the elements of mental
representation -- even such basic elements as the concept of physical object --
are constructed afresh by each individual, rather than being innately supplied.
I propose a (partially specified, not yet implemented) mechanism, the Schema
Mechanism.  This mechanism is intended to achieve a series of cognitive
constructions characteristic of infants' sensorimotor-stage development,
primarily as described by Piaget.  In reference to Piaget's ``genetic
epistemology'', I call this approach genetic AI -- ``genetic'' not in the sense
of genes, but in the sense of genesis: development from the point of origin.
I present a hypothetical scenario in which the Schema Mechanism, starting with
a set of sensory and motor primitives as its only representational vocabulary,
invents for itself an approximation to the idea of external, persistent,
perception-independent physical objects.  This invention progresses through
several levels of refinement, which parallel infants' development of the
physical-object concept from Piaget's first through fifth sensorimotor stages.

:aim 893
:author Walter Hamscher and Randall Davis
:asort Hamscher, W.; Davis, R.
:title Issues in Model Based Troubleshooting
:date March 1987
:pages 28
:cost $3.25
:adnum AD-A183617
:abstract
To determine why something has stopped working, it's helpful to know
how it was supposed to work in the first place. This simple fact
underlies recent work on a number of systems that do diagnosis from
knowledge about the internal structure and behavior of components of
the malfunctioning device. Recently much work has been done in this
vein in many domains with an apparent diversity of techniques. But the
variety of domains and the variety of computational mechanisms used to
implement these systems tend to obscure two important facts. First,
existing programs have similar mechanisms for generating and testing
fault hypotheses. Second, most of these systems have similar built-in
assumptions about both the devices being diagnosed and their failure
modes; these assumptions in turn limit the generality of the programs.
The purpose of this paper is to identify the problems and non-problems
in model based troubleshooting.

:aim 894
:author Eric Sven Ristad
:asort Ristad, E.S.
:title Computational Complexity of Current GPSG Theory
:date April 1986
:pages 24
:cost $3.25
:adnum AD-A174329
:keywords linguistic models, GPSG, computational complexity,
intractability
:abstract
An important goal of computational linguistics has been to use
linguistic theory to guide the construction of computationally
efficient real-world natural language processing systems. At first
glance, generalized phrase structure grammar (GPSG) appears to be a
blessing on two counts. First, the precise formalisms of GPSG might be
a direct and transparent guide for parser design and implementation.
Second, since GPSG has weak context-free generative power and
context-free languages can be parsed in $O(n^3)$ by a wide range of
algorithms, GPSG parsers would appear to run in polynomial time. This
widely-assumed GPSG ``efficient parsability'' result is misleading: here
we prove that the universal recognition problem for current GPSG
theory is exponential-polynomial time hard, and assuredly intractable.
The paper pinpoints sources of complexity (e.g. metarules and the
theory of syntactic features) in the current GPSG theory and concludes
with some linguistically and computationally motivated restrictions on GPSG.
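
[Background sketch: the $O(n^3)$ figure above is the cost of recognizing a fixed context-free language, e.g. with the CYK algorithm over a grammar in Chomsky normal form; the memo's point is that this does not bound the universal recognition problem, where the GPSG grammar is part of the input. The toy grammar below is invented for illustration.]

    # CYK recognition for a CNF grammar: grammar maps a nonterminal to a list
    # of right-hand sides, each a single terminal or a pair of nonterminals.
    def cyk_recognize(grammar, start, tokens):
        n = len(tokens)
        table = [[set() for _ in range(n + 1)] for _ in range(n)]
        for i, tok in enumerate(tokens):
            for lhs, rhss in grammar.items():
                if [tok] in rhss:
                    table[i][1].add(lhs)
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                for split in range(1, span):
                    for lhs, rhss in grammar.items():
                        for rhs in rhss:
                            if (len(rhs) == 2 and rhs[0] in table[i][split]
                                    and rhs[1] in table[i + split][span - split]):
                                table[i][span].add(lhs)
        return start in table[0][n]

    grammar = {
        "S":  [["NP", "VP"]],
        "NP": [["Det", "N"]],
        "VP": [["V", "NP"]],
        "Det": [["the"]],
        "N":  [["dog"]],
        "V":  [["saw"]],
    }
    print(cyk_recognize(grammar, "S", "the dog saw the dog".split()))  # True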

:aim 895
:author Eric Sven Ristad
:asort Ristad, E.S.
:title Defining Natural Language Grammars in GPSG
:date April 1986
:pages 11
:cost $2.50
:adnum AD-A174531
:keywords GPSG, undecidability, natural language
:abstract
This paper is a formal analysis of whether generalized
phrase structure grammar's (GPSG) weak context-free generative power
will allow it to achieve three of its central goals: (1) to
characterize all and only the natural language grammars, (2) to
algorithmically determine membership and generative power consequences
of GPSGs, and (3) to embody the universalism of natural language
entirely in the formal system.  I prove that ``$=\Sigma^*$?'' is undecidable
for GPSGs and, on the basis of this result and the unnaturalness of
$\Sigma^*$, I argue that GPSG's three goals and its weak context-free
generative power conflict with each other: there is no algorithmic way
of knowing whether any given GPSG generates a natural language or an
unnatural one. The paper concludes with a diagnosis of the result and
suggests that the problem might be met by abandoning the weak
context-free framework and assuming substantive constraints.

:aim 896
:author Tom\'as Lozano-P\'erez
:asort Lozano-P\'erez, T.
:title A Simple Motion Planning Algorithm for General Robot
Manipulators
:date June 1986
:pages 31
:cost $3.25
:adnum AD-A183816
:keywords robotics, motion planning
:abstract
This paper presents a simple and efficient algorithm, using
configuration space, to plan collision-free motions for general
manipulators.  We describe an implementation of the algorithm for
manipulators made up of revolute joints.  The configuration-space
obstacles for an n degree-of-freedom manipulator are approximated by
sets of n-1 dimensional slices, recursively built up from one
dimensional slices.  This obstacle representation leads to an
efficient approximation of the free space outside of the
configuration-space obstacles.

:aim 897
:author J. Marroquin, S. Mitter, T. Poggio
:asort Marroquin, J.L.; Mitter, S.; Poggio, T.
:title Probabilistic Solution of Ill-Posed Problems in Computational
Vision
:date March 1987
:pages 40
:cost $3.25
:adnum AD-A183807
:keywords ill-posed problems, regularization, computer vision,
stochastic methods, SIMD architecture
:abstract We formulate several problems in early vision as inverse
problems. Among the solution methods we review standard regularization
theory, discuss its limitations, and present new stochastic (in
particular, Bayesian) techniques based on Markov Random Field models
for their solution. We derive efficient algorithms and describe
parallel implementations on digital parallel SIMD architectures, as
well as a new class of parallel hybrid computers that mix digital with
analog components.
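
[Toy instance of the stochastic approach, not the memo's algorithms: a Gibbs sampler with an annealing schedule restores a binary image under an Ising-style MRF prior plus a data-fidelity term. The parameter values and test image are assumptions for illustration.]

    # Gibbs sampling with simulated annealing for MAP restoration of a
    # binary image under an Ising MRF prior.
    import numpy as np

    def restore(noisy, beta=1.5, lam=2.0, sweeps=30, seed=0):
        rng = np.random.default_rng(seed)
        x = noisy.copy()                      # spins in {-1, +1}
        H, W = x.shape
        for sweep in range(sweeps):
            T = 4.0 * 0.9 ** sweep            # annealing temperature schedule
            for i in range(H):
                for j in range(W):
                    nb = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                             if 0 <= a < H and 0 <= b < W)
                    # Energy difference between x[i, j] = +1 and x[i, j] = -1.
                    delta = -2.0 * (beta * nb + lam * noisy[i, j])
                    p_plus = 1.0 / (1.0 + np.exp(delta / T))
                    x[i, j] = 1 if rng.random() < p_plus else -1
        return x

    clean = np.ones((32, 32), dtype=int); clean[:, 16:] = -1
    flips = np.random.default_rng(1).random((32, 32)) < 0.2
    noisy = np.where(flips, -clean, clean)
    print("errors after restoration:", int((restore(noisy) != clean).sum()))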

:aim 898
:author Kenneth W. {Haase, Jr.}
:asort Haase, K.W.
:title Discovery Systems
:date April 1986
:pages 20
:cost $2.50
:adnum AD-A167852
:keywords discovery, learning, Eurisko, AM, subsumption, lattices
:abstract Cyrano is a thoughtful reimplementation of Lenat's
controversial Eurisko program, designed to perform automated discovery
and concept formation in a variety of technical fields. The ``thought''
in the reimplementation has come from several directions: an appeal to
{\it basic principles}, which led to identifying constraints of
modularity and consistency on the design of discovery systems; an
appeal to {\it transparency}, which led to collapsing more and more of
the control structure into the representation; and an appeal to
{\it accountability}, which led to the explicit specification of
dependencies in the concept formation process. The process of
reimplementing Lenat's work has already revealed several insights into
the nature of Eurisko-like systems (which I call {\it inquisitive}
systems) as search processes which dynamically reconfigure their
search space by the formation of new concepts and representations.
This insight reveals requirements for modularity and ``consistency'' in
the definition of new concepts and representations.

:aim 899
:author Rodney Brooks
:asort Brooks, R.A.
:title Achieving Artificial Intelligence Through Building Robots
:date May 1986
:pages 12
:cost $2.50
:adnum AD-A174364
:keywords artificial intelligence, robotics
:abstract We argue that generally accepted methodologies of Artificial
Intelligence research are limited in the proportion of human level
intelligence they can be expected to emulate. We argue that the
currently accepted decompositions and static representations used in
such research are wrong. We argue for a shift to a process based model,
with a decomposition based on task achieving behaviors as the
organizational principle. In particular we advocate building robotic
insects.  

:aim 902
:author Richard H. Lathrop, Teresa A. Webster and Temple F. Smith
:asort Lathrop, R.H.; Webster, T.A.; Smith, T.F.
:title ARIADNE: Pattern-Directed Inference and Hierarchical
Abstraction in Protein Structure Recognition
:date May 1987
:pages 24
:cost $3.25
:keywords pattern recognition, expert systems, protein structure
:abstract
There are many situations in which a very detailed low-level description
encodes, through a hierarchical organization, a recognizable
higher-order pattern.  The macro-molecular structural conformations of
proteins exhibit higher order regularities whose recognition is
complicated by many factors.  ARIADNE searches for similarities between
structural descriptors and hypothesized protein structure at levels more
abstract than the primary sequence, based on differential similarity to
rule antecedents and the controlled use of tentative higher-order
structural hypotheses.  Inference is grounded solely in knowledge
derivable from the primary sequence, and exploits secondary structure
predictions. A novel alignment and functional domain identification of
the aminoacyl-tRNA synthetases was proposed using this system.

:aim 903
:author Richard H. Lathrop, Robert J. Hall, Robert S. Kirk
:asort Lathrop, R.H.; Hall, R.J.; Kirk, R.S.
:title Functional Abstraction From Structure in VLSI Simulation Models
:date May 1987
:pages 23
:cost $3.25
:adnum AD-A183647
:keywords VLSI circuits, simulation, functional models, temporal reasoning
:abstract High-level functional (or behavioral) simulation models are
difficult, time-consuming, and expensive to develop.  We report on a
method for automatically generating the program code for a high-level
functional simulation model.  The high-level model is produced
directly from the program code for the circuit components' functional
models and a netlist description of their connectivity.  A prototype
has been implemented in LISP for the SIMMER functional simulator.

:aim 907
:author Charles Rich and Richard C. Waters
:asort Rich, C.; Waters, R.C.
:title Toward a Requirement Apprentice: On the Boundary Between
Informal and Formal Specifications
:date July 1986
:pages 25
:cost $3.25
:adnum AD-A183631
:keywords Programmer's Apprentice, knowledge acquisition,
requirements, clich\'es, informality, specification
:abstract
Requirements acquisition is one of the most important and least well
supported parts of the software development process. The Requirements
Apprentice (RA) is intended to support a human analyst in the
earliest phases of creating a requirement, in which {\it
incompleteness}, {\it ambiguity}, and {\it contradiction} are
inevitable features. From an artificial intelligence perspective, the
central problem the RA faces is one of {\it knowledge acquisition}.
The RA will rely on a variety of techniques, including
dependency-directed reasoning, hybrid knowledge representation, and
the reuse of common forms ({\it clich\'es}). The Requirements
Apprentice is being developed as part of a long-term effort to create
an intelligent assistant for all aspects of software development.

:aim 909
:author Anya Hurlbert and Tomaso Poggio
:asort Hurlbert, A.; Poggio, T.
:title Learning a Color Algorithm from Examples
:date June 1987
:pages 30
:cost $3.25
:adnum AD-A184385
:keywords computer vision, color constancy, learning, regularization,
optimal estimation, pseudoinverse
:abstract
We show that a color algorithm capable of separating illumination from
reflectance in a Mondrian world can be learned from a set of examples.
The learned algorithm is equivalent to filtering the image data -- in
which reflectance and illumination are mixed -- through a
center-surround receptive field in individual chromatic channels. The
operation resembles the ``retinex'' algorithm recently proposed by
Edwin Land. This result is a specific instance of our earlier results
that a standard regularization algorithm can be learned from examples.
It illustrates that the natural constraints needed to solve a problem
in inverse optics can be extracted directly from a sufficient set of
input data and the corresponding solutions. The learning procedure has
been implemented as a parallel algorithm on the Connection Machine System.
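
[Illustrative sketch of the pseudoinverse-learning idea in its simplest form, not the authors' training set or operator: a linear map is estimated from example input/output pairs via the Moore-Penrose pseudoinverse. Sizes and noise level below are arbitrary.]

    # Learn a linear operator L mapping inputs (columns of X) to desired
    # outputs (columns of Y): L = Y X^+ minimizes ||L X - Y||_F.
    import numpy as np

    rng = np.random.default_rng(0)
    L_true = rng.normal(size=(64, 64))       # stand-in for the "true" operator
    X = rng.normal(size=(64, 500))           # 500 example inputs
    Y = L_true @ X + 0.01 * rng.normal(size=(64, 500))

    L_hat = Y @ np.linalg.pinv(X)
    print("relative error:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))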

:aim 910
:author Steven D. Eppinger and Warren P. Seering
:asort Eppinger, S.D.; Seering, W.P.
:title On Dynamic Models of Robot Force Control
:date July 1986
:pages 15
:cost $2.50
:reference C.B.I.P. Memo No. 025
:keywords robot dynamics, robot modeling, force control, dynamics of
force control
:abstract For precise robot control, endpoint compliance strategies
utilize feedback from a force sensor located near the tool/workpiece
interface. Such endpoint force control systems have been observed in
the laboratory to be limited to unsatisfactory closed-loop
performance. This paper discusses the particular dynamic properties of
robot systems which can lead to instability and limit performance. A
series of lumped-parameter models is developed in an effort to predict
the closed-loop dynamics of a force-controlled single-axis arm. The
models include some effects of robot structural dynamics, sensor
compliance, and workpiece dynamics. The qualitative analysis shows
that the robot dynamics contribute to force-controlled instability.
Recommendations are made for models to be used in control system design.

:aim 911
:author David McAllester and Ramin Zabih
:asort McAllester, D.A.; Zabih, R.
:title Boolean Classes
:date September 1986
:pages 15
:cost $2.50
:adnum AD-A183856
:keywords object-oriented programming, inheritance, class hierarchy,
data types, propositional inference
:abstract 
Object-oriented programming languages all involve the
notions of class and object.  We extend the notion of class so that any
Boolean combination of classes is also a class.  Boolean classes allow
greater precision and conciseness in naming the class of objects
governed by a particular method.  A class can be viewed as a predicate
which is either true or false of any given object.  Unlike predicates,
however, classes have an inheritance hierarchy which is known at compile
time.  Boolean classes extend the notion of class, making classes more
like predicates, while preserving the compile time computable
inheritance hierarchy.
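
[Minimal sketch of classes-as-predicates closed under Boolean combination; it shows only the membership-test side, not the compile-time hierarchy analysis the memo emphasizes, and the class names are invented examples.]

    # Classes viewed as predicates over objects, closed under and/or/not.
    class BooleanClass:
        def __init__(self, pred):
            self.pred = pred
        def __call__(self, obj):               # membership test
            return self.pred(obj)
        def __and__(self, other):
            return BooleanClass(lambda o: self(o) and other(o))
        def __or__(self, other):
            return BooleanClass(lambda o: self(o) or other(o))
        def __invert__(self):
            return BooleanClass(lambda o: not self(o))

    Number  = BooleanClass(lambda o: isinstance(o, (int, float)))
    Integer = BooleanClass(lambda o: isinstance(o, int))
    # "Non-integer numbers": a class no single built-in class names.
    NonIntegerNumber = Number & ~Integer
    print(NonIntegerNumber(3.5), NonIntegerNumber(3))   # True False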

:aim 914
:author Christof Koch, Tomaso Poggio and Vincent Torre
:asort Koch, C.; Poggio, T.; Torre, V.
:title Computations in the Vertebrate Retina: Gain Enhancement,
Differentiation and Motion Discrimination
:date September 1986
:pages 17
:cost $2.50
:reference C.B.I.P. Memo No. 017
:keywords direction selectivity, retina, biophysics of computation
:abstract
The vertebrate retina, which provides the visual input to the brain
and its main interface with the outside world, is a very attractive
model system for approaching the question of the information
processing role of biological mechanisms of nerve cells.  It is as yet
impossible to provide a complete circuit diagram of the retina, but it
is now possible to identify a few simple computations that the retina
performs and to relate them to specific biophysical mechanisms and
circuit elements, on the basis of theoretical work, computer
simulations and experimental data. In this paper we consider three
operations carried out by most retinae: {\it amplification, temporal
differentiation and computation of the direction of motion of visual
patterns}.

:aim 915
:author A. Hurlbert and T. Poggio
:asort Hurlbert, A.; Poggio, T.
:title Visual Attention in Brains and Computers
:date September 1986 
:pages 7
:cost $2.50
:adnum AD-A183849
:keywords visual recognition, face recognition, parallel-serial
routines, attention
:abstract
Existing computer programs designed to perform visual recognition of
objects suffer from a basic weakness: the inability to spotlight
regions in the image that potentially correspond to objects of
interest. The brain's mechanisms of visual attention, elucidated by
psychophysicists and neurophysiologists, may suggest a solution to the
computer's problem of object recognition.

:aim 916
:author Alessandro Verri and Tomaso Poggio
:asort Verri, A.; Poggio, T.
:title Regularization Theory and Shape Constraints
:date September 1986
:pages 23
:cost $3.25
:adnum AD-A185844
:keywords regularization, early vision, constraints, mathematical
programming 
:abstract 
Many problems of early vision are ill-posed; to recover unique stable
solutions regularization techniques can be used. These techniques lead
to meaningful results, provided that solutions belong to suitable
compact sets.  Often some additional constraints on the shape or the
behavior of the possible solutions are available.  This note discusses
which of these constraints can be embedded in the classic theory of
regularization and how, in order to improve the quality of the
recovered solution. Connections with mathematical programming
techniques are also discussed. As a conclusion, regularization of
early vision problems may be improved by the use of some constraints
on the shape of the solution (such as monotonicity and upper and lower
bounds), when available.

:aim 917
:author Alessandro Verri and Tomaso Poggio
:asort Verri, A.; Poggio, T.
:title Motion Field and Optical Flow: Qualitative Properties
:date December 1986
:pages 32
:cost $3.25
:reference C.B.I.P. Memo No. 22
:adnum AD-A183727
:keywords motion, optical flow, qualitative approach,
structure from motion 
:abstract In this paper we show that the {\it optical flow}, a 2-D
field that can be associated with the variation of the image
brightness pattern, and the 2-D {\it motion field}, the projection on
the image plane of the 3-D velocity field of a moving scene, are in
general different, unless very special conditions are satisfied. The
optical flow, therefore, is ill-suited for computing structure from
motion and for reconstructing the 3-D velocity field, problems that
require an accurate estimate of the 2-D motion field. We then suggest
a different use of the optical flow. We argue that stable qualitative
properties of the 2-D motion field give useful information about the
3-D velocity field and the 3-D structure of the scene, and that they
can be usually obtained from the optical flow. To support this
approach we show how the (smoothed) optical flow and 2-D motion field,
interpreted as vector fields tangent to flows of planar dynamical
systems, may have the same qualitative properties from the point of
view of the theory of structural stability of dynamical systems.
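
[Background sketch, standard rather than specific to the memo: the optical flow is usually tied to the brightness pattern through the constraint $E_x u + E_y v + E_t = 0$, which determines only the flow component along the image gradient ("normal flow"). The two-frame example below is invented to show the computation.]

    # Normal flow from the brightness-constancy constraint: the component
    # along the image gradient has magnitude -E_t / |grad E|.
    import numpy as np

    def normal_flow(frame0, frame1, eps=1e-6):
        Ey, Ex = np.gradient(frame0.astype(float))     # spatial derivatives
        Et = frame1.astype(float) - frame0.astype(float)
        mag = np.sqrt(Ex**2 + Ey**2)
        vn = -Et / (mag + eps)                          # signed normal speed
        return vn * Ex / (mag + eps), vn * Ey / (mag + eps)

    # A bright square translating one pixel to the right between two frames.
    f0 = np.zeros((16, 16)); f0[6:10, 4:8] = 1.0
    f1 = np.zeros((16, 16)); f1[6:10, 5:9] = 1.0
    u, v = normal_flow(f0, f1)
    print(float(u[7, 8]) > 0)    # rightward normal flow on the leading edge: True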

:aim 919
:author Ellen C. Hildreth and Christof Koch
:asort Hildreth, E.; Koch, C.
:title The Analysis of Visual Motion: From Computational Theory to
Neuronal Mechanisms
:date December 1986
:pages 60
:cost $3.75
:adnum AD-A190388
:reference CBIP Memo. No. 21, also in {\it Annual Review of
Neuroscience}, vol. 10, 1987, pages 477-533.
:keywords motion analysis, 3-D vision, object segmentation, velocity
field, motion measurement, image analysis
:abstract
This paper reviews a number of aspects of visual motion analysis in
biological systems, from a computational perspective.  We illustrate
the kinds of insights that have been gained through computational
studies and how these observations can be integrated with experimental
studies from psychology and the neurosciences, to understand the
particular computations used by biological systems to analyze motion.
The particular areas of motion analysis that we discuss include early
motion detection and measurement, the optical flow computation, motion
correspondence, the detection of motion discontinuities, and the
recovery of three-dimensional structure from motion.

:aim 924
:author Mario Bertero, Tomaso Poggio, Vincent Torre
:asort Bertero, M.; Poggio, T.; Torre, V.
:title Ill-Posed Problems in Early Vision
:date May 1987
:pages 61
:cost $4.00
:keywords computational vision, regularization theory, 
inverse problems, ill-posed problems
:abstract
The first processing stage in computational vision, also called early
vision, consists in decoding 2D images in terms of properties of 3D
surfaces. Early vision includes problems such as the recovery of
motion and optical flow, shape from shading, surface interpolation and
edge detection. These are inverse problems, which are often ill-posed
or ill-conditioned.  We review here the relevant mathematical results
on ill-posed and ill-conditioned problems and introduce the formal
aspects of regularization theory in the linear and non-linear case.
More general stochastic regularization methods are also introduced.
Specific topics in early vision and their regularization are then
analyzed rigorously, characterizing existence, uniqueness and
stability of solutions.

:aim 927
:title Stereo and Eye Movement
:author Davi Geiger and Alan Yuille
:asort Geiger, D.; Yuille, A.L.
:date January 1988
:pages 35
:cost $3.25
:adnum AD-A193588
:keywords stereo, error analysis, eye movement, controlled movement
:abstract We describe a method to solve the stereo correspondence
using controlled eye (or camera) movements. These eye-movements
essentially supply additional image-frames which can be used to
constrain the stereo matching. Because the eye-movements are small,
traditional methods of stereo with multiple frames will not work. We
develop an alternative approach using a systematic analysis to define
a probability distribution for the errors. Our matching strategy then
matches the most probable points first, thereby reducing the ambiguity
for the remaining matches. We demonstrate this algorithm with several
examples. 
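
[Illustrative sketch of the "most probable points first" strategy in isolation; the memo derives the match probabilities from its error analysis of the eye movements, which is not reproduced here, so the candidate list below is invented.]

    # Greedy most-probable-first matching: commit to high-confidence matches,
    # then discard remaining candidates that conflict with them.
    def match_most_probable_first(candidates):
        """candidates: list of (left_feature, right_feature, probability)."""
        matched_left, matched_right, result = set(), set(), []
        for l, r, p in sorted(candidates, key=lambda c: -c[2]):
            if l not in matched_left and r not in matched_right:
                result.append((l, r, p))
                matched_left.add(l)
                matched_right.add(r)
        return result

    candidates = [("L1", "R1", 0.95), ("L1", "R2", 0.40),
                  ("L2", "R2", 0.80), ("L2", "R1", 0.30)]
    print(match_most_probable_first(candidates))
    # [('L1', 'R1', 0.95), ('L2', 'R2', 0.8)]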

:aim 928
:title Parallel Algorithms for Computer Vision on the Connection Machine
:author James J. Little
:asort Little, J.J.
:date November 1986
:pages 31
:cost $3.25
:adnum AD-A185842
:abstract
The Connection Machine is a fine-grained parallel computer having up
to 64K processors.  It supports both local communication among the
processors, which are situated in a two-dimensional mesh, and
high-bandwidth communication among processors at arbitrary locations,
using a message-passing network.  We present solutions to a set of
Image Understanding problems for the Connection Machine.  These
problems were proposed by DARPA to evaluate architectures for Image
Understanding systems, and are intended to comprise a representative
sample of fundamental procedures to be used in Image Understanding.
The solutions on the Connection Machine embody general methods for
filtering images, determining connectivity among image elements,
determining spatial relations of image elements and computing graph
properties, such as matchings and shortest paths.

:aim 930
:author J.R. Quinlan
:asort Quinlan, J.R.
:title Simplifying Decision Trees
:date December 1986
:pages 17
:cost $2.50
:adnum AD-A183615
:keywords induction, knowledge acquisition, production rules, pruning,
decision trees
:abstract
Many systems have been developed for constructing decision trees from
collections of examples.  Although the decision trees generated by
these methods are accurate and efficient, they often suffer the
disadvantage of excessive complexity that can render them
incomprehensible to experts.  It is questionable whether opaque
structures of this kind can be described as knowledge, no matter how
well they function.  This paper discusses techniques for simplifying
decision trees without compromising their accuracy.  Four methods are
described, illustrated, and compared on a test-bed of decision trees
from a variety of domains.

:aim 931
:author Shimon Ullman
:asort Ullman, S.
:title An Approach to Object Recognition: Aligning Pictorial
Descriptions
:date December 1986
:pages 58
:cost $3.75
:adnum AD-A184462
:keywords recognition, object recognition, alignment, vision
:abstract
This paper examines the problem of shape-based object recognition, and
proposes a new approach, the alignment of pictorial descriptions. The
first part of the paper reviews general approaches to visual object
recognition, and divides these approaches into three broad classes:
invariant properties methods, object decomposition methods, and
alignment methods.  The second part presents the alignment method.  In
this approach the recognition process is divided into two stages.  The
first determines the transformation in space that is necessary to
bring the viewed object into alignment with possible object-models.
The second stage determines the model that best matches the viewed
object.  The proposed alignment method also uses abstract descriptions,
but, unlike structural description methods, it uses them pictorially,
rather than in symbolic structural descriptions.

:aim 933A
:author Charles Rich and Richard C. Waters
:asort Rich, C.; Waters, R.C.
:title The Programmer's Apprentice: A Program Design Scenario
:date November 1987
:pages 46
:cost $3.75
:adnum AD-A183918
:keywords Programmer's Apprentice, automatic programming, software
engineering, program synthesis
:abstract A scenario is used to illustrate the capabilities of a
proposed Design Apprentice, focussing on the area of detailed,
low-level design. Given a specification, the Design Apprentice will be
able to make many of the design decisions needed to synthesize the
required program. The Design Apprentice will also be able to detect
various kinds of contradictions and omissions in a specification.

:aim 937
:author Daniel P. Huttenlocher and Shimon Ullman
:asort Huttenlocher, D.P.; Ullman, S.
:title Recognizing Rigid Objects by Aligning Them with an Image
:date January 1987
:pages 43
:cost $3.75
:adnum AD-A184253
:keywords object recognition, computer vision, alignment, 3-D from 2-D
recognition 
:abstract
This paper presents an approach to recognition where an object is
first {\it aligned} with an image using a small number of pairs of
model and image features, and then the aligned model is compared
directly against the image.  To demonstrate the method, we present
some examples of recognizing flat rigid objects with arbitrary
three-dimensional position, orientation, and scale, from a single
two-dimensional image, using a scale-space segmentation of edge
contours.  The method is
extended to the domain of non-flat objects as well.

:aim 939
:author Shahriar Negahdaripour
:asort Negahdaripour, S.
:title A Direct Method for Locating the Focus of Expansion
:date January 1987
:pages 28
:cost $3.25
:adnum AD-A185840
:keywords focus of expansion, optical flow, motion field, passive
navigation, structure from motion
:abstract
We address the problem of recovering the motion of a monocular
observer relative to a rigid scene. We do not make any assumptions
about the shapes of the surfaces in the scene, nor do we use estimates
of the optical flow or point correspondences. Instead, we exploit the
spatial gradient and the time rate of change of brightness over the
whole image and explicitly impose the constraint that the surface of
an object in the scene must be in front of the camera for it to be
imaged. 

:aim 940
:author Shahriar Negahdaripour
:asort Negahdaripour, S.
:title Ambiguities of a Motion Field
:date January 1987
:pages 20
:cost $2.50
:adnum AD-A181200
:keywords motion field, optical flow, quadratic surfaces, asymptotic
lines
:abstract We study the conditions under which a perspective motion
field can have multiple interpretations. Furthermore, we show that, in
most cases, the ambiguity in the interpretation of a motion field can
be resolved by imposing the physical constraint that depth is positive
over the image region onto which the surface projects.

:aim 941
:author Eric Saund
:asort Saund, E.
:title Dimensionality-Reduction Using Connectionist Networks
:date January 1987
:pages 26
:cost $3.25
:adnum AD-A183632
:keywords dimensionality-reduction, pattern matching, data
abstraction, connectionist network, backpropagation, self-organization
:abstract
This paper presents a method for using the self-organizing properties
of connectionist networks of simple computing elements to discover a
particular type of constraint in multidimensional data.  The method
performs dimensionality-reduction in a wide class of situations for
which an assumption of linearity need not be made about the underlying
constraint surface.  We present a scheme for representing the values
of continuous (scalar) variables in subsets of units.  The
backpropagation weight updating method for training connectionist
networks is extended by the use of auxiliary pressure in order to coax
hidden units into the prescribed representation for scalar-valued
variables.
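
[Loose present-day analogue, not the memo's representation scheme or its auxiliary-pressure term: a small linear autoencoder trained by gradient descent performs dimensionality-reduction by squeezing data that lies near a low-dimensional subspace through a narrow hidden layer. All sizes and rates below are assumptions.]

    # A tiny linear autoencoder: 10-D inputs through a 2-D bottleneck,
    # trained by gradient descent on squared reconstruction error.
    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(1000, 2))                # data really is 2-D
    X = latent @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(1000, 10))

    W_enc = 0.1 * rng.normal(size=(10, 2))
    W_dec = 0.1 * rng.normal(size=(2, 10))
    lr = 0.01
    for step in range(2000):
        H = X @ W_enc                                  # hidden (bottleneck) codes
        err = H @ W_dec - X                            # reconstruction error
        grad_dec = H.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    print("mean squared reconstruction error:", float((err ** 2).mean()))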

:aim 945
:author Carl E. Hewitt
:asort Hewitt, C.
:title Offices are Open Systems
:date February 1987
:pages 17
:cost $2.50
:reference {\it ACM Transactions on Office Information Systems}, Vol.
4, No. 3, July 1986, pages 271-287.
:keywords management, decision making, logic, microtheories,
negotiation, offices, open systems
:abstract This paper takes a prescriptive stance on how to establish
the information-processing foundations for taking action and making
decisions in office work from an open system perspective. We propose
{\it due process} as a central activity in organizational information
processing.

:aim 946
:author Alan Bawden
:asort Bawden, A.
:title Reification without Evaluation
:date June 1988
:pages 27
:cost $3.25
:adnum AD-A195928
:reference Proceedings of the 1988 ACM Conference on Lisp and
Functional Programming.
:keywords reification, reflection, 3-Lisp, introspection,
meta-representation, continuation passing
:abstract 
Constructing self-referential systems, such as Brian Smith's 3-Lisp
language, is actually more straightforward than you think.  Anyone can
build an infinite tower of processors (where each processor implements
the processor at the next level below) by employing some common sense
and one simple trick.  This paper presents a simple programming
language interpreter that illustrates how this can be done.  Given
this basically straightforward technique, processor towers might be
easily constructed for a wide variety of systems to enable them to
manipulate and reason about themselves.

:aim 947
:author Bonnie Jean Dorr
:asort Dorr, B.J.
:title Principle-Based Parsing For Machine Translation
:date December 1987
:pages 17
:cost $2.50
:reference Also in {\it Proceedings of the Ninth Annual Conference of
the Cognitive Science Society}, University of Washington, Seattle, WA,
1987.
:adnum AD-A199183
:keywords natural language processing, interlingual machine
translation, parsing, principles vs. rules, co-routine design,
linguistic constraints
:abstract
This report shows how a principle-based parser with a ``co-routine''
design improves parsing for translation.  The parser consists of a
skeletal structure-building mechanism that operates in conjunction with
a linguistically based constraint module, passing control back and forth
until a set of underspecified skeletal phrase-structures is converted
into a fully instantiated parse tree.  The modularity of the parsing
design accommodates linguistic generalization, reduces the grammar size,
allows extension to other languages, and is compatible with studies of
human language processing.

:aim 948
:author Steven D. Eppinger and Warren P. Seering
:asort Eppinger, S.D.; Seering, W.P.
:title Understanding Bandwidth Limitations in Robot Force Control
:date August 1987
:pages 19
:cost $2.50
:adnum AD-A195927
:reference Research originally presented at IEEE International
Conference on Robotics and Automation, Raleigh, NC, April 1987.
:keywords robot dynamics, robot force control, robot control
:abstract 
This paper provides an analytical overview of the dynamics involved in
force control.  Models are developed which demonstrate, for the
one-axis explicit force control case, the effects on system
closed-loop bandwidth of a) robot system dynamics that are not usually
considered in the controller design; b) drive-train and task
nonlinearities; and c) actuator and controller dynamics.  The merits
and limitations of conventional solutions are weighed, and some new
solutions are proposed.  Conclusions are drawn which give insights
into the relative importance of the effects discussed.

:aim 949
:author Richard C. Waters
:asort Waters, R.C.
:title Program Translation Via Abstraction and Reimplementation
:date December 1986
:pages 43
:cost $3.75
:adnum AD-A185845
:reference Also in {\it IEEE Transactions on Software Engineering\/},
14(8):1207--1228, August 1988.
:keywords program translation, compilation, program analysis,
artificial intelligence, Programmer's Apprentice
:abstract
Essentially all program translators (both source-to-source translators
and compilers) operate via transliteration and refinement.  This
approach is fundamentally limited in the quality of the output it can
produce.  In particular, it tends to be insufficiently sensitive to
global features of the source program and too sensitive to irrelevant
local details.  This paper presents the alternate translation paradigm
of abstraction and reimplementation, which is one of the goals of the
Programmer's Apprentice project.  A translator has been constructed
which translates Cobol programs into Hibol (a very high level,
business data processing language).  A compiler has been designed
which generates extremely efficient PDP-11 object code for Pascal
programs.

:aim 950
:author Kenneth Man-kam Yip
:asort Yip, K.
:title Extracting Qualitative Dynamics from Numerical Experiments
:date March 1987
:pages 18
:cost $2.50
:reference  Proceedings AAAI 1987.
:adnum AD-A183633
:keywords qualitative physics, qualitative reasoning, dynamical
systems, nonlinear dynamics, bifurcation
:abstract
The Phase Space is a powerful tool for representing and reasoning
about the qualitative behavior of nonlinear dynamical systems.
Significant physical phenomena of the dynamical system -- periodicity,
recurrence, stability and the like -- are reflected by outstanding
geometric features of the trajectories in the phase space.  This paper
presents an approach for the automatic reconstruction of the full
dynamical behavior from the numerical results.  The approach exploits
knowledge of Dynamical Systems Theory which, for certain classes of
dynamical systems, gives a complete classification of all the possible
types of trajectories, and a list of bifurcation rules which govern
the way trajectories can fit together in the phase space.  The
approach is applied to an important class of dynamical systems: the
area-preserving maps, which often arise from the study of Hamiltonian
systems.  Finally, the paper describes an implemented program which
solves the interpretation problem by using techniques from
computational geometry and computer vision.
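
[Illustrative sketch of generating the kind of input such a program interprets, not the program itself: the standard (Chirikov) map is a classic area-preserving map whose orbits mix periodic, quasiperiodic, and chaotic trajectories. Initial conditions below are arbitrary.]

    # Orbits of the standard map, an area-preserving map:
    #   p' = p + K sin(theta),  theta' = theta + p'   (both mod 2*pi)
    import math

    def standard_map_orbit(theta, p, K=0.97, steps=2000):
        orbit = []
        for _ in range(steps):
            p = (p + K * math.sin(theta)) % (2 * math.pi)
            theta = (theta + p) % (2 * math.pi)
            orbit.append((theta, p))
        return orbit

    # Three arbitrary initial conditions; downstream analysis (as in the
    # memo) would classify the resulting trajectories by their geometry.
    for theta0, p0 in [(0.1, 0.0), (2.0, 1.0), (3.0, 3.0)]:
        orbit = standard_map_orbit(theta0, p0)
        print((theta0, p0), "->", orbit[-1])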

:aim 951
:author Daniel S. Weld
:asort Weld, D.S.
:title Comparative Analysis
:date November 1987
:pages 46
:cost $3.75
:adnum AD-A190556
:reference Also ``Comparative Analysis,'' {\it Artificial Intelligence},
:keywords comparative analysis, qualitative physics, qualitative
simulation, DQ analysis, QSIM, perspectives, explanation
:abstract
Comparative analysis is the problem of predicting how a system will
react to perturbations in its parameters, and why. For example,
comparative analysis could be asked to explain why the period of an
oscillating spring/block system would increase if the mass of the
block were larger.  This paper formalizes the problem of comparative
analysis and presents a technique, differential qualitative (DQ)
analysis, which solves the task, providing explanations suitable for
use by design systems, automated diagnosis, intelligent tutoring
systems, and explanation based generalization.  DQ analysis uses
inference rules to deduce qualitative information about the relative