[mod.techreports] mitai7 tech reports

E1AR0002@SMUVM1.BITNET (02/28/86)

:aim 806
:author John Canny
:asort Canny, J.
:title Collision Detection for Moving Polyhedra
:date October 1984
:cost $2.25
:adnum AD-A148961
:pages 17
:keywords collision detection, collision avoidance, motion planning,
robotics, geometric modelling
:abstract
We consider the problem of moving a three-dimensional solid object
among polyhedral obstacles. The traditional formulation of
configuration space for this problem uses three translational
parameters and three {\it angles} (typically Euler angles), and the
constraints between the object and obstacles involve transcendental
functions. We show that a quaternion representation of rotation yields
constraints which are purely algebraic in a higher-dimensional space.
By simple manipulation, the constraints may be projected down into a
six-dimensional space with no increase in complexity. Using this
formulation, we derive an efficient {\it exact} intersection test for
an object which is translating and rotating among obstacles.
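A rough illustration of why quaternions algebraize the problem (our
notation, not the memo's): a unit quaternion $q = (q_0, q_1, q_2, q_3)$
rotates a point $x$ through the matrix
$$ R(q) = \begin{pmatrix}
q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-q_0q_3) & 2(q_1q_3+q_0q_2) \\
2(q_1q_2+q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3-q_0q_1) \\
2(q_1q_3-q_0q_2) & 2(q_2q_3+q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2
\end{pmatrix}, $$
whose entries are quadratic polynomials in $q$. A contact constraint
such as $n \cdot (R(q)\,x + t) \le d$ is therefore purely polynomial in
$(q, t)$, subject only to the algebraic side condition $\|q\| = 1$,
with no transcendental functions anywhere.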

:aim 809
:author Ronald S. Fearing
:asort Fearing, R.S.
:title Simplified Grasping and Manipulation with Dextrous Robot Hands
:date November 1984
:cost $1.50
:adnum AD-A148962
:pages 17
:keywords automatic grasping, force control, stable grasping, robot
hands, regrasping objects, re-orienting objects, dextrous hands
:abstract
A method is presented for stably grasping two-dimensional polygonal
objects with a dextrous hand when object models are not available.
Basic constraints on object vertex angles are found for feasible
grasping with two fingers. Local tactile information can be used to
determine the finger motion that will reach feasible grasping
locations. With an appropriate choice of finger stiffnesses, a hand
can automatically grasp these objects with two fingers. The bounded
slip of a part in a hand is shown to be valuable for adapting the
fingers and object to a stable situation. Examples are given to show
the ability of this grasping method to accommodate disturbance forces
and to perform simple part reorientations and regrasping operations.

:aim 811
:author Richard J. Doyle
:asort Doyle, R.J.
:title Hypothesizing and Refining Causal Models
:date December 1984
:cost $3.00
:adnum AD-A158165
:pages 108
:keywords learning, causal reasoning, qualitative reasoning, teleological
reasoning, theory formation, planning, analogy, quantities.
:abstract An important common sense competence is the ability to hypothesize
causal relations. This paper presents a set of constraints which make
the problem of formulating causal hypotheses about simple physical
systems a tractable one. The constraints include: 1) a temporal and
physical proximity requirement, 2) a set of abstract causal
explanations for changes in physical systems in terms of dependences
between quantities, 3) a teleological assumption that dependences in
designed physical systems are functions.
    These constraints were embedded in a learning system which was
tested in two domains: a sink and a toaster. The learning system
successfully generated and refined naive causal models of these simple
physical systems.
    The causal models which emerge from the learning process support
causal reasoning - explanation, prediction, and planning. Inaccurate
predictions and failed plans in turn indicate deficiencies in the
causal models and the need to rehypothesize. Thus learning supports
reasoning which leads to further learning. The learning system makes
use of standard inductive rules of inference as well as the
constraints on causal hypotheses to generalize its causal models.
    Finally, a simple example involving an analogy illustrates another way
to repair incomplete causal models.

:aim 812
:author G. Edward Barton, Jr.
:asort Barton, G.E., Jr.
:title On the Complexity of ID/LP Parsing
:date December 1984
:cost $2.25
:adnum AD-A158211
:pages 22
:keywords parsing, ID/LP grammars, context free grammar, NP-completeness,
natural language, Earley's algorithm, GPSG, UCFG parsing.
:abstract Recent linguistic theories cast surface complexity as the result of
interacting subsystems of constraints.  For instance, the ID/LP
grammar formalism separates constraints on immediate dominance from
those on linear order.  Shieber (1983) has shown how to carry out
direct parsing of ID/LP grammars.  His algorithm uses ID and LP
constraints directly in language processing, without expanding them
into a context-free ``object grammar.''  This report examines the
computational difficulty of ID/LP parsing.
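As a hedged sketch of the core admissibility check (ours, not
Shieber's algorithm): an ID rule licenses its right-hand side as a
multiset, and LP constraints then restrict the admissible orderings.

    from collections import Counter

    def satisfies_id_lp(children, id_rhs, lp_pairs):
        # ID: the children must be exactly the multiset of the rule's RHS.
        if Counter(children) != Counter(id_rhs):
            return False
        # LP: for each pair (a, b), every a must precede every b.
        for a, b in lp_pairs:
            last_a = max((i for i, c in enumerate(children) if c == a),
                         default=-1)
            first_b = min((i for i, c in enumerate(children) if c == b),
                          default=len(children))
            if last_a > first_b:
                return False
        return True

    # S -> NP, VP (unordered) with the LP constraint NP < VP:
    print(satisfies_id_lp(["NP", "VP"], ["NP", "VP"], [("NP", "VP")]))  # True
    print(satisfies_id_lp(["VP", "NP"], ["NP", "VP"], [("NP", "VP")]))  # False

The multiset test is exactly where a parser can be driven to consider
very many candidate expansions, which is the source of the difficulty
the report analyzes.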

:aim 813
:author Berthold K.P. Horn and Michael J. Brooks
:asort Horn, B.K.P.; Brooks, M.J.
:title The Variational Approach to Shape from Shading
:date March 1985
:cost $2.25
:pages 33
:keywords calculus of variations, parallel iteration, regularization,
shading, shape, shape from shading
:abstract
We develop a systematic approach to the discovery of parallel
iterative schemes for solving the shape-from-shading problem on a
grid. A standard procedure for finding such schemes is outlined, and
subsequently used to derive several new ones. The shape-from-shading
problem is known to be mathematically equivalent to a nonlinear
first-order partial differential equation in surface elevation.  To
avoid the problems inherent in methods used to solve such equations,
we follow previous work in reformulating the problem as one of finding
a surface orientation field that minimizes the integral of the
brightness error. The calculus of variations is then employed to
derive the appropriate Euler equations on which iterative schemes can
be based. Different schemes result if one uses different parameters to
describe surface orientation. We derive two new schemes, using unit
surface normals, that facilitate the incorporation of the occluding
boundary information.  These schemes, while more complex, have several
advantages over previous ones.
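In outline, with notation of our own choosing: writing $E$ for image
brightness, $R$ for the reflectance map, and $n(x,y)$ for the unit
surface normal, one such variational statement is
$$ \min_n \iint \bigl(E - R(n)\bigr)^2\, dx\, dy
   \;+\; \lambda \iint \bigl(\|n_x\|^2 + \|n_y\|^2\bigr)\, dx\, dy,
   \qquad \|n\| = 1. $$
Discretizing the resulting Euler equations on a grid yields parallel
updates in which each new $n_{ij}$ is a local average of its neighbors
plus a correction proportional to
$(E_{ij} - R(n_{ij}))\,\partial R/\partial n$, renormalized to unit
length; the treatment of the occluding boundary is what distinguishes
the new schemes.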

:aim 815
:author Kenneth Man-Kam Yip
:asort Yip, K.
:title Tense, Aspect and Cognitive Representation of Time
:date December 1984
:cost $2.25
:adnum AD-A159306
:pages 26
:keywords temporal logic, linguistic constraints, learnability, tense
and aspect, processing constraints, markedness
:abstract This paper explores the relationships between a
computational theory of temporal representation (as developed by James
Allen) and a formal linguistic theory of tense (as developed by
Norbert Hornstein) and aspect. It aims to provide explicit answers to
four fundamental questions: (1) what is the computational justification
for the primitives of a linguistic theory; (2) what is the
computational explanation of the formal grammatical constraints; (3)
what are the processing constraints imposed on the learnability and
markedness of these theoretical constructs; and (4) what are the
constraints that a linguistic theory imposes on representation.  We
show that one can effectively exploit the interface between the
language faculty and the cognitive faculties by using linguistic
constraints to determine restrictions on the cognitive representations
and {\it vice versa}. Three main results are obtained: (1) We derive
an explanation of an observed grammatical constraint on tense -- the
Linear Order Constraint -- from the information monotonicity property
of the constraint propagation algorithm of Allen's temporal system;
(2) We formulate a principle of markedness for the basic tense
structures based on the computational efficiency of the temporal
representations; and (3) We show Allen's interval-based temporal
system is not arbitrary, but it can be used to explain independently
motivated linguistic constraints on tense and aspect interpretations.
We also claim that the methodology of research developed in this study
-- "cross-level" investigation of independently motivated formal
grammatical theory and computational models -- is a powerful paradigm
with which to attack representational problems in basic cognitive
domains, e.g. space, time, causality, etc.

:aim 816
:author Richard C. Waters
:asort Waters, R.C.
:title PP: A Lisp Pretty Printing System
:date December 1984
:cost $2.25
:adnum AD-A157092
:pages 37
:keywords pretty printing, formatted output, abbreviated output, LISP
:abstract
The PP system provides an efficient implementation of the Common Lisp
pretty printing function PPRINT. In addition, PP goes beyond ordinary
pretty printers by providing mechanisms which allow the user to
control the exact form of pretty printed output. This is done by
extending Lisp in two ways. First, several new FORMAT directives are
provided which support dynamic decisions about the placement of
newlines based on the line width available for output. Second, the
concept of print-self methods is extended so that it can be applied to
lists as well as to objects which can receive messages. Together,
these extensions support pretty printing of both programs and data
structures. The PP system also modifies the way that the Lisp printer
handles the abbreviation of output. The traditional mechanisms for
abbreviating lists based on nesting depth and length are extended so
that they automatically apply to every kind of structure without the
user having to take any explicit action when writing print-self
methods. A new abbreviation mechanism is introduced which can be used
to limit the total number of lines printed.

:aim 817
:author A. Hurlbert and T. Poggio
:asort Hurlbert, A.; Poggio, T.
:title Spotlight on Attention
:date April 1985
:cost $1.50
:pages 6
:keywords
:abstract We review some recent psychophysical, physiological and
anatomical data which highlight the important role of attention in
visual information processing, and discuss the evidence for a serial
spotlight of attention.  We point out the connections between the
questions raised by the spotlight model and computational results on
the intrinsic parallelism of several tasks in vision.

:aim 820
:author Michael J. Brooks and Berthold K.P. Horn
:asort Brooks, M.J.; Horn, B.K.P.
:title Shape and Source from Shading
:date January 1985
:cost $1.50
:pages 12
:keywords computer vision, source detection, shape from shading,
Lambertian surface
:abstract Well-known methods for solving the shape-from-shading problem
require knowledge of the reflectance map. Here we show how the
shape-from-shading problem can be solved when the reflectance map is
not available, but is known to have a given form with some unknown
parameters. This happens, for example, when the surface is known to be
Lambertian, but the direction to the light source is not known.  We
display an iterative algorithm which alternately estimates the surface
shape and the light source direction.  Use of the unit normal in the
parameterization of the reflectance map, rather than the gradient or
stereographic coordinates, simplifies the analysis. Our approach also
leads to an iterative scheme for computing shape from shading that
adjusts the current estimates of the local normals toward or away from
the direction of the light source. The amount of adjustment is
proportional to the current difference between the predicted and the
observed brightness. Generalizations to less constrained forms of
reflectance maps are also developed.
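A minimal sketch of such an alternation for the Lambertian case
(variable names, step size, and the handling of shadows are our
simplifications, not the memo's scheme):

    import numpy as np

    def estimate_source(n, E):
        # Least-squares source vector s from E ~ n . s, given the
        # current normal estimates n (one unit row vector per pixel).
        s, *_ = np.linalg.lstsq(n, E, rcond=None)
        return s

    def update_normals(n, E, s, step=0.1):
        # Move each normal toward or away from the source direction by
        # an amount proportional to observed-minus-predicted brightness.
        err = E - n @ s
        n = n + step * err[:, None] * s[None, :]
        return n / np.linalg.norm(n, axis=1, keepdims=True)

    # Alternate the two estimates from rough initial guesses, given
    # brightness E and normals n:
    #     for _ in range(50):
    #         s = estimate_source(n, E)
    #         n = update_normals(n, E, s)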

:aim 821
:author Shahriar Negahdaripour and Berthold K.P. Horn
:asort Negahdaripour, S.; Horn, B.K.P.
:title Direct Passive Navigation
:date February 1984
:cost $1.50
:pages 19
:keywords passive navigation, optical flow, structure and motion,
least squares, planar surfaces, non-linear equations, dual solution,
planar motion, field equations
:abstract In this paper, we show how to recover the motion of an observer
relative to a planar surface directly from image brightness
derivatives.  We do not compute the optical flow as an intermediate
step. We derive a set of nine non-linear equations using a
least-squares formulation. A simple iterative scheme allows us to find
either of two possible solutions of these equations. An initial pass
over the relevant image region is used to accumulate a number of
moments of the image brightness derivatives. All of the quantities
used in the iteration can be efficiently computed from these totals,
without the need to refer back to the image. A new, compact notation
allows us to show easily that there are at most two planar solutions.
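The flavor of the formulation, in our notation rather than the
paper's: the brightness change constraint equation
$$ E_x u + E_y v + E_t = 0 $$
links the motion field $(u,v)$ to the image brightness derivatives.
For rigid motion relative to a plane, $(u,v)$ is a low-order
polynomial in image position whose coefficients combine the motion
and plane parameters, so minimizing
$$ \iint \bigl(E_x u + E_y v + E_t\bigr)^2 \, dx\, dy $$
gives normal equations built entirely from moments of the brightness
derivatives; that is why a single accumulation pass over the image
suffices.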

:aim 822
:author Michael Brady, Jean Ponce, Alan Yuille, and Haruo Asada
:asort Brady, M.; Ponce, J.; Yuille, A.L.; Asada, H.
:title Describing Surfaces
:date January 1985
:cost $2.25
:adnum AD-A158940
:pages 33
:keywords computer vision, robotics, 3-D vision, surface description,
computer aided design, object recognition
:abstract
This paper continues our work on visual representations of
three-dimensional surfaces [Brady and Yuille 1984b]. The theoretical
component of our work is a study of classes of surface curves as a
source of constraint on the surface on which they lie, and as a basis
for describing it. We analyze bounding contours, surface
intersections, lines of curvature, and asymptotes. Our experimental
work investigates whether the information suggested by our
theoretical study can be computed reliably and efficiently. We
demonstrate algorithms that compute lines of curvature of a (Gaussian
smoothed) surface; determine planar patches and umbilic regions;
extract axes of surfaces of revolution and tube surfaces. We report
preliminary results on adapting the curvature primal sketch algorithms
of Asada and Brady [1984] to detect and describe surface
intersections.

:aim 823
:author Jonathan H. Connell and Michael Brady
:asort Connell, J.H.; Brady, M.
:title Generating and Generalizing Models of Visual Objects
:date  July 1985
:cost  $2.25
:pages 24
:keywords vision, learning, shape description, representation of shape
:abstract
We report on initial experiments with an implemented learning system
whose inputs are images of two dimensional shapes. The system first
builds semantic network descriptions of shapes based on Brady's {\it
smoothed local symmetry} representation. It learns shape models from
them using a substantially modified version of Winston's ANALOGY
program. A generalization of Gray coding enables the representation to
be extended and allows a single operation, called {\it ablation}, to
achieve the effects of many standard induction heuristics. The program
can learn disjunctions, and learn concepts using only positive
examples.  We discuss learnability and the pervasive importance of
representational hierarchies.

:aim 824
:author Jean Ponce and Michael Brady
:asort Ponce, J.; Brady, M.
:title Toward a Surface Primal Sketch
:date April 1985
:cost $2.25
:pages 30
:adnum AD-A159693
:keywords vision, edge detection, 3-D vision, robotics, surface representation.
:abstract
This paper reports progress toward the development of a representation
of significant surface changes in dense depth maps. We call the
representation the {\it surface primal sketch} by analogy with
representations of intensity change, image structure, and changes in
curvature of planar curves.  We describe an implemented program that
detects, localizes, and symbolically describes: {\it steps}, where the
surface height function is discontinuous; {\it roofs}, where the
surface is continuous but the surface normal is discontinuous; {\it
smooth joins}, where the surface normal is continuous but a principal
curvature is discontinuous and changes sign; and {\it shoulders},
which consist of two roofs and correspond to a {\it step} viewed obliquely.
We illustrate the performance of the program on range maps of objects
of varying complexity.

:aim 825
:author S. Murray Sherman and Christof Koch
:asort Sherman, S.M.; Koch, C.
:title The Anatomy and Physiology of Gating Retinal Signals in the
Mammalian Lateral Geniculate Nucleus
:date June 1985
:cost $2.25
:pages 34
:keywords  visual system, lateral geniculate nucleus, gating signals,
visual attention, top-down processing.
:abstract
In the mammalian visual system, the lateral geniculate nucleus is
commonly thought to act merely as a relay for the transmission of
visual information from the retina to the visual cortex, a relay
without significant elaboration in receptive field properties or
signal strength.  In this paper, we will review the different
anatomical pathways and biophysical mechanisms possibly implementing
a selective gating of visual information flow from the retina to the
visual cortex. We will argue that the lateral geniculate nucleus in
mammals is one of the earliest sites where selective visual attention
operates and where general changes in neuronal excitability as a
function of the behavioral states of the animal, for instance sleep,
paradoxical sleep, arousal, etc., occur.

:aim 826
:author Michael Drumheller
:asort Drumheller, M.
:title Mobile Robot Localization Using Sonar
:date January 1985
:cost $2.25
:adnum AD-A158819
:pages 25
:keywords mobile robot, robot navigation, sonar, ultrasonic
rangefinding, rangefinding, robot localization, robot positioning,
contour matching
:abstract This paper describes a method by which range data from a
sonar or other type of rangefinder can be used to determine the
two-dimensional position and orientation of a mobile robot inside a
room. The plan of the room is modeled as a list of segments indicating
the positions of walls. The method works by extracting straight
segments from the range data and examining all hypotheses about
pairings between the segments and walls in the model of the room.
Inconsistent pairings are discarded efficiently by using local
constraints based on distances between walls, angles between walls,
and ranges between walls along their normal vectors. These constraints
are used to obtain a small set of possible positions, which is further
pruned using a test for physical consistency. The approach is
extremely tolerant of noise and clutter. Transient objects such as
furniture and people need not be included in the room model, and very
noisy, low-resolution sensors can be used. The algorithm's performance
is demonstrated using a Polaroid Ultrasonic Rangefinder, which is a
low-resolution, high-noise sensor.
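A toy version of the pruning step (geometry, names, and tolerances are
ours, not Drumheller's): a pairing hypothesis survives only if the
angle between two extracted segments matches the angle between the
walls they are paired with.

    import math

    def direction(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)

    def angle_between(a, b):
        d = abs(direction(a) - direction(b)) % math.pi   # undirected lines
        return min(d, math.pi - d)

    def consistent(s1, s2, w1, w2, tol=0.1):
        # Hypothesis: data segment s1 lies on wall w1, s2 on wall w2.
        return abs(angle_between(s1, s2) - angle_between(w1, w2)) <= tol

    # Two perpendicular walls; two noisy segments extracted from range data.
    wall_a = ((0, 0), (10, 0)); wall_b = ((0, 0), (0, 8))
    seg_1 = ((1, 0.1), (6, 0.0)); seg_2 = ((0.1, 1), (0.0, 5))
    print(consistent(seg_1, seg_2, wall_a, wall_b))  # True: pairing survives
    print(consistent(seg_1, seg_2, wall_a, wall_a))  # False: pruned

The full method applies distance and normal-range tests of the same
local, pairwise form before the final physical-consistency check.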

:aim 828
:author Philip E. Agre
:asort Agre, P.E.
:title Routines
:date May 1985
:cost $2.25
:pages 27
:adnum AD-A160481
:keywords routines, planning, process representation
:abstract
Regularities in the world give rise to regularities in the way in
which we deal with the world. That is to say, we fall into routines. I
have been studying the phenomenon of routinization, the process by
which institutionalized patterns of interaction with the world arise
and evolve in everyday life. Underlying this evolution is a
dialectical process of {\it internalization}: First you build a model
of some previously unarticulated emergent aspect of an existing
routine. Armed with an incrementally more global view of the
interaction, you can often formulate an incrementally better informed
plan of attack. A routine is NOT a plan in the sense of the classical
planning literature, except in the theoretical limit of the process. I
am implementing this theory using {\it running arguments}, a technique
for writing rule-based programs for intelligent agents. Because a
running argument is compiled into TMS networks as it proceeds,
incremental changes in the world require only incremental
recomputation of the reasoning about what actions to take next.  The
system supports a style of programming, {\it dialectical
argumentation}, that has many important properties that recommend it as
a substrate for large AI systems. One of these might be called {\it
additivity}: an agent can modify its reasoning in a class of
situations by adducing arguments as to why its previous arguments
were incorrect in those cases. Because no side-effects are ever
required, reflexive systems based on dialectical argumentation ought
to be less fragile than intuition and experience might suggest. I
outline the remaining implementation problems.

:aim 829
:author Kent M. Pitman
:asort Pitman, K.
:title CREF: An Editing Facility for Managing Structured Text
:date February 1985
:cost $2.25
:pages 23
:adnum AD-A158155
:keywords browsing, document preparation, editing environments,
information management, knowledge engineering, mail reading,
non-linear text, protocol parsing, structured text, text editing
:abstract This paper reports work in progress on an experimental text
editor called CREF, the Cross Referenced Editing Facility.  CREF deals
with chunks of text, called segments, which may have associated
features such as keywords or various kinds of links to other segments.
Text in CREF is organized into linear collections for normal browsing.
The use of summary and cross-reference links in CREF allows the
imposition of an auxiliary network structure upon the text which can
be useful for ``zooming in and out'' or ``non-local transitions.''
Although it was designed as a tool for use in complex protocol
analysis by a ``Knowledge Engineer's Assistant,'' CREF has many
interesting features which should make it suitable for a wide variety
of applications, including browsing, program editing, document
preparation, and mail reading.

:aim 833
:author T. Poggio, H. Voorhees, and A. Yuille
:asort Poggio, T.; Voorhees, H.; Yuille, A.L.
:title A Regularized Solution to Edge Detection
:date April 1985
:cost $2.25
:adnum AD-A159349
:pages 22
:abstract
We consider edge detection as the problem of measuring and localizing
changes of light intensity in the image. As discussed by Torre and
Poggio (1984), edge detection, when defined in this way, is an
ill-posed problem in the sense of Hadamard.  The regularized solution
that arises is then the solution to a variational principle. In the
case of exact data, one of the standard regularization methods (see
Poggio and Torre, 1984) leads to cubic spline interpolation before
differentiation.  We show that in the case of regularly-spaced data
this solution corresponds to a convolution filter -- to be applied to
the signal before differentiation -- which is a cubic spline. In the
case of non-exact data, we use another regularization method that
leads to a different variational principle. We prove (1) that this
variational principle leads to a convolution filter for the problem of
one-dimensional edge detection, (2) that the form of this filter is
very similar to the gaussian filter, and (3) that the regularizing
parameter $\lambda$ in the variational principle effectively controls
the scale of the filter.
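In outline, with our own symbols: for exact samples $d_i$ one
minimizes $\int (f'')^2$ subject to $f(x_i) = d_i$, whose solution is
the interpolating cubic spline; for noisy data one instead minimizes
$$ \sum_i \bigl(f(x_i) - d_i\bigr)^2
   \;+\; \lambda \int \bigl(f''(x)\bigr)^2\, dx. $$
On regularly spaced data the linear solution amounts to convolving the
data with a fixed filter before differentiation, and results (1)-(3)
above concern the shape and scale of that filter.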

:aim 835
:author John M. Rubin and W.A. Richards
:asort Rubin, J.M.; Richards, W.A.
:title Boundaries of Visual Motion
:date April 1985
:cost $2.25
:pages 29
:keywords vision, visual motion, motion recognition, event perception,
motion representation, motion perception, motion boundaries.
:abstract A representation of visual motion convenient for recognition should
make prominent the qualitative differences among simple motions. We argue
that the first stage in such a motion representation is to make explicit
boundaries that we define as starts, stops and force discontinuities. When
one of these boundaries occurs in motion, human observers have the subjective
impression that some fleeting, significant event has occurred. We go farther
and hypothesize that one of the subjective motion boundaries is seen if
and only if one of our defined boundaries occurs. We enumerate all possible
motion boundaries and provide evidence that they are psychologically real.

:aim 836
:author Robert C. Berwick and Amy S. Weinberg
:asort Berwick, R.; Weinberg, A.
:title Parsing and Linguistic Explanation
:date April 1985
:cost $2.25
:pages 32
:adnum AD-A159233
:keywords natural language processing, cognitive modeling, parsing
:abstract This article summarizes and extends recent results linking
deterministic parsing to observed "locality principles" in syntax.
It also argues that grammatical theories based on explicit phrase
structure rules are unlikely to provide comparable explanations of
why natural languages are built the way they are.

:aim 837
:author Eric Sven Ristad
:asort Ristad, E.S.
:title GPSG-Recognition is NP-Hard
:date March 1985
:cost $1.50
:pages 11
:keywords
:abstract
Proponents of Generalized Phrase Structure Grammar (GPSG) often cite
its weak context-free generative power as proof of the computational
tractability of GPSG-Recognition.  It is well known that context-free
languages can be parsed by a wide range of algorithms.  Hence, it
might be thought that GPSG's weak context-free generative power should
guarantee that it, too, is efficiently parsable.  This widely-assumed
``GPSG efficient parsability'' result is false: A reduction from
3-Satisfiability proves that GPSG-Recognition is NP-hard, and thus
likely to be intractable.

:aim 838
:author Jean Ponce
:asort Ponce, J.
:title Prism Trees: An Efficient Representation for Manipulating and
Displaying Polyhedra With Many Faces
:date April 1985
:cost $2.25
:pages 22
:keywords computer graphics, hierarchical structures, set operations between
solids, geometric modelling, ray casting display.
:abstract Computing surface and/or object intersections is a cornerstone of
many algorithms in geometric modeling and computer graphics, for
example set operations between solids, or ray casting display of
surfaces. We present an object centered, information preserving,
hierarchical representation for polyhedra called the {\it Prism Tree}. We
use the representation to decompose the intersection algorithms into
two steps: the {\it localization} of intersections, and their {\it
processing}. When dealing with polyhedra with many faces (typically
more than one thousand), the first step is by far the most expensive.
The {\it Prism Tree} structure is used to compute this localization
step efficiently. A preliminary implementation of the set operations
and ray casting algorithms has been constructed.
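The localization/processing split can be sketched with a generic
bounding-volume tree; axis-aligned boxes stand in here for the
truncated prisms of the actual representation.

    class Node:
        def __init__(self, box, children=(), faces=()):
            self.box = box            # ((min_x, min_y, min_z), (max_x, ...))
            self.children = children  # sub-nodes; empty at the leaves
            self.faces = faces        # polyhedron faces stored at a leaf

    def boxes_overlap(a, b):
        (alo, ahi), (blo, bhi) = a, b
        return all(l1 <= h2 and l2 <= h1
                   for l1, h1, l2, h2 in zip(alo, ahi, blo, bhi))

    def localize(n1, n2, out):
        # Collect candidate face pairs whose enclosing volumes intersect;
        # only these pairs reach the exact (expensive) processing step.
        if not boxes_overlap(n1.box, n2.box):
            return
        if not n1.children and not n2.children:
            out.extend((f, g) for f in n1.faces for g in n2.faces)
        elif n1.children:
            for c in n1.children:
                localize(c, n2, out)
        else:
            for c in n2.children:
                localize(n1, c, out)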

:aim 839
:author J.L. Marroquin
:asort Marroquin, J.L.
:title Optimal Bayesian Estimators For Image Segmentation and
Surface Reconstruction
:date April 1985
:cost $1.50
:pages 17
:keywords Bayesian estimation, Markov random fields, image segmentation,
surface reconstruction, image restoration.
:abstract  A very fruitful approach to the solution of image segmentation
and surface reconstruction tasks is their formulation as estimation problems
via the use of Markov random field models and Bayes theory. However, the
Maximum a Posteriori (MAP) estimate, which is the one most frequently
used, is suboptimal in these cases. We show that for segmentation problems
the optimal Bayesian estimator is the maximizer of the posterior marginals,
while for reconstruction tasks, the thresholded posterior mean has the
best possible performance. We present efficient distributed algorithms for
approximating these estimates in the general case. Based on these results,
we develop a maximum likelihood method that leads to a parameter-free
distributed algorithm for restoring piecewise constant images. To illustrate
these ideas, the reconstruction of binary patterns is discussed in detail.
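The estimators in question, in standard notation: with posterior
$P(f \mid g)$ over labelings $f$ given observations $g$,
$$ \hat f^{MAP} = \arg\max_f P(f \mid g), \qquad
   \hat f^{MPM}_i = \arg\max_{f_i} P(f_i \mid g)
   \ \hbox{for each site $i$}, $$
so MAP optimizes the joint configuration while the maximizer of
posterior marginals minimizes the expected number of per-site errors;
the reconstruction estimator thresholds the posterior mean
$E[f_i \mid g]$ site by site.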

:aim 840
:title Inferring 3D Shapes from 2D Codons
:author Whitman Richards, Jan J. Koenderink, D.D. Hoffman
:asort Richards, W.A.; Koenderink, J.J.; Hoffman, D.D.
:date April 1985
:pages 19
:cost $1.50
:keywords vision, recognition, visual representation, object perception,
figure-ground, 3-D shape
:abstract
All plane curves can be described at an abstract level by a sequence
of five primitive elemental shapes, called "codons", which capture the
sequential relations between the singular points of curvature. The
codon description provides a basis for enumerating all smooth 2D
curves. Let each of these smooth plane curves be considered as the
silhouette of an opaque object. Clearly an infinity of 3D objects can
generate any one of our "codon" silhouettes. How then can we predict
which 3D object corresponds to a given 2D silhouette? To restrict the
infinity of choices, we impose three mathematical properties of
smooth surfaces plus one simple viewing constraint.  The constraint is
an extension of the notion of general position, and seems to drive our
preferred inferences of 3D shapes, given only the 2D contour.

:aim 841
:author W. Eric L. Grimson and Tomas Lozano-Perez
:asort Grimson, W.E.L.; Lozano-Perez, T.
:title Recognition and Localization of Overlapping Parts From Sparse Data
:date June 1985
:cost $2.50
:pages 41
:adnum AD-A158394
:keywords object recognition, sensor interpretations
:abstract
This paper discusses how sparse local measurements of positions and
surface normals may be used to identify and locate overlapping
objects.  The objects are modeled as polyhedra (or polygons) having up
to six degrees of positional freedom relative to the sensors. The
approach operates by examining all hypotheses about pairings between
sensed data and object surfaces and efficiently discarding
inconsistent ones by using local constraints on: distances between
faces, angles between face normals, and angles (relative to the
surface normals) of vectors between sensed points. The method
described here is an extension of a method for recognition and
localization of non-overlapping parts previously described in [Grimson
and Lozano-Perez 84] and [Gaston and Lozano-Perez 84].
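Concretely, in our notation: if sensed points $p_i$, $p_j$ with
measured normals $m_i$, $m_j$ are hypothesized to lie on model faces
with normals $n_k$, $n_l$, the pairing must satisfy local tests of the
form
$$ d_{min}(k,l) \le \|p_i - p_j\| \le d_{max}(k,l), \qquad
   m_i \cdot m_j \approx n_k \cdot n_l, $$
together with range constraints on the components of $p_i - p_j$ along
$m_i$ and $m_j$; any pairing failing one of these tests is discarded
without further search.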

:aim 842
:author Tomas Lozano-Perez and Rodney A. Brooks
:asort Lozano-Perez, T.; Brooks, R.A.
:title An Approach To Automatic Robot Programming
:date April 1985
:cost $2.25
:pages 35
:adnum AD-A161120
:keywords robotics, task planning, robot programming
:abstract
In this paper we propose an architecture for a new task level system,
which we call TWAIN. Task-level programming attempts to simplify the
robot programming process by requiring that the user specify only
goals for the physical relationships among objects, rather than the
motions needed to achieve those goals. A task-level specification is
meant to be completely robot independent; no positions or paths that
depend on the robot geometry or kinematics are specified by the user.
We have two goals for this paper. The first is to present a more
unified treatment of some individual pieces of research in task
planning, whose relationship has not previously been described. The
second is to provide a new framework for further research in
task-planning. This is a slightly modified version of a paper that
appeared in {\it Proceedings of Solid Modeling by Computers: From
Theory to Applications}, Research Laboratories Symposium Series,
sponsored by General Motors, Warren, MI, September, 1983.

:aim 845
:author Norberto M. Grzywacz and Ellen C. Hildreth
:asort Grzywacz, N.M.; Hildreth, E.
:title The Incremental Rigidity Scheme for Recovering Structure from Motion:
Position vs. Velocity Based Formulations
:date October 1985
:pages 53
:cost $2.75
:keywords motion analysis, structure from motion, image analysis, 3-d analysis,
velocity field, rigidity assumption.
:abstract Perceptual studies suggest that the visual system uses the
"rigidity" assumption to recover three-dimensional structure from
motion. Ullman (1984) recently proposed a computational scheme, the
{\it incremental rigidity scheme}, which uses the rigidity assumption
to recover the structure of rigid and non-rigid objects in motion.
The scheme assumes the input to be discrete positions of elements in
motion, under orthographic projection. We present formulations of
Ullman's method that use velocity information and perspective
projection in the recovery of structure. Theoretical and computer
analysis show that the velocity based formulations provide a rough
estimate of structure quickly, but are not robust over an extended
time period. The stable long term recovery of structure requires
disparate views of moving objects. Our analysis raises interesting
questions regarding the recovery of structure from motion in the human
visual system.
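For orientation, our paraphrase of the position-based scheme: the
internal 3D model is updated at each frame to the structure whose
inter-point distances $l'_{ij}$ deviate least from the current model
distances $l_{ij}$ under a measure of roughly the form
$$ D = \sum_{i<j} \frac{(l'_{ij} - l_{ij})^2}{l_{ij}^3}. $$
The formulations studied here recast this update in terms of velocity
information and perspective projection.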

:aim 846
:author Ellen C. Hildreth and John M. Hollerbach
:asort Hildreth, E.; Hollerbach, J.M.
:title The Computational Approach to Vision and Motor Control
:date August 1985
:pages 84
:cost $3.00
:reference C.B.I.P. Memo 014
:keywords vision, robotics, motor control, natural computation,
computational approach, artificial intelligence
:abstract
Over the past decade, it has become increasingly clear that to
understand the brain, we must study not only its biochemical and
biophysical mechanisms and its outward perceptual and physical
behavior. We must also study the brain at a theoretical level that
investigates the {\it computations} that are necessary to perform its
functions. The control of movements such as reaching, grasping and
manipulating objects requires complex mechanisms that elaborate
information from many sensors and control the forces generated by a
large number of muscles.  The act of seeing, which intuitively seems
so simple and effortless, requires information processing whose
complexity we are just beginning to grasp.  A {\it computational
approach} to the study of vision and motor control has evolved within
the field of Artificial Intelligence, which inquires directly into the
nature of the information processing that is required to perform
complex visual and motor tasks.  This paper discusses a particular
view of the computational approach and its relevance to experimental
neuroscience.

:aim 848
:title The Revised Revised Report on Scheme or The Uncommon Lisp
:author Hal Abelson, Norman Adams, David Bartley, Gary Brooks, William
Clinger (editor), Dan Friedman, Robert Halstead, Chris Hanson, Chris Haynes,
Eugene Kohlbecker, Don Oxley, Kent Pitman, Jonathan Rees, Bill Rozas, Gerald
Jay Sussman, Mitchell Wand.
:asort Abelson, H.; Adams, N.; Bartley, D.; Brooks, G.; Clinger, W.D.;
Friedman, D.; Halstead, R.; Hanson, C.; Haynes, C.; Kohlbecker, E.;
Oxley, D.; Pitman, K.; Rees, J.; Rozas, B.; Sussman, G.J.; Wand, M.
:date August 1985
:pages 76
:cost $6.00
:reference Indiana University Computer Science Dept. Technical Report 174,
June, 1985
:keywords SCHEME, LISP, functional programming, computer languages
:abstract
Data and procedures and the values they amass,
Higher order functions to combine and mix and match,
Objects with their local state, the messages they pass,
A property, a package, the control point for a catch-
In the Lambda Order they are all first-class.
One Thing to name them all, One Thing to define them,
One Thing to place them in environments and bind them,
In the Lambda Order they are all first-class.

:aim 849
:author John M. Hollerbach and Christopher G. Atkeson
:asort Hollerbach, J.M.; Atkeson, C.G.
:title Characterization of Joint-Interpolated Arm Movements
:date June 1985
:cost $1.50
:pages 19
:keywords arm control, kinematics, trajectory planning
:abstract Two possible sets of planning variables for human arm movement are
joint angles and hand position. Although one might expect these possibilities
to be mutually exclusive, recently an apparently contradictory set of data
has appeared that indicates straight-line trajectories in both hand space
and joint space at the same time. To assist in distinguishing between these
viewpoints applied to the same data, we have theoretically characterized the
set of trajectories derivable from a joint based planning strategy and have
compared them to experimental measurements. We conclude that the apparent
straight lines in joint space happen to be artifacts of movement kinematics
near the workspace boundary.
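A toy calculation of the kind at issue, for a planar two-link arm with
invented link lengths: interpolate linearly in joint space and watch
what the hand does in Cartesian space.

    import numpy as np

    L1, L2 = 0.30, 0.33                      # link lengths (made up), meters

    def hand_position(q1, q2):
        # Forward kinematics: shoulder angle q1, elbow angle q2 (radians).
        x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
        y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
        return x, y

    q_start, q_end = np.array([0.2, 1.8]), np.array([1.4, 0.6])
    for s in np.linspace(0.0, 1.0, 5):
        q = (1 - s) * q_start + s * q_end    # straight line in joint space
        x, y = hand_position(*q)
        print("s=%.2f  hand=(%.3f, %.3f)" % (s, x, y))

The joint-space path is straight by construction, while the printed
hand path is generally curved; the memo asks when apparently straight
paths can arise in both spaces at once.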

:aim 858
:title Edge Detection
:author Ellen C. Hildreth
:asort Hildreth, E.
:date September 1985
:pages 22
:cost $2.25
:keywords edge detection, computer vision, image processing, image filtering,
intensity changes, Gaussian filtering, multi-resolution image analysis,
zero crossings
:abstract
For both biological systems and machines, vision begins with a large
and unwieldy array of measurements of the amount of light reflected
from surfaces in the environment. The goal of vision is to recover
physical properties of objects in the scene, such as the location of
object boundaries and the structure, color and texture of object
surfaces, from the two-dimensional image that is projected onto the
eye or camera. This goal is not achieved in a single step; vision
proceeds in stages, with each stage producing increasingly more useful
descriptions of the image and then the scene.  The first clues about
the physical properties of the scene are provided by the {\it changes
of intensity} in the image. The importance of intensity changes and
edges in early visual processing has led to extensive research on
their detection, description and use, both in computer and biological
vision systems. This article reviews some of the theory that underlies
the detection of edges, and the methods used to carry out this
analysis.
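One common scheme consistent with the keywords above (Gaussian
filtering and zero crossings), sketched with standard library calls;
the parameter choice is ours.

    import numpy as np
    from scipy import ndimage

    def log_zero_crossings(image, sigma=2.0):
        # Smooth with a Gaussian and take the Laplacian in one pass,
        # then mark sign changes between neighboring pixels.
        log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        sign = log > 0
        edges = np.zeros_like(sign)
        edges[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical neighbors
        edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal neighbors
        return edges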

:aim 863
:author Shahriar Negahdaripour
:asort Negahdaripour, S.
:title Direct Passive Navigation: Analytical Solutions for Planes
and Curved Surfaces
:date August 1985
:pages 17
:cost $1.50
:keywords passive navigation, optical flow, structure and motion,
planar surfaces, least squares
:abstract
In this paper, we derive a closed form solution for recovering the
motion of an observer relative to a planar surface directly from image
brightness derivatives.  We do not compute the optical flow as an
intermediate step, only the spatial and temporal intensity gradients
at a minimum of 9 points. We solve a linear matrix equation for the
elements of a 3x3 matrix whose eigenvalue decomposition is used to
compute the motion parameters and plane orientation.  We also show how
the procedure can be extended to curved surfaces that can be locally
approximated by quadratic patches. In this case, a minimum of 18
independent points are required to uniquely determine the elements of
two 3x3 matrices that are used to solve for the surface structure and
motion parameters.
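The linear-algebraic skeleton of the method (how each constraint row
is assembled from the brightness derivatives is specific to the paper
and omitted here):

    import numpy as np

    def recover_matrix(C, r):
        # Solve the stacked constraints C vec(B) = r, with at least 9
        # rows, for the 3x3 matrix B; its eigenvalue decomposition is
        # where the motion parameters and plane orientation are read off.
        b, *_ = np.linalg.lstsq(C, r, rcond=None)
        B = b.reshape(3, 3)
        eigvals, eigvecs = np.linalg.eig(B)
        return B, eigvals, eigvecs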

:aim 864
:author Rodney A. Brooks
:asort Brooks, R.A.
:title A Robust Layered Control System For A Mobile Robot
:date September 1985
:pages 25
:adnum AD-A160833
:cost $2.25
:keywords mobile robot, robot control
:abstract
We describe a new architecture for controlling mobile robots.  Layers
of control system are built to let the robot operate at increasing
levels of competence. Layers are made up of asynchronous modules which
communicate over low bandwidth channels. Each module is an instance of
a fairly simple computational machine. Higher level layers can subsume
the roles of lower levels by suppressing their outputs. However, lower
levels continue to function as higher levels are added. The result is
a robust and flexible robot control system. The system is intended to
control a robot that wanders the office areas of our laboratory,
building maps of its surroundings. In this paper we demonstrate the
system controlling a detailed simulation of the robot.
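A toy of the layering idea (the module structure is invented for the
example and far simpler than the memo's asynchronous machines): higher
layers, when they have an opinion, suppress the layers below.

    def avoid_layer(sensors):
        # Higher-level competence: steer away from obstacles.
        return "turn-left" if sensors["obstacle_ahead"] else None

    def wander_layer(sensors):
        # Lowest-level competence: always has a default proposal.
        return "go-forward"

    def control(sensors, layers):
        # The highest layer with an opinion suppresses those below it;
        # lower layers keep functioning and supply the default.
        for layer in layers:              # ordered highest to lowest
            command = layer(sensors)
            if command is not None:
                return command

    layers = [avoid_layer, wander_layer]
    print(control({"obstacle_ahead": True}, layers))    # turn-left
    print(control({"obstacle_ahead": False}, layers))   # go-forward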

:aim 865
:author Gul Agha and Carl Hewitt
:asort Agha, G.; Hewitt, C.
:title Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism
:date October 1985
:pages 20
:cost $1.50
:adnum AD-A162422
:keywords concurrency, distributed computing, programming languages,
object-oriented programming, actors, functional programming, parallel
processing, open systems
:abstract
We argue that the ability to model shared objects with changing local
states, dynamic reconfigurability, and inherent parallelism are
desirable properties of any model of concurrency.  The {\it actor
model} addresses these issues in a uniform framework. This paper
briefly describes the concurrent programming language {\it Act3} and
the principles that have guided its development. {\it Act3} advances
the state of the art in programming languages by combining the
advantages of object-oriented programming with those of functional
programming. We also discuss considerations relevant to large-scale
parallelism in the context of {\it open systems}, and define an
abstract model which establishes the equivalence of systems defined by
actor programs.
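A minimal mailbox-and-behavior actor, to fix intuition only; it is not
Act3, and it omits behavior replacement, fairness, and error handling.

    import queue, threading, time

    class Actor:
        def __init__(self, behavior):
            self.behavior = behavior
            self.mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)        # asynchronous, buffered send

        def _run(self):
            while True:                  # process one message at a time
                self.behavior(self, self.mailbox.get())

    a = Actor(lambda self, msg: print("got:", msg))
    a.send("hello")                      # send returns immediately
    time.sleep(0.1)                      # let the daemon thread drain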

:aim 868
:author Brian C. Williams
:asort Williams, B.C.
:title Circumscribing Circumscription: A Guide to Relevance and Incompleteness
:date October 1985
:pages 46
:cost $2.75
:keywords circumscription, commonsense reasoning, nonmonotonic reasoning,
conjectural reasoning, resource limitations, relevance, completeness.
:abstract Intelligent agents in the physical world must work from incomplete
information due to partial knowledge and limited resources. An agent copes
with these limitations by applying rules of conjecture to make reasonable
assumptions about what is known. Circumscription, proposed by McCarthy,
is the formalization of a particularly important rule of conjecture
likened to Occam's razor. That is, the set of all objects satisfying a
certain property is the smallest set of objects that is consistent with
what is known.
This paper examines closely the properties and the semantics underlying
circumscription, considering both its expressive power and limitations.
In addition we study circumscription's relationship to several related
formalisms, such as negation by failure, the closed world assumption,
default reasoning, and Planner's THNOT. In the discussion a number of
extensions to circumscription are proposed, allowing one to tightly focus
its scope of applicability. In addition, several new rules of conjecture
are proposed based on the notions of relevance and minimality. Finally,
a synthesis between the approaches of McCarthy and Konolige is used to
extend circumscription, as well as several other rules of conjecture,
to account for resource limitations.
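The rule of conjecture in question can be written out (second-order
form, our transcription): circumscribing a predicate $P$ in a sentence
$A(P)$ yields
$$ A(P) \;\wedge\; \forall \Phi\,\bigl[\bigl(A(\Phi) \wedge
   \forall x\,(\Phi(x) \to P(x))\bigr) \to
   \forall x\,(P(x) \to \Phi(x))\bigr], $$
i.e. no predicate $\Phi$ satisfying $A$ has a strictly smaller
extension than $P$, which is exactly the smallest-consistent-set
reading above.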

------------------------
AI Technical Reports
------------------------

:tr 219
:unavailable
:author Daniel G. Bobrow
:asort Bobrow, D.G.
:title Natural Language Input for a Computer Problem Solving Language
:date June 1964
:reference (MAC-TR-1), (In Minsky (ed.), {\it Semantic Information
Processing}, M.I.T. Press, 1968)
:adnum AD-604-730

:tr 220
:unavailable
:author Bertram Raphael
:asort Raphael, B.
:title SIR: A Computer Program for Semantic Information Retrieval
:date June 1964
:adnum AD-608-499
:reference (MAC-TR-2) (In Minsky (ed.), {\it  Semantic Information
Processing}, M.I.T. Press, 1968)

:tr 221
:unavailable
:author Warren Teitelman
:asort Teitelman, W.
:title  PILOT: A Step Toward Man-Computer Symbiosis
:date September 1966
:reference (MAC-TR-32)
:adnum AD-638-446

:tr 222
:unavailable
:author Lewis M. Norton
:asort Norton, L.M.
:title  ADEPT: A Heuristic Program for Proving Theorems of Group Theory
:date October 1966
:reference (MAC-TR-33)
:adnum AD-645-660

:tr 223
:unavailable
:author William A. Martin
:asort Martin, W.A.
:title  Symbolic Mathematical Laboratory
:date January 1967
:reference (MAC-TR-36)
:adnum AD-657-283

:tr 224
:unavailable
:author Adolfo Guzman-Arenas
:asort Guzman-Arenas, A.
:title  Some Aspects of Pattern Recognition by Computer
:date February 1967
:reference (MAC-TR-37)
:adnum AD-656-041

:tr 225
:unavailable
:author Allen Forte
:asort Forte, A.
:title Syntax-Based Analytic Reading of Musical Scores
:date April 1967
:reference (MAC-TR-39)
:adnum AD-661-806

:tr 226
:unavailable
:author Joel Moses
:asort Moses, J.
:title  Symbolic Integration
:date December 1967
:reference (MAC-TR-47)
:adnum AD-662-666

:tr 227
:unavailable
:author Eugene Charniak
:asort Charniak, E.
:title  CARPS: A Program Which Solves Calculus Word Problems
:date July 1968
:reference (MAC-TR-51)
:adnum AD-673-670

:tr 228
:unavailable
:author Adolfo Guzman-Arenas
:asort Guzman-Arenas, A.
:title Computer Recognition of Three-Dimensional Objects in a Visual Scene
:date December 1968
:reference (MAC-TR-59)
:adnum AD-692-200

:tr 229
:unavailable
:author Wendell Terry Beyer
:asort Beyer, W.T.
:title  Recognition of Topological Invariants by Iterative Arrays
:date October 1969
:reference (MAC-TR-66)
:adnum AD-699-502

:tr 230
:unavailable
:author Arnold K. Griffith
:asort Griffith, A.K.
:title Computer Recognition of Prismatic Solids
:date August 1970
:reference (MAC-TR-73)
:adnum AD-711-763
:cost $6.00

:tr 231
:unavailable
:author Patrick H. Winston
:asort Winston, P.H.
:title Learning Structural Descriptions From Examples
:date September 1970
:pages 266
:reference (MAC-TR-76)
:adnum AD-713-988

:tr 232
:unavailable
:author Berthold K.P. Horn
:asort Horn, B.K.P.
:title Shape From Shading: A Method for Obtaining the Shape of a
Smooth Opaque Object From One View
:date November 1970
:reference (MAC-TR-79) (In Winston (ed.), {\it The Psychology of
Computer Vision}, McGraw-Hill, 1975)
:adnum AD-717-336