[comp.doc.techreports] tr-input/mit.pt2

leff@smu.UUCP (Laurence Leff) (08/28/89)

that they automatically apply to every kind of structure without the
user having to take any explicit action when writing print-self
methods. A new abbreviation mechanism is introduced which can be used
to limit the total number of lines printed.

:aim 817
:author A. Hurlbert and T. Poggio
:asort Hurlbert, A.; Poggio, T.
:title Spotlight on Attention
:date April 1985
:cost $2.50
:pages 6
:abstract We review some recent psychophysical, physiological and
anatomical data which highlight the important role of attention in
visual information processing, and discuss the evidence for a serial
spotlight of attention.  We point out the connections between the
questions raised by the spotlight model and computational results on
the intrinsic parallelism of several tasks in vision.

:aim 820
:author Michael J. Brooks and Berthold K.P. Horn
:asort Brooks, M.J.; Horn, B.K.P.
:title Shape and Source from Shading
:date January 1985
:cost $2.50
:pages 12
:keywords computer vision, source detection, shape from shading,
Lambertian surface
:abstract Well-known methods for solving the shape-from-shading problem
require knowledge of the reflectance map. Here we show how the
shape-from-shading problem can be solved when the reflectance map is
not available, but is known to have a given form with some unknown
parameters. This happens, for example, when the surface is known to be
Lambertian, but the direction to the light source is not known.  We
display an iterative algorithm which alternately estimates the surface
shape and the light source direction.  Use of the unit normal in the
parameterization of the reflectance map, rather than the gradient or
stereographic coordinates, simplifies the analysis. Our approach also
leads to an iterative scheme for computing shape from shading that
adjusts the current estimates of the local normals toward or away from
the direction of the light source. The amount of adjustment is
proportional to the current difference between the predicted and the
observed brightness. Generalizations to less constrained forms of
reflectance maps are also developed.
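
As a rough illustration of the alternating scheme sketched above, here is a toy Python fragment assuming a simple Lambertian model (brightness = n.s); the function name and update rule are hypothetical and are not the memo's algorithm:
    # Illustrative sketch only (not the memo's algorithm): alternately update unit
    # surface normals and a Lambertian source direction, nudging each normal along
    # the source in proportion to the brightness error (observed minus predicted).
    import numpy as np
    def alternate_shape_and_source(E, normals, s, iters=100, step=0.1):
        # E: (n,) observed brightness; normals: (n,3) unit normals; s: (3,) unit source
        for _ in range(iters):
            err = E - normals @ s                        # brightness error per point
            normals = normals + step * err[:, None] * s  # move toward/away from source
            normals /= np.linalg.norm(normals, axis=1, keepdims=True)
            s, *_ = np.linalg.lstsq(normals, E, rcond=None)  # re-fit source direction
            s /= np.linalg.norm(s)
        return normals, s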

:aim 822
:author Michael Brady, Jean Ponce, Alan Yuille, and Haruo Asada
:asort Brady, M.; Ponce, J.; Yuille, A.L.; Asada, H.
:title Describing Surfaces
:date January 1985
:cost $3.25
:adnum AD-A158940
:pages 33
:keywords computer vision, robotics, 3-D vision, surface description,
computer aided design, object recognition
:abstract
This paper continues our work on visual representations of
three-dimensional surfaces [Brady and Yuille 1984b]. The theoretical
component of our work is a study of classes of surface curves as a
source of constraint on the surface on which they lie, and as a basis
for describing it. We analyze bounding contours, surface
intersections, lines of curvature, and asymptotes. Our experimental
work investigates whether the information suggested by our
theoretical study can be computed reliably and efficiently. We
demonstrate algorithms that compute lines of curvature of a (Gaussian
smoothed) surface; determine planar patches and umbilic regions;
extract axes of surfaces of revolution and tube surfaces. We report
preliminary results on adapting the curvature primal sketch algorithms
of Asada and Brady [1984] to detect and describe surface
intersections.

:aim 823
:author Jonathan H. Connell and Michael Brady
:asort Connell, J.H.; Brady, M.
:title Generating and Generalizing Models of Visual Objects
:date July 1985
:cost $3.25
:pages 24
:adnum AD-A158197
:keywords vision, learning, shape description, representation of shape
:abstract
We report on initial experiments with an implemented learning system
whose inputs are images of two dimensional shapes. The system first
builds semantic network descriptions of shapes based on Brady's {\it
smoothed local symmetry} representation. It learns shape models from
them using a substantially modified version of Winston's ANALOGY
program. A generalization of Gray coding enables the representation to
be extended and allows a single operation, called {\it ablation}, to
achieve the effects of many standard induction heuristics. The program
can learn disjunctions, and learn concepts using only positive
examples.  We discuss learnability and the pervasive importance of
representational hierarchies.

:aim 824
:author Jean Ponce and Michael Brady
:asort Ponce, J.; Brady, M.
:title Toward a Surface Primal Sketch
:date April 1985
:cost $3.25
:pages 30
:adnum AD-A159693
:keywords vision, edge detection, 3-D vision, robotics, surface representation
:abstract 
This paper reports progress toward the development of a representation
of significant surface changes in dense depth maps. We call the
representation the {\it surface primal sketch} by analogy with
representations of intensity change, image structure, and changes in
curvature of planar curves.  We describe an implemented program that
detects, localizes, and symbolically describes: {\it steps}, where the
surface height function is discontinuous; {\it roofs}, where the
surface is continuous but the surface normal is discontinuous; {\it
smooth joins}, where the surface normal is continuous but a principal
curvature is discontinuous and changes sign; and {\it shoulders},
which consist of two roofs and correspond to a STEP viewed obliquely.
We illustrate the performance of the program on range maps of objects
of varying complexity.

:aim 825
:author S. Murray Sherman and Christof Koch
:asort Sherman, S.M.; Koch, C.	      
:title The Anatomy and Physiology of Gating Retinal Signals in the
Mammalian Lateral Geniculate Nucleus
:date June 1985
:cost $3.25
:pages 34
:keywords visual system, lateral geniculate nucleus, gating signals,
visual attention, top-down processing.
:abstract  
In the mammalian visual system, the lateral geniculate nucleus is
commonly thought to act merely as a relay for the transmission of
visual information from the retina to the visual cortex, a relay
without significant elaboration in receptive field properties or
signal strength.  In this paper, we will review the different
anatomical pathways and biophysical mechanisms possibly implementing
a selective gating of visual information flow from the retina to the
visual cortex. We will argue that the lateral geniculate nucleus in
mammals is one of the earliest sites where selective visual attention
operates and where general changes in neuronal excitability as a
function of the behavioral states of the animal, for instance sleep,
paradoxical sleep, arousal, etc., occur.

:aim 826
:author Michael Drumheller
:asort Drumheller, M.
:title Mobile Robot Localization Using Sonar
:date January 1985
:cost $3.25
:adnum AD-A158819
:pages 25
:keywords mobile robot, robot navigation, sonar, ultrasonic
rangefinding, rangefinding, robot localization, robot positioning,
contour matching
:abstract This paper describes a method by which range data from a
sonar or other type of rangefinder can be used to determine the
two-dimensional position and orientation of a mobile robot inside a
room. The plan of the room is modeled as a list of segments indicating
the positions of walls. The method works by extracting straight
segments from the range data and examining all hypotheses about
pairings between the segments and walls in the model of the room.
Inconsistent pairings are discarded efficiently by using local
constraints based on distances between walls, angles between walls,
and ranges between walls along their normal vectors. These constraints
are used to obtain a small set of possible positions, which is further
pruned using a test for physical consistency. The approach is
extremely tolerant of noise and clutter. Transient objects such as
furniture and people need not be included in the room model, and very
noisy, low-resolution sensors can be used. The algorithm's performance
is demonstrated using a Polaroid Ultrasonic Rangefinder, which is a
low-resolution, high-noise sensor.
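
The pairing-and-pruning idea can be sketched in a few lines of Python; this toy version checks only the mutual angle constraint between pairings, whereas the memo also uses distance and normal-range constraints (all names here are hypothetical):
    # Toy sketch: enumerate pairings of extracted range segments with model walls
    # and keep only those whose pairwise relative angles are mutually consistent
    # (the distance and normal-range constraints of the memo are omitted).
    from itertools import product
    import math
    def ang_diff(a, b):
        # smallest signed difference between two directions, in [-pi, pi)
        return (a - b + math.pi) % (2 * math.pi) - math.pi
    def consistent(pairing, seg_dir, wall_dir, tol=math.radians(5)):
        return all(abs(ang_diff(ang_diff(seg_dir[s1], seg_dir[s2]),
                                ang_diff(wall_dir[w1], wall_dir[w2]))) <= tol
                   for i, (s1, w1) in enumerate(pairing)
                   for (s2, w2) in pairing[i + 1:])
    def feasible_pairings(seg_dir, wall_dir):
        for walls in product(range(len(wall_dir)), repeat=len(seg_dir)):
            pairing = list(enumerate(walls))   # (segment index, hypothesized wall)
            if consistent(pairing, seg_dir, wall_dir):
                yield pairing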

:aim 828
:author Philip E. Agre
:asort Agre, P.E.
:title Routines
:date May 1985
:cost $3.25
:pages 27
:adnum AD-A160481
:keywords routines, planning, process representation
:abstract
Regularities in the world give rise to regularities in the way in
which we deal with the world. That is to say, we fall into routines. I
have been studying the phenomena of routinization, the process by
which institutionalized patterns of interaction with the world arise
and evolve in everyday life. Underlying this evolution is a
dialectical process of {\it internalization}: First you build a model
of some previously unarticulated emergent aspect of an existing
routine. Armed with an incrementally more global view of the
interaction, you can often formulate an incrementally better informed
plan of attack. A routine is NOT a plan in the sense of the classical
planning literature, except in the theoretical limit of the process. I
am implementing this theory using {\it running arguments}, a technique
for writing rule-based programs for intelligent agents. Because a
running argument is compiled into TMS networks as it proceeds,
incremental changes in the world require only incremental
recomputation of the reasoning about what actions to take next.  The
system supports a style of programming, {\it dialectical
argumentation}, that has many important properties that recommend it as
a substrate for large AI systems. One of these might be called {\it
additivity}: an agent can modify its reasoning in a class of
situations by adducing arguments as to why its previous arguments
were incorrect in those cases. Because no side-effects are ever
required, reflexive systems based on dialectical argumentation ought
to be less fragile than intuition and experience might suggest. I
outline the remaining implementation problems.

:aim 829
:author Kent M. Pitman
:asort Pitman, K.
:title CREF: An Editing Facility for Managing Structured Text
:date February 1985
:cost $3.25
:pages 23
:adnum AD-A158155
:keywords browsing, document preparation, editing environments,
information management, knowledge engineering, mail reading,
non-linear text, protocol parsing, structured text, text editing
:abstract This paper reports work in progress on an experimental text
editor called CREF, the Cross Referenced Editing Facility.  CREF deals
with chunks of text, called segments, which may have associated
features such as keywords or various kinds of links to other segments.
Text in CREF is organized into linear collections for normal browsing.
The use of summary and cross-reference links in CREF allows the
imposition of an auxiliary network structure upon the text which can
be useful for ``zooming in and out'' or ``non-local transitions.''
Although it was designed as a tool for use in complex protocol
analysis by a ``Knowledge Engineer's Assistant,'' CREF has many
interesting features which should make it suitable for a wide variety
of applications, including browsing, program editing, document
preparation, and mail reading.

:aim 832
:author A. Verri and A. Yuille
:asort Verri, A.; Yuille, A.L.
:title Perspective Projection Invariants
:date February 1986
:cost $2.50
:pages 15
:adnum AD-A167783
:keywords stereo, registration, perspective projection, zeros of curvature
:abstract
An important part of stereo vision consists of finding and matching
points in two images which correspond to the same physical element in
the scene.  We show that zeros of curvature of curves are perspective
projection invariants and can therefore be used to find corresponding
points.  They can be used to help solve the registration problem
(Longuet-Higgins, 1982) and to obtain the correct depth when a curve
enters the forbidden zone (Krol and van de Grind, 1982). They are also
relevant to theories for representing image curves. We consider the
stability of these zeros of curvature.
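
For a sampled plane curve, the curvature zeros discussed above can be located numerically with the standard curvature formula; a small illustrative sketch (not code from the memo):
    # Sketch: signed curvature of a sampled plane curve and the parameter indices
    # where it changes sign, i.e. the zeros of curvature discussed above.
    import numpy as np
    def curvature_zeros(x, y):
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        return np.where(np.diff(np.sign(kappa)) != 0)[0]
    t = np.linspace(0, 2 * np.pi, 400)
    print(curvature_zeros(np.cos(t), np.sin(3 * t)))   # a curve with inflections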

:aim 833
:author T. Poggio, H. Voorhees, and A. Yuille
:asort Poggio, T.; Voorhees, H.; Yuille, A.L.
:title A Regularized Solution to Edge Detection
:date April 1985
:cost $3.25
:adnum AD-A159349
:pages 22
:abstract
We consider edge detection as the problem of measuring and localizing
changes of light intensity in the image. As discussed by Torre and
Poggio (1984), edge detection, when defined in this way, is an
ill-posed problem in the sense of Hadamard.  The regularized solution
that arises is then the solution to a variational principle. In the
case of exact data, one of the standard regularization methods (see
Poggio and Torre, 1984) leads to cubic spline interpolation before
differentiation.  We show that in the case of regularly-spaced data
this solution corresponds to a convolution filter -- to be applied to
the signal before differentiation -- which is a cubic spline. In the
case of non-exact data, we use another regularization method that
leads to a different variational principle. We prove (1) that this
variational principle leads to a convolution filter for the problem of
one-dimensional edge detection, (2) that the form of this filter is
very similar to the Gaussian filter, and (3) that the regularizing
parameter $\lambda$ in the variational principle effectively controls
the scale of the filter.
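
The role of $\lambda$ as a scale parameter can be illustrated numerically; the sketch below assumes a generic quadratic second-derivative stabilizer rather than the memo's exact variational principle:
    # Sketch: a Tikhonov-style smoothing filter 1/(1 + lam*w^4) applied before
    # differentiation; increasing lam widens the filter, i.e. acts as a larger scale.
    import numpy as np
    def smoothed_derivative(signal, lam):
        w = 2 * np.pi * np.fft.fftfreq(len(signal))
        H = 1.0 / (1.0 + lam * w**4)                     # regularizing low-pass filter
        smoothed = np.fft.ifft(np.fft.fft(signal) * H).real
        return np.gradient(smoothed)                     # edges appear as extrema here
    edge = np.r_[np.zeros(128), np.ones(128)] + 0.1 * np.random.randn(256)
    fine, coarse = smoothed_derivative(edge, 1.0), smoothed_derivative(edge, 100.0)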

:aim 835
:author John M. Rubin and W.A. Richards
:asort Rubin, J.M.; Richards, W.A.
:title Boundaries of Visual Motion
:date April 1985
:cost $3.25
:pages 29
:keywords vision, visual motion, motion recognition, event perception,
motion representation, motion perception, motion boundaries.
:abstract A representation of visual motion convenient for recognition should
make prominent the qualitative differences among simple motions. We argue
that the first stage in such a motion representation is to make explicit
boundaries that we define as starts, stops and force discontinuities. When 
one of these boundaries occurs in motion, human observers have the subjective
impression that some fleeting, significant event has occurred. We go farther
and hypothesize that one of the subjective motion boundaries is seen if 
and only if one of our defined boundaries occurs. We enumerate all possible
motion boundaries and provide evidence that they are psychologically real.

:aim 836
:author Robert C. Berwick and Amy S. Weinberg
:asort Berwick, R.; Weinberg, A.
:title Parsing and Linguistic Explanation
:date April 1985
:cost $3.25
:pages 32
:adnum AD-A159233
:keywords natural language processing, cognitive modeling, parsing
:abstract This article summarizes and extends recent results linking
deterministic parsing to observed ``locality principles'' in syntax.
It also argues that grammatical theories based on explicit phrase
structure rules are unlikely to provide comparable explanations of
why natural languages are built the way they are.

:aim 837
:author Eric Sven Ristad
:asort Ristad, E.S.
:title GPSG-Recognition is NP-Hard
:date March 1985
:cost $2.50
:pages 11
:adnum AD-A162716
:keywords GPSG, parsing, complexity, natural language, linguistics,
natural language parsing
:abstract 
Proponents of Generalized Phrase Structure Grammar (GPSG) often cite
its weak context-free generative power as proof of the computational
tractability of GPSG-Recognition.  It is well known that context-free
languages can be parsed by a wide range of algorithms.  Hence, it
might be thought that GPSG's weak context-free generative power should
guarantee that it, too, is efficiently parsable.  This widely-assumed
GPSG ``efficient parsability'' result is false: a reduction from
3-Satisfiability proves that GPSG-Recognition is NP-hard, and likely
to be intractable.

:aim 838 
:author Jean Ponce 
:asort Ponce, J.  
:title Prism Trees: An Efficient Representation for Manipulating and
Displaying Polyhedra With Many Faces 
:date April 1985 
:cost $3.25 
:pages 22
:adnum AD-A162601
:keywords computer graphics, hierarchical structures, set operations between
solids, geometric modelling, ray casting display.  
:abstract Computing surface and/or object intersections is a cornerstone of
many algorithms in geometric modeling and computer graphics, for
example set operations between solids, or ray casting display of
surfaces. We present an object centered, information preserving,
hierarchical representation for polyhedra called the {\it Prism Tree}. We
use the representation to decompose the intersection algorithms into
two steps: the {\it localization} of intersections, and their {\it
processing}. When dealing with polyhedra with many faces (typically
more than one thousand), the first step is by far the most expensive.
The {\it Prism Tree} structure is used to compute this localization
step efficiently. A preliminary implementation of the set operations
and ray casting algorithms has been constructed.

:aim 839
:author J.L. Marroquin
:asort Marroquin, J.L.
:title Optimal Bayesian Estimators For Image Segmentation and
Surface Reconstruction
:date April 1985
:cost $2.50
:pages 17
:adnum AD-A159692
:keywords Bayesian estimation, Markov random fields, image segmentation,
surface reconstruction, image restoration
:abstract  A very fruitful approach to the solution of image segmentation
and surface reconstruction tasks is their formulation as estimation problems
via the use of Markov random field models and Bayes theory. However, the
Maximum A Posteriori (MAP) estimate, which is the one most frequently
used, is suboptimal in these cases. We show that for segmentation problems
the optimal Bayesian estimator is the maximizer of the posterior marginals,
while for reconstruction tasks, the thresholded posterior mean has the
best possible performance. We present efficient distributed algorithms for
approximating these estimates in the general case. Based on these results,
we develop a maximum likelihood procedure that leads to a parameter-free
distributed algorithm for restoring piecewise constant images. To illustrate
these ideas, the reconstruction of binary patterns is discussed in detail.
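
To make the two estimators concrete, here is a hedged sketch of how they could be computed from posterior samples (produced, say, by a Gibbs sampler, which is not shown); the memo's own distributed algorithms are not reproduced here:
    # Sketch: the two estimators named above, computed from a stack of samples
    # drawn from the posterior distribution of the field.
    import numpy as np
    def mpm_estimate(label_samples, num_labels):
        # maximizer of posterior marginals: per-pixel most frequent label
        counts = np.stack([(label_samples == k).sum(axis=0) for k in range(num_labels)])
        return counts.argmax(axis=0)
    def tpm_estimate(field_samples, levels):
        # thresholded posterior mean: average the samples, then snap each pixel
        # to the nearest allowed grey level (piecewise-constant reconstruction)
        mean = field_samples.mean(axis=0)
        levels = np.asarray(levels, dtype=float)
        return levels[np.abs(mean[..., None] - levels).argmin(axis=-1)]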

:aim 840
:title Inferring 3D Shapes from 2D Codons
:author Whitman Richards, Jan J. Koenderink, D.D. Hoffman
:asort Richards, W.A.; Koenderink, J.J.; Hoffman, D.D.
:date April 1985
:pages 19
:cost $2.50
:keywords vision, recognition, visual representation, object perception,
figure-ground, 3-D shape
:abstract
All plane curves can be described at an abstract level by a sequence
of five primitive elemental shapes, called ``codons'', which capture the
sequential relations between the singular points of curvature. The
codon description provides a basis for enumerating all smooth 2D
curves. Let each of these smooth plane curves be considered as the
silhouette of an opaque object. Clearly an infinity of 3D objects can
generate any one of our ``codon'' silhouettes. How then can we predict
which 3D object corresponds to a given 2D silhouette? To restrict the
infinity of choices, we impose three mathematical properties of
smooth surfaces plus one simple viewing constraint.  The constraint is
an extension of the notion of general position, and seems to drive our
preferred inferences of 3D shapes, given only the 2D contour.

:aim 841
:author W. Eric L. Grimson and Tom\'as Lozano-P\'erez
:asort Grimson, W.E.L.; Lozano-P\'erez, T.
:title Recognition and Localization of Overlapping Parts From Sparse Data
:date June 1985
:cost $3.75
:pages 41
:reference Also published as ``Localizing Overlapping Parts by
Searching the Interpretation Tree,'' {\it IEEE Transactions on Pattern
Analysis and Machine Intelligence,} vol. PAMI-9, no. 4, July 1987.
:adnum AD-A158394
:keywords object recognition, sensor interpretations
:abstract
This paper discusses how sparse local measurements of positions and
surface normals may be used to identify and locate overlapping
objects.  The objects are modeled as polyhedra (or polygons) having up
to six degrees of positional freedom relative to the sensors. The
approach operates by examining all hypotheses about pairings between
sensed data and object surfaces and efficiently discarding
inconsistent ones by using local constraints on: distances between
faces, angles between face normals, and angles (relative to the
surface normals) of vectors between sensed points. The method
described here is an extension of a method for recognition and
localization of non-overlapping parts previously described in [Grimson
and Lozano-P\'erez 84] and [Gaston and Lozano-P\'erez 84].
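
A much-reduced sketch of the interpretation-tree idea for 2-D data follows; it uses only a pairwise angle constraint plus a null face for spurious or overlapping data, and omits the paper's constraints on distances and directions (all names are hypothetical):
    # Sketch: depth-first interpretation-tree search assigning each sensed normal
    # to a model face or to a null face (spurious/overlapping data), pruning
    # branches whose pairwise normal angles are inconsistent with the model.
    import math
    def interpretations(data_normals, face_normals, tol=math.radians(10)):
        NULL = None
        def angle(a, b):
            return abs((a - b + math.pi) % (2 * math.pi) - math.pi)
        def extend(assignment):
            i = len(assignment)
            if i == len(data_normals):
                yield list(assignment)
                return
            for face in list(range(len(face_normals))) + [NULL]:
                if face is not NULL and not all(
                        f is NULL or
                        abs(angle(data_normals[i], data_normals[j]) -
                            angle(face_normals[face], face_normals[f])) <= tol
                        for j, f in enumerate(assignment)):
                    continue
                assignment.append(face)
                yield from extend(assignment)
                assignment.pop()
        return list(extend([]))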

:aim 842
:author Tom\'as Lozano-P\'erez and Rodney A. Brooks 
:asort Lozano-P\'erez, T.; Brooks, R.A.
:title An Approach To Automatic Robot Programming
:date April 1985
:cost $3.25
:pages 35 
:adnum AD-A161120
:keywords robotics, task planning, robot programming
:abstract
In this paper we propose an architecture for a new task level system,
which we call TWAIN. Task-level programming attempts to simplify the
robot programming process by requiring that the user specify only
goals for the physical relationships among objects, rather than the
motions needed to achieve those goals. A task-level specification is
meant to be completely robot independent; no positions or paths that
depend on the robot geometry or kinematics are specified by the user.
We have two goals for this paper. The first is to present a more
unified treatment of some individual pieces of research in task
planning, whose relationship has not previously been described. The
second is to provide a new framework for further research in
task-planning. This is a slightly modified version of a paper that
appeared in {\it Proceedings of Solid Modeling by Computers: From
Theory to Applications}, Research Laboratories Symposium Series,
sponsored by General Motors, Warren, MI, September, 1983.

:aim 845
:author Norberto M. Grzywacz and Ellen C. Hildreth
:asort Grzywacz, N.M.; Hildreth, E.
:title The Incremental Rigidity Scheme for Recovering Structure from Motion: 
Position vs. Velocity Based Formulations
:date October 1985
:pages 53
:cost $3.75
:reference Also in {\it Journal of Optical Society of America A}, vol.
4, 1987, pages 503-518.
:adnum AD-A196214
:keywords motion analysis, structure from motion, image analysis, 3-D analysis,
velocity field, rigidity assumption.
:abstract Perceptual studies suggest that the visual system uses the
``rigidity" assumption to recover three-dimensional structure from
motion. Ullman (1984) recently proposed a computational scheme, the
{\it incremental rigidity scheme}, which uses the rigidity assumption
to recover the structure of rigid and non-rigid objects in motion.
The scheme assumes the input to be discrete positions of elements in
motion, under orthographic projection. We present formulations of
Ullman's method that use velocity information and perspective
projection in the recovery of structure. Theoretical and computer
analysis show that the velocity based formulations provide a rough
estimate of structure quickly, but are not robust over an extended
time period. The stable long term recovery of structure requires
disparate views of moving objects. Our analysis raises interesting
questions regarding the recovery of structure from motion in the human
visual system.

:aim 846
:author Ellen C. Hildreth and John M. Hollerbach
:asort Hildreth, E.; Hollerbach, J.M.
:title The Computational Approach to Vision and Motor Control
:date August 1985
:pages 84
:cost $4.00
:reference C.B.I.P. Memo 014; also published in {\it Handbook of
Physiology, Section 1: The Nervous System, Vol. V Higher Functions of
the Brain, Part II}, F. Plum, ed., American Physiological Society,
Bethesda, Maryland, 1987.
:keywords vision, robotics, motor control, natural computation,
computational approach, artificial intelligence
:abstract
Over the past decade, it has become increasingly clear that to
understand the brain, we must study not only its biochemical and
biophysical mechanisms and its outward perceptual and physical
behavior. We must also study the brain at a theoretical level that
investigates the {\it computations} that are necessary to perform its
functions. The control of movements such as reaching, grasping and
manipulating objects requires complex mechanisms that elaborate
information from many sensors and control the forces generated by a
large number of muscles.  The act of seeing, which intuitively seems
so simple and effortless, requires information processing whose
complexity we are just beginning to grasp.  A {\it computational
approach} to the study of vision and motor control has evolved within
the field of Artificial Intelligence, which inquires directly into the
nature of the information processing that is required to perform
complex visual and motor tasks.  This paper discusses a particular
view of the computational approach and its relevance to experimental
neuroscience. 

:aim 848a
:title ${\rm \bf Revised}^3$ Report on the Algorithmic Language Scheme
:author Jonathan Rees and William Clinger (editors), H. Abelson, N.I.
Adams IV, D. Bartley, G. Brooks, R.K. Dybvig, D.P. Friedman, R.
Halstead, C. Hanson, C.T. Haynes, E. Kohlbecker, D.  Oxley, K.M.
Pitman, G.J. Rozas, G.J. Sussman, M. Wand
:asort Abelson, H.; Adams, N.; Bartley, D.; Brooks, G.; Clinger, W.D.;
Dybvig, R.K.; Friedman, D.; Halstead, R.; Hanson, C.; Haynes, C.;
Kohlbecker, E.; Oxley, D.; Pitman, K.; Rees, J.; Rozas, B.; Sussman,
G.J.; Wand, M.
:date September 1986
:pages 66
:cost $6.00
:reference Indiana University Computer Science Dept. Technical Report 174,
October, 1986; {\it SIGPLAN Notices} 21 (12) December 1986.
:keywords SCHEME, LISP, functional programming, computer languages
:abstract
\halign{\indent\indent#\hfil\cr
Data and procedures and the values they amass,\cr
Higher order functions to combine and mix and match,\cr
Objects with their local state, the messages they pass,\cr
A property, a package, the control point for a catch-\cr
In the Lambda Order they are all first-class.\cr
One Thing to name them all, One Thing to define them,\cr
One Thing to place them in environments and bind them,\cr
In the Lambda Order they are all first-class.\cr}

:aim 849
:author John M. Hollerbach and Christopher G. Atkeson
:asort Hollerbach, J.M.; Atkeson, C.G.
:title Characterization of Joint-Interpolated Arm Movements
:date June 1985
:cost $2.50
:pages 19
:reference {{\it Generation and Modulation of Action Patterns}, H.
Heuer and C. Fromm, editors, Springer-Verlag, New York, NY, 1986.}
:keywords arm control, kinematics, trajectory planning
:abstract Two possible sets of planning variables for human arm movement are
joint angles and hand position. Although one might expect these possibilities
to be mutually exclusive, recently an apparently contradictory set of data
has appeared that indicates straight-line trajectories in both hand space
and joint space at the same time. To assist in distinguishing between these
viewpoints applied to the same data, we have theoretically characterized the
set of trajectories derivable from a joint based planning strategy and have
compared them to experimental measurements. We conclude that the apparent
straight lines in joint space happen to be artifacts of movement kinematics
near the workspace boundary.
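
A quick numerical illustration of what joint interpolation means for a standard two-link planar arm (a textbook kinematic model, not the paper's experimental apparatus; the joint angles in the example are arbitrary):
    # Sketch: interpolate linearly in joint space and compute the resulting hand
    # path, which is straight in joint coordinates but generally curved in space.
    import numpy as np
    def hand_path(q_start, q_end, l1=0.3, l2=0.3, steps=50):
        t = np.linspace(0.0, 1.0, steps)[:, None]
        q = (1 - t) * np.asarray(q_start) + t * np.asarray(q_end)  # joint interpolation
        x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])  # forward kinematics
        y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
        return np.column_stack([x, y])
    path = hand_path([0.2, 1.0], [1.2, 2.0])   # hypothetical start/end joint angles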

:aim 854
:title A Closed Form Solution for Inverse Kinematics of Robot
Manipulator with Redundancy
:author Pyung H. Chang
:asort Chang, P.
:date March 1986
:pages 25
:cost $3.25
:adnum AD-A167850
:keywords inverse kinematics, manipulator with redundancy, singularity
avoidance 
:abstract
A closed form equation for the inverse kinematics of a manipulator with
redundancy is derived, using the Lagrangian multiplier method. The
proposed equation is proved to provide the exact equilibrium state for
the resolved motion method, and is shown to be a general expression
that yields the extended Jacobian method. The repeatability problem in
the resolved motion method does not exist in the proposed equation.
The equation is demonstrated to give more accurate trajectories than
the resolved motion method.
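
For comparison, the resolved motion (pseudoinverse) method that the memo analyzes can be sketched as below; the memo's own closed form Lagrangian-multiplier equation is not reproduced here, and the function name is hypothetical:
    # Sketch: one step of resolved-motion rate control for a redundant arm; the
    # null-space term moves the joints without moving the end effector.
    import numpy as np
    def resolved_motion_step(J, x_dot, z=None):
        # J: (m, n) Jacobian with n > m;  x_dot: (m,) desired end-effector velocity
        J_pinv = np.linalg.pinv(J)
        q_dot = J_pinv @ x_dot
        if z is not None:                                # optional secondary objective
            q_dot += (np.eye(J.shape[1]) - J_pinv @ J) @ z
        return q_dot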

:aim 855
:title Sensing Strategies for Disambiguating Among Multiple Objects in
Known Poses
:author W. Eric L. Grimson
:asort Grimson, W.E.L.
:date August 1985
:pages 36
:cost $3.25
:adnum AD-A165912
:keywords sensing strategies, object recognition, computer vision,
tactile sensing
:abstract
The need for intelligent interaction of a robot with its environment
frequently requires sensing of the environment. Further, the need for
rapid execution requires that the interaction between sensing and
action take place using as little sensory data as possible, while
still being reliable. Previous work has developed a technique for
rapidly determining the feasible poses of an object from sparse,
noisy, occluded sensory data. In this paper, we examine techniques
for acquiring position and surface orientation data about points on
the surfaces of objects, with the intent of selecting sensory points
that will force a unique interpretation of the pose of the object with
as few data points as possible. Under some simple assumptions about
the sensing geometry, we derive a technique for predicting optimal
sensing positions. The technique has been implemented and tested. To
fully specify the algorithm, we need estimates of the error in
estimating the position and orientation of the object, and we derive
analytic expressions for such error for the case of one particular
approach to object recognition.

:aim 856
:title The Computational Complexity of Two-Level Morphology
:author G. Edward {Barton, Jr.}
:asort Barton, G.E., Jr.
:date November 1985
:pages 37
:cost $3.25
:adnum AD-A165991
:keywords finite-state machines, morphological analysis, natural language, two-level morphology, KIMMO system, computational complexity, NP-completeness
:abstract
Morphological analysis requires knowledge of the stems, affixes,
combinatory patterns, and spelling-change processes of a language. The
computational difficulty of the task can be clarified by investigating
the computational characteristics of specific models of morphological
processing. The use of finite-state machinery in the ``two-level" model
by Kimmo Koskenniemi gives it the appearance of computational
efficiency, but closer examination shows the model does not guarantee
efficient processing. Reductions of the satisfiability problem show
that finding the proper lexical-surface correspondence in a two-level
generation or recognition problem can be computationally difficult.
However, another source of complexity in the existing algorithms can
be sharply reduced by changing the implementation of the dictionary
component. A merged dictionary with bit-vectors reduces the number of
choices among alternative dictionary subdivisions by allowing several
subdivisions to be searched at once.
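
The merged-dictionary point can be illustrated with a toy bit-vector trie (hypothetical names; this is not the KIMMO implementation described in the memo):
    # Sketch: a single trie shared by several dictionary subdivisions; each node
    # carries a bitmask of the subdivisions reachable through it, so one traversal
    # searches all subdivisions at once.
    class MergedTrie:
        def __init__(self):
            self.children, self.mask, self.final = {}, 0, 0
        def insert(self, word, subdivision):
            node, bit = self, 1 << subdivision
            node.mask |= bit
            for ch in word:
                node = node.children.setdefault(ch, MergedTrie())
                node.mask |= bit
            node.final |= bit                  # word ends here in this subdivision
        def lookup(self, prefix):
            node = self
            for ch in prefix:
                node = node.children.get(ch)
                if node is None:
                    return 0
            return node.mask                   # bitmask of still-viable subdivisions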

:aim 857
:author Ryszard S. Michalski and Patrick H. Winston
:asort Michalski, R.S.; Winston, P.H.
:title Variable Precision Logic
:date August 1985
:pages 26
:cost $3.25
:keywords reasoning, inference, variable precision logic
:abstract Variable precision logic is concerned with problems of
reasoning with incomplete information and under time constraints. It
offers mechanisms for handling trade-offs between the precision of
inferences and the computational efficiency of deriving them. Of the
two aspects of precision, the {\it specificity} of conclusions and the
{\it certainty} of belief in them, we address here primarily the
latter, and employ {\it censored production rules} as an underlying
representational and computational mechanism. Such rules are created
by augmenting ordinary production rules with an {\it exception}
condition, and are written in the form {\it if A then B unless C},
where {\it C} is the exception condition.
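
A minimal reading of a censored rule {\it if A then B unless C} in code; the certainty labels and names below are illustrative assumptions, not the authors' formalism:
    # Sketch: evaluate a censored production rule against what is currently known.
    # If the premise holds and the exception is known false, the conclusion is firm;
    # if the exception has not yet been checked, the conclusion is only tentative.
    def apply_censored_rule(known, premise, conclusion, exception):
        # known maps proposition names to True/False; a missing key means "unknown"
        if not known.get(premise, False):
            return None
        if known.get(exception) is True:
            return None                        # exception holds: the rule is blocked
        certainty = "certain" if known.get(exception) is False else "plausible"
        return conclusion, certainty
    print(apply_censored_rule({"bird": True}, "bird", "flies", "penguin"))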

:aim 858
:title Edge Detection
:author Ellen C. Hildreth
:asort Hildreth, E.
:date September 1985
:pages 22
:cost $3.25
:adnum AD-A162662
:reference {{\it Encyclopedia of Artificial Intelligence}, S. Shapiro,
editor, John Wiley \& Sons, New York, NY, 1987.}
:keywords edge detection, computer vision, image processing, image filtering,
intensity changes, Gaussian filtering, multi-resolution image analysis,
zero crossings
:abstract
For both biological systems and machines, vision begins with a large
and unwieldy array of measurements of the amount of light reflected
from surfaces in the environment. The goal of vision is to recover
physical properties of objects in the scene, such as the location of
object boundaries and the structure, color and texture of object
surfaces, from the two-dimensional image that is projected onto the
eye or camera. This goal is not achieved in a single step; vision
proceeds in stages, with each stage producing increasingly more useful
descriptions of the image and then the scene.  The first clues about
the physical properties of the scene are provided by the {\it changes
of intensity} in the image. The importance of intensity changes and
edges in early visual processing has led to extensive research on
their detection, description and use, both in computer and biological
vision systems. This article reviews some of the theory that underlies
the detection of edges, and the methods used to carry out this
analysis.

:aim 861
:author Van-Duc Nguyen
:asort Nguyen, V.
:title The Synthesis of Force-Closure Grasps
:date September 1985
:pages 48
:cost $3.75
:adnum AD-A162635
:keywords planar grasps, force-closure, grasp planning, grasp synthesis
:abstract 
This paper addresses the problem of synthesizing planar grasps that
have force closure. A grasp on an object is a force closure grasp if
and only if we can exert, through the set of contacts, arbitrary force
and moment on this object. Equivalently, any motion of the object is
resisted by a contact force; that is, the object cannot break contact
with the finger tips without some non-zero external work. The force
closure constraint is addressed from three different points of view:
mathematics, physics, and computational geometry. The last formulation
results in fast and simple polynomial time algorithms for directly
constructing force closure grasps. We can also find grasps where each
finger has an independent region of contact on the set of edges.
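
One standard way to test the force-closure condition for frictionless point contacts is to check whether the origin lies strictly inside the convex hull of the contact wrenches; the sketch below is offered only as an illustration and does not show the memo's constructive polynomial-time algorithms:
    # Sketch: a planar grasp with frictionless point contacts is force closure iff
    # the origin lies strictly inside the convex hull of the contact wrenches
    # (n_x, n_y, p x n); at least four contacts are needed in the plane.
    import numpy as np
    from scipy.spatial import ConvexHull
    def is_force_closure(points, normals, eps=1e-9):
        points, normals = np.asarray(points, float), np.asarray(normals, float)
        torques = points[:, 0] * normals[:, 1] - points[:, 1] * normals[:, 0]
        wrenches = np.column_stack([normals, torques])
        try:
            hull = ConvexHull(wrenches)
        except Exception:                      # degenerate wrench set: no closure
            return False
        return bool(np.all(hull.equations[:, -1] < -eps))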

:aim 862
:author Van-Duc Nguyen
:asort Nguyen, V.
:title The Synthesis of Stable Grasps in the Plane
:date October 1985
:pages 24
:cost $3.25
:adnum AD-A165903
:keywords grasp planning, stable grasp, grasp synthesis, active compliance
:abstract This paper addresses the problem of synthesizing stable
grasps on arbitrary planar polygons. Each finger is a virtual spring
whose stiffness and compression can be programmed. The contacts
between the finger tips and the object are point contacts without
friction. We prove that all force-closure grasps can be made stable,
and it costs {\it O(n)} time to synthesize a set of {\it n} virtual
springs such that a given force-closure grasp is stable. We can choose
the compliance center and the stiffness matrix of the grasp, and so
choose the compliant behavior of the grasped object about its
equilibrium.  The planning and execution of grasps and assembly
operations become easier and less sensitive to errors.

:aim 863
:author Shahriar Negahdaripour
:asort Negahdaripour, S.
:title Direct Passive Navigation: Analytical Solutions for Planes and Curved Surfaces
:date August 1985
:pages 17
:cost $2.50
:adnum AD-A161046
:keywords passive navigation, optical flow, structure and motion, planar surfaces, least squares
:abstract
In this paper, we derive a closed form solution for recovering the
motion of an observer relative to a planar surface directly from image
brightness derivatives.  We do not compute the optical flow as an
intermediate step, only the spatial and temporal intensity gradients
at a minimum of 9 points. We solve a linear matrix equation for the
elements of a 3X3 matrix whose eigenvalue decomposition is used to
compute the motion parameters and plane orientation.  We also show how
the procedure can be extended to curved surfaces that can be locally
approximated by quadratic patches. In this case, a minimum of 18
independent points are required to uniquely determine the elements of
two 3X3 matrices that are used to solve for the surface structure and
motion parameters.

:aim 864
:author Rodney A. Brooks
:asort Brooks, R.A.
:title A Robust Layered Control System For A Mobile Robot
:date September 1985
:pages 25
:adnum AD-A160833
:cost $3.25
:keywords mobile robot, robot control
:abstract
We describe a new architecture for controlling mobile robots.  Layers
of control system are built to let the robot operate at increasing
levels of competence. Layers are made up of asynchronous modules which
communicate over low bandwidth channels. Each module is an instance of
a fairly simple computational machine. Higher level layers can subsume
the roles of lower levels by suppressing their outputs. However, lower
levels continue to function as higher levels are added. The result is
a robust and flexible robot control system. The system is intended to
control a robot that wanders the office areas of our laboratory,
building maps of its surroundings. In this paper we demonstrate the
system controlling a detailed simulation of the robot.
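
A drastically simplified sketch of the suppression mechanism (illustrative only; the memo's layers are asynchronous modules communicating over low-bandwidth channels, which is not modeled here, and the behaviors named below are hypothetical):
    # Sketch: each layer maps sensor readings to a command or None; a higher layer
    # that produces a command suppresses the outputs of the layers below it.
    def layered_control(layers, sensors):
        for layer in reversed(layers):         # layers ordered low to high competence
            command = layer(sensors)
            if command is not None:
                return command                 # higher layer subsumes the lower ones
        return "stop"                          # no layer applies: default behavior
    cruise = lambda s: "drive forward"
    avoid = lambda s: "turn away" if s.get("obstacle") else None
    print(layered_control([cruise, avoid], {"obstacle": True}))   # -> turn away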

:aim 865
:author Gul Agha and Carl Hewitt
:asort Agha, G.; Hewitt, C.
:title Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism
:date October 1985
:pages 20
:cost $2.50
:adnum AD-A162422
:keywords concurrency, distributed computing, programming languages, object-oriented programming, actors, functional programming, parallel processing,
open systems
:abstract
We argue that the ability to model shared objects with changing local
states, dynamic reconfigurability, and inherent parallelism are
desirable properties of any model of concurrency.  The {\it actor
model} addresses these issues in a uniform framework. This paper
briefly describes the concurrent programming language {\it Act3} and
the principles that have guided its development. {\it Act3} advances
the state of the art in programming languages by combining the
advantages of object-oriented programming with those of functional
programming. We also discuss considerations relevant to large-scale
parallelism in the context of {\it open systems}, and define an
abstract model which establishes the equivalence of systems defined by
actor programs.

:aim 867
:author Daniel P. Huttenlocher
:asort Huttenlocher, D.P.
:title Exploiting Sequential Phonetic Constraints In Recognizing
Spoken Words
:date October 1985
:pages 26
:cost $3.25
:adnum AD-A165913
:keywords natural constraints, partial information, word recognition,
speech recognition
:abstract Machine recognition of spoken language requires developing
more robust recognition algorithms.  A recent study by Shipman and
Zue suggests using partial descriptions of speech sounds to eliminate
all but a handful of word candidates from a large lexicon. The current
paper extends their work by investigating the power of partial
phonetic descriptions for developing recognition algorithms. First, we
demonstrate that sequences of manner of articulation classes are more
reliable and provide more constraint than certain other classes. Alone,
these are of limited utility, due to the high degree of variability in
natural speech. This variability is not uniform, however, as most
modifications and deletions occur in unstressed syllables. Comparing
the relative constraint provided by sounds in stressed versus unstressed
syllables, we discover that the stressed syllables provide substantially more
constraint. This indicates that recognition algorithms can be made
more robust by exploiting the manner of articulation information in
stressed syllables.
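
The lexicon-pruning idea can be illustrated in a few lines; the phoneme-to-class mapping is supplied by the caller, and nothing below comes from the memo's data:
    # Sketch: map every word in a lexicon to its sequence of broad phonetic classes
    # and group words by that signature; small groups mean the partial description
    # already rules out most of the lexicon.
    from collections import defaultdict
    def cohorts(lexicon, phone_to_class):
        # lexicon: {word: [phonemes]};  phone_to_class: {phoneme: class label}
        table = defaultdict(list)
        for word, phones in lexicon.items():
            signature = tuple(phone_to_class[p] for p in phones)
            table[signature].append(word)
        return table    # words sharing a signature remain indistinguishable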

:aim 868
:author Brian C. Williams
:asort Williams, B.C.
:title Circumscribing Circumscription: A Guide to Relevance and
Incompleteness 
:date October 1985
:pages 46
:cost $3.75
:adnum AD-A166241
:keywords circumscription, common sense reasoning, nonmonotonic reasoning,
conjectural reasoning, resource limitations, relevance, completeness.
:abstract
Intelligent agents in the physical world must work from incomplete
information due to partial knowledge and limited resources.  An agent
copes with these limitations by applying rules of conjecture to make
reasonable assumptions about what is known. Circumscription, proposed
by McCarthy, is the formalization of a particularly important rule of
conjecture likened to Occam's razor. That is, the set of all objects
satisfying a certain property is the smallest set of objects that is
consistent with what is known.  This paper examines closely the
properties and the semantics underlying circumscription, considering
both its expressive power and limitations.  In addition we study
circumscription's relationship to several related formalisms, such as
negation by failure, the closed world assumption, default reasoning
and Planner's THNOT. In the discussion a number of extensions to
circumscription are proposed, allowing one to tightly focus its scope
of applicability. In addition, several new rules of conjecture are
proposed based on the notions of relevance and minimality. Finally, a
synthesis between the approaches of McCarthy and Konolige is used to
extend circumscription, as well as several other rules of conjecture,
to account for resource limitations.

:aim 869
:author John Batali
:asort Batali, J.
:title A Vision Chip
:date May 1981
:pages 53
:cost $3.75
:adnum AD-A162536
:keywords VLSI design, vision hardware, early vision
:abstract
Some well understood and well justified algorithms for early visual
processing must be implemented in hardware for later visual processing
to be studied. This paper describes the design and hardware
implementation of a particular operator of visual processing. I
constructed an NMOS VLSI circuit that computes the gradient, and
detects zero-crossings, in a digital video image in real time. The
algorithms employed by the chip, the design process that led to it,
and its capabilities and limitations are discussed.  For hardware to be
a useful tool for AI, designing it must be as much like programming as
possible. This paper concludes with some discussion of how such a goal
can be met.
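
In software terms, the operation the chip implements corresponds roughly to the following sketch; the Gaussian-smoothed Laplacian used for the zero crossings is an assumption, and the chip's actual fixed-point pipeline is described only in the memo:
    # Sketch: finite-difference image gradient plus zero crossings of a smoothed
    # Laplacian; the pairing is a standard early-vision choice, not the chip's
    # documented pipeline.
    import numpy as np
    from scipy import ndimage
    def gradient_and_zero_crossings(image, sigma=2.0):
        img = image.astype(float)
        gy, gx = np.gradient(img)
        lap = ndimage.laplace(ndimage.gaussian_filter(img, sigma))
        zc = np.zeros(lap.shape, dtype=bool)
        zc[:-1, :] |= np.signbit(lap[:-1, :]) != np.signbit(lap[1:, :])
        zc[:, :-1] |= np.signbit(lap[:, :-1]) != np.signbit(lap[:, 1:])
        return gx, gy, zc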

:aim 870
:author Shimon Ullman
:asort Ullman, S.
:title The Optical Flow of Planar Surfaces
:date December 1985
:pages 17
:cost $2.50
:keywords motion, structure from motion, moving planes, optical flow,
planar surfaces
:abstract The human visual system can recover the 3-D shape of moving
objects on the basis of motion information alone. Computational
studies of this capacity have considered primarily non-planar rigid
objects. With respect to moving planar surfaces, previous studies by
Hay (1966), Tsai and Huang (1981), and Longuet-Higgins (1984) have shown
that the planar velocity field has in general a two-fold ambiguity:
there are two different planes engaged in different motions that can
induce the same velocity field. The current analysis extends the
analysis of the planar velocity field in four directions: (1) the use
of flow parameters of the type suggested by Koenderink and van Doorn
(1975), (2) the exclusion of confusable non-planar solutions, (3) a
new proof and a new method for computing the 3-D motion and surface
orientation, and (4) a comparison with the information available in
orthographic velocity fields, which is important for determining the
stability of the 3-D recovery process.

:aim 871
:author John C. Mallery, Roger Hurwitz, Gavan Duffy
:asort Mallery, J.C.; Hurwitz, R.; Duffy, G.
:title Hermeneutics: From Textual Explication to Computer
Understanding?
:date May 1986
:pages 41
:cost $3.75
:reference See also updated version, ``Hermeneutics,'' {\it
Encyclopedia of Artificial Intelligence}, vol. 1, editor, S.C.
Shapiro, New York: John Wiley \& Sons, 1987, pages 362-376.
:abstract
Hermeneutics, a branch of continental European philosophy concerned
with human understanding and the interpretation of written texts,
offers insights that may contribute to the understanding of meaning,
translation, architectures for natural language understanding, and
even to the methods suitable for scientific inquiry in AI.  After
briefly reviewing the historical development of hermeneutics as a
method of interpretation, this article examines the contributions of
hermeneutics to the human sciences. This background provides
perspective for a review of recent hermeneutically-oriented AI
research, including the Alker, Lehnert and Schneider computer-assisted
techniques for coding the affective structure of narratives, the
earlier positive proposal by Winograd and Bateman, the later pessimism
of Winograd and Flores on the possibility of AI, as well as the
system-building efforts of Duffy and Mallery.

:aim 873
:author Fanya Montalvo
:asort Montalvo, F.
:title Diagram Understanding: The Intersection of Computer Vision and
Graphics
:date November 1985
:pages 35
:cost $3.25
:reference Also ``Diagram Understanding: Associating symbolic
descriptions with images,'' Proceedings of the IEEE Computer Society
Workshop on Visual Languages, June 1986, Dallas, TX; and ``Diagram
Understanding: The symbolic descriptions behind the scenes,'' in R.R.
Korfhage, T. Ichikawa, and E. Jungert (eds.), {\it Visual Languages
II}, Plenum Press, New York, 1989 forthcoming.
:keywords vision, graphics, representation, shape, diagram
understanding, knowledge-based graphics, knowledge acquisition
:abstract  A problem common to Computer Vision and Computer Graphics
is identified. It is the problem of representing, acquiring, and
validating symbolic descriptions of visual properties. The
intersection of Computer Vision and Computer Graphics provides a basis
for {\it diagrammatic conversations} between users and systems. I call
this problem domain {\it Diagram Understanding} because of its analogy
with Natural Language Understanding. The recognition and generation of
visual objects from symbolic descriptions are two sides of the same
coin. A paradigm for the discovery and validation of higher-level
visual properties is introduced. The paradigm involves two aspects.
One is the notion of {\it denotation}: the  map between symbolic
descriptions and visual properties. The second aspect involves a method
for discovering a natural and rich set of visual primitives. The
notion of visual property is expanded, and the paradigm is further
illustrated with a traditional business graphics example.

:aim 875
:author Ra\'ul Vald\'es-P\'erez
:asort Vald\'es-P\'erez, R.
:title Spatio-Temporal Reasoning and Linear Inequalities
:date May 1986
:pages 33
:cost $3.25
:adnum AD-A176659
:keywords spatial reasoning, constraint networks, layout, 
temporal reasoning, problem solving
:abstract
Time and space are sufficiently similar to warrant in certain cases a
common representation in AI problem-solving systems.  What is
represented is often the constraints that hold between objects, and a
concern is the overall consistency of a set of such constraints.  This
paper scrutinizes two current approaches to spatio-temporal reasoning.
The suitability of Allen's temporal algebra for constraint networks
is influenced directly by the mathematical properties of the algebra.
These properties are extracted by a formulation as a network of
set-theoretic relations, such that some previous theorems due to
Montanari apply.  Some new theorems concerning consistency of these
temporal constraint networks are also presented.  It is argued that
the linear programming approach to reasoning in time and space is
needlessly complex, and is otherwise unsuitable as a submodule for a
task that performs search.  In the spirit of the thematic tradeoff
between expressivity and tractability, simple linear inequalities are
adequately expressive and provide known algorithms which are amenable
to a search regimen.  Moreover, another known algorithm captures the
real physical constraint of {\it disjointness}, and is therefore
