[comp.doc.techreports] tr-input/mit.pt5

leff@smu.UUCP (Laurence Leff) (09/02/89)

Foundation
:reference CBIP Memo No. 32, also in {\it Foundations of Cognitive
Science}, edited by M. Posner, MIT Press/Bradford Books, 1989.
:keywords computer vision, human vision, binocular stereo, motion
analysis, object recognition
:abstract The computational approach to the study of vision inquires
directly into the sort of information processing needed to extract
important information from the changing visual image -- information
such as the three-dimensional structure and movement of objects in the
scene, or the color and texture of object surfaces.  An important
contribution that computational studies have made is to show how
difficult vision is to perform, and how complex are the processes
needed to perform visual tasks successfully. This article reviews some
computational studies of vision, focusing on edge detection, binocular
stereo, motion analysis, intermediate vision and object recognition.

:aim 1039
:author Gerald Jay Sussman and Jack Wisdom
:asort Sussman, G.J.; Wisdom, J.
:title The Motion of Pluto is Chaotic
:date April 1988
:pages 18
:cost $2.50
:adnum AD-A195920
:reference Also published in {\it Science}, Volume 241, 22 July 1988.
:keywords dynamics, orrery, chaos, numerical methods, Pluto, solar
system 
:abstract The Digital Orrery has been used to perform an integration
of the motion of the outer planets for 845 million years.  This
integration indicates that the long-term motion of the planet Pluto is
chaotic. Nearby trajectories diverge exponentially with an {\it
e}-folding time of only about 20 million years.
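By definition, an {\it e}-folding time $\tau$ means nearby trajectories
separate as $d(t) = d(0)\,e^{t/\tau}$, i.e. $\tau = t/\ln(d(t)/d(0))$.
With $\tau \approx 20$ million years, an initial uncertainty grows by a
factor of about $e^{845/20} \approx 10^{18}$ over the 845-million-year
integration -- a rough consequence of the figures quoted above, not a
number taken from the memo itself.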

:aim 1040
:author Andrew A. Berlin and Henry M. Wu
:asort Berlin, A.; Wu, H.M.
:title Scheme86: A System for Interpreting Scheme
:date April 1988
:pages 23
:cost $3.25
:adnum AD-A195931
:reference Also in {\it Proceedings of the ACM Conference on Lisp and
Functional Programming}, 1988.
:keywords Scheme, Lisp, computer architecture, interpretive techniques
:abstract Scheme86 is a computer system designed to interpret programs
written in the Scheme dialect of Lisp. A specialized architecture,
coupled with new techniques for optimizing register management in the
interpreter, allows Scheme86 to execute {\it interpreted} Scheme at a
speed comparable to that of {\it compiled} Lisp on conventional
workstations. 

:aim 1041
:author Boris Katz and Beth Levin
:asort Katz, B.; Levin, B.
:title Exploiting Lexical Regularities in Designing Natural Language
Systems 
:date April 1988
:pages 23
:cost $3.25
:adnum AD-A195922
:keywords natural language processing, lexicon, verb classes,
question-answering, diathesis alternations
:abstract
This paper presents the lexical component of the START Question
Answering system developed at the MIT Artificial Intelligence
Laboratory. START is able to interpret correctly a wide range of
semantic relationships associated with alternate expressions of the
arguments of verbs. The design of the system takes advantage of the
results of recent linguistic research into the structure of the
lexicon, allowing START to attain a broader range of coverage than
many existing systems.

:aim 1042
:author Jacob Katzenelson
:asort Katzenelson, J.
:title Computational Structure of the N-body Problem
:date April 1988
:pages 52
:cost $3.75
:adnum AD-A196422
:keywords N-body problem, tree algorithms for particle simulation,
parallel computing, particle simulation
:abstract
This work considers the organization and performance of computations
on parallel computers of tree algorithms for the N-body problem where
the number of particles is on the order of a million. The N-body
problem is formulated as a set of recursive equations based on a few
elementary functions, which leads to a computational structure in the
form of a pyramid-like graph, where each vertex is a process and
each arc a communication link. The pyramid is mapped to three
different processor configurations: (1) A pyramid of processors
corresponding to the processes pyramid graph; (2) A hypercube of
processors, e.g., a connection-machine like architecture; (3) A rather
small array, e.g., $2 \times 2 \times 2$, of processors faster than
the ones considered in (1) and (2) above. The main conclusion is that
simulations of this size can be performed on any of the three
architectures in reasonable time.
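For flavor, here is a minimal sketch (mine, not the memo's formulation)
of the upward pass such a pyramid performs: each vertex combines its
children's total mass and center of mass, one of the elementary
recursive functions that can be mapped onto a pyramid of processes.

def combine(cells):
    """Merge (mass, center) summaries of sibling cells into one."""
    m = sum(mass for mass, _ in cells)
    c = sum(mass * center for mass, center in cells) / m
    return (m, c)

def build_pyramid(particles, fanout=4):
    """Bottom-up aggregation: each level summarizes groups of
    `fanout` cells of the level below; coarsest level last."""
    levels = [particles]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([combine(prev[i:i + fanout])
                       for i in range(0, len(prev), fanout)])
    return levels

# (mass, position) pairs for 16 particles on a line
parts = [(1.0, float(i)) for i in range(16)]
for lvl, cells in enumerate(build_pyramid(parts)):
    print("level", lvl, "has", len(cells), "cells")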

:aim 1044
:author W. Eric L. Grimson and David Huttenlocher
:asort Grimson, W.E.L.; Huttenlocher, D.P.
:title On the Sensitivity of the Hough Transform for Object
Recognition
:date May 1988
:pages 40
:cost $3.25
:keywords Hough transform, object recognition
:abstract
A common method for finding an object's pose is the generalized Hough
transform, which accumulates evidence for possible coordinate
transformations in a parameter space and takes large clusters of
similar transformations as evidence of a correct solution.  We analyze
this approach by deriving theoretical bounds on the set of
transformations consistent with each data-model feature pairing, and
by deriving bounds on the likelihood of false peaks in the parameter
space, as a function of noise, occlusion, and tessellation effects. We
argue that blithely applying such methods to complex recognition tasks
is a risky proposition, as the probability of false positives can be
very high.
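The mechanics are easy to exhibit in a toy setting. The sketch below
(translation-only poses, made-up coordinates) casts one vote per
model/data pairing; the correct pose collects one vote per model
feature, while accidental alignments among clutter pile up into the
smaller false peaks the memo warns about.

from collections import Counter

def hough_translation(model, data, bin_size=1.0):
    votes = Counter()
    for mx, my in model:
        for dx, dy in data:
            t = (round((dx - mx) / bin_size), round((dy - my) / bin_size))
            votes[t] += 1               # every pairing votes for one bin
    return votes

model = [(0, 0), (4, 0), (0, 3)]
scene = [(x + 10, y + 7) for x, y in model]     # model at pose (10, 7)
clutter = [(2, 9), (13, 1), (6, 6)]             # spurious features
votes = hough_translation(model, scene + clutter)
print(votes.most_common(3))   # (10, 7) peaks with 3 votes; accidental
                              # alignments form smaller false peaks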

:aim 1046
:author Steven D. Eppinger and Warren P. Seering
:asort Eppinger, S.D.; Seering, W.P.
:title Modeling Robot Flexibility for Endpoint Force Control
:date May 1988
:pages 18
:cost $2.50
:reference Research originally presented at IEEE International
Conference on Robotics and Automation, Philadelphia, PA, April 1988.
:keywords robot force control, robot control, robot dynamics, flexible
structures
:abstract Dynamic models have been developed in an attempt to match
the response of a robot arm. The experimental data show rigid-body and
five resonant modes. The frequency response and pole-zero arrays for
various models of structural flexibility are compared with the data to
evaluate the characteristics of the models, and to provide insight
into the nature of the flexibility in the robot. Certain models are
better able to depict transmission flexibility while others describe
types of structural flexibility.

:aim 1049
:author Alan Bawden and Jonathan Rees
:asort Bawden, A.; Rees, J.
:title Syntactic Closures
:date June 1988
:pages 27
:cost $3.25
:adnum AD-A195921
:reference Also in the {\it Proceedings of the ACM Conference on Lisp
and Functional Programming}, 1988. 
:keywords Lisp, Scheme, macros, lexical scoping, extensible syntax,
referential transparency
:abstract 
In this paper we describe {\it syntactic closures}.  Syntactic
closures address the scoping problems that arise when writing macros.
We discuss some issues raised by introducing syntactic closures into
the macro expansion interface, and we compare syntactic closures with
other approaches.  Included is a complete implementation.
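The capture problem at stake fits in a few lines. Below is a toy
expander for {\it or} (Python; gensym renaming stands in for syntactic
closures themselves, which instead close a code fragment over the
syntactic environment where the macro was written):

import itertools
_counter = itertools.count()

def gensym(base):            # a fresh name no user code can clash with
    return f"{base}%{next(_counter)}"

def expand_or(a, b, fresh=True):
    t = gensym("t") if fresh else "t"
    # (or a b)  =>  (let ((t a)) (if t t b))
    return ["let", [[t, a]], ["if", t, t, b]]

print(expand_or("x", "t", fresh=False))   # the user's `t` is captured
print(expand_or("x", "t"))                # fresh name, no capture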

:aim 1050
:author Philip E. Agre and David Chapman
:asort Agre, P.E.; Chapman, D.
:title What are plans for?
:date September 1988
:pages 18
:cost $2.50
:adnum AD-A200726
:keywords planning, improvisation, action, instruction following
:abstract
What plans are like depends on how they're used.  We contrast two
views of plan use.  On the plan-as-program view, plan use is the
execution of an effective procedure.  On the plan-as-communication
view, plan use is like following natural language instructions.  We
have begun work on computational models of plans-as-communication,
building on our previous work on improvised activity and on ideas from
sociology.

:aim 1059
:author Randall Davis and Walter Hamscher
:asort Davis, R.; Hamscher, W.
:title Model-Based Reasoning: Troubleshooting
:date July 1988
:pages 54
:cost $3.75
:adnum AD-A201614
:keywords model-based reasoning, diagnosis, knowledge-based systems,
troubleshooting
:abstract
This paper surveys the current state of the art in model-based
reasoning, particularly its application to diagnosis and
troubleshooting, reviewing areas that are well understood and
exploring areas that present challenging research topics.  It views
the fundamental paradigm as the interaction of prediction and
observation, exploring it by examining three fundamental subproblems:
hypothesis generation, testing, and discrimination.  We find that
while a wide range of apparently diverse systems have been built, they
are for the most part variations on the central theme outlined here.
Their diversity lies primarily in the varying amounts and kinds of
knowledge they bring to bear at each stage of the process.  Survey of
this familiar territory leads to the conclusion that diagnostic
reasoning from a model is reasonably well understood, but that there
is a rich supply of research issues in the modeling process
itself.  In a sense we know how to do model-based reasoning, but we
don't yet know how to model the behavior of complex devices, how to
create models, or how to select the ``right'' model for the task at
hand.

:aim 1060
:author Shimon Ullman and Ronen Basri
:asort Ullman, S.; Basri, R.
:title The Alignment of Objects with Smooth Surfaces
:date July 1988
:pages 20
:cost $2.50
:keywords
:abstract
This paper examines the recognition of rigid objects bounded by smooth
surfaces using an alignment approach.  The projected image of such an
object changes during rotation in a manner that is difficult to
predict.  A method for approaching this problem is suggested, using the
3-D surface curvature at points along the silhouette.  The required
curvature information is a single number for each point along the
object's silhouette: the magnitude of the curvature vector at that
point.  We have implemented and tested this method on images of
complex 3-D objects; it was found to give accurate predictions of the
objects' appearances for large transformations.  A small number of
models can be used to predict the new appearance of an object from any
viewpoint.

:aim 1061
:author Shimon Ullman and Amnon Sha'ashua
:asort Ullman, S.; Sha'ashua, A.
:title Structural Saliency: The Detection of Globally Salient
Structures Using a Locally Connected Network
:date July 1988
:pages 15
:cost $2.50
:adnum AD-A201619
:keywords segmentation, image partitioning, saliency, visual
perception, perceptual organization
:abstract
Certain salient structures in images attract our immediate attention
without requiring a systematic scan.  We present a method for
computing saliency by a simple iterative scheme, using a uniform
network of locally connected processing elements.  The network uses an
optimization approach to produce a ``saliency map,'' a representation
of the image emphasizing salient locations.  The main properties of
the network are: (i) the computations are simple and local, (ii)
globally salient structures emerge with a small number of iterations,
and (iii) as a by-product of the computations, contours are smoothed
and gaps are filled in.
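In outline the computation looks like the sketch below (the specific
update rule, the constant rho, and the 4-neighbor topology are
illustrative assumptions, not the memo's actual optimization scheme):
each element repeatedly sets its saliency to its own local strength
plus a discounted maximum over its neighbors, so support accumulates
along long, well-connected structures within a few iterations.

import numpy as np

def saliency(strength, rho=0.7, iters=20):
    E = strength.copy()
    for _ in range(iters):
        # discounted max over 4-connected neighbors, computed locally
        padded = np.pad(E, 1)
        nbr = np.maximum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                                 padded[1:-1, :-2], padded[1:-1, 2:]])
        E = strength + rho * nbr
    return E

edge = np.zeros((9, 9)); edge[4, 1:8] = 1.0   # a long contour
noise = np.zeros((9, 9)); noise[1, 1] = 1.0   # an isolated fragment
E = saliency(edge + noise)
print(E[4, 4], ">", E[1, 1])   # the extended contour wins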

:aim 1062
:author Allen C. Ward and Warren Seering
:asort Ward, A.C.; Seering, W.
:title Quantitative Inference in a Mechanical Design Compiler
:date January 1989
:pages 16
:cost $2.50
:adnum AD-A209900
:keywords quantitative inference, constraint propagation, qualitative
reasoning, design
:abstract
This paper presents the ideas underlying a program that
takes as input a schematic of a mechanical or hydraulic power
transmission system, plus specifications and a utility function, and
returns catalog numbers from predefined catalogs for the optimal
selection of components implementing the design.   It thus provides
the designer with a high level ``language'' in which to compose new
designs, then performs some of the detailed design process for
him.   The program is based on a formalization of quantitative
inferences about hierarchically organized sets of artifacts and
operating conditions, which allows design compilation without the
exhaustive enumeration of alternatives.

:aim 1068
:author Hsien-Che Lee
:asort Lee, H.
:title Estimating the Illuminant Color from the Shading of a Smooth
Surface
:date August 1988
:pages 24
:cost $3.25
:adnum AD-A20133
:keywords color vision, specular reflection, shading, illuminant
chromaticity
:abstract
The image of a uniform wall illuminated by a spot light often
gives a strong impression of the illuminant color.
How is it possible to tell whether it is a white wall illuminated
by yellow light or a yellow wall illuminated by white light?
If the wall were a Lambertian reflector, it would not
be possible to tell the difference. However, in the real
world, some amount of specular reflection is almost always
present. In this memo, it is shown that the computation is possible
in most practical cases.
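A common starting point for this kind of computation (an assumption
here -- the memo's own derivation may differ in detail) is the
dichromatic reflection model, under which the observed intensity is
$I(\lambda) = m_b(\theta)\,S(\lambda)\,E(\lambda) +
m_s(\theta)\,E(\lambda)$, where $S(\lambda)$ is the surface
reflectance, $E(\lambda)$ the illuminant spectrum, and $m_b$, $m_s$
geometric shading factors. The specular term carries the illuminant's
own spectral composition, so shading that mixes the two terms in
varying proportions reveals the illuminant chromaticity.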

:aim 1069
:author William Dally, Andrew Chien, Stuart Fiske, Waldemar Horwat,
John Keen, Peter Nuth, Jerry Larivee, and Brian Totty
:asort Dally, W.; Chien, A.; Fiske, S.; Horwat, W.; Keen, J.; Nuth,
P.; Larivee, J.; Totty, B.
:title Message-Driven Processor Architecture, Version 11
:date August 1988
:pages 39
:cost $3.25
:adnum AD-A201618
:keywords processor architecture, VLSI, parallel processing, message
driven processor, cache, fine grain, concurrent smalltalk,
networks
:abstract
The Message Driven Processor is a node of a large-scale multiprocessor
being developed by the Concurrent VLSI Architecture Group.  It is
intended to support fine-grained, message passing, parallel
computation.  It contains several novel architectural features, such
as a low-latency network interface, extensive type-checking hardware,
and on-chip memory that can be used as an associative lookup table.
This document is a programmer's guide to the MDP.  It describes the
processor's register architecture, instruction set, and the data types
supported by the processor.  It also details the MDP's message sending
and exception handling facilities.

:aim 1070
:author Brian K. Totty
:asort Totty, B.
:title An Operating Environment for the Jellybean Machine
:date May 1988
:pages 156
:cost $4.50
:adnum AD-A201042
:keywords operating system, Jellybean Machine, parallel processing,
distributed systems, networks, virtual memory, ensemble machines
:abstract
The Jellybean Machine is a scalable MIMD concurrent processor
consisting of special purpose RISC processors loosely coupled into a
low latency network.  I have developed an operating system to provide
the supportive environment required to efficiently coordinate the
collective power of the distributed processing elements.  The system
services are developed in detail, and may be of interest to other
designers of fine grain, distributed memory processing networks.

:aim 1071
:author Berthold K.P. Horn
:asort Horn, B.K.P.
:title Parallel Networks for Machine Vision
:date December 1988
:pages 69
:cost $4.00
:keywords analog networks, early vision, coupled networks, networks
with feedback, resistive networks, layered networks, relaxation,
cooperative computation, Gaussian convolution, edge detection,
multiple scales, position and orientation, interpolation, motion
vision, direct motion vision
:abstract
The amount of computation required to solve many early vision problems
is prodigious, and so it has long been thought that systems that operate
in a reasonable amount of time will only become feasible when 
parallel systems become available.
Such systems now exist in digital form, but they are large and expensive.  
Simple analog networks can perform interesting computations, 
as has been known for a long time.
We have reached the point where it is feasible to experiment with
implementing these ideas in VLSI form, particularly if we focus on
networks composed of locally interconnected passive elements, linear 
amplifiers, and simple non-linear components.

:aim 1073
:author Daphna Weinshall
:asort Weinshall, D.
:title Seeing ``Ghost'' Solutions in Stereo Vision
:date September 1988
:pages 12
:cost $2.50
:adnum AD-A203581
:reference CBIP Memo No. 33
:keywords stereo vision, periodic stereograms, unique matching
:abstract A unique matching is a stated objective of most
computational theories of stereo vision. This report describes
situations where humans perceive a small number of surfaces carried by
non-unique matching of random dot patterns, although a unique solution
exists and is observed unambiguously in the perception of isolated
features. We find both cases where non-unique matchings compete and
suppress each other and cases where they are all perceived as
transparent surfaces. The circumstances under which each behavior
occurs are discussed and a possible explanation is sketched. It
appears that matching reduces many false targets to a few, but may
still yield multiple solutions in some cases through a (possibly
different) process of surface interpolation.

:aim 1078
:author Davi Geiger and Tomaso Poggio
:asort Geiger, D.; Poggio, T.
:title An Optimal Scale for Edge Detection
:date September 1988
:pages 23
:cost $3.25
:adnum AD-A202747
:keywords edge detection, cross-validation, regularization
:abstract
Many problems in early vision are ill-posed.  Edge detection is a
typical example. This paper applies regularization techniques to the
problem of edge detection. We derive an optimal filter for edge
detection with a size controlled by the regularization parameter
$\lambda $ and compare it to the Gaussian filter. A formula relating the
signal-to-noise ratio to the parameter $\lambda $ is derived from
regularization analysis for the case of small values of $\lambda$. We
also discuss the method of Generalized Cross Validation for obtaining
the optimal filter scale.  Finally, we use our framework to explain two
perceptual phenomena: that coarsely quantized images become
recognizable either by blurring or by adding noise.
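A schematic 1-D analogue (assuming a plain quadratic first-derivative
regularizer, not the memo's exact functional) shows the role of
$\lambda $ as a scale parameter: smooth the noisy signal $g$ by
minimizing $\Vert f - g \Vert^2 + \lambda \Vert Df \Vert^2$, then mark
the extremum of $f'$.

import numpy as np

def regularize(g, lam):
    n = len(g)
    D = np.diff(np.eye(n), axis=0)        # first-difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, g)

rng = np.random.default_rng(0)
g = np.r_[np.zeros(50), np.ones(50)] + 0.2 * rng.standard_normal(100)
f = regularize(g, lam=5.0)                # lambda sets the scale
print("edge near", np.argmax(np.abs(np.diff(f))))   # ~ index 49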

:aim 1084
:author Allen C. Ward and Warren Seering
:asort Ward, A.C.; Seering, W.
:title The Performance of a Mechanical Design ``Compiler''
:date January 1989
:pages 12
:cost $2.50
:adnum AD-A209540
:keywords constraint propagation, design, qualitative reasoning,
quantitative inference
:abstract A mechanical design ``compiler'' has been developed which,
given an appropriate schematic, specifications, and utility function
for a mechanical design, returns catalog numbers for an optimal
implementation. The compiler has been successfully tested on a variety
of mechanical and hydraulic power transmission designs, and a few
temperature sensing designs. Times required have been at worst
proportional to the logarithm of the number of possible combinations
of catalog numbers.

:aim 1091
:author Rodney A. Brooks
:asort Brooks, R.A.
:title A Robot That Walks; Emergent Behaviors from a Carefully Evolved
Network 
:date February 1989
:pages 12
:cost $2.50
:adnum AD-A207958
:keywords subsumption architecture, walking robot, emergent behavior,
distributed control
:abstract
This paper suggests a possible mechanism for robot evolution by
describing a carefully designed series of networks, each one being a
strict augmentation of the previous one, which control a six legged
walking machine capable of walking over rough terrain and following a
person passively sensed in the infrared spectrum.  As the completely
decentralized networks are augmented, the robot's performance and
behavior repertoire demonstrably improve.

:aim 1094
:author Harold Abelson, Michael Eisenberg, Mathew Halfant, Jacob
Katzenelson, Elisha Sacks, Gerald Jay Sussman, Jack Wisdom, Ken Yip
:asort Abelson, H.; Eisenberg, M.; Halfant, M.; Katzenelson, J.;
Sacks, E.; Sussman, G.J.; Wisdom, J.; Yip, K.
:title Intelligence in Scientific Computing
:date November 1988
:pages 35
:cost $3.25
:reference Also published in {\it Communications of the ACM}, vol. 32,
no. 5, May 1989.
:abstract Combining numerical techniques with ideas from symbolic
computation and with methods incorporating knowledge of science and
mathematics leads to a new category of intelligent computational tools
for scientists and engineers. These tools autonomously prepare
simulation experiments from high-level specifications of physical
models. For computationally intensive experiments, they automatically
design special-purpose numerical engines optimized to perform the
necessary computations. They actively monitor numerical and physical
experiments. They interpret experimental data and formulate numerical
results in qualitative terms. They enable their human users to control
computational experiments in terms of high-level behavioral descriptions.

:aim 1096
:author Boris Katz
:asort Katz, B.
:title Using English for Indexing and Retrieving
:date October 1988
:pages 31
:cost $3.25
:adnum AD-A202227
:keywords natural language, information retrieval, question-answering
system, parsing, language generation, lexicon 
:abstract This paper describes a natural language system START. The
system analyzes English text and automatically transforms it into an
appropriate representation, the {\it knowledge base}, which
incorporates the information found in the text. The user gains access
to information stored in the knowledge base by querying it in English.
The system analyzes the query and decides through a matching process
what information in the knowledge base is relevant to the question.
Then it retrieves this information and formulates its response also in
English. 

:aim 1100
:author Charles Rich and Richard C. Waters
:asort Rich, C.; Waters, R.C.
:title Intelligent Assistance for Program Recognition, Design,
Optimization, and Debugging
:date January 1989
:pages 28
:cost $3.25
:adnum N00014-88-K-0487
:keywords automatic programming, debugging, design, optimization,
recognition, Programmer's Apprentice
:abstract
A recognition assistant will help reconstruct the design of a program,
given only its source code.  A design assistant will assist a programmer
by detecting errors and inconsistencies in his design choices and by
automatically making many straightforward implementation decisions.
An optimization assistant will help improve the performance of
programs by identifying intermediate results that can be reused.  A
debugging assistant will aid in the detection, localization, and repair of
errors in designs as well as completed programs.

:aim 1102
:author Richard C. Waters
:asort Waters, R.C.
:title XP: A Common Lisp Pretty Printing System
:date March 1989
:pages 40
:cost $3.25
:keywords LISP, pretty printing, abbreviated output, formatted output
:abstract
XP provides efficient and flexible support for pretty printing in
Common Lisp.  Its single greatest advantage is that it allows the full
benefits of pretty printing to be obtained when printing data
structures, as well as when printing program code.  XP is efficient,
because it is based on a linear time algorithm that uses a small fixed
amount of storage.  XP is flexible, because users can control the
exact form of the output via a set of special format directives.  XP
can operate on arbitrary data structures, because facilities are
provided for specifying pretty printing methods for any type of
object.

:aim 1105
:author Berthold K.P. Horn
:asort Horn, B.K.P.
:title Height and Gradient from Shading
:date May 1989
:pages 63
:cost $4.00
:keywords photoclinometry, smoothness, depth and slope, shape from
shading, variational methods, digital elevation models
:abstract
The method described here for recovering the shape of a surface from a
shaded image can deal with complex, wrinkled surfaces.  Integrability
can be enforced easily because both surface height and gradient are
represented.  The robustness of the method stems in part from
linearization of the reflectance map about the current estimate of the
surface orientation at each picture cell.  The new scheme can find an
exact solution of a given shape-from-shading problem even though a
regularizing term is included.  This is a reflection of the fact that
shape-from-shading problems are {\it not} ill-posed when boundary
conditions are available or when the image contains singular points.
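Concretely, writing $R(p,q)$ for the reflectance map over the gradient
components $p$ and $q$, the linearization referred to above is the
first-order expansion $R(p,q) \approx R(p_0,q_0) + (p - p_0)\,
R_p(p_0,q_0) + (q - q_0)\, R_q(p_0,q_0)$ about the current estimate
$(p_0,q_0)$ at each picture cell (standard notation supplied here; the
memo itself develops the exact scheme).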

:aim 1111
:author W. Eric L. Grimson
:asort Grimson, W.E.L.
:title The Combinatorics of Heuristic Search Termination for Object
Recognition in Cluttered Environments
:date May 1989
:pages 30
:cost $3.25
:adnum AD-A209690
:keywords computer vision, object recognition, search, combinatorics
:abstract 
Many recognition systems use constrained search to locate objects in
cluttered environments.  Earlier analysis showed that the expected
search is quadratic in the number of model and data features, if all
the data comes from one object, but is exponential when spurious data
is included.  To overcome this, many methods terminate search once an
interpretation that is ``good enough'' is found.  We formally examine
the combinatorics of this, showing that correct termination procedures
dramatically reduce search.  We provide conditions on the object model
and the scene clutter such that the expected search is quartic.  These
results are shown to agree with empirical data for cluttered object
recognition.

:aim 1114
:author Davi Geiger and Federico Girosi
:asort Geiger, D.; Girosi, F.
:title Parallel and Deterministic Algorithms for MRFs: Surface
Reconstruction and Integration
:date May 1989
:cost $3.25
:pages 37
:keywords surface reconstruction, Markov random fields, mean field,
integration, parameter estimation, deterministic algorithms
:abstract
In recent years many researchers have investigated the use of Markov
random fields (MRFs) for computer vision. The computational complexity
of their implementation has been a drawback of MRFs. In this paper we
derive deterministic approximations to MRF models. All the
theoretical results are obtained in the framework of the mean field
theory from statistical mechanics. Because we use MRF models, the mean
field equations lead to parallel and iterative algorithms.  One of the
models considered for image reconstruction is shown to yield, in a
natural way, the graduated non-convexity algorithm proposed by Blake
and Zisserman.
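The flavor of such deterministic, locally parallel algorithms can be
sketched with the simplest quadratic (membrane) model, which omits the
line process and hence the GNC machinery of the full models: iterate a
local averaging step that descends the energy $\sum (f-d)^2 + \mu
\sum$ (squared neighbor differences). Every pixel's update uses only
its neighbors, so the scheme maps onto a parallel iterative network.

import numpy as np

def reconstruct(d, mu=2.0, iters=200):
    f = d.copy()
    for _ in range(iters):
        p = np.pad(f, 1, mode="edge")
        nbr = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        f = (d + mu * nbr) / (1.0 + 4.0 * mu)  # local fixed-point step
    return f

rng = np.random.default_rng(1)
depth = np.fromfunction(lambda i, j: i / 8.0, (16, 16))   # a ramp
noisy = depth + 0.3 * rng.standard_normal((16, 16))
print(np.abs(reconstruct(noisy) - depth).mean(),
      "<", np.abs(noisy - depth).mean())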

:aim 1115
:author Andrew Christian
:asort Christian, A.
:title Design Considerations for an Earth-Based Flexible Robotic System
:date March 1989
:cost $2.50
:pages 16
:adnum AD-A209635
:keywords robot, flexible, design, vibration
:abstract This paper provides insights into the problems of designing
a robot with joint and link flexibility.  The deflection of the robot
under gravity is correlated with the fundamental frequency of
vibration.  We consider different types of
link geometry and evaluate the flexibility potential of different
materials.  Some general conclusions and guidelines for constructing a
flexible robot are given.

:aim 1118
:author Michael D. Monegan
:asort Monegan, M.D.
:title An Object-Oriented Software Reuse Tool
:date April 1989
:pages 91
:cost $8.00
:adnum AD-A208018
N00014-88-K-0487 
:keywords software reuse, object-oriented, library, reusability
information, software development, querying, browsing
:abstract
The Object-oriented Reuse Tool (ORT) supports the reuse of
object-oriented software by maintaining a library of reusable classes
and recording information about their reusability as well as
information associated with their design and verification.  In the
early design phases of object-oriented development, ORT facilitates
reuse by providing a flexible way to navigate the library, thereby
aiding in the process of refining a design to maximally reuse existing
classes.  A collection of extensions to ORT has also been
identified.  These extensions would comprise the remainder of a system
useful in increasing reuse in object-oriented software production.

:aim 1119
:author Henry M. Wu
:asort Wu, H.M.
:title A Multiprocessor Architecture Using Modular Arithmetic for Very
High Precision Computation
:date April 1989
:pages 12
:cost $2.50
:adnum AD-A208154
:keywords modular arithmetic, computer architecture, multiprocessor,
residue number system, computer arithmetic, pipelining, chaos
:abstract
We outline a multiprocessor architecture that uses modular arithmetic
to implement numerical computation with 900 bits of intermediate
precision.  A proposed prototype, to be implemented with off-the-shelf
parts, will perform high-precision arithmetic as fast as some
workstations and mini-computers can perform IEEE double-precision
arithmetic.  We discuss how the structure of modular arithmetic
conveniently maps into a simple, pipelined multiprocessor
architecture. We present techniques we developed to overcome a few
classical drawbacks of modular arithmetic.  Our architecture is
suitable to and essential for the study of chaotic dynamical systems.
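The underlying structure is easy to demonstrate at toy scale (the
moduli below are illustrative; the proposed machine's 900-bit range
would use many more): numbers are stored as residues modulo
pairwise-coprime moduli, addition and multiplication act componentwise
with no carries (hence in parallel), and the Chinese Remainder Theorem
reconstructs the conventional value.

from math import prod

MODULI = (13, 17, 19, 23)          # pairwise coprime; range = 96577

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_op(a, b, op):
    return tuple(op(x, y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)    # CRT reconstruction
    return x % M

a, b = 1234, 5678
s = rns_op(to_rns(a), to_rns(b), lambda x, y: x + y)
p = rns_op(to_rns(a), to_rns(b), lambda x, y: x * y)
print(from_rns(s) == a + b, from_rns(p) == (a * b) % prod(MODULI))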

:aim 1120
:author Anita M. Flynn, Rodney A. Brooks, William M. Wells III, David
S. Barrett
:asort Flynn, A.M.; Brooks, R.A.; Wells, W.M.; Barrett, D.S.
:title SQUIRT: The Prototypical Mobile Robot for Autonomous Graduate
Students 
:date July 1989
:pages 31
:cost $3.25
:keywords miniature robot, autonomous robot, subsumption architecture
:abstract 
This paper describes an exercise in building a complete robot aimed at
being as small as possible but using off-the-shelf components
exclusively.  The result is an autonomous mobile robot slightly larger
than one cubic inch which incorporates sensing, actuation, onboard
computation and onboard power supplies.  Nicknamed Squirt, this robot
acts as a ``bug'', hiding in dark corners and venturing out in the
direction of last heard noises, only moving after the noises are long
gone.

:aim 1126
:author Anita M. Flynn, Rodney A. Brooks, Lee S. Tavrow
:asort Flynn, A.M.; Brooks, R.A.; Tavrow, L.S.
:title Twilight Zones and Cornerstones: A Gnat Robot Double Feature
:date July 1989
:pages 44
:cost $3.75
:keywords gnat robot, micro robot, piezoelectric motor, IR/Optical
camera, recursive gnat robot assembly line, disposable robots
:abstract
We want to build tiny gnat-sized robots, a millimeter or two in
diameter. They will be cheap, disposable, totally self-contained
autonomous agents able to do useful things in the world.  This paper
consists of two parts. The first describes why we want to build them.
The second is a technical outline of how to go about it.  Gnat robots
are going to change the world.

:aim 1131
:author Daphna Weinshall
:asort Weinshall, D.
:title Direct Computation of 3D Shape and Motion Invariants
:date May 1989
:pages 26
:cost $3.25
:reference C.B.I.P. Memo No. 38
N00014-85-K-0124 
:keywords motion, optical flow, qualitative vision, stereo, Gaussian
curvature, focus of expansion
:abstract
Structure from motion often refers to the computation of 3D structure
from a matched sequence of images. However, a depth map of a surface
is difficult to compute and may not be a good representation for
storage and recognition.  Given matched images, I will first show that
the sign of the normal curvature in a given direction at a given point
in the image can be computed from a simple difference of slopes of
line-segments in one image.  Using this result, local surface patches
can be classified as convex, concave, parabolic (cylindrical),
hyperbolic (saddle point) or planar.  At the same time the
translational component of the optical flow is obtained, from which
the focus of expansion can be computed.

:aim 1134
:author David McAllester and Bob Givan
:asort McAllester, D.; Givan, B.
:title Taxonomic Syntax for First Order Inference
:date June 1989
:cost $3.25
:pages 36
:keywords taxonomy, theorem proving, knowledge representation, types, 
inference, automated reasoning
:abstract
Most knowledge representation languages are based on classes and
taxonomic relationships between classes.  Taxonomic hierarchies without
defaults or exceptions are semantically equivalent to a collection of
formulas in first order predicate calculus.  Although designers of
knowledge representation languages often express an intuitive feeling
that there must be some advantage to representing facts as taxonomic
relationships rather than first order formulas, there are few, if any,
technical results supporting this intuition.  We attempt to remedy
this situation by presenting a taxonomic syntax for first order
predicate calculus and a series of theorems that support the claim
that taxonomic syntax is superior to classical syntax.

:aim 1136
:author Thomas Marill
:asort Marill, T.
:title Computer Perception of Three-Dimensional Objects
:date August 1989
:pages 40
:cost $3.25
:keywords computer vision, three-dimensional, minimization, line
drawings, perception, psychological vision
:abstract
We first pose the following problem: To develop a program which takes
line-drawings as input and constructs three-dimensional objects as
output, such that the output objects are the same as the ones we see
when we look at the input line-drawing. We then introduce the
principle of minimum standard-deviation of angles (MSDA) and discuss a
program based on MSDA.  We present the results of testing this program
with a variety of line-drawings and show that the program constitutes
a solution to the stated problem over the range of line-drawings
tested. Finally, we relate this work to its historical antecedents in
the psychological and computer-vision literature.
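The MSDA objective itself is simple to state and compute. The sketch
below (a toy Y-junction with made-up coordinates, not the memo's
program) scores a candidate depth assignment by the standard deviation
of the angles between the 3-D edges meeting at each vertex; MSDA
favors the assignment minimizing that score.

import itertools, math

def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return math.acos(dot / (norm(u) * norm(v)))

def msda_score(points2d, depths, edges):
    pts = [(x, y, z) for (x, y), z in zip(points2d, depths)]
    angles = []
    for (a, b), (c, d) in itertools.combinations(edges, 2):
        shared = {a, b} & {c, d}
        if shared:                          # two edges meet at a vertex
            s = shared.pop()
            u = [p - q for p, q in zip(pts[b if a == s else a], pts[s])]
            v = [p - q for p, q in zip(pts[d if c == s else c], pts[s])]
            angles.append(angle(u, v))
    mean = sum(angles) / len(angles)
    return math.sqrt(sum((t - mean) ** 2 for t in angles) / len(angles))

# A Y-junction; interpreting the central vertex as nearer the viewer
# makes the three 3-D angles more nearly equal, so its score is lower.
pts = [(0.0, 0.0), (1.0, 0.1), (-0.9, 0.5), (0.1, -1.0)]
edges = [(0, 1), (0, 2), (0, 3)]
print(msda_score(pts, [0.0, 0.0, 0.0, 0.0], edges))   # flat: high score
print(msda_score(pts, [1.0, 0.0, 0.0, 0.0], edges))   # corner: lower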

:aim 1138
:author Shimon Edelman, Heinrich B\"ulthoff, Daphna Weinshall
:asort Edelman, S.; B\"ulthoff, H.; Weinshall, D.
:title Stimulus Familiarity Determines Recognition Strategy for Novel
3D Objects
:date July  1989
:pages 22
:cost $3.25
Foundation, NSF IRI-8719392
:reference C.B.I.P. Memo No. 40
:keywords psychophysics, vision, recognition, perceptual learning
:abstract
We describe a psychophysical investigation of the effects of object
complexity and familiarity on the variation of recognition time and
recognition accuracy over different views of novel 3D objects. Our
findings indicate that with practice the response times for different
views become more uniform and the initially orderly dependency of
the response time on the distance to a ``good'' view disappears. One
possible interpretation of our results is in terms of a tradeoff between
memory needed for storing specific-view representations of objects and
time spent in recognizing the objects.

:aim 1140
:author Tomaso Poggio and Federico Girosi
:asort Poggio, T.; Girosi, F.
:title A Theory of Networks for Approximation and Learning
:date July 1989
:pages 87
:cost $4.00
:reference C.B.I.P. Memo No. 31
:keywords learning, approximation theory, regularization, networks,
recognition, splines
:abstract
Learning an input-output mapping from a set of examples, of the type
that many neural networks have been constructed to perform, can be
regarded as synthesizing an approximation of a multi-dimensional
function.  We develop a theoretical framework for approximation based on
regularization techniques that leads to a class of three-layer networks
that we call Generalized Radial Basis Functions (GRBF).  GRBF networks
are not only equivalent to generalized splines, but are also closely
related to several pattern recognition methods and neural network
algorithms.  The paper introduces several extensions and applications of
the technique and discusses intriguing analogies with neurobiological
data.
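The networks in question compute expansions of the form $f(x) = \sum_i
c_i\, G(\Vert x - t_i \Vert)$. The sketch below fits such an expansion
by ordinary least squares with Gaussian units (an illustrative
simplification; in the paper the form, the centers and the smoothness
all come out of the regularization framework).

import numpy as np

def grbf_fit(X, y, centers, width=1.0):
    def units(x):                      # Gaussian radial units G(|x - t|)
        return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                      / (2 * width ** 2))
    c, *_ = np.linalg.lstsq(units(X), y, rcond=None)  # coefficients c_i
    return lambda x: units(x) @ c

X = np.linspace(0, 2 * np.pi, 40)
f = grbf_fit(X, np.sin(X), centers=X[::5])            # 8 centers
print(np.abs(f(X) - np.sin(X)).max())                 # small residual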

:aim 1141
:author Ellen C. Hildreth, Norberto M. Grzywacz, Edward H. Adelson and
Victor K. Inada
:asort Hildreth, E.C.; Grzywacz, N.M.; Adelson, E.H.; Inada, V.K.
:title The Perceptual Buildup of Three-Dimensional Structure from
Motion 
:date August 1989
:pages 36
:cost $3.25
:reference C.B.I.P. Memo No. 20
Foundation
:keywords motion perception, structure from motion, motion analysis,
visual perception, 3-D vision
:abstract
We present psychophysical experiments that measure the accuracy of
perceived 3--D structure derived from relative image motion. The
experiments are motivated by Ullman's incremental rigidity scheme,
which builds up 3--D structure incrementally over an extended time.
Our main conclusions are: first, the human system derives an accurate
model of the relative depths of moving points, even in the presence of
noise; second, the accuracy of 3--D structure improves with time,
eventually reaching a plateau; and third, the 3--D structure currently
perceived depends on previous 3--D models.  Through computer
simulations, we relate the psychophysical observations to the behavior
of Ullman's model.

:aim 1145
:author Andrew Berlin and Daniel Weise
:asort Berlin, A.; Weise, D.
:title Compiling Scientific Code Using Partial Evaluation
:date July 1989
:pages 21
:cost $3.25
:keywords partial evaluation, scientific computation, parallel
architectures, parallelizing compilers
:abstract
Scientists are faced with a dilemma: Either they can write abstract
programs that express their understanding of a problem, but which do
not execute efficiently; or they can write programs that computers can
execute efficiently, but which are difficult to write and difficult to
understand.  We have developed a compiler that uses partial evaluation
and scheduling techniques to provide a solution to this dilemma.
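The dilemma and its resolution fit in a one-screen toy (a throwaway
specializer, not the compiler described here): once part of the input
is fixed, an abstract recursive program collapses into straight-line
residual code.

def power(x, n):                       # the general, abstract program
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Residual code for a fixed exponent: the recursion on n is
    evaluated away at 'compile time'."""
    body = " * ".join(["x"] * n) if n else "1"
    src = f"lambda x: {body}"
    return eval(src), src

cube, src = specialize_power(3)
print(src, "->", cube(5))              # lambda x: x * x * x -> 125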

:tr 474
:author Guy L. {Steele, Jr.}
:asort Steele, G.L., Jr.
:title Rabbit: A Compiler for Scheme
:date May 1978
:cost $9.00
:pages 272
:adnum AD-A061996
:abstract
We have developed a compiler for the lexically-scoped dialect of LISP
known as SCHEME.  The compiler knows relatively little about specific
data manipulation primitives such as arithmetic operators, but
concentrates on general issues of environment and control.  Rather
than having specialized knowledge about a large variety of control and
environment constructs, the compiler handles only a small basis set
which reflects the semantics of lambda-calculus.  All of the
traditional imperative constructs, such as sequencing, assignment,
looping, GOTO, as well as many standard LISP constructs such as AND,
OR and COND, are expressed as macros in terms of the applicative basis
set.  A small number of optimization techniques, coupled with the
treatment of function calls as GOTO statements, serve to produce code
as good as that produced by more traditional compilers.
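The treatment of calls as GOTOs can be mimicked in miniature. In the
sketch below (Python, with an explicit trampoline standing in for what
compiled Scheme code does natively) a loop is written as tail calls
that run in constant stack space, exactly as jumps would.

def trampoline(thunk):
    while callable(thunk):             # each bounce is one "jump"
        thunk = thunk()
    return thunk

def countdown(n, acc=0):               # a loop written as tail calls
    if n == 0:
        return acc                     # a final value, not a thunk
    return lambda: countdown(n - 1, acc + n)

print(trampoline(lambda: countdown(100000)))   # 5000050000, flat stack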

:tr 595 
:author Guy Lewis {Steele, Jr.}
:asort Steele, G.L., Jr.
:title The Definition and Implementation of a Computer
Programming Language Based on Constraints
:date August 1980
:cost $10.00
:pages 372
:reference VLSI Memo \#80-32
:adnum AD-A096556
:abstract
The constraint paradigm is a model of computation in which values are
deduced whenever possible, but deductions must be {\it local}. One may
visualize a constraint ``program'' as a network of devices connected by
wires. Data values may flow along the wires, and computation is
performed by the devices, which use only locally available information
(with a few exceptions) and place newly derived values on other,
locally attached wires. In this way computed values are {\it
propagated}.  Advantages and disadvantages of the constraint paradigm
are discussed, and a number of implementations of its programming
languages are presented.  The goal approached, though not quite
reached, is a complete programming system which will implicitly
support the constraint paradigm to the same extent that LISP, say,
supports automatic storage management.
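Local propagation is easy to sketch (a toy network, far simpler than
the system developed in the report): each device deduces a missing
value on its wires as soon as the others become known. Here $9C = 5(F
- 32)$ is wired from two multiplier devices and an adder.

class Wire:
    def __init__(self):
        self.value, self.devices = None, []
    def set(self, v):
        if self.value is None:         # contradictions ignored in this toy
            self.value = v
            for d in self.devices:
                d.poke()               # wake every attached device

class Product:                         # constraint: a * b == c
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for w in (a, b, c):
            w.devices.append(self)
    def poke(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None: self.c.set(a * b)
        elif a is not None and c is not None: self.b.set(c / a)
        elif b is not None and c is not None: self.a.set(c / b)

class Sum(Product):                    # constraint: a + b == c
    def poke(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None: self.c.set(a + b)
        elif a is not None and c is not None: self.b.set(c - a)
        elif b is not None and c is not None: self.a.set(c - b)

c, f = Wire(), Wire()
u, v, nine, five, off = (Wire() for _ in range(5))
Product(c, nine, u)                    # 9 * C = u
Product(v, five, u)                    # 5 * v = u, where v = F - 32
Sum(v, off, f)                         # v + 32 = F
nine.set(9); five.set(5); off.set(32)
c.set(25)                              # setting Celsius...
print(f.value)                         # ...propagates to Fahrenheit: 77.0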

:tr 604
:author Charles Rich
:asort Rich, C.
:title Inspection Methods In Programming
:date June 1981
:cost $10.00
:pages 287
:adnum AD-A110030
:abstract
The work reported here lies in the area of overlap between artificial
intelligence and software engineering.  As research in artificial
intelligence, it is a step towards a model of problem solving in the
domain of programming.  In particular, this work focuses on the
routine aspects of programming which involve the application of
previous experience with similar programs.  I call this programming by
inspection.  Programming is viewed here as a kind of engineering
activity.  Analysis and synthesis by inspection are a prominent part
of expert problem solving in many other engineering disciplines, such
as electrical and mechanical engineering.  

:tr 633
:author William Douglas Clinger
:asort Clinger, W.D.
:title Foundations of Actor Semantics
:date May 1981
:cost $9.00
:pages 177
:abstract
The actor message-passing model of concurrent computation has inspired
new ideas in the areas of knowledge-based systems, programming
languages and their semantics, and computer systems architecture.
This thesis extends and unifies the work of Carl Hewitt, Irene Greif,
Henry Baker, and Giuseppe Attardi, who developed the mathematical
content of the model.  The ordering laws postulated by Hewitt and
Baker can be proved using a notion of global time.  The most general
ordering laws are equivalent to an axiom of realizability in global
time.  Since nondeterministic concurrency is more fundamental than
deterministic sequential computation, there may be no need to take
fixed points in the underlying domain of a power domain.  Power
domains built from incomplete domains can solve the problem of
providing a fixed point semantics for a class of nondeterministic
programming languages in which a fair merge can be written.  The
locality laws postulated by Hewitt and Baker may be proved for the
semantics of an actor-based language.  Altering the semantics slightly
can falsify the locality laws.  The locality laws thus constrain what
counts as an actor semantics.

:tr 704
:author Daniel Carl Brotsky
:asort Brotsky, D.C.
:title An Algorithm for Parsing Flow Graphs
:date March 1984
:cost $8.00
:pages 152
:adnum AD-A142440
:keywords parsing, graph grammars, directed graphs, graph analysis,
Earley's Algorithm, program analysis, Programmer's Apprentice
:abstract
This report describes research about {\it flow graphs} -- labeled,
directed, acyclic graphs which abstract the representations used in a
variety of Artificial Intelligence applications.  Flow graphs may be
derived from {\it flow grammars} just as strings may be derived from
string grammars; this derivation process forms a useful model for the
stepwise refinement processes used in programming and other
engineering domains.  The central result of this report is a parsing
algorithm for flow graphs. Given a flow grammar and a flow graph, the
algorithm determines whether the grammar generated the graph and, if
so, finds all possible derivations for it.  The author has implemented
the algorithm in LISP. The intent of this report is to make flow-graph
parsing available as an analytic tool for researchers in Artificial
Intelligence.  The report explores the intuitions behind the parsing
algorithm, contains numerous, extensive examples of its behavior, and
provides some guidance for those who wish to customize the algorithm
to their own uses.

:tr 707
:author Walter Hamscher
:asort Hamscher, W.
:title Using Structural and Functional Information in Diagnostic
Design
:date June 1983
:cost $7.00
:pages 68
:adnum AD-A131859
:abstract  
We wish to design a diagnostic for a device from knowledge of its
structure and function.  The diagnostic should achieve both {\it coverage}
of the faults that can occur in the device, and should strive to
achieve {\it specificity} in its diagnosis when it detects a fault.  A
system is described that uses a simple model of hardware structure and
function, representing the device in terms of its internal primitive
functions and connections.  The system designs a diagnostic in three
steps.  First, an extension of path sensitization is used to design a
test for each of the connections in the device.  Next, the resulting
tests are improved by increasing their specificity.  Finally, the tests
are ordered so that each relies on the fewest possible connections.
We describe an implementation for this system and show examples of the
results for some simple devices.