[comp.doc.techreports] tr-input/mit.pt6

leff@smu.UUCP (Laurence Leff) (09/02/89)

:tr 715
:author George Edward {Barton, Jr.}
:asort Barton, G.E., Jr.
:title A Multiple-Context Equality-Based Reasoning System
:date April 1983
:cost $8.00
:pages 145
:adnum AD-A132369
:abstract
Expert Systems are too slow.  This work attacks that problem by
speeding up a useful system component that remembers facts and tracks
down simple consequences. The redesigned component can assimilate new
facts more quickly because it uses a compact, grammar-based internal
representation to deal with whole classes of equivalent expressions at
once.  It can support faster hypothetical reasoning because it
remembers the consequences of several assumption sets at once. The new
design is targeted for situations in which many of the stored facts
are equalities.  The deductive machinery considered here supplements
stored premises with simple new conclusions. The stored premises
include permanently asserted facts and temporarily adopted
assumptions. The new conclusions are derived by substituting equals
for equals and using the properties of the logical connectives AND,
OR, and NOT.  The deductive system provides supporting premises for
its derived conclusions. Reasoning that involves quantifiers is beyond
the scope of its limited and automatic operation.  The expert system
of which the reasoning system is a component is expected to be responsible
for overall control of reasoning.
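
The simplest layer of such a component, maintaining equivalence classes
of terms as equalities are asserted, can be sketched with a standard
union-find structure.  This is only an illustration of the bookkeeping
involved; the grammar-based representation and multiple-context
machinery described above go well beyond it.

    # Minimal equivalence-class bookkeeping via union-find (illustrative
    # only; not the report's grammar-based, multiple-context design).
    class EqualityStore:
        def __init__(self):
            self.parent = {}

        def find(self, term):
            # Path-compressing find: canonical member of term's class.
            self.parent.setdefault(term, term)
            while self.parent[term] != term:
                self.parent[term] = self.parent[self.parent[term]]
                term = self.parent[term]
            return term

        def assert_equal(self, a, b):
            # Store the premise a = b by merging the two classes.
            self.parent[self.find(a)] = self.find(b)

        def equal(self, a, b):
            # A derived conclusion: does a = b follow from the premises?
            return self.find(a) == self.find(b)

    store = EqualityStore()
    store.assert_equal("x", "y")
    store.assert_equal("y", "z")
    assert store.equal("x", "z")  # substituting equals for equals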

:tr 720
:author John Canny
:asort Canny, J.
:title Finding Edges and Lines in Images
:date June 1983
:cost $6.00
:pages 146
:adnum AD-A130824
:abstract
This thesis is an attempt to formulate a set of edge detection
criteria that capture as directly as possible the desirable properties
of an edge operator.  Variational techniques are used to find a
solution over the space of all linear shift invariant operators.  The
detector should have low probability of error, the marked points
should be as close as possible to the centre of the true edge, and
there should be low probability of more than one response to a single
edge. The technique is used to find optimal operators for step edges
and for extended impulse profiles (ridges or valleys in two
dimensions).  The extension of the one dimensional operators to two
dimensions results in a set of operators of varying width, length and
orientation.  The problem of combining these outputs into a single
description is discussed, and a set of heuristics for the integration
are given.
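
For reference, the first two criteria can be written down concretely; in
the later published version of this work (Canny, IEEE Trans. PAMI, 1986)
they take the following form for a filter f of support [-W, W] applied
to an edge G(x) in white noise of amplitude n_0:

$$ \Sigma(f) = \frac{\left|\int_{-W}^{W} G(-x)\,f(x)\,dx\right|}
               {n_0\sqrt{\int_{-W}^{W} f^2(x)\,dx}}, \qquad
   \Lambda(f) = \frac{\left|\int_{-W}^{W} G'(-x)\,f'(x)\,dx\right|}
               {n_0\sqrt{\int_{-W}^{W} f'^2(x)\,dx}} $$

where $\Sigma$ measures the signal-to-noise ratio (low probability of
error) and $\Lambda$ the localization (marked points near the true
edge); the third, single-response criterion constrains the mean spacing
of noise-induced maxima in the filter output.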

:tr 728
:author Daniel G. Theriault
:asort Theriault, D.G.
:title Issues in the Design and Implementation of Act2
:date June 1983
:cost $7.00
:pages 213
:adnum AD-A132326
:abstract 
Act2 is a highly concurrent programming language designed to
exploit the processing power available from parallel computer
architectures.  The language supports advanced concepts in software
engineering, providing high-level constructs suitable for implementing
artificially-intelligent applications.  Act2 is based on the Actor
model of computation, consisting of virtual computational agents which
communicate by message-passing.  Act2 serves as a framework in which
to integrate an actor language, a description and reasoning system,
and a problem-solving and resource management system.  This document
describes issues in Act2's design and the implementation of an
interpreter for the language.  
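
The Actor model underlying Act2 can be suggested, independently of
Act2's own constructs, by a toy runtime in which each agent owns a
mailbox and reacts to one message at a time; everything below is
illustrative, and none of the names are Act2 syntax.

    # A toy actor runtime: asynchronous message passing between agents,
    # each processing its mailbox serially.  Illustrative only.
    import threading, queue, time

    class Actor:
        def __init__(self, handle):
            self.mailbox = queue.Queue()
            self.handle = handle
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)      # asynchronous, buffered send

        def _run(self):
            while True:
                self.handle(self, self.mailbox.get())

    def make_cell(value):
        # An actor holding state that changes only via messages.
        def handle(self, msg):
            nonlocal value
            op, arg = msg
            if op == "put":
                value = arg
            elif op == "get":
                arg.send(("reply", value))   # arg is the customer actor
        return Actor(handle)

    printer = Actor(lambda self, msg: print("printer got:", msg))
    cell = make_cell(0)
    cell.send(("put", 42))
    cell.send(("get", printer))
    time.sleep(0.1)                    # let the daemon threads drain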

:tr 749
:author Reid Gordon Simmons
:asort Simmons, R.G.
:title Representing and Reasoning About Change in Geologic Interpretation
:date December 1983
:cost $8.00
:pages 131
:adnum AD-A149279
:keywords qualitative simulation, quantitative simulation, 
multiple representation, spatial reasoning, temporal reasoning, 
numeric reasoning, geologic interpretation
:abstract
Geologic interpretation is the task of inferring a sequence of events
to explain how a given geologic region could have been formed. This
report describes the design and implementation of one part of a geologic
interpretation problem solver -- a system which uses a simulation
technique called {\it imagining} to check the validity of a candidate
sequence of events.  Imagining uses a combination of qualitative and
quantitative simulations to reason about the changes which occurred to
the geologic region.  The spatial changes which occur are simulated by
constructing a sequence of diagrams.  This quantitative simulation
needs numeric parameters which are determined by using the qualitative
simulation to establish the cumulative changes to an object and by
using a description of the current geologic region to make
quantitative measurements. The diversity of reasoning skills used in
imagining has necessitated the development of multiple
representations, each specialized for a different task.
Representations to facilitate doing temporal, spatial and numeric
reasoning are described in detail.  We have also found it useful to
explicitly represent {\it processes}.  Both the qualitative and
quantitative simulations use a discrete ``layer cake" model of geologic
processes, but each uses a separate representation, specialized
to support the type of simulation.  These multiple representations have
enabled us to develop a powerful, yet modular, system for reasoning
about change.

:tr 753
:author Richard C. Waters
:asort Waters, R.C.
:title KBEmacs: A Step Toward the Programmer's Apprentice
:date May 1985
:pages 236
:cost $9.00
:adnum AD-A157814
:keywords computer aided design, program editing, programming environment,
reusable software components, Programmer's Apprentice
:abstract
The Knowledge-Based Editor in Emacs (KBEmacs) is the current
demonstration system of the Programmer's Apprentice project.
KBEmacs is capable of acting as a semi-expert assistant to a person
who is writing a program -- taking over some parts of the programming
task. Using KBEmacs, it is possible to construct a program by issuing a
series of high level commands.  This series of commands can be as much
as an order of magnitude shorter than the program it describes.
KBEmacs is capable of operating on Ada and LISP programs of realistic
size and complexity. Although KBEmacs is neither fast enough nor
robust enough to be considered a true prototype, both of these
problems could be overcome if the system were reimplemented.

:tr 754
:author Richard H. Lathrop
:asort Lathrop, R.H.
:title Parallelism in Manipulator Dynamics
:date December 1983
:cost $8.00
:pages 109
:adnum AD-A142515
:keywords robots, robotics, industrial robots, cybernetics, 
parallel processing, pipeline processing, large scale integration
:abstract
This paper addresses the problem of efficiently computing the motor
torques required to drive a lower-pair kinematic chain given the
desired trajectory.  It investigates the high degree of parallelism
inherent in the computations, and presents two ``mathematically exact"
formulations especially suited to high-speed, highly parallel
implementations.  The first is a parallel version of the recent linear
Newton-Euler recursive algorithm.  The second is a new parallel
algorithm which shows that it is possible to improve upon the linear
time dependency.  Either formulation is amenable to a systolic
pipelined architecture in which complete sets of joint torques emerge
at successive intervals of four floating-point operations.  We
indicate possible applications to incorporating dynamical
considerations into trajectory planning, e.g. it may be possible to
build an on-line trajectory optimizer.
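
For orientation, the inward (force) half of the Newton-Euler recursion
can be stated in a fixed world frame as follows; this is a standard
textbook form, not the parallel decomposition developed in the paper.
With the outward pass having supplied each link's angular velocity and
acceleration and its center-of-mass acceleration, the joint forces f_i,
moments n_i, and motor torques tau_i follow from

$$ f_i = f_{i+1} + m_i(\ddot p_{c,i} - g), \qquad
   n_i = n_{i+1} + (p_{i+1} - p_{c,i})\times f_{i+1}
       + (p_{c,i} - p_i)\times f_i
       + I_i\dot\omega_i + \omega_i\times I_i\omega_i, \qquad
   \tau_i = n_i \cdot z_i $$

for a revolute joint i with axis z_i, computed inward from the last
link to the base.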

:tr 767
:author Brian C. Williams
:asort Williams, B.C.
:title Qualitative Analysis of MOS Circuits
:date July 1984
:cost $8.00
:pages 90
:adnum AD-A149267
:keywords causal reasoning, VLSI, qualitative physics, design
automation, qualitative circuit simulation, representation of
knowledge, circuit theory, problem solving, expert systems
:abstract
With the push towards sub-micron technology, transistor models have
become increasingly complex.  The number of components in integrated
circuits has forced designers' efforts and skills towards higher
levels of design.  This has created a gap between design expertise and
the performance demands increasingly imposed by the technology. To
alleviate this problem, software tools must be developed that provide
the designer with expert advice on circuit performance and design.
This requires a theory that links the intuitions of an expert circuit
analyst with the corresponding principles of formal theory (i.e.,
algebra, calculus, feedback analysis, network theory, and
electrodynamics), and that makes each underlying assumption explicit.
Temporal Qualitative Analysis is a technique for analyzing the
qualitative large signal behavior of MOS circuits that straddle the
line between the digital and analog domains. Temporal Qualitative
Analysis is based on the following four components: First, a
qualitative representation is composed of a set of open regions
separated by boundaries. These boundaries are chosen at the
appropriate level of detail for the analysis. This concept is used in
modeling time, space, circuit state variables, and device operating
regions. Second, constraints between circuit state variables are
established by circuit theory. At a finer time scale, the designer's
intuition of electrodynamics is used to impose a causal relationship
among these constraints. Third, large signal behavior is modeled by
Transition Analysis, using continuity and theorems of calculus to
determine how quantities pass between regions over time. Finally,
Feedback Analysis uses knowledge about the structure of equations and
the properties of structure classes to resolve ambiguities.

:tr 789
:author Kenneth D. Forbus
:asort Forbus, K.D.
:title Qualitative Process Theory
:date July 1984
:cost $9.00
:pages 179
:adnum AD-A148987
:keywords qualitative reasoning, common sense reasoning, naive
physics, artificial intelligence, problem solving, mathematical
reasoning
:abstract
Qualitative Process theory defines a simple notion of physical processes
that appears useful as a language in which to write dynamical
theories.  This report describes the basic concepts of Qualitative
Process theory and several different kinds of reasoning that can be
performed with them, and discusses the theory's impact on other issues in
common sense reasoning about the physical world, such as causal
reasoning and measurement interpretation. Several extended examples
illustrate the utility of the theory, including figuring out that a
boiler can blow up, that an oscillator with friction will eventually
stop, and how to say that you can pull with a string, but not push
with it. This report also describes GIZMO, an implemented computer
program which uses Qualitative Process theory to make predictions and
interpret simple measurements.

:tr 791
:author Bruce R. Donald
:asort Donald, B.R.
:title Motion Planning with Six Degrees of Freedom
:date May 1984
:cost $9.00
:pages 261
:adnum AD-A150312
:keywords motion planning, robotics, path planning, configuration
space, obstacle avoidance, spatial reasoning, geometric modelling,
piano mover's problem, computational geometry, applied differential
topology, Voronoi diagrams
:abstract
The motion planning problem is of central importance to the fields of
robotics, spatial planning, and automated design. In robotics we are
interested in the automatic synthesis of robot motions, given
high-level specifications of tasks and geometric models of the robot
and obstacles. The ``Mover's" problem is to find a continuous,
collision-free path for a moving object through an environment
containing obstacles. We present an implemented algorithm for the
``classical" formulation of the three-dimensional Mover's problem:
Given an arbitrary rigid polyhedral moving object ``P" with three
translational and three rotational degrees of freedom, find a
continuous, collision-free path taking ``P" from some initial
configuration to a desired goal configuration. This thesis describes
the first known implementation of a complete algorithm (at a given
resolution) for the full six degree of freedom Mover's problem. The
algorithm transforms the six degree of freedom planning problem into a
point navigation problem in a six-dimensional configuration space
(called C-space). The C-space obstacles, which characterize the
physically unachievable configurations, are directly represented by
six-dimensional manifolds whose boundaries are five dimensional C-surfaces.
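
The flavor of this transformation can be seen in the purely
translational case, where the C-space obstacle has a simple closed
form: with the moving object P placed at configuration x and a physical
obstacle O,

$$ CO(P,O) = \{\, x : (P + x) \cap O \neq \emptyset \,\}
           = \{\, o - p : o \in O,\; p \in P \,\} = O \oplus (-P), $$

a Minkowski sum.  Adding the three rotational freedoms turns these sets
into the six-dimensional manifolds, bounded by five-dimensional
C-surfaces, described above.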

:tr 793
:author Daniel Sabey Weld
:asort Weld, D.S.
:title Switching Between Discrete and Continuous Process Models to Predict Genetic Activity
:date May 1984
:cost $7.00
:pages 83
:reference Also "The Use of Aggregation in Qualitative Simulation",
{\it Artificial Intelligence}, vol. 30, no. 1, October 1986.
:keywords QP theory, simulation, aggregation, multiple representation
:abstract
Two kinds of process models have been used in programs that reason
about change: discrete and continuous models. We describe the design
and implementation of a qualitative simulator, PEPTIDE, which uses
both kinds of process models to predict the behavior of molecular
genetic systems. The program uses a discrete process model to simulate
both situations involving abrupt changes in quantities and the actions
of small numbers of molecules. It uses a continuous process model to
predict gradual changes in quantities. A novel technique, called
aggregation, allows the simulator to switch between these models
through the recognition and summary of cycles. The flexibility of
PEPTIDE's aggregator allows the program to detect cycles within cycles
and predict the behavior of complex situations.

:tr 794
:author Eugene C. Ciccarelli IV
:asort Ciccarelli, E.C.
:title Presentation Based User Interfaces
:date August 1984
:cost $9.00
:pages 196
:adnum AD-A150311
:keywords user interfaces, presentation systems, programming tools,
display, editor
:abstract
A prototype {\it presentation system base} is described. It offers
mechanisms, tools, and ready-made parts for building user interfaces.
A general user interface model underlies the base, organized around the
concept of a {\it presentation}: a visible text or graphic form
conveying information. The base and model emphasize domain
independence and style independence.  In order to illustrate the
model's generality and descriptive capabilities, extended model
structures for several existing user interfaces are discussed.  The
base provides an initial presentation data base network, graphics to
continuously display it, and editing functions.  To demonstrate the
base's utility, three interfaces to an operating system were
constructed, embodying different styles: icon, menu, and graphical
annotation.

:tr 802
:author David Chapman
:asort Chapman, D.
:title Planning for Conjunctive Goals
:date November 1985
:pages 67
:cost $5.00
:adnum AD-A165883
:keywords planning, nonlinearity, conjunctive goals, TWEAK, action
representation, frame problem, intractability
:abstract
The problem of achieving conjunctive goals has been central to
domain-independent planning research; the nonlinear constraint-posting
approach has been most successful. Previous planners of this type
have been complicated, heuristic, and ill-defined. I have combined and
distilled the state of the art into a simple, precise, implemented
algorithm (TWEAK) which I have proved correct and complete. I analyze
previous work on domain-independent conjunctive planning; in
retrospect it becomes clear that all conjunctive planners, linear and
nonlinear, work the same way. The efficiency of these planners depends
on the traditional add/delete-list representation for actions, which
drastically limits their usefulness. I present theorems that suggest
that efficient general purpose planning with more expressive action
representations is impossible, and suggest ways to avoid this problem.
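
The add/delete-list representation that these results turn on treats an
action as three sets of propositions; the following minimal rendering
is illustrative and is not TWEAK's implementation.

    # Classical add/delete-list (STRIPS-style) action representation.
    # Names and the example action are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        preconditions: frozenset   # must hold before the action applies
        add_list: frozenset        # propositions the action makes true
        delete_list: frozenset     # propositions the action makes false

    def apply(state, action):
        # Progress a state (a set of propositions) through one action.
        assert action.preconditions <= state, "preconditions unsatisfied"
        return (state - action.delete_list) | action.add_list

    move = Action("move(A,Table,B)",
                  preconditions=frozenset({"clear(A)", "clear(B)",
                                           "on(A,Table)"}),
                  add_list=frozenset({"on(A,B)"}),
                  delete_list=frozenset({"on(A,Table)", "clear(B)"}))

    state = frozenset({"clear(A)", "clear(B)", "on(A,Table)"})
    print(sorted(apply(state, move)))

The expressiveness limits noted above stem from exactly this form:
every effect is an unconditional literal, with no dependence on the
state in which the action is executed.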

:tr 807
:author Andrew Lewis Ressler
:asort Ressler, A.L.
:title A Circuit Grammar for Operational Amplifier Design
:date January 1984
:cost $8.00
:pages 92
:adnum AD-A149566
:keywords artificial intelligence, computer aided design, grammar,
operational amplifier, circuit, design, language
:abstract
In this thesis I show that much of the behavior of a designer engaged
in ordinary electrical circuit design can be modelled by a clearly
defined computational mechanism executing a set of stylized rules.  By
analogy with context-free languages, a class of circuits is generated
by a phrase-structure grammar, each rule of which describes how one
type of abstract object can be expanded into a combination of more
concrete parts.  Analysis must be done at each level of the expansion
to constrain the search to a reasonable set.  The rules of my
circuit grammar provide constraints which allow the approximate
qualitative analysis of partially instantiated circuits.  As part of
this research I have developed a computer program, CIROP, which
implements my theory in the domain of operational amplifier design.

:tr 810
:author Michael Andreas Erdmann
:asort Erdmann, M.A.
:title On Motion Planning with Uncertainty
:date August 1984
:cost $9.00
:pages 261
:adnum AD-A149521
:keywords motion planning, mechanical assembly, parts mating,
robotics, configuration space, friction, compliance, uncertainty
:abstract
Planning in the presence of uncertainty, which arises from errors in
modelling, sensing, and control, constitutes one facet of the general
motion planning problem in robotics.  This thesis investigates
geometrical tools for modelling and overcoming uncertainty. It
describes an algorithm for computing backprojections of desired task
configurations, considers the structure of backprojection regions and
of task goals that ensures goal recognizability, and develops a
representation of friction in configuration space.

:tr 834
:author Peter Merrett Andreae
:asort Andreae, P.M.
:title Justified Generalization: Acquiring Procedures From Examples
:date January 1985
:cost $8.00
:pages 161
:adnum AD-A156408
:keywords machine learning, constraining generalization, justification of
generalization.
:abstract
This thesis describes an implemented system called NODDY for acquiring
procedures from examples presented by a teacher. NODDY is based on two
principles for constraining generalization.  The first principle is to
exploit domain based constraints, which can be used both to reduce the
space of possible generalizations to manageable size, and to generate
negative examples out of positive examples.  The second principle is
to avoid spurious generalizations by requiring justification.  NODDY
also demonstrates methods for three types of constructive
generalization: inferring loops (a kind of group), inferring complex
relations and state variables, and inferring predicates.

:tr 843
:author Peter J. Sterpe
:asort Sterpe, P.J.
:title TEMPEST: A Template Editor for Structured Text
:date June 1985
:pages 42
:cost $7.00
:keywords text editors, structured text, templates, reuse
:abstract
TEMPEST is a full screen editor that incorporates a structural
paradigm in addition to the more traditional textual paradigm provided
by most editors. While the textual paradigm treats the text as a
sequence of characters, the structural paradigm treats it as a
collection of named {\it blocks} which the user can define, group,
and manipulate. Blocks can be defined to correspond to the structural
features of the text, thereby providing more meaningful objects to
operate on than characters or lines. The structural representation of
the text is kept in the background giving TEMPEST the appearance of a
typical text editor. The structural and textual interfaces coexist
equally, however, so one can always operate on the text from either
point of view. TEMPEST's representation scheme provides no semantic
understanding of structure. This approach sacrifices depth, but affords
a broad range of applicability and requires very little computational
overhead. A prototype has been implemented to illustrate the
feasibility and potential areas of application of the central ideas.
It was developed and runs on an IBM Personal Computer.

:tr 852
:title Local Rotational Symmetries
:author Margaret Morrison Fleck
:asort Fleck, M.M.
:date August 1985
:pages 156
:cost $8.00
:adnum AD-A159522
:keywords shape representation, computer vision, artificial intelligence,
smoothed local symmetries, local symmetries, multiple-scale representations,
hierarchical representations, rotational symmetries, round regions
:abstract 
This thesis describes a representation for two-dimensional round
regions, called Local Rotational Symmetries, a companion to Brady's
Smoothed Local Symmetry Representation for elongated shapes.  An
algorithm for computing Local Rotational Symmetry representations at
multiple scales of resolution has been implemented.  Results suggest
that Local Rotational Symmetries provide a more robustly computable
and perceptually accurate description of round regions than previously
proposed representations.  Computation of Smoothed Local Symmetries
and Local Rotational Symmetries has been modified in the course of
development. First, grey scale image smoothing proves to be better
than boundary smoothing for creating representations at multiple
scales of resolution, because it is more robust and it allows
qualitative changes in representation between scales. Secondly, it is
proposed that shape representations at different scales be explicitly
related, so that information can be passed between scales and
computation at each scale can be kept local.  Such a model for
multi-scale computation is desirable both to allow efficient
computation and to accurately model human perceptions.

:tr 853
:author Jonathan Hudson Connell
:asort Connell, J.H.
:title Learning Shape Descriptions: Generating and Generalizing Models
of Visual Objects
:date September 1985
:pages 101
:cost $8.00
:adnum AD-A162562
:keywords learning, concept learning, SLS, shape description, machine vision, high-level vision, smoothed local symmetries
:abstract We present the results of an implemented system for learning
structural prototypes from gray-scale images. We show how to divide an
object into subparts and how to encode the properties of these
subparts and the relations between them. We discuss the importance of
hierarchy and grouping in representing objects and show how a notion
of visual similarity can be embedded in the description language.
Finally we exhibit a learning algorithm that forms class models from
the descriptions produced and uses these models to recognize new
members of the class.

:tr 859
:author Anita M. Flynn
:asort Flynn, A.M.
:title Redundant Sensors for Mobile Robot Navigation
:date September 1985
:pages 70
:cost $7.00
:adnum AD-A161087
:keywords mobile robot, sensors, path planning, navigation, map making
:abstract Redundant sensors are needed on a mobile robot so that the accuracy
with which it perceives its surroundings can be increased. Sonar and infrared
sensors are used here in tandem, each compensating for deficiencies in the
other. The robot combines the data from both sensors to build a
representation which is more accurate than if either sensor were used alone.
Another representation, the curvature primal sketch, is extracted from this
perceived workspace and is used as the input to two path planning programs:
one based on configuration space and one based on a generalized cone
formulation of free space.

:tr 860
:author Jose Luis Marroquin
:asort Marroquin, J.L.
:title Probabilistic Solution of Inverse Problems
:date September 1985
:pages 206
:cost $9.00
:adnum AD-A161130
:keywords inverse problems, computer vision, surface interpolation,
image restoration, Markov random fields, optimal estimation, simulated
annealing
:abstract 
In this thesis we study the general problem of reconstructing a
function, defined on a finite lattice, from a set of incomplete, noisy
and/or ambiguous observations. The goal of this work is to demonstrate
the generality and practical value of a probabilistic (in particular,
Bayesian) approach to this problem, particularly in the context of
Computer Vision. In this approach, the prior knowledge about the
solution is expressed in the form of a Gibbsian probability
distribution on the space of all possible functions, so that the
reconstruction task is formulated as an estimation problem.
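
In outline, the formulation is the familiar Bayesian one: with g the
observations and f the unknown function on the lattice, the prior is a
Gibbs distribution built from local potentials V_C over the cliques C
of a neighborhood graph,

$$ P(f) = \frac{1}{Z}\exp\Bigl(-\sum_{C} V_C(f)\Bigr), \qquad
   P(f \mid g) \propto P(g \mid f)\,P(f), $$

and a reconstruction is an estimate computed from the posterior, such
as the MAP solution $\arg\max_f P(f \mid g)$, for which simulated
annealing is one applicable procedure.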

:tr 874
:author Richard Elliot Robbins
:asort Robbins, R.E.
:title BUILD: A Tool for Maintaining Consistency in Modular Systems
:date November 1985
:pages 52
:cost $7.00
:adnum AD-A162744
:keywords consistent construction, system maintenance, system
modeling, module interconnection language
:abstract BUILD is a tool for keeping modular systems in a consistent
state by managing the construction tasks (e.g. compilation, linking,
etc.) associated with such systems. It employs a user-supplied system
model and a procedural description of a task in order
to perform that task. This differs from existing tools which do not
explicitly separate knowledge about systems from knowledge about how
systems are manipulated. BUILD provides a static framework for
modeling systems and handling construction requests that makes use of
programming environment specific definitions. By altering the set of
definitions, BUILD can be extended to work with new programming
environments and to perform new tasks.
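
The separation BUILD argues for, a declarative system model distinct
from procedural task knowledge, can be suggested in a few lines; the
model format below is invented for illustration and is not BUILD's.

    # A toy consistent-construction tool: a declarative dependency
    # model plus a generic task walker.  Illustrative, not BUILD's.
    model = {
        "app":     {"needs": ["parser", "codegen"]},
        "parser":  {"needs": ["lexer"]},
        "lexer":   {"needs": []},
        "codegen": {"needs": []},
    }

    def perform(task, module, done=None):
        # Run `task` on `module` after everything it depends on,
        # visiting each module exactly once.
        done = set() if done is None else done
        if module in done:
            return
        for dep in model[module]["needs"]:
            perform(task, dep, done)
        task(module)          # e.g. compile, link, analyze ...
        done.add(module)

    perform(lambda m: print("compile", m), "app")

Swapping in a different task procedure (linking, documentation
extraction) reuses the same model, which is the point of the
separation.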

:tr 900
:author David Mark Siegel
:asort Siegel, D.M.
:title Contact Sensors for Dextrous Robotic Hands
:date June 1986
:pages 139 
:cost $8.00
:adnum AD-A174654
:keywords robotics, tactile sensing, thermal sensing, computational
architecture, robotic hands, haptics
:abstract
This thesis examines a tactile sensor and a thermal sensor for use
with the Utah-MIT dexterous four-fingered hand.  The tactile sensor
utilizes capacitive transduction with a novel design based entirely on
silicone elastomers.  The thermal sensor measures a material's heat
conductivity by radiating heat into an object and measuring the
resulting temperature variations.  The computational requirements for
controlling a sensor equipped dexterous hand are severe. A
computational architecture based on interconnecting high performance
microcomputers and a set of software primitives tailored for sensor
driven control has been proposed. The system has been implemented and
tested on the Utah-MIT hand.

:tr 901
:author Kenneth W. {Haase, Jr.}
:asort Haase, K.W.
:title ARLO: Another Representation Language Offer
:date October 1986
:pages 95
:cost $8.00
:adnum AD-A174567
:keywords knowledge representation, representation languages,
meta-representation, reflection, artificial intelligence, AI
languages, RLL
:abstract
This paper describes ARLO, a {\it representation language language}
loosely modelled after Greiner and Lenat's RLL-1. ARLO is a
structure-based representation language for describing structure-based
representation languages, {\it including itself}. A given
representation language is specified in ARLO by a collection of
structures describing how its descriptions are interpreted, defaulted,
and verified. This high level description is compiled into lisp code
and ARLO structures whose interpretation fulfills the specified
semantics of the representation.  In addition, ARLO itself -- as a
representation language for expressing and compiling partial and
complete language specifications -- is described and interpreted in
the same manner as the languages it describes and implements. This
self description can be extended or modified to expand or alter the
expressive power of ARLO's initial configuration. Languages which
describe themselves -- like ARLO -- provide a powerful medium for
systems which perform automatic self-modification, optimization,
debugging, or documentation. AI systems implemented in such a
self-descriptive language can reflect on their own capabilities and
limitations, applying general learning and problem solving strategies
to enlarge or alleviate them.

:tr 904
:author Linda M. Wills
:asort Wills, L.M.
:title Automated Program Recognition
:date February 1987
:pages 202
:cost $9.00
:adnum AD-A186421
:keywords analysis by inspection, computer aided instruction, 
graph grammars, parsing, Programmer's Apprentice, Plan Calculus, program
recognition, program understanding
:abstract 
The key to understanding a program is recognizing familiar algorithmic
fragments and data structures in it. Automating this recognition
process will make it easier to perform many tasks which require
program understanding, e.g., maintenance, modification, and
debugging. This report describes a recognition system, called the
Recognizer, which automatically identifies occurrences of stereotyped
computational fragments and data structures in programs. The
Recognizer is able to identify these familiar fragments and structures,
even though they may be expressed in a wide range of syntactic forms.
It does so systematically and efficiently by using a parsing
technique. Two important advances have made this possible. The first
is a language-independent graphical representation for programs and
programming structures which canonicalizes many syntactic features of
programs. The second is an efficient graph parsing algorithm.

:tr 905
:author Van-Duc Nguyen
:asort Nguyen, V.
:title The Synthesis of Stable Force-Closure Grasps
:date July 1986
:pages 134
:cost $8.00
:adnum AD-A186419
:keywords grasp synthesis, force-closure, slip, grasp analysis,
stability, active stiffness control
:abstract The thesis addresses the problem of synthesizing grasps that
are force-closure and stable. The synthesis of force-closure grasps
constructs independent regions of contact for the fingertips, such
that the grasped object is stable, and has a desired stiffness matrix
about its stable equilibrium. The thesis presents fast and simple
algorithms for directly constructing stable force-closure grasps based
on the shape of the grasped object. The formal framework of
force-closure and stable grasps provides a partial explanation of why
we stably grasp objects so easily, and of why our fingers are better
soft than hard.
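
A compact formal statement of the force-closure condition (a standard
one, consistent with the constructions in the thesis): if w_i are the
wrenches the contacts can transmit, the grasp is force-closure when
these positively span the whole wrench space,

$$ \forall\, w_{ext}\ \exists\, \lambda_i \ge 0 :\
   \sum_i \lambda_i w_i = -w_{ext}, $$

equivalently, when the origin lies in the interior of the convex hull
of the contact wrenches.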

:tr 906
:author Robert Joseph Hall
:asort Hall, R.J.
:title Learning by Failing to Explain
:date May 1986
:pages 140
:cost $8.00
:adnum AD-A174730
:keywords learning, explanation, heuristic parsing, design, 
graph grammars, subgraph isomorphism
:abstract
Explanation-based Generalization requires that the learner obtain an
explanation of why a precedent exemplifies a concept.  It is,
therefore, useless if the system fails to find this explanation.
However, it is not necessary to give up and resort to purely empirical
generalization methods.  In fact, the system may already know almost
everything it needs to explain the precedent.  {\it Learning by
Failing to Explain} is a method which is able to exploit current
knowledge to prune complex precedents, isolating the mysterious parts
of the precedent.  The idea has two parts: the notion of partially
analyzing a precedent to get rid of the parts which are already
explainable, and the notion of re-analyzing old rules in terms of new
ones, so that more general rules are obtained.

:tr 908
:author John G. Harris
:asort Harris, J.G.
:title The Coupled Depth/Slope Approach to Surface Reconstruction
:date June 1986
:pages 80
:cost $8.00
:adnum AD-A185641
:keywords surface reconstruction, parallel algorithms, analog networks
:abstract 
Reconstructing a surface from sparse sensory data is a well known
problem in computer vision.  Early vision modules typically supply
sparse depth, orientation and discontinuity information.  The surface
reconstruction module incorporates these sparse and possibly
conflicting measurements of a surface into a consistent, dense depth
map.  The coupled depth/slope model developed here provides a novel
computational solution to the surface reconstruction problem.  This
method explicitly computes dense slope representations as well as
dense depth representations.  This marked change from previous surface
reconstruction algorithms allows a natural integration of orientation
constraints into the surface description, a feature not easily
incorporated into earlier algorithms.  In addition, the coupled
depth/slope model generalizes to allow for varying amounts of
smoothness at different locations on the surface.  This computational
model helps conceptualize the problem and leads to two possible
implementations -- analog and digital.  The model can be implemented
as an electrical or biological analog network since the only
computations required at each locally connected node are averages,
additions and subtractions.  A parallel digital algorithm can be
derived by using finite difference approximations.  The resulting
system of coupled equations can be solved iteratively on a
mesh-of-processors computer, such as the Connection Machine.
Furthermore, concurrent multi-grid methods are designed to speed the
convergence of this digital algorithm.
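
A one-dimensional toy conveys the digital form of the computation:
depth d and slope s are coupled through the finite difference
d_{i+1} - d_i = s_i, and each relaxation step uses only the averages,
additions, and subtractions noted above.  The sketch below is
illustrative; the report treats two-dimensional surfaces, orientation
constraints, and varying smoothness.

    # Toy 1-D coupled depth/slope relaxation from sparse depth samples.
    # Illustrative only; not the report's 2-D algorithm.
    N = 50
    known = {0: 0.0, 24: 3.0, 49: 1.0}   # sparse depth measurements
    d = [0.0] * N                        # dense depth estimate
    s = [0.0] * (N - 1)                  # dense slope estimate

    for _ in range(2000):
        new_d = list(d)
        for i in range(N):
            if i in known:
                new_d[i] = known[i]      # clamp to the data
                continue
            vals = []
            if i > 0:
                vals.append(d[i - 1] + s[i - 1])  # predicted from left
            if i < N - 1:
                vals.append(d[i + 1] - s[i])      # predicted from right
            new_d[i] = sum(vals) / len(vals)
        for i in range(N - 1):
            # Slope relaxes toward the depth difference and smoothness.
            vals = [d[i + 1] - d[i]]
            if i > 0:
                vals.append(s[i - 1])
            if i < N - 2:
                vals.append(s[i + 1])
            s[i] = sum(vals) / len(vals)
        d = new_d

    print(" ".join("%.1f" % v for v in d[::7]))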

:tr 912
:author Chae Hun An
:asort An, C.H.
:title Trajectory and Force Control of a Direct Drive Arm
:date September 1986
:pages 160
:cost $8.00
:adnum AD-A174405
:keywords force control, direct drive arm, trajectory control, link
estimation, load estimation
:abstract
Using the MIT Serial Link Direct Drive Arm as the main experimental
device, various issues in trajectory and force control of manipulators
were studied in this thesis: estimating the dynamic model of a
manipulator and its load, evaluating trajectory following performance
by feedforward and computed torque control algorithms, and studying
the problem of instability in force control.
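
The computed torque law referred to above is conventionally written as
feedback linearization through the full dynamic model: with estimated
inertia matrix, Coriolis/centripetal terms, and gravity term,

$$ \tau = \hat M(q)\bigl(\ddot q_d + K_v(\dot q_d - \dot q)
        + K_p(q_d - q)\bigr) + \hat C(q,\dot q)\,\dot q + \hat g(q), $$

so that with an accurate model the tracking error obeys a linear
second-order equation; feedforward control instead evaluates the model
torques along the desired trajectory alone and adds ordinary feedback.
Either way, performance rests on the quality of the estimated dynamic
model, which ties the estimation and trajectory-following studies
together.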

:tr 918
:author Guy Blelloch
:asort Blelloch, G.
:title AFL-1: A Programming Language for Massively Concurrent Computers
:date November 1986
:pages 132
:cost $8.00
:adnum AD-A186422
:keywords programming languages, massively parallel systems,
connectionist network, activity flow, Connection Machine, rule base
systems
:abstract
Computational models are arising in which programs are constructed by
specifying large networks of very simple computational devices.
Although such models can potentially make use of a massive amount of
concurrency, their usefulness as a programming model for the design of
complex systems will ultimately be decided by the ease with which such
networks can be programmed (constructed). This report outlines a
language for specifying computational networks. The language (AFL-1)
consists of a set of primitives, and a mechanism to group these
elements into higher level structures. An implementation of this
language runs on the Thinking Machines Corporation Connection Machine.
Two significant examples were programmed in the language: an expert
system (CIS), and a planning system (AFPLAN). These systems are
explained and analyzed in terms of how they compare with similar
systems written in conventional languages.

:tr 925
:author Guillermo Juan Rozas
:asort Rozas, G.J.
:title A Computational Model for Observation in Quantum Mechanics
:date March 1987
:pages 73
:cost $7.00
:adnum AD-A181768
:keywords quantum mechanics, computational models, Scheme, search
:abstract
A computational model of observation in quantum mechanics is
presented.  The model provides a clean and simple computational
paradigm which can be used to illustrate and possibly explain some of
the unintuitive and unexpected behavior of some quantum mechanical
systems.  As examples, the model is used to simulate three seminal
quantum mechanical experiments.  The results obtained agree with the
predictions of quantum mechanics (and physical measurements), yet the
model is perfectly deterministic and maintains a notion of locality.

:tr 932
:author Steven J. Gordon
:asort Gordon, S.J.
:title Automated Assembly Using Feature Localization
:date December 1986
:pages 279
:cost $10.00
:adnum AD-A181262
:keywords robotic assembly, part position measuring, real-time vision,
flexible assembly, 3-D vision, light stripe system, light strip
calibration
:abstract
Automated assembly of mechanical devices is studied by researching
methods of operating assembly equipment in a variable manner.  The
general parts assembly operation involves the removal of alignment
errors within some tolerance and without damaging the parts. Two
methods for eliminating alignment errors are discussed: {\it a priori
suppression} and, in more detail, {\it measurement and removal}.
During the study of this technique, a fast and accurate six
degree-of-freedom position sensor based on a light-stripe vision
technique was developed. Specifications for the sensor were derived
from an assembly-system error analysis.  Studies on extracting
accurate information from the sensor by optimally reducing redundant
information, filtering quantization noise, and careful calibration
procedures were performed.  Prototype assembly systems for both error
elimination techniques were implemented and used to assemble several
products.

:tr 936
:author Stephen J. Buckley
:asort Buckley, S.J.
:title Planning and Teaching Compliant Motion Strategies
:date January 1987
:pages 199
:cost $9.00
:adnum AD-A186418
:keywords motion planning, mechanical assembly, parts mating,
robotics, compliance, guiding
:abstract
A compliant motion strategy is a sequence of motions which cause
an object in the grasp of a robot to slide along obstacles in its
environment, in an attempt to reach a goal region. This paper
examines three aspects of programming compliant motion strategies.
The first aspect is verifying the correctness of a compliant motion
strategy. We describe an implemented program which does this. The
second aspect is teaching compliant motion strategies. We describe
a robot teaching system which accepts individual robot motion
commands from a user, and attempts to build a compliant motion
strategy from the specified motions. The third aspect is offline
generation of compliant motion strategies. An implemented program
is described which accepts a geometric model of the robot and its
environment as input. The program attempts to synthesize a compliant
motion strategy which is guaranteed to work despite uncertainty in
the control and sensing of the robot.

:tr 942
:author Christopher Granger Atkeson
:asort Atkeson, C.G.
:title Roles of Knowledge in Motor Learning
:date February 1987
:pages 154
:cost $8.00
:adnum AD-A186420
:keywords motor control, motor learning, learning, practice, robotics,
system identification
:abstract
The goal of this thesis is to apply the computational approach to
motor learning. The particular tasks used to assess motor learning are
loaded and unloaded free arm movement, and the thesis includes work on
rigid body load estimation, arm model estimation, optimal filtering
for model parameter estimation, and trajectory learning from practice.
Learning algorithms have been developed and implemented in the context
of robot arm control.

:tr 963
:author Gil J. Ettinger
:asort Ettinger, G.
:title Hierarchical Object Recognition Using Libraries of
Parameterized Model Sub-parts
:date June 1987
:pages 174
:cost $8.00
:adnum AD-A187476
:keywords machine vision, object recognition, model libraries,
structure hierarchy, scale hierarchy, parameterized objects, curvature
primal sketch, constrained search
:abstract
This thesis describes the development of a model-based vision system
that exploits hierarchies of both object structure and object scale to
achieve robust recognition based on effective organization and
indexing schemes for model libraries.  The goal of the system is to
recognize parameterized instances of non-rigid model objects contained
in a large knowledge base despite the presence of noise and occlusion.
The approach taken in this thesis is to develop an object shape
representation that incorporates a component sub-part hierarchy, and a
scale hierarchy.  After analysis of the issues and inherent tradeoffs
in the recognition process, a system is implemented using a
representation based on significant contour curvature changes and a
recognition engine based on geometric constraints of feature
properties.  Examples of the system's performance are given, followed
by an analysis of the results.  In conclusion, the system's benefits
and limitations are presented.

:tr 968
:author Harry Voorhees
:asort Voorhees, H.
:title Finding Texture Boundaries in Images
:date June 1987
:pages 105
:cost $8.00
:adnum AD-A190554
:keywords image understanding, machine vision, texture boundary
detection, textons, blob detection
:abstract
Texture provides one cue for interpreting the physical cause of an
intensity edge, such as occlusion, shadow, surface orientation or
reflectance change. Marr, Julesz, and others have proposed that
texture is represented by small lines or blobs, called ``textons" by
Julesz [1981a], together with their attributes such as orientation,
elongation, and density. Psychophysical studies suggest that texture
boundaries are perceived where distributions of attributes over
neighborhoods of textons differ significantly. However, these studies,
which deal with synthetic images, neglect to consider two important
questions: How can textons be extracted from images of natural scenes?
And how, exactly, are texture boundaries then found? This thesis
presents an algorithm for
computing blobs from natural images and a statistic for measuring the
difference between two sample distributions of blob attributes. As
part of the blob detection algorithm, methods for estimating image noise
are presented, which are applicable to edge detection as well.
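
One simple instance of such a distribution comparison (illustrative;
not necessarily the statistic developed in the thesis) is a chi-square
test over histograms of a blob attribute gathered in two adjacent
windows:

    # Compare two sample distributions of a blob attribute (e.g.
    # orientation) with a two-sample chi-square statistic; a texture
    # boundary is hypothesized where the statistic is large.
    def chi_square(h1, h2):
        # Assumes equal sample sizes in the two aligned histograms.
        return sum((a - b) ** 2 / (a + b)
                   for a, b in zip(h1, h2) if a + b > 0)

    left  = [12, 30, 6, 2]    # orientation histogram, left window
    right = [4, 8, 25, 13]    # orientation histogram, right window
    print("chi-square = %.1f" % chi_square(left, right))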

:tr 972
:author Robert C. Berwick
:asort Berwick, R.
:title Principle-Based Parsing
:date June 1987
:pages 113
:cost $8.00
:abstract
During the past few years, there has been much discussion of a shift
from rule-based systems to principle-based systems for natural
language processing. This paper outlines the major computational
advantages of principle-based parsing, its differences from the usual
rule-based approach, and surveys several existing principle-based
parsing systems used for handling languages as diverse as Warlpiri,
English, and Spanish, as well as language translation.

:tr 974
:author Michael D. Riley
:asort Riley, M.D.
:title Time-frequency Representations for Speech Signals
:date June 1987
:pages 152
:cost $8.00
:adnum AD-A188661
:keywords speech analysis, time-frequency representations, auditory
signal processing, non-stationary signal processing
:abstract
This work first examines quadratic transforms of an auditory signal to
determine the most appropriate joint time-frequency energy
representations for speech signals in sonorant regions.  It then
proposes using time-frequency ridges to obtain a rich, symbolic
description of the phonetically relevant features in these
time-frequency energy surfaces, the so-called {\sl schematic
spectrogram}.  Many speech examples are given showing the performance
for some traditionally difficult cases: semi-vowels and glides,
nasalized vowels, consonant-vowel transitions, female speech, and
imperfect transmission channels.

:tr 978
:author Daniel Wayne Weise
:asort Weise, D.W.
:title Formal Multilevel Hierarchical Verification of Synchronous MOS
Circuits 
:date June 1987
:pages 172
:cost $8.00
:adnum AD-A187532
:keywords hardware verification, hierarchical verification, multilevel
verification, VLSI, constraint systems, simulation, hardware
description languages, function from structure
:abstract
I have designed and implemented a system for the multilevel
verification of synchronous MOS circuits. The system, called Silica
Pithecus, determines if an MOS circuit meets a specification of the
circuit's intended digital behavior.  If not, Silica Pithecus returns
to the designer the reason for the failure.  Transistors are modelled
as bidirectional devices of varying resistances, and nodes are
modelled as capacitors. Silica Pithecus operates hierarchically,
interactively, and incrementally. Major contributions of this research
include a formal understanding of the relationship between different
behavioral descriptions of the same device, and a formalization of the
relationship between the structure, behavior, and context of a device.
My methods find sufficient conditions on the inputs of circuits which
guarantee the correct operation of the circuit in the presence of
phenomena such as races and charge sharing.  Informal notions such as
races and hazards are shown to be derivable from the correctness
conditions used by my methods.

:tr 980
:author James V. Mahoney
:asort Mahoney, J.V.
:title Image Chunking: Defining Spatial Building Blocks for Scene
Analysis 
:date August 1987
:pages 188
:cost $9.00
:adnum AD-A187072
:keywords machine vision, chunking, segmentation, tracing, blob
detection, image understanding, visual routines, region growing
:abstract
This report develops a framework for the fast extraction of scene
entities, based on a simple, local model of parallel computation.  An
image chunk is a subset of an image that can act as a unit in the
course of spatial analysis.  A parallel preprocessing stage constructs
a variety of simple chunks uniformly over the visual array.  On the
basis of these chunks, subsequent serial processes locate relevant
scene components and assemble detailed descriptions of them rapidly.
This report defines image chunks that facilitate the most potentially
time-consuming operations of spatial analysis---boundary tracing, area
coloring, and the selection of locations at which to apply detailed
analysis.  Fast parallel processes for computing these chunks from
images, and chunk-based formulations of indexing, tracing, and
coloring, are presented.

:tr 982
:author Bruce R. Donald 
:asort Donald, B.R.  
:title Error Detection and Recovery for Robot Motion Planning with Uncertainty
:date July 1987 
:pages 310 
:cost $11.00 
:adnum AD-A187746
:keywords robotics, motion planning, uncertainty, error detection and
recovery, computational geometry, geometric reasoning, planning with
uncertainty, model error, EDR, failure mode analysis
