[mod.techreports] mitai4 tech reports

E1AR0002@SMUVM1.BITNET (02/25/86)

:aim 642
:title {Semantics of Inheritance and Attributions in the Description System Omega}
:author Giuseppe Attardi and Maria Simi
:asort Attardi, G.; Simi, M.
:date August 1981
:cost $2.75
:pages 38
:ADnum (AD-A104776)
:keywords description, inheritance, semantic networks, model, attribute,
knowledge representation, logic, consistency
:abstract
Omega is a description system for knowledge embedding which
incorporates some of the attractive modes of expression in natural
language such as descriptions, inheritance, quantification, negation,
attributions and multiple viewpoints.  Omega represents an
investigation both of logic formalisms more expressive than
first-order predicate logic and of the foundations of knowledge
representation.  The logic of Omega combines mechanisms of the
predicate calculus, type systems, and set theory.  As a logic, Omega
achieves the goal of an intuitively sound and consistent theory of
classes which permits unrestricted abstraction within a powerful logic
system.  Description abstraction is the construct provided in Omega
corresponding to set abstraction.  Attributions and inheritance are
the basic mechanisms for knowledge structuring.  To achieve
flexibility and incrementality, the language allows one to express
descriptions with an arbitrary number of attributions, rather than
predicates with a fixed number of arguments as in predicate logic.
This requires an unusual interpretation for instance descriptions,
which in turn provides insights into the use and meaning of several
kinds of attributions.  The logical foundations of Omega are
investigated: semantic models are provided, an axiomatization is
derived, and the consistency and completeness of the logic are
established.
:end

:aim 643
:title  {A Local Front End for Remote Editing}
:author Richard M. Stallman
:asort Stallman, R.M.
:date February 1982
:cost $2.25
:pages 28
:ADnum (AD-A113496)
:keywords communications, editor, networks, display, extensible
:abstract
The Local Editing Protocol allows a local programmable terminal to
execute the most common editing commands on behalf of an extensible
text editor on a remote system, thus greatly improving speed of
response without reducing flexibility.  The Line Saving Protocol
allows the local system to save text which is not displayed, and
display it again later when it is needed, under the control of the
remote editor.  Both protocols are substantially system and editor
independent.
:end

:aim 644
:title {The SUPDUP Protocol}
:author Richard M. Stallman
:asort Stallman, R.M.
:date July 1983
:cost $2.75
:pages 42
:keywords communications, display, networks
:abstract
The SUPDUP protocol provides for login to a remote system over a
network with terminal-independent output, so that only the local
system need know how to handle the user's terminal.  It offers
facilities for graphics and for local assistance to remote text
editors.  This memo contains a complete description of the SUPDUP
protocol in fullest possible detail.
:end

:aim 645
:title  {Marr's Approach to Vision}
:author Tomaso Poggio
:asort Poggio, T.
:date August 1981
:cost $1.50
:pages 7
:ADnum (AD-A104198)
:keywords Marr, computational approach, biological visual perception,
zero crossings
:abstract
In the last seven years a new computational approach has led to
promising advances in the understanding of biological visual
perception.  The foundations of the approach are largely due to the
work of a single man, David Marr at M.I.T.  Now, after his death in
Boston on November 17th 1980, research in vision will not be the same
for the growing number of those who are following his lead.
:end

:aim 646
:title The Connection Machine
:author W. Daniel Hillis
:asort Hillis, W.D.
:date September 1981
:cost $2.25
:pages 29
:ADnum (AD-A107463)
:keywords concurrent architecture, content addressable memory,
multiprocessing, associative memory, parallel computers, tessellated,
cellular array
:abstract
This paper describes the connection memory, a machine for concurrently
manipulating knowledge stored in semantic networks.  We need the
connection memory because conventional serial computers cannot move
through such networks fast enough.  The connection memory sidesteps
the problem by providing processing power proportional to the size of
the network.  Each node and link in the network has its own simple
processor.  These connect to form a uniform locally-connected network
of perhaps a million processor/memory cells.
:end
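The marker-propagation style of computation described in the abstract above can be simulated serially; the sketch below is an illustrative toy (the network and function names are invented here, not taken from the memo), with one synchronous "parallel" step per iteration in which every cell inspects only its neighbors.

```python
def propagate_markers(edges, seeds, steps):
    """One synchronous step per iteration: a node becomes marked if it
    or any of its neighbors was marked in the previous step."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    marked = set(seeds)
    for _ in range(steps):
        # every cell looks at its neighbors "at once" (simulated serially)
        marked |= {n for m in marked for n in neighbors.get(m, ())}
    return marked

# Toy semantic net of is-a links
edges = [("canary", "bird"), ("bird", "animal"), ("fish", "animal")]
print(propagate_markers(edges, {"canary"}, 2))
```

On a real connection memory each node and link would hold its own processor, so each step would take constant time regardless of network size; the serial loop here only mimics that behavior.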

:aim 647
:title {Nature Abhors an Empty Vacuum}
:author Marvin Minsky
:asort Minsky, M.
:date August 1981
:cost $1.50
:pages 13
:ADnum (AD-A106362)
:keywords discrete-physics, quantum, Heisenberg, vacuum
:abstract
Imagine a crystalline world of tiny, discrete "cells", each knowing
only what its nearest neighbors do.  Each volume of space contains
only a finite amount of information, because space and time come in
discrete units.  In such a universe, we'll construct analogs of
particles and fields -- and ask what it would mean for these to
satisfy constraints like conservation of momentum.  In each case
classical mechanics will break down -- on scales both small and large,
and strange phenomena emerge: a maximal velocity, a slowing of
internal clocks, a bound on simultaneous measurement, and quantum-like
effects in very weak, or intense fields.
:end

:aim 648
:unavailable
:title {A Lightness Scale from Image Intensity Distributions}
:author W.A. Richards
:asort Richards, W.A.
:date August 1981
:pages 36
:ADnum (AD-109917)

:aim 650
:title {Microelectronics in Nerve Cells: Dendritic Morphology and Information Processing}
:author T. Poggio, C. Koch, and V. Torre
:asort Poggio, T.; Koch, C.; Torre, V.
:date October 1981
:cost $2.75
:pages 52
:keywords cable theory, microelectronics, ganglion cells, synapses,
motion detection
:abstract
The electrical properties of the different anatomical types of retinal
ganglion cells in the cat were calculated on the basis of passive
cable theory from measurements made on histological material provided
by Boycott and Wassle (1974).  The interactions between excitation and
inhibition when the inhibitory battery is near the resting potential
can be strongly nonlinear in these cells.  We analyze some of the
integrative properties of an arbitrary passive dendritic tree and we
then derive the functional properties which are characteristic for the
various types of ganglion cells.  In particular, we derive several
general results concerning the spatial specificity of shunting
inhibition in "vetoing" an excitatory input (the "on path" property)
and its dependence on the geometrical and electric properties of the
dendritic tree.  Our main conclusion is that specific branching
patterns coupled with a suitable distribution of synapses are able to
support complex information processing operations on the incoming
signals.  Thus, a neuron seems likely to resemble an (analog) LSI
circuit with thousands of elementary processing
units - the synapses - rather than a single logical gate.  A dendritic
tree would then be near to the ultimate in microelectronics with
little patches of postsynaptic membrane representing the fundamental
units for several elementary computations.
:end

:aim 651
:title A Program Testing Assistant
:author David Chapman
:asort Chapman, D.
:date November 1981
:cost $2.25
:pages 24
:ADnum (AD-A108147)
:keywords debugging, program testing assistant, Programmer's Apprentice,
programming environment, testing
:abstract
This paper describes the design and implementation of a program
testing assistant which aids a programmer in the definition,
execution, and modification of test cases during incremental program
development.  The testing assistant helps in the interactive
definition of test cases and executes them automatically when
appropriate.  It modifies test cases to preserve their usefulness when
the program they test undergoes certain types of design changes.  The
testing assistant acts as a fully integrated part of the programming
environment and cooperates with existing programming tools, including
a display editor, compiler, interpreter, and debugger.
:end

:aim 652
:unavailable
:title {Some Powerful Ideas}
:author Robert Lawler
:asort Lawler, R.
:date December 1981
:pages 26
:reference See Logo Memo 60

:aim 653
:title {Computational Approaches to Image Understanding}
:author Michael Brady
:asort Brady, M.
:date October 1981
:cost $3.50
:pages 186
:reference {See {\it Computing Surveys}, Vol. 14, No. 1, 15 March 1982}
:ADnum (AD-A108191)
:abstract
Recent theoretical developments in Image Understanding are surveyed.
Among the issues discussed are: edge finding, region finding, texture,
shape from shading, shape from texture, shape from contour, and the
representations of surfaces and objects.  Much of the work described
was developed in the DARPA Image Understanding project.  In memory
of Max Clowes and David Marr.
:end

:aim 654
:title {Rotationally Symmetric Operators for Surface Interpolation}
:author Michael Brady and Berthold K.P. Horn
:asort Brady, M.; Horn, B.K.P.
:date November 1981
:cost $2.25
:pages 36
:ADnum (AD-A109032)
:keywords vision
:abstract
The use of rotationally symmetric operators in vision is reviewed and
conditions for rotational symmetry are derived for linear and
quadratic forms in the first and second partial directional
derivatives of a function f(x,y).  Surface interpolation is considered
to be the process of computing the most conservative solution
consistent with boundary conditions.  The "most conservative" solution
is modelled using the calculus of variations to find the minimum
function that satisfies a given performance index.  To guarantee the
existence of a minimum function, Grimson has recently suggested that
the performance index should be a semi-norm.  It is shown that all
quadratic forms in the second partial derivatives of the surface
satisfy this criterion.  The semi-norms that are, in addition,
rotationally symmetric form a vector space whose basis is the square
Laplacian and the quadratic variation.  Whereas both semi-norms give
rise to the same Euler condition in the interior, the quadratic
variation offers the tighter constraint at the boundary and is to be
preferred for surface interpolation.
:end
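The two rotationally symmetric integrands named in the abstract above can be sketched numerically; the following is a minimal illustration (notation and test function chosen here, not from the memo) estimating the square Laplacian (f_xx + f_yy)^2 and the quadratic variation f_xx^2 + 2 f_xy^2 + f_yy^2 by central differences.

```python
def second_derivatives(f, x, y, h=1e-3):
    """Central-difference estimates of f_xx, f_yy, f_xy at (x, y)."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fyy, fxy

def square_laplacian(f, x, y):
    fxx, fyy, _ = second_derivatives(f, x, y)
    return (fxx + fyy) ** 2

def quadratic_variation(f, x, y):
    fxx, fyy, fxy = second_derivatives(f, x, y)
    return fxx**2 + 2 * fxy**2 + fyy**2

# For the paraboloid f = x^2 + y^2: f_xx = f_yy = 2, f_xy = 0,
# so the square Laplacian is 16 and the quadratic variation is 8.
paraboloid = lambda x, y: x**2 + y**2
print(square_laplacian(paraboloid, 0.5, 0.5))
print(quadratic_variation(paraboloid, 0.5, 0.5))
```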

:aim 656
:title What Your Programs Are Doing
:author Henry Lieberman
:asort Lieberman, H.
:date February 1982
:cost $2.25
:pages 39
:ADnum (AD-A113494)
:keywords visualization, example-based programming, debugging,
alpha beta search, interactive programming, graphics, program testing,
LISP
:abstract
An important skill in programming is being able to visualize the
operation of procedures, both for constructing programs and debugging
them.  Tinker is a programming environment for Lisp that enables the
programmer to "see what the program is doing" while the program is
being constructed, by displaying the result of each step in the
program on representative examples.  To help the reader visualize the
operation of Tinker itself, an example is presented of how he or she
might use Tinker to construct an alpha-beta tree search program.
:end

:aim 657
:title {Nonlinear Interactions in a Dendritic Tree: Localization, Timing and Role in Information Processing}
:author T. Poggio and C. Koch
:asort Poggio, T.; Koch, C.
:date September 1981
:cost $1.50
:pages 8
:keywords microcircuits, synapses, nerve cells, nonlinear cables,
analog circuits
:abstract
In a dendritic tree transient synaptic inputs activating ionic
conductances with an equilibrium potential near the resting potential
can veto very effectively other excitatory inputs.  Analog operations
of this type can be very specific with respect to relative locations
of the inputs and their timing.  We examine with computer experiments
the precise conditions underlying this effect in the case of a cat
retinal ganglion cell.  The critical condition required for strong and
specific interactions is that the peak inhibitory conductance change
must be sufficiently large, almost independently of other electrical
parameters.  In this case, a passive dendritic tree may perform
hundreds of independent analog operations on its synaptic inputs,
without requiring any threshold mechanism.
:end
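The "veto" effect described above can be illustrated with a single passive membrane compartment; all parameter values below are invented for illustration and the model is far simpler than the cable computations in the memo. When the inhibitory reversal potential sits at rest, inhibition does not hyperpolarize the cell but divides (shunts) the effect of excitation.

```python
def steady_state_v(g_e, g_i, g_leak=1.0, E_e=60.0, E_i=0.0, E_rest=0.0):
    """Steady-state depolarization (mV relative to rest) of one passive
    compartment: a conductance-weighted average of reversal potentials."""
    return (g_e * E_e + g_i * E_i + g_leak * E_rest) / (g_e + g_i + g_leak)

excitation_alone = steady_state_v(g_e=1.0, g_i=0.0)    # 30 mV
with_strong_shunt = steady_state_v(g_e=1.0, g_i=18.0)  # 3 mV
print(excitation_alone, with_strong_shunt)
```

Note that the veto works only when the peak inhibitory conductance is large relative to the others, matching the abstract's critical condition, and that no threshold mechanism is involved.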

:aim 660
:title {How To Play 20 Questions With Nature and Win}
:author Whitman Richards
:asort Richards, W.A.
:date December 1982
:cost $2.25
:pages 26
:keywords vision, information processing, perception, intrinsic
images, object recognition
:abstract
The 20 Questions Game played by children has an impressive record of
rapidly guessing an arbitrarily selected object with rather few,
well-chosen questions.  This same strategy can be used to drive the
perceptual process, likewise beginning the search with the intent of
deciding whether the object is Animal, Vegetable, or Mineral.  For a
perceptual system, however, several simple questions are required even
to make this first judgement as to the Kingdom to which the object belongs.
Nevertheless, the answers to these first simple questions, or their
modular outputs, provide a rich data base which can serve to classify
objects or events in much more detail than one might expect, thanks to
constraints and laws imposed upon natural processes and things. The
questions, then, suggest a useful set of primitive modules for
initializing perception.
:end

:aim 661
:title {Workshop on the Design and Control of Dextrous Hands}
:author John M. Hollerbach
:asort Hollerbach, J.M.
:date April 1982
:cost $2.25
:pages 21
:ADnum (AD-A114973)
:keywords robotics, end effectors, dextrous hands
:abstract
The Workshop for the Design and Control of Dextrous Hands was held at
the MIT Artificial Intelligence Laboratory on November 5-6, 1981.
Outside experts were brought together to discuss four topics:
kinematics of hands, actuation and materials, touch sensing, and
control.  This report summarizes the discussions of the participants,
and attempts to identify a consensus on applications, mechanical
design, and control.
:end

:aim 662
:title {Passive Navigation}
:author Anna R. Bruss and B.K.P. Horn
:asort Bruss, A.R.; Horn, B.K.P.
:date November 1981
:cost $1.50
:pages 20
:ADnum (AD-A110070)
:keywords passive navigation, optical flow, time-varying imagery
:abstract
A method is proposed for determining the motion of a body relative to
a fixed environment using the changing image seen by a camera attached
to the body.  The optical flow in the image plane is the input, while
the instantaneous rotation and translation of the body are the output.
If optical flow could be determined precisely, it would only have to
be known at a few places to compute the parameters of the motion.  In
practice, however, the measured optical flow will be somewhat
inaccurate.  It is therefore advantageous to consider methods which
use as much of the available information as possible.  We employ a
least-squares approach which minimizes some measure of the discrepancy
between the measured flow and that predicted from the computed motion
parameters.  Several different error norms are investigated.  In
general, our algorithm leads to a system of nonlinear equations from
which the motion parameters may be computed numerically.  However, in
the special cases where the motion of the camera is purely
translational or purely rotational, use of the appropriate norm leads
to a system of equations from which these parameters can be determined
in closed form.
:end
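The least-squares idea above can be sketched for one special case: pure rotation about the optical axis, where the flow predicted at image point (x, y) is (-w*y, w*x). Minimizing the summed squared discrepancy between measured and predicted flow then gives w in closed form. This toy setup is an illustration of the approach, not the memo's general algorithm.

```python
def estimate_rotation(points, flow):
    """Least-squares rotation rate w from measured flow vectors.
    Minimizing sum((u + w*y)^2 + (v - w*x)^2) over w yields
    w = sum(x*v - y*u) / sum(x^2 + y^2)."""
    num = sum(x * v - y * u for (x, y), (u, v) in zip(points, flow))
    den = sum(x * x + y * y for (x, y) in points)
    return num / den

pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]
true_w = 0.3
measured = [(-true_w * y, true_w * x) for x, y in pts]
print(estimate_rotation(pts, measured))  # recovers 0.3 for noise-free flow
```

With noisy flow the same estimator averages the discrepancy over all measurements, which is the motivation the abstract gives for using as much of the available information as possible.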

:aim 663
:title {The Implicit Constraints of the Primal Sketch}
:author W.E.L. Grimson
:asort Grimson, W.E.L.
:date October 1981
:cost $2.25
:pages 36
:ADnum (AD-A114789)
:keywords primal sketch, zero crossings, surface consistency,
surface interpolation
:abstract
Computational theories of structure-from-motion and stereo vision only
specify the computation of three-dimensional surface information at
points in the image at which the irradiance changes.  Yet, the visual
perception is clearly of complete surfaces, and this perception is
consistent for different observers.  Since mathematically the class of
surfaces which could pass through the known boundary points provided
by the stereo system is infinite and contains widely varying surfaces,
the visual system must incorporate some additional constraints besides
the known points in order to compute the complete surface.
:end

:aim 664
:unavailable
:title {Qualitative Process Theory}
:author Kenneth D. Forbus
:asort Forbus, K.D.
:date February 1982
:pages 54
:reference See AIM 664A

:aim 664A
:unavailable
:title {Qualitative Process Theory}
:author Kenneth D. Forbus
:asort Forbus, K.D.
:date May 1983
:cost $3.00
:pages 97
:ADnum (AD-A112225)
:reference See AI-TR-789

:aim 665
:title {Expert Systems: Where Are We?  And Where Do We Go From Here?}
:author Randall Davis
:asort Davis, R.
:date June 1982
:reference  See {\it The AI Magazine}, Spring 1982.
:cost $2.25
:pages 40
:keywords expert systems, debugging, causality, structural models,
behavioral models
:abstract
Work on Expert Systems has received extensive attention recently,
prompting growing interest in a range of environments.  Much has been
made of the basic concept and of the rule-based system approach
typically used to construct the programs.  In this paper we review
what we know, assess the current prospects, and suggest directions
appropriate for the next steps of basic research.
:end

:aim 666
:title {The Perception of Subjective Surfaces}
:author Michael Brady and W. Eric L. Grimson
:asort Brady, M.; Grimson, W.E.L.
:date November 1981
:reference (A.I. Memo 582 never written)
:cost $2.75
:pages 48
:ADnum (AD-A113495)
:keywords surface perception, subjective contours, edge detection, occlusion
:abstract
It is proposed that subjective contours are an artifact of the
perception of natural three-dimensional surfaces.  A recent theory of
surface interpolation implies that "subjective surfaces" are
constructed in the visual system by interpolation between
three-dimensional values arising from interpretation of a variety of
surface cues.  We show that subjective surfaces can take any form,
including singly and doubly curved surfaces, as well as the commonly
discussed fronto-parallel planes.  In addition, it is necessary in the
context of computational vision to make explicit the discontinuities,
both in depth and in surface orientation, in the surfaces constructed
by surface interpolation.  It is proposed that subjective contours
form the boundaries of the subjective surfaces due to these
discontinuities.
:end

:aim 667
:title {Reasoning Utility Package User's Manual, Version One}
:author David Allen McAllester
:asort McAllester, D.A.
:date April 1982
:cost $2.75
:pages 56
:ADnum (AD-A114756)
:keywords reasoning utilities, automated deduction, backtracking,
congruence closures, theorem proving, truth maintenance, dependencies,
demonic invocation
:abstract
RUP (Reasoning Utility Package) is a collection of procedures for
performing various computations relevant to automated reasoning.  RUP
contains a truth maintenance system (TMS) which can be used to perform
simple propositional deduction (unit clause resolution), to record
justifications, to track down underlying assumptions, and to perform
incremental modifications when premises are changed.  This TMS can be
used with an automated premise controller which automatically retracts
"assumptions" before "solid facts" when contradictions arise and
searches for the most solid proof of an assertion.  RUP also contains
a procedure for efficiently computing all the relevant consequences of
any set of equalities between ground terms.  A related utility
computes "substitution simplifications" of terms under an arbitrary
set of unquantified equalities and a user defined simplicity order.
RUP also contains demon writing macros which allow one to write
PLANNER like demons that trigger on various types of events in the
data base.  Finally there is a utility for reasoning about partial
orders and arbitrary transitive relations.  In writing all of these
utilities an attempt has been made to provide a maximally flexible
environment for automated reasoning.
:end
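The TMS's core operation named above, unit clause resolution with recorded justifications, can be sketched in a few lines. This is a deliberately minimal illustration with invented data structures, far simpler than RUP itself: each derived literal remembers the clause and the falsified literals that forced it.

```python
def unit_propagate(clauses, premises):
    """clauses: list of frozensets of literals like 'p' / '-p'.
    premises: iterable of literals taken as given.
    Returns {literal: justification} for everything derivable."""
    neg = lambda l: l[1:] if l.startswith('-') else '-' + l
    known = {l: ('premise',) for l in premises}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause
                          if l not in known and neg(l) not in known]
            falsified = [l for l in clause if neg(l) in known]
            # unit clause: all but one literal falsified -> derive the rest
            if len(unassigned) == 1 and len(falsified) == len(clause) - 1:
                known[unassigned[0]] = ('clause', clause, tuple(falsified))
                changed = True
    return known

clauses = [frozenset({'-p', 'q'}), frozenset({'-q', 'r'})]
derived = unit_propagate(clauses, ['p'])
# 'q' and 'r' are derived, each carrying a recorded justification
```

The recorded justifications are what make the retraction and assumption-tracking facilities described in the abstract possible: when a premise changes, its consequences can be found and revised incrementally.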

:aim 668
:title {CARTOON: A Biologically Motivated Edge Detection Algorithm}
:author W. Richards, H.K. Nishihara and B. Dawson
:asort Richards, W.A.; Nishihara, H.K.; Dawson, B.
:date June 1982
:cost $2.25
:pages 24
:keywords vision, vision algorithm, edge detection
:abstract
Caricatures demonstrate that only a few significant "edges" need to be
captured to convey the meaning of a complex pattern of image
intensities.  The most important of these "edges" are image intensity
changes arising from surface discontinuities or occluding boundaries.
The CARTOON algorithm is an attempt to locate these special intensity
changes using a modification of the zero-crossing coincidence scheme
suggested by Marr and Hildreth (1980).
:end

:aim 670
:title {The Relation Between Proximity and Brightness Similarity in Dot Patterns}
:author Steven W. Zucker, Kent A. Stevens, and Peter T. Sander
:asort Zucker, S.W.; Stevens, K.A.; Sander, P.T.
:date May 1982
:cost $1.50
:pages 15
:keywords vision, texture, grouping, gestalt, dot patterns
:abstract
The Gestalt studies demonstrated the tendency to visually organize
dots on the basis of similarity, proximity, and global properties such
as closure, good continuation, and symmetry.  The particular
organization imposed on a collection of dots is thus determined by
many factors, some local, some global.  We discuss computational
reasons for expecting the initial stages of grouping to be achieved by
processes with purely local support.  In the case of dot patterns, the
expectation is that neighboring dots are grouped on the basis of
proximity and similarity of contrast, by processes that are
independent of the overall organization and the various global
factors.  We describe experiments that suggest a purely local
relationship between proximity and brightness similarity in perceptual
grouping.
:end
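A purely local grouping rule of the kind discussed above can be sketched as follows, with thresholds invented here for illustration: two dots are linked only if they are both close enough and similar enough in brightness, and groups emerge as the connected components of those links.

```python
import math

def group_dots(dots, max_dist=2.0, max_db=0.5):
    """dots: list of (x, y, brightness) triples.
    Returns the groups (sets of dot indices) formed by local links."""
    n = len(dots)
    parent = list(range(n))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1, b1), (x2, y2, b2) = dots[i], dots[j]
            # link requires BOTH proximity and brightness similarity
            if (math.hypot(x1 - x2, y1 - y2) <= max_dist
                    and abs(b1 - b2) <= max_db):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

dots = [(0, 0, 1.0), (1, 0, 1.1), (1.5, 0, 9.0), (10, 0, 1.0)]
# dots 0 and 1 group (near and similar); dot 2 is near but too bright;
# dot 3 is similar in brightness but too far away.
```

No global factor (closure, symmetry, good continuation) enters the rule, which is exactly the sense in which the initial grouping stage has purely local support.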

:aim 671
:title {Multi-Level Reconstruction of Visual Surfaces: Variational Principles and Finite Element Representations}
:author Demetri Terzopoulos
:asort Terzopoulos, D.
:date April 1982
:cost $3.00
:pages 91
:ADnum (AD-A115033)
:keywords computer vision, hierarchical representations,
variational principles, stereo, surface reconstruction, finite elements,
multi-level relaxation, interpolation
:abstract
Computational modules early in the human vision system typically
generate sparse information about the shapes of visible surfaces in
the scene.  Moreover, visual processes such as stereopsis can provide
such information at a number of levels spanning a range of
resolutions.  In this paper, we extend this multi-level structure to
encompass the subsequent task of reconstructing full surface
descriptions from the sparse information.  We describe the three steps
of the mathematical development.  Examples of the generation of
hierarchies of surface representations from stereo constraints are
given.  Finally, the basic surface approximation problem is revisited
in a broader mathematical context whose implications are of relevance
to vision.
:end

:aim 672
:title A Primer for the Act-1 Language
:author Daniel G. Theriault
:asort Theriault, D.G.
:date April 1982
:cost $3.00
:pages 94
:ADnum (AD-A115072)
:keywords actors, parallelism, concurrency, programming languages,
programming language system, message passing
:abstract
This paper describes the current design for the Act-1 computer
programming language, and describes the Actor computational model,
which the language was designed to support.  It provides a perspective
from which to view the language, with respect to existing computer
language systems and to the computer system and environment under
development for support of the language.  The language is informally
introduced in a tutorial fashion and demonstrated through examples.
:end

:aim 674
:title {Solving the Find-Path Problem by Representing Free Space as Generalized Cones}
:author Rodney A. Brooks
:asort Brooks, R.A.
:date May 1982
:cost $2.25
:pages 21
:ADnum (AD-A115047)
:keywords robotics, find-path, collision avoidance, path planning,
generalized cones
:abstract
Free space is represented as a union of (possibly overlapping)
generalized cones.  An algorithm is presented which efficiently finds
good collision free paths for convex polygonal bodies through space
littered with obstacle polygons.  The paths are good in the sense that
the distance of closest approach to an obstacle over the path is
usually far from minimal over the class of topologically equivalent
collision free paths.  The algorithm is based on characterizing the
volume swept by a body as it is translated and rotated as a
generalized cone and determining under what conditions one generalized
cone is a subset of another.
:end

:aim 675
:title {Zero-Crossings and Spatiotemporal Interpretation in Vision}
:author Tomaso Poggio, Kenneth Nielsen, and Keith Nishihara
:asort Poggio, T.; Nielsen, K.; Nishihara, H.K.
:date May 1982
:cost $2.75
:pages 48
:ADnum (AD-A117608)
:keywords interpolation, zero crossings, aliasing, electrical coupling
:abstract
We will briefly outline a computational theory of the first stages of
human vision according to which (a) the retinal image is filtered by a
set of centre-surround receptive fields (of about 5 different spatial
sizes) which are approximately bandpass in spatial frequency and (b)
zero-crossings are detected independently in the output of each of
these channels.  Zero-crossings in each channel are then a set of
discrete symbols which may be used for later processing such as
contour extraction and stereopsis.  A formulation of Logan's
zero-crossing results is proved for the case of Fourier polynomials
and an extension of Logan's theorem to 2-dimensional functions is also
proved.
:end
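The channel structure described above can be sketched in one dimension; the filter size, signal, and helper names below are invented for illustration. The signal is filtered with a centre-surround (difference-of-Gaussians) kernel, approximately bandpass, and zero-crossings are then marked in the filtered output.

```python
import math

def dog_kernel(sigma, ratio=1.6, half_width=8):
    """Centre-surround kernel: narrow Gaussian minus wider Gaussian."""
    g = lambda s, x: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return [g(sigma, x) - g(sigma * ratio, x)
            for x in range(-half_width, half_width + 1)]

def convolve(signal, kernel):
    """Filter the signal, clamping indices at the borders."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - k, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def zero_crossings(values):
    """Indices where the filtered output changes sign."""
    return [i for i in range(1, len(values))
            if values[i - 1] * values[i] < 0]

step = [0.0] * 20 + [1.0] * 20            # one intensity edge at index 20
filtered = convolve(step, dog_kernel(sigma=2.0))
print(zero_crossings(filtered))           # a crossing near index 20
```

In the theory each of the several channel sizes produces its own set of zero-crossings, and it is the coincidence or combination of these discrete symbols that feeds later processing such as contour extraction and stereopsis.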

:aim 676
:title {Implementation of a Theory for Inferring Surface Shape from Contours}
:author Kent A. Stevens
:asort Stevens, K.A.
:date August 1982
:cost $2.25
:pages 27
:ADnum (AD-A127285)
:keywords vision, surface perception, contours, shape
:abstract
Human vision is adept at inferring the shape of a surface from the
image of curves lying across the surface.  The strongest impression of
3-D shape derives from parallel (but not necessarily equally spaced)
contours.  In (Stevens 1981a) the computational problem of inferring
3-D shape from image configurations is examined, and a theory is given
for how the visual system constrains the problem by certain
assumptions.  The assumptions are three: that neither the viewpoint
nor the placement of the physical curves on the surface is misleading,
and that the physical curves are lines of curvature across the
surface.  These assumptions imply that parallel image contours
correspond to parallel curves lying across an approximately
cylindrical surface.  Moreover, lines of curvature on a cylinder are
geodesic and planar.  These properties provide strong constraint on
the local surface orientation.  We describe a computational method
embodying these geometric constraints that is able to determine the
surface orientation even in places where locally it is very weakly
constrained, by extrapolating from places where it is strongly
constrained.  This computation has been implemented, and predicts
local surface orientation that closely matches the apparent
orientation.  Experiments with the implementation support the theory
that our visual interpretation of surface shape from contour assumes
the image contours correspond to lines of curvature.
:end

:aim 677
:title {Parsing and Generating English Using Commutative Transformations}
:author Boris Katz and Patrick H. Winston
:asort Katz, B.; Winston, P.H.
:date May 1982
:cost $1.50
:pages 18
:ADnum (AD-A117440)
:keywords parsing, generation, natural language, semantic networks,
commutative transformations, language understanding
:abstract
This paper is about an implemented natural language interface that
translates from English into semantic net relations and from semantic
net relations back into English.  The parser and companion generator
were implemented for two reasons: (a) to enable experimental work in
support of a theory of learning by analogy; (b) to demonstrate the
viability of a theory of parsing and generation built on commutative
transformations.  The learning theory was shaped to a great degree by
experiments that would have been extraordinarily tedious to perform
without the English interface with which the experimental data base
was prepared, revised, and revised again.  Inasmuch as current work on
the learning theory is moving toward a tenfold increase in data-base
size, the English interface is moving from a facilitating role to an
enabling one.  The parsing and generation theory has two particularly
important features: (a) the same grammar is used for both parsing and
generation; (b) the transformations of the grammar are commutative.
:end

:aim 678
:title {Learning by Augmenting Rules and Accumulating Censors}
:author Patrick H. Winston
:asort Winston, P.H.
:date May 1982
:reference Revised September 1982
:cost $2.25
:pages 23
:ADnum (AD-A117439)
:keywords learning, artificial intelligence, analogy
:abstract
This paper is a synthesis of several sets of ideas: ideas about
learning from precedents and exercises, ideas about learning using
near misses, ideas about generalizing if-then rules, and ideas about
using censors to prevent procedure misapplication.  The synthesis
enables two extensions to an implemented system that solves problems
involving precedents and exercises and that generates if-then rules as
a byproduct.  These extensions are as follows:  If-then rules are
augmented by unless conditions, creating augmented if-then rules.  An
augmented if-then rule is blocked whenever facts in hand directly
demonstrate the truth of an unless condition. When an augmented
if-then rule is used to demonstrate the truth of an unless condition,
the rule is called a censor.  Like ordinary augmented if-then rules,
censors can be learned.  Definition rules are introduced that
facilitate graceful refinement.  The definition rules are also
augmented if-then rules.  They work by virtue of unless entries that
capture certain nuances of meaning different from those expressible by
necessary conditions.  Like ordinary augmented if-then rules,
definition rules can be learned. The strength of the ideas is
illustrated by way of representative experiments.  All of these
experiments have been performed with an implemented system.
:end

:aim 679
:title {Learning Physical Descriptions from Functional Definitions, Examples, and Precedents}
:author Patrick H. Winston, Thomas O. Binford, Boris Katz, and Michael
Lowry
:asort Winston, P.H.; Binford, T.O.; Katz, B.; Lowry, M.
:date November 1982
:reference Revised January 1983
:cost $2.25
:pages 23
:ADnum (AD-A127047)
:keywords learning, form and function
:abstract
It is too hard to tell vision systems what things look like.  It is
easier to talk about purpose and what things are for.  Consequently,
we want vision systems to use functional descriptions to identify
things, when necessary, and we want them to learn physical
descriptions for themselves, when possible.  This paper describes a
theory that explains how to make such systems work.  The theory is a
synthesis of two sets of ideas: ideas about learning from precedents
and exercises developed at MIT, and ideas about physical description
developed at Stanford.  The strength of the synthesis is illustrated
by way of representative experiments.  All of these experiments have
been performed with an implemented system.
:end

:aim 680A
:unavailable
:title {LETS, An Expressional Loop Notation}
:author Richard C. Waters
:asort Waters, R.C.
:date October 1982
:pages 57
:ADnum (AD-A122108)
:keywords loops, programming languages, LISP

:aim 681
:title {Supporting Organizational Problem Solving with a Workstation}
:author Gerald Barber
:asort Barber, G.
:date July 1982
:cost $2.25
:pages 30
:ADnum (AD-A130481)
:keywords problem solving, office information systems, workstations,
OMEGA, viewpoints, office semantics, change and contradiction,
office automation
:abstract
This paper describes an approach to supporting work in the office.
Using and extending ideas from the field of Artificial Intelligence
(AI) we describe office work as a problem solving activity.  A
knowledge embedding language called Omega is used to embed knowledge
of the organization into an office worker's workstation in order to
support the office worker in his or her problem solving.  A particular
approach to reasoning about change and contradiction is discussed.
This approach uses Omega's viewpoint mechanism, which is a general
contradiction-handling facility.  Unlike other knowledge
representation systems, when a contradiction is reached the reasons
for the contradiction can be analyzed by the deduction mechanism
without having to resort to a backtracking mechanism.  The viewpoint
mechanism is the heart of the Problem Solving Support Paradigm.  An
example is presented where Omega's facilities are used to support an
office worker's problem solving activities.  The example illustrates
the use of viewpoints and Omega's capability to reason about its own
reasoning process.
:end

:aim 683
:title {Visual Algorithms}
:author Tomaso Poggio
:asort Poggio, T.
:date May 1982
:cost $2.25
:pages 28
:ADnum (AD-A127251)
:keywords polynomial algorithms, parallel/serial, neural hardware,
perceptrons, nonlinear mappings
:abstract
Nonlinear, local and highly parallel algorithms can perform several
simple but important visual computations. Specific classes of
algorithms can be considered in an abstract way. I study here the
class of polynomial algorithms to exemplify some of the important
issues for visual processing like linear vs. nonlinear and local vs.
global. Polynomial algorithms are a natural extension of Perceptrons
to time dependent grey level images. Although they share most of the
limitations of Perceptrons, they are powerful parallel computational
devices.  Several of their properties are characterized, in particular
(a) their equivalence with Perceptrons for geometrical figures and (b)
the synthesis of nonlinear algorithms (mappings) via associative
learning. Finally, the paper considers how algorithms of this type
could be implemented in neural hardware, in terms of synaptic
interactions strategically located in a dendritic tree.  The
implementation of three specific algorithms is briefly outlined:
(a) direction sensitive motion detection, (b) detection of
discontinuities in the optical flow, (c) detection and localization of
zero-crossings in the convolution of the image with the Laplacian (of
a Gaussian). In the appendix, another (nonlinear) differential
operator, the second directional derivative along the gradient, is
briefly discussed as an alternative to the Laplacian.
:end
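The zero-crossing detection in (c) can be made concrete. The sketch
below is a minimal illustration, not the memo's implementation: it
builds a discrete Laplacian-of-Gaussian kernel, convolves it with an
image, and marks sign changes of the response; the kernel size, sigma,
and the threshold `eps` are assumptions chosen for this example.

```python
import numpy as np

def log_kernel(sigma, size):
    """Discrete Laplacian-of-Gaussian kernel, adjusted to zero mean so
    that constant image regions give zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2.0 * sigma**2))
    k = (r2 - 2.0 * sigma**2) / sigma**4 * g
    return k - k.mean()

def convolve2d(img, kern):
    """Plain 'valid' 2-D convolution (kernel is symmetric, so no flip needed)."""
    kh, kw = kern.shape
    ih, iw = img.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def zero_crossings(resp, eps=1e-9):
    """True where the response changes sign between horizontal or
    vertical neighbours (product clearly negative)."""
    zc = np.zeros(resp.shape, dtype=bool)
    zc[:, :-1] |= resp[:, :-1] * resp[:, 1:] < -eps
    zc[:-1, :] |= resp[:-1, :] * resp[1:, :] < -eps
    return zc
```

Applied to a step edge, the zero-crossings of the convolved response
line up with the location of the edge.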

:aim 684
:title {A Subdivision Algorithm in Configuration Space for Findpath with Rotation}
:author Rodney A. Brooks and Tomas Lozano-Perez
:asort Brooks, R.A.; Lozano-Perez, T.
:date December 1982
:cost $2.75
:pages 41
:ADnum (AD-A130565)
:keywords configuration space, find-path, collision avoidance, robotics
:abstract
A hierarchical representation for configuration space is presented,
along with an algorithm for searching that space for collision-free
paths. The details of the algorithm are presented for polygonal
obstacles and a moving object with two translational and one
rotational degrees of freedom.
:end
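The free/full/mixed cell decomposition at the heart of such
subdivision algorithms can be sketched in miniature. The fragment
below is an illustrative 2-D translational version only (the
rotational degree of freedom is omitted) with axis-aligned rectangular
configuration-space obstacles; it is not the authors' algorithm.

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    x0: float; y0: float; x1: float; y1: float

def classify(c, obstacles):
    """FULL: some obstacle covers the cell; MIXED: partial overlap; else FREE."""
    for ox0, oy0, ox1, oy1 in obstacles:
        if ox0 <= c.x0 and c.x1 <= ox1 and oy0 <= c.y0 and c.y1 <= oy1:
            return "FULL"
    for ox0, oy0, ox1, oy1 in obstacles:
        if c.x0 < ox1 and ox0 < c.x1 and c.y0 < oy1 and oy0 < c.y1:
            return "MIXED"
    return "FREE"

def free_cells(c, obstacles, min_size):
    """Recursively subdivide MIXED cells into quadrants; collect FREE leaves."""
    kind = classify(c, obstacles)
    if kind == "FREE":
        return [c]
    if kind == "FULL" or c.x1 - c.x0 <= min_size:
        return []
    mx, my = (c.x0 + c.x1) / 2, (c.y0 + c.y1) / 2
    quads = [Cell(c.x0, c.y0, mx, my), Cell(mx, c.y0, c.x1, my),
             Cell(c.x0, my, mx, c.y1), Cell(mx, my, c.x1, c.y1)]
    return [f for q in quads for f in free_cells(q, obstacles, min_size)]

def adjacent(a, b):
    """Cells sharing a non-degenerate edge segment."""
    over_x = min(a.x1, b.x1) > max(a.x0, b.x0)
    over_y = min(a.y1, b.y1) > max(a.y0, b.y0)
    return ((a.x1 == b.x0 or b.x1 == a.x0) and over_y) or \
           ((a.y1 == b.y0 or b.y1 == a.y0) and over_x)

def find_path(cells, start, goal):
    """Breadth-first search over the free-cell adjacency graph."""
    def containing(p):
        return next(c for c in cells
                    if c.x0 <= p[0] <= c.x1 and c.y0 <= p[1] <= c.y1)
    s, g = containing(start), containing(goal)
    prev, queue = {s: None}, deque([s])
    while queue:
        c = queue.popleft()
        if c == g:
            path = []
            while c is not None:
                path.append(c)
                c = prev[c]
            return path[::-1]
        for n in cells:
            if n not in prev and adjacent(c, n):
                prev[n] = c
                queue.append(n)
    return None
```

A wall with a gap admits a path through the free cells around it; a
wall spanning the whole workspace does not.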

:aim 685
:title {Symbolic Error Analysis and Robot Planning}
:author Rodney A. Brooks
:asort Brooks, R.A.
:date September 1982
:cost $3.00
:pages 85
:ADnum (AD-A121007)
:keywords robotics, error analysis, planning, symbolic algebra
:abstract
A program to control a robot manipulator for industrial assembly
operations must take into account possible errors in parts placement
and tolerances of the parts themselves.  Previous approaches to this
problem have been to (1) engineer the situation so that the errors
are small or (2) let the programmer analyze the errors and take
explicit account of them.  This paper gives the mathematical
underpinnings for building programs (plan checkers) to carry out
approach (2) automatically.  The plan checker uses a geometric
CAD-type data base to infer the effects of actions and the propagation
of errors.  It does this symbolically rather than numerically, so that
computations can be reversed and desired resultant tolerances can be
used to infer required initial tolerances or necessity for sensing.
The checker modifies plans to include sensing and adds constraints to
the plan which ensure that it will succeed.  An implemented system is
described and results of its execution are presented.  The plan
checker could be used as part of an automatic planning system or as an
aid to a human robot programmer.
:end
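The memo's analysis is symbolic; a much-reduced numeric analogue of
running a tolerance computation forward and in reverse is interval
arithmetic. The sketch below, with made-up part dimensions, only
illustrates that idea and is not the plan checker's machinery.

```python
class Interval:
    """Closed interval [lo, hi] standing in for a toleranced quantity."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Forward propagation: uncertainties of a stack-up add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def width(self):
        return self.hi - self.lo

def required_input_width(result_width, other_width):
    """Reversing a sum: if a + b must land in a window of result_width
    and b is already known to width other_width, then a may vary by at
    most the difference (zero if b alone exhausts the budget)."""
    return max(0.0, result_width - other_width)
```

For example, a 10 +/- 0.1 part stacked on a 5 +/- 0.05 shim spans
[14.85, 15.15]; conversely, a 0.2 budget on the stack with the same
shim leaves only 0.1 of allowable variation for the part.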

:aim 686
:title {Computers, Brains, and the Control of Movement}
:author John M. Hollerbach
:asort Hollerbach, J.M.
:date June 1982
:cost $1.50
:pages 12
:keywords motor control, robotics
:abstract
Many of the problems associated with the planning and execution of
human arm trajectories are illuminated by planning and control
strategies which have been developed for robotic manipulators.  This
comparison may provide explanations for the predominance of straight
line trajectories in human reaching and pointing movement, the role of
feedback during arm movement, as well as plausible compensatory
mechanisms for arm dynamics.
:end

:aim 687
:title {The Computational Problem of Motor Control}
:author Tomaso Poggio and B.L. Rosser
:asort Poggio, T.; Rosser, B.L.
:date May 1983
:cost $1.50
:pages 11
:keywords motor control, associative learning, look-up table
:abstract
We review some computational aspects of motor control.  The problem of
trajectory control is phrased in terms of an efficient representation
of the operator connecting joint angles to joint torques.  Efficient
look-up table solutions of the inverse dynamics are related to some
results on the decomposition of function of many variables.  In a
biological perspective, we emphasize the importance of the constraints
coming from the properties of the biological hardware for determining
the solution to the inverse dynamic problem.
:end
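The look-up table idea can be made concrete for a one-joint arm. In
this sketch the dynamics, grid ranges, and physical parameters are all
assumptions chosen for illustration: the torque for each quantized
(angle, acceleration) pair is precomputed and read back by nearest
neighbour, trading memory for on-line computation of the dynamics.

```python
import math

# Illustrative 1-dof arm (parameters are assumptions, not from the memo):
I, M, G, L = 0.5, 1.0, 9.81, 0.3   # inertia, mass, gravity, link length

def torque(q, qdd):
    """Closed-form inverse dynamics: tau = I*qdd + M*G*L*sin(q)."""
    return I * qdd + M * G * L * math.sin(q)

# Precompute the look-up table over a quantized state grid.
NQ, NA = 64, 64
Q = [-math.pi + 2 * math.pi * i / (NQ - 1) for i in range(NQ)]
A = [-10.0 + 20.0 * j / (NA - 1) for j in range(NA)]
TABLE = [[torque(q, a) for a in A] for q in Q]

def torque_lookup(q, qdd):
    """Nearest-neighbour table read replaces on-line evaluation;
    accuracy is limited by the grid resolution."""
    i = min(range(NQ), key=lambda k: abs(Q[k] - q))
    j = min(range(NA), key=lambda k: abs(A[k] - qdd))
    return TABLE[i][j]
```

The quantization error of the table read, relative to the closed form,
shrinks as the grid is refined, which is the memory/accuracy trade-off
such schemes face.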

:aim 691
:title {Open Systems}
:author Carl Hewitt, Peter de Jong
:asort Hewitt, C.; de Jong, P.
:date December 1982
:cost $2.25
:pages 28
:keywords open systems, conceptual modeling, actors, sprites,
description, semantics, problem solving
:abstract
This paper describes some problems and opportunities associated with
conceptual modeling for the kind of "open systems" that we foresee
will increasingly be recognized as a central line of computer system
development.  Computer applications will be based on communication
between sub-systems which will have been developed separately and
independently.  Some of the reasons for independent development are the
following: competition, different goals and responsibilities,
economics, and geographical distribution.  We must deal with all the
problems that arise from this conceptual disparity of sub-systems
which have been independently developed.  Sub-systems will be
open-ended and incremental -- undergoing continual evolution.  There are
no global objects.  The only thing that all the various sub-systems
hold in common is the ability to communicate with each other.  In this
paper we study Open Systems from the viewpoint of Message Passing
Semantics, a research program to explore issues in the semantics of
communication in parallel systems such as negotiation, transaction
management, problem solving, change, and self-knowledge.
:end

:aim 692
:title {Policy-Protocol Interaction in Composite Processes}
:author C.J. Barter
:asort Barter, C.J.
:date September 1982
:cost $1.50
:pages 22
:ADnum (AD-A135733)
:abstract
Message policy is defined to be the description of the disposition of
messages of a single type, when received by a group of processes.
Group policy applies to all the processes of a group, but for a single
message type.  It is proposed that group policy be specified in an
expression which is separate from the code of the processes of the
group, and in a separate notation.  As a result, it is possible to
write policy expressions which are independent of process state
variables, and also to use a simpler control notation based on regular
expressions.  Input protocol, on the other hand, applies to single
processes (or a group as a whole) for all message types.
Encapsulation of processes is presented with an unusual emphasis on
the transactions and resources associated with an encapsulated
process rather than on the state space of the process environment.  This
is due to the notion of encapsulation without shared variables, and to
the association between group policies, message sequences and
transactions.
:end

:aim 697
:title {Binocular Shading And Visual Surface Reconstruction}
:author W.E.L. Grimson
:asort Grimson, W.E.L.
:date August 1982
:cost $2.25
:pages 24
:ADnum (AD-A127058)
:keywords shading, visual surface reconstruction, reflection
properties, photometric stereo
:abstract
Zero-crossing or feature-point based stereo algorithms can, by
definition, determine explicit depth information only at particular
points in the image. To compute a complete surface description, this
sparse depth map must be interpolated. A computational theory of this
interpolation or reconstruction process, based on a {\it surface
consistency constraint}, has previously been proposed. In order to
provide stronger boundary conditions for the interpolation process,
other visual cues to surface shape are examined in this paper. In
particular, it is shown that, in principle, shading information from
the two views can be used to determine the orientation of the surface
normal along the feature-point contours, as well as the parameters of
the reflective properties of the surface material. The numerical
stability of the resulting equations is also examined.
:end
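A drastically simplified, one-dimensional version of the interpolation
step (not the surface-consistency theory itself) can be sketched as
relaxation between sparse depth constraints; the membrane-style
update below, where each free point moves to the mean of its
neighbours, is an assumption chosen for illustration.

```python
def interpolate_depths(n, known, iters=2000):
    """Fill a 1-D depth profile of length n from sparse {index: depth}
    constraints by Gauss-Seidel relaxation: constrained points stay
    fixed, free points repeatedly move to the mean of their neighbours
    (a membrane smoothness term, much weaker than a full surface model)."""
    z = [0.0] * n
    for i, d in known.items():
        z[i] = d
    for _ in range(iters):
        for i in range(1, n - 1):
            if i not in known:
                z[i] = 0.5 * (z[i - 1] + z[i + 1])
    return z
```

With depths pinned only at the two endpoints, the relaxation converges
to the linear ramp between them, the 1-D analogue of interpolating a
sparse stereo depth map into a full surface.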