[comp.doc.techreports] tr-input/mit.pt1

leff@smu.UUCP (Laurence Leff) (08/27/89)

From: ai.mit.edu!deb@uunet.UU.NET (Debra Sterling)
Date: Fri, 25 Aug 89 11:03:55 EDT
To: jherath@gmuvax.gmu.edu, HAINES@cs.umass.edu, trlist-request@smu.edu
Subject: MIT AI Lab Bibliography-Part 1 of 7


The bibliography of available publications from the AI Lab at MIT
follows in seven parts.  If you have a moment, would you acknowledge
receipt of all parts?

Thanks, Deb Sterling, Publications, AI Lab, MIT


% Except for the Bibliographies, publications are available in hardcopy
% only. 

% TO ORDER, specify publications number and author and enclose
% a check payable to the M.I.T. Artificial Intelligence Laboratory for
% the correct amount of U.S. funds.  Prices of publications include
% surface postage to domestic and overseas addresses.

% PREPAYMENT IS REQUIRED. Please note on order if check is sent
% separately. 

% Send orders with payment to:

%   Publications, Room NE43-818
%   M.I.T. Artificial Intelligence Laboratory
%   545 Technology Square
%   Cambridge, MA 02139 USA

% For additional information:

%   Phone number: (617) 253-6773
%   Net address:  Publications@ai.mit.edu

% Another source for these publications is:

% NTIS. Reports assigned an ``AD'' number (such as AD-A123456) are
% available from the National Technical Information Service, 5285 Port
% Royal Road, Springfield, Virginia 22161.  NTIS price information can
% be obtained by calling (703) 487-4650.

% If you would like to receive future updates by e-mail,
% please send your net address to publications@wheaties.ai.mit.edu 

% This is the master list of current AI memos and technical reports.
% The format is for the LISP program that generates the bibliography.
% Memos are listed first, then technical reports. To jump directly to
% the TRs, search for ":tr 474".

:aim 191
:author MIT Artificial Intelligence Laboratory
:title Current Publications Bibliography
:date updated periodically
:reference Available from the MIT AI Lab Publications Office, 545
Technology Square, Cambridge, MA, 02139.
:cost FREE

:aim 501
:title Archived Publications Bibliography
:date March 1988, revised August 1988
:pages 17
:cost FREE
:abstract This is a list (no abstracts) of old AI Lab memos and
technical reports dating approximately from 1958 to 1984.  These
publications are available only from the MIT Microreproduction
Laboratory and/or the National Technical Information Service.
Ordering information is included on the cover page.

:aim 239
:title {HAKMEM}
:author M. Beeler, R. W. Gosper, R. Schroeppel
:asort Beeler, M.; Gosper, R.W.; Schroeppel, R.
:date February 1972
:cost $4.50
:pages 105
:abstract
Here is some little known data which may be of interest to computer
hackers. The items and examples are so sketchy that to decipher them
may require more sincerity and curiosity than a non-hacker can muster.
Doubtless, little of this is new, but nowadays it's hard to tell. So
we must be content to give you an insight, or save you some cycles,
and to welcome further contributions of items, new or used.

:aim 353
:title {Lambda: The Ultimate Imperative}
:author Guy Lewis {Steele, Jr.} and Gerald Jay Sussman
:asort Steele, G.L., Jr.; Sussman, G.J.
:date March 1976
:cost $3.25
:pages 41
:ADnum AD-A030751
:abstract
We demonstrate how to model the following common programming
constructs in terms of an applicative order language similar to LISP:
Simple Recursion, Iteration, Compound Statements and Expressions, GO
TO and Assignment, Continuation-Passing, Escape Expressions, Fluid
Variables, Call by Name, Call by Need, and Call by Reference.  The
models require only (possibly self-referent) lambda application,
conditionals, and (rarely) assignment.  No complex data structures
such as stacks are used.  The models are transparent, involving only
local syntactic transformations.  Some of these models, such as those
for GO TO and assignment, are already well known, and appear in the
work of Landin, Reynolds, and others.  The models for escape
expressions, fluid variables, and call by need with side effects are
new.  This paper is partly tutorial in intent, gathering all the
models together for purposes of context.
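
% [A minimal editorial sketch, not from the memo: one of its models,
% iteration expressed purely as applicative-order lambda application,
% written here in Scheme.]
%
%   ;; Iterative factorial with no loop construct and no assignment:
%   ;; each tail call re-binds the "loop variables" k and acc.
%   (define (fact n)
%     (define (iter k acc)
%       (if (= k 0)
%           acc
%           (iter (- k 1) (* k acc))))  ; tail call acts as a jump
%     (iter n 1))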

:aim 379
:title {LAMBDA: The Ultimate Declarative}
:author Guy Lewis {Steele, Jr.}
:asort Steele, G.L., Jr.
:date November 1976
:cost $3.75
:pages 47
:ADnum AD-A034090
:keywords environments, lambda calculus, procedurally defined data,
data types, optimizing compilers, control structures, function
invocation, temporary variables, continuation passing, actors, lexical
scoping, dynamic binding
:abstract
In this paper, a sequel to ``Lambda: The Ultimate Imperative,'' a new
view of LAMBDA as a {\it renaming} operator is presented and
contrasted with the usual functional view taken by LISP. This view,
combined with the view of function invocation as a kind of generalized
GOTO, leads to several new insights into the nature of the LISP
evaluation mechanism and the symmetry between form and function,
evaluation and application, and control and environment.  
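
% [Editorial sketch of the memo's view of function invocation as a
% generalized GOTO, in Scheme: a two-state automaton written as
% mutually tail-calling procedures.  Each call transfers control and
% renames the parameters, and no stack grows.]
%
%   (define (even-length? lst)
%     (define (even-state l) (if (null? l) #t (odd-state  (cdr l))))
%     (define (odd-state  l) (if (null? l) #f (even-state (cdr l))))
%     (even-state lst))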

:aim 453
:title {The Art of the Interpreter or, The Modularity Complex (Parts Zero, One, and Two)}
:author Guy Lewis Steele Jr. and Gerald Jay Sussman
:asort Steele, G.L., Jr.; Sussman, G.J.
:date May 1978
:cost $4.00
:pages 75
:ADnum AD-A062925
:keywords abstraction, actors, applicative order, bindings, control structures, debugging, dynamic scoping, environments, fluid variables,
FUNARG problem, functional objects, interactive programming,
lambda calculus, lexical scoping, LISP, modularity, procedural data,
recursion equations, referential transparency, SCHEME, side effects,
static scoping, structured programming
:abstract
We examine the effects of various language design decisions on the
programming styles available to a user of the language, with
particular emphasis on the ability to incrementally construct modular
systems. At each step we exhibit an interactive meta-circular
interpreter for the language under consideration.  Each new
interpreter is the result of an incremental change to a previous
interpreter.  The interpreters we exhibit are all written in a simple
dialect of LISP, and all implement LISP-like languages. A subset of
these interpreters constitute a partial historical reconstruction of
the actual evolution of LISP.
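
% [Editorial sketch: the skeleton of a meta-circular evaluator of the
% kind the memo exhibits, in Scheme.  The helpers lookup and
% extend-env, and the closure representation, are assumptions, not the
% memo's code.]
%
%   (define (meta-eval exp env)
%     (cond ((symbol? exp) (lookup exp env))          ; variable
%           ((not (pair? exp)) exp)                   ; self-evaluating
%           ((eq? (car exp) 'quote)  (cadr exp))
%           ((eq? (car exp) 'lambda) (list 'closure exp env))
%           ((eq? (car exp) 'if)
%            (if (meta-eval (cadr exp) env)
%                (meta-eval (caddr exp) env)
%                (meta-eval (cadddr exp) env)))
%           (else (meta-apply (meta-eval (car exp) env)
%                             (map (lambda (e) (meta-eval e env))
%                                  (cdr exp))))))
%   (define (meta-apply proc args)
%     (if (and (pair? proc) (eq? (car proc) 'closure))
%         (meta-eval (caddr (cadr proc))                ; body
%                    (extend-env (cadr (cadr proc))     ; parameters
%                                args
%                                (caddr proc)))         ; saved env
%         (apply proc args)))                           ; host primitive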

:aim 502A
:title {Constraints - A Language for Expressing Almost-Hierarchical
Descriptions} 
:author Gerald Jay Sussman and Guy Lewis {Steele, Jr.}
:asort Sussman, G.J.; Steele, G.L., Jr.
:date August 1981
:cost $3.25
:pages 40
:reference Replaces Memos 433 and 502.  {See {\it Artificial
Intelligence Journal}, Vol. 14, pp. 1-39, 1980.} 
:abstract
We present an interactive system organized around networks of
constraints rather than the programs which manipulate them. We
describe a language of hierarchical constraint networks. We describe
one method of deriving useful consequences of a set of constraints
which we call propagation. Dependence analysis is used to spot and
track down inconsistent subsets of a constraint set. Propagation of
constraints is most flexible and useful when coupled with the ability
to perform symbolic manipulations on algebraic expressions. Such
manipulations are in turn best expressed as alterations or
augmentations of the constraint network. Almost-Hierarchical
Constraint Networks can be constructed to represent the multiple
viewpoints used by engineers in the synthesis and analysis of
electrical networks. These multiple viewpoints are used in terminal
equivalence and power arguments to reduce the apparent synergy in a
circuit so that it can be attacked algebraically.
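
% [Editorial sketch, not the Constraints language itself: the flavor
% of local propagation through a single a+b=c constraint, in Scheme.
% A cell holds a value or the token '?, and the constraint deduces any
% one pin from the other two.]
%
%   (define (make-cell) (vector '?))
%   (define (value c) (vector-ref c 0))
%   (define (known? c) (not (eq? (value c) '?)))
%   (define (set-value! c v) (vector-set! c 0 v))
%   ;; Returns #t if it deduced a new value, #f if nothing to do yet.
%   (define (propagate-adder! a b c)      ; enforces a + b = c
%     (cond ((and (known? a) (known? b) (not (known? c)))
%            (set-value! c (+ (value a) (value b))) #t)
%           ((and (known? a) (known? c) (not (known? b)))
%            (set-value! b (- (value c) (value a))) #t)
%           ((and (known? b) (known? c) (not (known? a)))
%            (set-value! a (- (value c) (value b))) #t)
%           (else #f)))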

:aim 519A
:title {EMACS: The Extensible, Customizable, Self-Documenting Display Editor}
:author Richard M. Stallman
:asort Stallman, R.M.
:date June 1979
:reference Revised March 1981.
:cost $3.25
:pages 29
:ADnum AD-A097914
:keywords display, editor, extensible, interactive, self-documenting
:abstract
EMACS is a display editor which is implemented in an interpreted high
level language.  This allows users to extend the editor by replacing
parts of it, to experiment with alternative command languages, and to
share extensions which are generally useful.  The ease of extension
has contributed to the growth of a large set of useful features. This
paper describes the organization of the EMACS system, emphasizing the
way in which extensibility is achieved and used.

:aim 551
:title {An Outlook On Truth Maintenance}
:author David A. McAllester
:asort McAllester, D.A.
:date August 1980
:cost $3.75
:pages 47
:ADnum AD-A093190
:keywords theorem proving, automated deduction, dependencies,
relevance, truth maintenance, backtracking, assumptions, likelihood
:abstract
Truth maintenance systems have been used in several recent problem
solving systems to record justifications for deduced assertions, to
track down the assumptions which underlie contradictions when they
arise, and to incrementally modify assertional data structures when
assumptions are retracted.  A TMS algorithm is described here that is
substantially different from previous systems.
:end

:aim 555
:title {EMACS Manual for TWENEX Users}
:author Richard M. Stallman
:asort Stallman, R.M.
:date September 1980
:reference Revised May 1981, October 1981, March 1983.
:cost $4.50
:pages 241
:ADnum AD-A093886
:abstract
This manual documents the use and simple customization of the display
editor EMACS with the Twenex (officially known as ``TOPS-20") operating
system.  The reader is not expected to be a programmer.  Even simple
customizations do not require programming skill, but the user who is
not interested in customizing can ignore the scattered customization
hints.  This is primarily a reference manual, but can be used as a
primer.
:end

:aim 556
:title {Phantom Stacks: If you look too hard, they aren't there}
:author Richard M. Stallman
:asort Stallman, R.M.
:date July 1980
:cost $2.50
:pages 12
:ADnum AD-A093189
:abstract
A stack is a very efficient way of allocating and de-allocating memory,
but it works only with a restricted pattern of usage.  Garbage
collection is completely flexible but comparatively costly.  The
implementation of powerful control structures naturally uses memory
which usually fits in with stack allocation but must have the
flexibility to do otherwise from time to time.  How can we manage
memory which only once in a while violates stack restrictions, without
paying a price the rest of the time?  This paper provides an extremely
simple way of doing so, in which only the part of the system which
actually uses the stack needs to know anything about the stack.  We
call them Phantom Stacks because they are liable to vanish if
subjected to close scrutiny.  Phantom Stacks will be used in the next
version of the Artificial Intelligence Lab's Scheme microprocessor
chip.
:end

:aim 642
:title {Semantics of Inheritance and Attributions in the Description System Omega}
:author Giuseppe Attardi and Maria Simi
:asort Attardi, G.; Simi, M.
:date August 1981
:cost $3.75
:pages 38
:ADnum AD-A104776
:keywords description, inheritance, semantic networks, model, attribute,
knowledge representation, logic, consistency
:abstract
Omega is a description system for knowledge embedding which
incorporates some of the attractive modes of expression in natural
language.  Omega represents an investigation both on logic formalisms
more expressive than first order predicate logic and on the
foundations of knowledge representation.  As a logic, Omega achieves
the goal of an intuitively sound and consistent theory of classes
which permits unrestricted abstraction within a powerful logic system.
Description abstraction is the construct provided in Omega
corresponding to set abstraction.  Attributions and inheritance are
the basic mechanisms for knowledge structuring.  Semantic models are
provided, an axiomatization is derived, and the consistency and
completeness of the logic are established.

:aim 647
:title {Nature Abhors an Empty Vacuum}
:author Marvin Minsky
:asort Minsky, M.
:date August 1981
:cost $2.50     
:pages 13
:ADnum AD-A106362
:keywords discrete-physics, quantum, Heisenberg, vacuum
:abstract
Imagine a crystalline world of tiny, discrete ``cells", each knowing
only what its nearest neighbors do.  Each volume of space contains
only a finite amount of information, because space and time come in
discrete units.  In such a universe, we'll construct analogs of
particles and fields -- and ask what it would mean for these to
satisfy constraints like conservation of momentum.  In each case
classical mechanics will break down -- on scales both small and large,
and strange phenomena emerge: a maximal velocity, a slowing of
internal clocks, a bound on simultaneous measurement, and quantum-like
effects in very weak or very intense fields.
:end

:aim 673
:author Ken Haase
:asort Haase, K.W.
:title CAULDRONS: An Abstraction For Concurrent Problem Solving
:date September 1986
:pages 44
:cost $3.75
:adnum AD-A194128
:keywords cauldrons, problem solving, distributed AI
:abstract This research extends a tradition of distributed theories of
{\it mind} into the implementation of a distributed problem solver. In
this problem solver a number of ideas from Minsky's {\bf Society of
Mind} are implemented and are found to provide powerful abstractions
for the programming of distributed systems. These abstractions are the
{\it cauldron}, a mechanism for instantiating reasoning contexts; the
{\it frame}, a way of modularly describing those contexts; and the {\it
goal-node}, a mechanism for bringing a particular context to bear on a
specific task. The implementation of these abstractions and of the
distributed problem solver in which they run is described, accompanied
by examples of their application to various domains.

:aim 679
:title {Learning Physical Descriptions from Functional Definitions,
Examples, and Precedents} 
:author Patrick H. Winston, Thomas O. Binford, Boris Katz, and Michael
Lowry
:asort Winston, P.H.; Binford, T.O.; Katz, B.; Lowry, M.
:date November 1982
:reference Revised January 1983.
:cost $3.25
:pages 23
:ADnum AD-A127047
:keywords learning, form and function
:abstract
It is too hard to tell vision systems what things look like.  It is
easier to talk about purpose and what things are for.  Consequently,
we want vision systems to use functional descriptions to identify
things, when necessary, and we want them to learn physical
descriptions for themselves, when possible.  This paper describes a
theory that explains how to make such systems work.  The theory is a
synthesis of two sets of ideas: ideas about learning from precedents
and exercises developed at MIT, and ideas about physical description
developed at Stanford.  The strength of the synthesis is illustrated
by way of representative experiments.  All of these experiments have
been performed with an implemented system.
:end

:aim 683
:title {Visual Algorithms}
:author Tomaso Poggio
:asort Poggio, T.
:date May 1982
:cost $3.25
:pages 28
:ADnum AD-A127251
:keywords polynomial algorithms, parallel/serial, neural hardware,
perceptrons, nonlinear mappings
:abstract
Nonlinear, local and highly parallel algorithms can perform several
simple but important visual computations.  I study here the class of
polynomial algorithms to exemplify some of the important issues for
visual processing like linear vs. nonlinear and local vs.  global.  I
also consider how algorithms of this type could be implemented in
nervous hardware, in terms of synaptic interactions strategically
located in a dendritic tree.  In the appendix, another (nonlinear)
differential operator, the second directional derivative along the
gradient, is briefly discussed as an alternative to the Laplacian.
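
% [For reference (editorial addition, not quoted from the memo): the
% nonlinear operator mentioned in the appendix, the second directional
% derivative along the gradient direction $n = \nabla I/\|\nabla I\|$,
% has the standard form
%
%   $$\frac{\partial^2 I}{\partial n^2} =
%     \frac{I_x^2 I_{xx} + 2 I_x I_y I_{xy} + I_y^2 I_{yy}}
%          {I_x^2 + I_y^2}.$$ ]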

:aim 691
:title {Open Systems}
:author Carl Hewitt, Peter de Jong
:asort Hewitt, C.; de Jong, P.
:date December 1982
:cost $3.25
:pages 28
:keywords open systems, conceptual modeling, actors, sprites,
description, semantics, problem solving
:abstract
This paper describes some problems and opportunities associated with
conceptual modeling for the kind of ``open systems" we foresee must and
will be increasingly recognized as a central line of computer system
development.  Computer applications will be based on communication
between sub-systems which will have been developed separately and
independently.  The only thing that all the various sub-systems hold
in common is the ability to communicate with each other.  In this
paper we study Open Systems from the viewpoint of Message Passing
Semantics, a research program to explore issues in the semantics of
communication in parallel systems such as negotiation, transaction
management, problem solving, change, and self-knowledge.
:end

:aim 701
:title {Computational Introspection}
:author John Batali
:asort Batali, J.
:date February 1983
:cost $4.00
:pages 68
:ADnum AD-A127132
:keywords problem solving, reflection, action, philosophy of mind,
introspection, LISP, representation
:abstract
Introspection is the process of thinking about one's own thoughts and
feelings.  In this paper, I discuss recent attempts to make
computational systems that exhibit introspective behavior: [Smith,
1982], [Weyhrauch, 1978], and [Doyle, 1980]. Each presents a system
capable of manipulating representations of its own program and current
context. I argue that introspective ability is crucial for intelligent
systems -- without it an agent cannot represent certain problems that it
must be able to solve.  A theory of intelligent action would describe
how and why certain actions intelligently achieve an agent's goals.
The agent would both embody and represent this theory: it would be
implemented as the program for the agent; and the importance of
introspection suggests that the agent represent its theory of action
to itself.
:end

:aim 702
:title {Representations for Reasoning About Change}
:author Reid G. Simmons and Randall Davis
:asort Simmons, R.G.; Davis, R.
:date April 1983
:cost $3.75
:pages 54
:reference See SIGART Proc., April 1983, Workshop on Motion, Toronto,
Canada.
:ADnum AD-A131649
:keywords processes, action, knowledge representation, expert systems,
qualitative reasoning
:abstract
This paper explores representations used to reason about objects which
change over time and the processes which cause changes.  Specifically,
we are interested in solving a problem known as geologic
interpretation.  To help solve this problem, we have developed a
simulation technique, which we call {\it imagining}.  Imagining takes
a sequence of events and simulates them by drawing diagrams.
In order to do this imagining, we have developed two representations
of objects, one involving {\it histories} and the other involving {\it
diagrams}, and two corresponding representations of physical
processes, each suited to reasoning about one of the object
representations.  

:aim 747
:title A Structural Approach to Analogy
:author Hormoz Mansour
:asort Mansour, H.
:date November 1983
:pages 28
:cost $3.25
:keywords structural analogy, contextual analogy, knowledge
acquisition, nested frames, automatic acquisition
:abstract
There are multiple sorts of reasoning by analogy between two domains;
the one with which we are concerned is a type of contextual analogy.
The purpose of this paper is to see whether two domains that look
analogous would be analogous in all aspects and contexts. To perform
this, we analyse the domain according to different particularities.
For each particularity or context we continue the analysis and search
for another one within the same domain. In this way we create a kind
of structure for the different domains. This sort of analysis is
represented by frames and frames which are nested within each other.
This paper describes this concept of structural analogy and an
implemented system based on it called ``MULTI-ANALOG,'' a limited
example of knowledge-acquisition, problem solving, and
automatic-acquisition.

:aim 750
:title {Design Issues in Parallel Architecture for Artificial
Intelligence}
:author Carl Hewitt and Henry Lieberman
:asort Hewitt, C.; Lieberman, H.
:date November 1983
:cost $2.50
:pages 15
:adnum AD-A142482
:keywords architecture, parallelism, actors, Act2, artificial intelligence,
Apiary, message passing, reasoning
:abstract
Development of highly intelligent computers requires a conceptual
foundation that will overcome the limitations of the von Neumann
architecture. Architectures for such a foundation should meet the
following design goals:
*) Address the fundamental organizational issues of large-scale
parallelism and sharing in a fully integrated way.  This means
attention to organizational principles, as well as hardware and
software.
*) Serve as an experimental apparatus for testing large-scale
artificial intelligence systems.
*) Explore the feasibility of an architecture based on abstractions,
which serve as natural computational primitives for parallel
processing.  Such abstractions should be logically independent of
their software and hardware host implementations.
In this paper we lay out some of the
fundamental design issues in parallel architectures for Artificial
Intelligence, delineate limitations of previous parallel architectures,
and outline a new approach that we are pursuing. 

:aim 751
:title Analog ``Neuronal" Networks in Early Vision
:author Christof Koch, Jose Marroquin, and Alan Yuille
:asort Koch, C.; Marroquin, J.L.; Yuille, A.L.
:date June 1985
:pages 17
:cost $2.50
:keywords analog networks, analog-digital hardware, parallel computers,
surface interpolation, surface reconstruction, optimization problem,
regularization theory, early vision
:abstract 
Hopfield and Tank (1985) have shown that networks of nonlinear analog
``neurons" can be effective in computing the solution of optimization
problems in early vision. In this paper, we show how these networks
can be generalized to solve the non-convex energy functionals of early
vision. We illustrate this approach by implementing a specific network
solving the problem of reconstructing a smooth surface while
preserving its discontinuities from sparsely sampled data (Geman and
Geman, 1984; Terzopoulos, 1984). These results suggest a novel
computational strategy for solving such problems for both biological
and artificial vision systems.
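
% [Editorial note: the standard Hopfield-Tank network referred to here
% evolves analog units by
%
%   $$\dot{u}_i = -\,u_i/\tau + \sum_j T_{ij} V_j + I_i,
%     \qquad V_i = g(u_i),$$
%
% with $g$ a sigmoid; for symmetric $T_{ij}$ this dynamics decreases
% an energy of the form $E = -\frac{1}{2}\sum_{ij} T_{ij} V_i V_j -
% \sum_i I_i V_i$ (plus a leak term), so the network settles into a
% minimum of the cost being optimized.]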

:aim 755
:title The Copycat Project: An Experiment in Nondeterminism and
Creative Analogies
:author Douglas Hofstadter
:asort Hofstadter, D.
:date January 1984
:cost $3.75
:pages 47
:adnum AD-A142744
:keywords analogy, nondeterminism, parallelism, randomness,
statistically emergent mentality, semanticity, slippability,
computational temperature
:abstract
A microworld is described, in which many analogies involving
strikingly different concepts and levels of subtlety can be made.  The
question ``What differentiates the good ones from the bad ones?" is
discussed, and then the problem of how to implement a computational
model of the human ability to come up with such analogies (and to have
a sense for their quality) is considered.  

:aim 768
:author V. Torre and T. Poggio
:asort Torre, V.; Poggio, T.
:title On Edge Detection
:date August 1984
:cost $3.25
:adnum AD-A148573
:pages 41
:keywords numerical differentiation, zero crossings, regularization
:abstract
This paper discusses the critical, intermediate goal of edge detection,
which is the detection and characterization of significant intensity
change.
To characterize the types of intensity changes, derivatives of
different types, and possibly of different scales, are needed.  Thus, we
consider this part of edge detection as a problem in numerical
differentiation. We show that numerical differentiation of images is
an ill-posed problem in the sense of Hadamard.  Differentiation needs
to be {\it regularized} by a filtering operation before
differentiation. This shows that this part of edge detection consists
of two steps, a {\it filtering} step and a {\it differentiation} step.
Following this perspective, the paper discusses in detail the
following theoretical aspects of edge detection: (1) The properties of
different types of filters are derived. (2) Relationships among
several 2-D differential operators are established. (3) Geometrical
and topological properties of the zero crossings of differential
operators are studied in terms of transversality and Morse theory. We
discuss recent results on the behavior and the information content of
zero crossings obtained with filters of different sizes. Finally, some
of the existing local edge detector schemes are briefly outlined in
the perspective of our theoretical results.
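
% [Editorial note: the two steps collapse into one convolution, since
% differentiation commutes with the regularizing filter $g$:
%
%   $$\frac{d}{dx}\,(g * I) = \frac{dg}{dx} * I,$$
%
% so this part of edge detection reduces to convolving the image with
% the derivative of a smoothing filter and then marking zero crossings
% or extrema.]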

:aim 773
:author Tomaso Poggio and Vincent Torre
:asort Poggio, T.; Torre, V.
:title Ill-Posed Problems and Regularization Analysis in Early Vision
:date April 1984
:cost $2.50
:adnum AD-A147753
:pages 14
:reference Also, C.B.I.P. Paper 001
:keywords early vision, regularization theory, edge detection,
ill-posed problems, motion analysis, variational problems
:abstract
One of the best definitions of early vision is that it is inverse
optics -- a set of computational problems that both machines and
biological organisms have to solve.  While in classical optics the
problem is to determine the images of physical objects, vision is
confronted with the inverse problem of recovering three-dimensional
shape from the light distribution in the image. Most processes of
early vision such as stereomatching, computation of motion and all the
``structure from" processes can be regarded as solutions to inverse
problems. This common characteristic of early vision can be
formalized -- {\it most early vision problems are ``ill-posed problems"
in the sense of Hadamard}. We will show that a mathematical theory
developed for regularizing ill-posed problems leads in a natural way
to the solution of early vision problems in terms of variational
principles of a certain class.  This is a new theoretical framework
for some of the variational solutions already obtained in the analysis
of early vision processes. It also shows how several other problems in
early vision can be approached and solved.
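
% [Editorial note: the standard (Tikhonov) regularization scheme the
% memo builds on replaces the ill-posed problem $Az = y$ by the
% variational principle
%
%   $$\min_z \;\|Az - y\|^2 + \lambda \|Pz\|^2,$$
%
% where $y$ is the image data, $P$ is a stabilizing operator embodying
% a smoothness assumption, and $\lambda$ trades fidelity to the data
% against the plausibility of the solution.]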

:aim 774
:author Gerald Roylance
:asort Roylance, G.
:title Some Scientific Subroutines in LISP
:date September 1984
:cost $2.50
:adnum AD-A147889
:pages 12
:keywords LISP, math subroutines
:abstract
Here is a LISP library of mathematical functions that calculates
hyperbolic and inverse hyperbolic functions, Bessel functions,
elliptic integrals, the gamma and beta functions, and the incomplete
gamma and beta functions.  There are probability density functions,
cumulative distributions, and random number generators for the normal,
Poisson, chi-square, Student's T, and Snedecor's F distributions.
There are also routines for multiple linear regression, Fletcher-Powell
unconstrained minimization, numerical integration, root finding, and
convergence, as well as code to factor numbers and to run the
Solovay-Strassen probabilistic primality test.
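
% [Editorial sketch, not from the memo's library: the flavor of one
% such routine, a Box-Muller normal random number generator, in a
% Lisp/Scheme style.  The name and the (random 1.0) uniform generator
% are assumptions.]
%
%   (define (normal-random mean sigma)
%     (let ((u1 (random 1.0))   ; uniform deviate in [0,1)
%           (u2 (random 1.0)))
%       (+ mean
%          (* sigma
%             (sqrt (* -2.0 (log (- 1.0 u1))))  ; 1-u1 > 0, so log is safe
%             (cos (* 2.0 3.14159265358979 u2))))))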

:aim 783
:author Tomaso Poggio and Christof Koch
:asort Poggio, T.; Koch, C.
:title An Analog Model of Computation for the Ill-Posed Problems of
Early Vision
:date May 1984
:cost $2.50
:adnum AD-A147726
:pages 16
:reference Also C.B.I.P. Paper 002; and {\it Journal of Robotics
Research}, Vol. 5, No. 1, Spring 1986.
:keywords early vision, parallel processing,
elect./chem./neuronal networks, regularization analysis,
neural hardware, analog computation, motion analysis, variational problems 
:abstract
A large gap exists at present between computational theories of vision
and their possible implementation in neural hardware. The model of
computation provided by the digital computer is clearly unsatisfactory
for the neurobiologist, given the increasing evidence that neurons are
complex devices, very different from simple digital switches. It is
especially difficult to imagine how networks of neurons may solve the
equations involved in vision algorithms in a way similar to digital
computers. In this paper, we suggest an analog model of computation in
electrical or chemical networks for a large class of vision problems,
that maps more easily into biologically plausible mechanisms.  Poggio
and Torre (1984) have recently recognized that early vision problems
such as motion analysis (Horn and Schunck, 1981; Hildreth, 1984a,b),
edge detection (Torre and Poggio, 1984), surface interpolation
(Grimson, 1981; Terzopoulos, 1984), shape-from-shading (Ikeuchi and
Horn, 1981) and stereomatching can be characterized as mathematically
ill-posed problems in the sense of Hadamard (1923). Ill-posed problems
can be ``solved", according to regularization theories, by variational
principles of a specific type. A natural way of implementing
variational problems is with electrical, chemical, or neuronal networks. We
present specific networks for solving several low-level vision
problems, such as the computation of visual motion and edge detection.

:aim 788
:author G. Edward {Barton, Jr.}
:asort Barton, G.E., Jr.
:title Toward A Principle-Based Parser
:date July 1984
:cost $3.75
:adnum AD-A147637
:pages 47
:keywords natural language, parsing, syntax, linguistics, generative
grammar, GB-theory, metarules, modularity
:abstract
Parser design lags behind linguistic theory. While modern
transformational grammar has largely abandoned complex,
language-specific rule systems in favor of modular subsystems of
principles and parameters, the rule systems that underlie existing
natural-language parsers are still large, detailed, and complicated.
The shift to modular theories in linguistics took place because of the
scientific disadvantages of such rule systems.  Those scientific ills
translate into engineering maladies that make building
natural-language systems difficult. The cure for these problems should
be the same in parser design as it was in linguistic theory. The shift
to modular theories of syntax should be replicated in parsing
practice; a parser should base its actions on interacting modules of
principles and parameters rather than a complex, monolithic rule
system. If it can be successfully carried out, the shift will make it
easier to build natural-language systems because it will shorten and
simplify the language descriptions that are needed for parsing.  It
will also allow parser design to track new developments in linguistic theory.

:aim 792
:author J.L. Marroquin
:asort Marroquin, J.L.
:title Surface Reconstruction Preserving Discontinuities
:date August 1984
:cost $3.25
:adnum AD-A146741
:pages 25
:keywords surface reconstruction, discontinuities, Markov random
fields, Bayesian estimation 
:abstract
This paper presents some experimental results that indicate the
plausibility of using non-convex variational principles to reconstruct
piecewise smooth surfaces from sparse and noisy data.  This method
uses prior generic knowledge about the geometry of the discontinuities
to prevent the blurring of the boundaries between continuous
subregions. We include examples of the application of this approach to
the reconstruction of synthetic surfaces, and to the interpolation of
disparity data from the stereo processing of real images.

:aim 795
:author Christof Koch and Tomaso Poggio
:asort Koch, C.; Poggio, T.
:title Biophysics of Computation: Neurons, Synapses and Membranes
:date October 1984
:cost $4.00
:pages 73
:reference C.B.I.P. Paper 008; and {\it Synaptic Function}, G.M.
Edelman, W.E. Gall and W.M. Cowan, editor, John Wiley \& Sons, New
York, NY, 1987.
:keywords computational systems, biophysics, information processing,
AND-NOT, spikes, spines,
active processes
:abstract
Synapses, membranes and neurotransmitters play an important role in
processing information in the nervous system. We do not know, however,
what biophysical mechanisms are critical for neuronal computations,
what elementary information processing operations they implement, and
which sensory or motor computations they underlie. In this paper, we
outline an approach to these problems. We review a number of different
biophysical mechanisms, such as synaptic interactions between
excitation and inhibition, dendritic spines, non-impulse generating
membrane nonlinearities and transmitter-regulated voltage channels.
For each one, we discuss the information processing operations that
may be implemented. All of these mechanisms act either within a few
milliseconds, such as the action potential or synaptic transmission,
or over several hundred milliseconds or even seconds, modulating some
property of the circuit. In some cases we will suggest specific
examples where a biophysical mechanism underlies a given computation.
In particular, we will discuss the neuronal operations, and their
implementation, underlying direction selectivity in the vertebrate retina.

:aim 796
:author Alan Bawden and Philip E. Agre
:asort Bawden, A.; Agre, P.E.
:title What a parallel programming language has to let you say
:date September 1984
:cost $3.25
:adnum AD-A147854
:pages 26
:keywords Connection Machine, programming languages, parallel computers, compiler theory, message passing
:abstract
We have implemented in simulation a prototype language for the
Connection Machine called CL1. CL1 is an extrapolation of serial
machine programming language technology: in CL1 one programs the
individual processors to perform local computations and talk to the
communications network. We present details of the largest of our
experiments with CL1, an interpreter for Scheme (a dialect of LISP)
that allows a large number of different Scheme programs to be run in
parallel on the otherwise SIMD Connection Machine. Our aim was not to
propose Scheme as a language for Connection Machine programming, but
to gain experience using CL1 to implement an interesting and familiar
algorithm. Consideration of the difficulties we encountered led us to
the conclusion that CL1 programs do not capture enough of the causal
structure of the processes they describe. Starting from this
observation, we have designed a successor language called CGL (for
Connection Graph Language).

:aim 798
:author Hormoz Mansour
:asort Mansour, H.
:title The Use of Censors for Nonmonotonic Reasoning and Analogy in
Medical Decision-Making
:date November 1985
:pages 43
:cost $3.75
:keywords censor, analogy, non-monotonic logic, learning,
thyroid diseases, neuroendocrinology
:abstract A patient rarely has a single, isolated disease. The
situation is usually much more complex since the different parts of
the human organism and metabolism interact with each other and follow
several feedback patterns. These interactions and feedback patterns
become more important with the addition of the external environment.
When a disease is present, one of the first steps of medical diagnosis
should be to determine whether another disease
interacts with (``Censors'') or changes the significant symptoms,
syndromes, or results of the laboratory tests of the first disease.
Understanding of this interaction and the appropriate reasoning is
based on a type of non-monotonic logic. We will try, within this paper,
to see the effect of two diseases on each other. One important part of
the effect of two diseases on each other is the entrancing effect of
what we call ``Censors.'' In addition, causal reasoning, reasoning by
analogy, and learning from precedents are important and necessary for
a human-like expert in medicine. Some aspects of their application to
thyroid diseases, with an implementation system, are considered in
this paper.

:aim 800
:author Demetri Terzopoulos
:asort Terzopoulos, D.
:title Computing Visible-Surface Representations
:date March 1985
:cost $3.75
:pages 61
:adnum AD-A160602
:keywords vision, multiresolution reconstruction, finite elements,
discontinuities, surface representation, variational principles,
generalized splines, regularization
:abstract
The low-level interpretation of images provides constraints on 3-D
surface shape at multiple resolutions, but typically only at scattered
locations over the visual field. Subsequent visual processing can be
facilitated substantially if the scattered shape constraints are
immediately transformed into visible-surface representations that
unambiguously specify surface shape at every image point. The required
transformation is shown to lead to an ill-posed surface reconstruction
problem. A well-posed variational principle formulation is obtained by
invoking ``controlled continuity," a physically nonrestrictive
(generic) assumption about surfaces which is nonetheless strong enough
to guarantee unique solutions.  The variational principle, which
admits an appealing physical interpretation, is locally discretized by
applying the finite element method to a piecewise, finite element
representation of surfaces. This forms the mathematical basis of a
unified and general framework for computing visible-surface
representations.  An efficient surface reconstruction algorithm is developed.
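
% [Editorial note: one common way to write a ``controlled continuity"
% stabilizer blends membrane and thin-plate terms with spatially
% varying weights, e.g.
%
%   $$E(f) = \frac{1}{2}\iint \rho\,\bigl[\tau\,(f_x^2 + f_y^2)
%     + (1-\tau)\,(f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2)\bigr]\,dx\,dy,$$
%
% where $\rho(x,y)$ and $\tau(x,y)$ control where continuity of the
% surface and of its orientation is enforced; this is a sketch of the
% idea, not the memo's exact functional.]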

:aim 801
:author Kent Pitman
:asort Pitman, K.
:title The Description of Large Systems
:date September 1984
:cost $3.25
:adnum AD-A148072
:pages 32
:keywords compilation, large systems, LISP, system maintenance
:abstract 
In this paper, we discuss the problems associated with the description
and manipulation of large systems when their sources are not
maintained as single files.  We show why and how tools that address
these issues, such as Unix MAKE and Lisp Machine DEFSYSTEM, have
evolved.
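
% [Editorial sketch: what a system description of the DEFSYSTEM sort
% discussed here looks like.  The syntax below is illustrative only,
% not the actual Lisp Machine DEFSYSTEM or Unix MAKE syntax.]
%
%   (defsystem editor
%     (:module buffers  ("buffer" "marks"))
%     (:module display  ("redisplay") :depends-on (buffers))
%     (:module commands ("commands")  :depends-on (buffers display)))
%
% Given such a declaration, a tool can recompile only the modules whose
% sources, or whose dependencies, have changed.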

:aim 803
:author Demetri Terzopoulos
:asort Terzopoulos, D.
:title Multigrid Relaxation Methods and the Analysis of Lightness,
Shading and Flow
:date October 1984
:cost $3.25
:adnum AD-A158173
:pages 23
:keywords computer vision, lightness, optical flow, partial
differential equations, multigrid relaxation, shape from shading,
variational principles, parallel 
:abstract
Image analysis problems posed mathematically as variational principles
or as partial differential equations, are amenable to numerical
solution by relaxation algorithms that are local, iterative, and often
parallel. Although they are well suited structurally for
implementation on massively parallel, locally-interconnected
computational architectures, such distributed algorithms are seriously
handicapped by an inherent inefficiency at propagating constraints
between widely separated processing elements. Hence, they converge
extremely slowly when confronted by the large representation necessary
for low-level vision. Application of multigrid methods can overcome
this drawback, as we established in previous work on 3-D surface
reconstruction. In this paper, we develop efficient multiresolution
iterative algorithms for computing lightness, shape-from-shading, and
optical flow, and we evaluate the performance of these algorithms on
synthetic images. The multigrid methodology that we describe is
broadly applicable in low-level vision. Notably, it is an appealing
strategy to use in conjunction with regularization analysis for the
efficient solution of a wide range of ill-posed visual reconstruction problems.
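
% [Editorial note: the heart of a multigrid scheme is the two-grid
% correction cycle.  After a few relaxation sweeps on the fine grid h,
%
%   $$r_h = f_h - A_h u_h, \qquad
%     A_{2h} e_{2h} = I_h^{2h} r_h, \qquad
%     u_h \leftarrow u_h + I_{2h}^{h} e_{2h},$$
%
% i.e., restrict the residual to the coarse grid, solve (or recurse)
% for the correction there, and interpolate it back; coarse grids carry
% the long-range components of the constraints cheaply.]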

:aim 806
:author John Canny
:asort Canny, J.
:title Collision Detection for Moving Polyhedra
:date October 1984
:cost $3.25
:adnum AD-A148961
:pages 17
:keywords collision detection, collision avoidance, motion planning,
robotics, geometric modelling	      
:abstract
We consider the problem of moving a three dimensional solid object
among polyhedral obstacles. The traditional formulation of
configuration space for this problem uses three translational
parameters and three {\it angles} (typically Euler angles), and the
constraints between the object and obstacles involve transcendental
functions. We show that a quaternion representation of rotation yields
constraints which are purely algebraic in a higher-dimensional space.
By simple manipulation, the constraints may be projected down into a
six dimensional space with no increase in complexity. Using this
formulation, we derive an efficient {\it exact} intersection test for
an object which is translating and rotating among obstacles.
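
% [Editorial note: the quaternion representation used here rotates a
% vector $v$ by a quaternion $q = (q_0, q_1, q_2, q_3)$ via the
% conjugation
%
%   $$v' = q\,v\,q^{*}, \qquad \|q\|^2 = q_0^2+q_1^2+q_2^2+q_3^2 = 1,$$
%
% so the entries of the rotation matrix are quadratic polynomials in
% the $q_i$, and contact constraints between object and obstacle become
% purely algebraic rather than transcendental in Euler angles.]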

:aim 811 
:author Richard J. Doyle 
:asort Doyle, R.J.  
:title Hypothesizing and Refining Causal Models 
:date December 1984 
:cost $4.50 
:ADnum AD-A158165 
:pages 108 
:keywords learning, causal reasoning, qualitative reasoning, teleological
reasoning, theory formation, planning, analogy, quantities
:abstract An important common sense competence is the ability to
hypothesize causal relations. This report presents a set of
constraints which make the problem of formulating causal hypotheses
about simple physical systems a tractable one. The constraints include
a temporal and physical proximity requirement and a set of abstract
causal explanations for changes in physical systems in terms of
dependences between quantities.  These constraints were embedded in a
learning system which was tested in two domains: a sink and a toaster.
Inaccurate predictions indicate deficiencies in the causal models and
the need to rehypothesize. The learning system successfully generates
and refines naive causal models of these simple physical systems.

:aim 813
:author Berthold K.P. Horn and Michael J. Brooks
:asort Horn, B.K.P.; Brooks, M.J.
:title The Variational Approach to Shape from Shading
:date March 1985
:cost $3.25
:pages 33
:reference {{\it Computer Vision, Graphics, and Image Processing}, Vol.
33, No. 2, 1986.}
:keywords calculus of variations, parallel iteration, regularization,
shading, shape, shape from shading
:abstract
We develop a systematic approach to the discovery of parallel
iterative schemes for solving the shape-from-shading problem on a
grid. A standard procedure for finding such schemes is outlined, and
subsequently used to derive several new ones. The shape-from-shading
problem is known to be mathematically equivalent to a nonlinear
first-order partial differential equation in surface elevation.  To
avoid the problems inherent in methods used to solve such equations,
we follow previous work in reformulating the problem as one of finding
a surface orientation field that minimizes the integral of the
brightness error. The calculus of variations is then employed to
derive the appropriate Euler equations on which iterative schemes can
be based. Different schemes result if one uses different parameters to
describe surface orientation. We derive two new schemes, using unit
surface normals, that facilitate the incorporation of the occluding
boundary information.  These schemes, while more complex, have several
advantages over previous ones.
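
% [Editorial note: in outline, the reformulation minimizes the
% integrated brightness error over orientation fields,
%
%   $$\min_{\mathbf{n}} \iint \bigl(E(x,y) -
%     R(\mathbf{n}(x,y))\bigr)^2\,dx\,dy,
%     \qquad \|\mathbf{n}\| = 1,$$
%
% where $E$ is the observed image brightness and $R$ the reflectance
% map; the Euler equations of this functional yield the parallel
% iterative schemes, with the unit-normal constraint handled, e.g., by
% a penalty or Lagrange multiplier term.]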

:aim 815
:author Kenneth Man-Kam Yip
:asort Yip, K.
:title Tense, Aspect and Cognitive Representation of Time
:date December 1984
:cost $3.25
:ADnum AD-A159306
:pages 26
:keywords temporal logic, linguistic constraints, learnability, tense
and aspect, processing constraints, markedness
:abstract This paper explores the relationships between a
computational theory of temporal representation (as developed by James
Allen) and a formal linguistic theory of tense (as developed by
Norbert Hornstein) and aspect. It aims to provide explicit answers to
four fundamental questions: (1) what is the computational justification
for the primitives of a linguistic theory; (2) what is the
computational explanation of the formal grammatical constraints; (3)
what are the processing constraints imposed on the learnability and
markedness of these theoretical constructs; and (4) what are the
constraints that a linguistic theory imposes on representation.  We
show that one can effectively exploit the interface between the
language faculty and the cognitive faculties by using linguistic
constraints to determine restrictions on the cognitive representations
and {\it vice versa}. Three main results are obtained: (1) We derive
an explanation of an observed grammatical constraint on tense -- the
Linear Order Constraint -- from the information monotonicity property
of the constraint propagation algorithm of Allen's temporal system;
(2) We formulate a principle of markedness for the basic tense
structures based on the computational efficiency of the temporal
representations; and (3) We show Allen's interval-based temporal
system is not arbitrary, but it can be used to explain independently
motivated linguistic constraints on tense and aspect interpretations.
We also claim that the methodology of research developed in this study
-- ``cross-level" investigation of independently motivated formal
grammatical theory and computational models -- is a powerful paradigm
with which to attack representational problems in basic cognitive
domains, e.g. space, time, causality, etc.

:aim 816
:author Richard C. Waters
:asort Waters, R.C.
:title PP: A Lisp Pretty Printing System
:date December 1984
:cost $3.25
:ADnum AD-A157092
:pages 37
:keywords pretty printing, formatted output, abbreviated output, LISP
:abstract
The PP system provides an efficient implementation of the Common Lisp
pretty printing function PPRINT. In addition, PP goes beyond ordinary
pretty printers by providing mechanisms which allow the user to
control the exact form of pretty printed output. This is done by
extending Lisp in two ways. First, several new FORMAT directives are
provided which support dynamic decisions about the placement of
newlines based on the line width available for output. Second, the
concept of print-self methods is extended so that it can be applied to
lists as well as to objects which can receive messages. Together,
these extensions support pretty printing of both programs and data
structures. The PP system also modifies the way that the Lisp printer
handles the abbreviation of output. The traditional mechanisms for
abbreviating lists based on nesting depth and length are extended so

jen@PRC.UNISYS.COM (Judie Norton) (09/06/89)

received by jen@prc.unisys.com, Freeman Library, Unisys, Paoli, PA