E1AR0002@SMUVM1.BITNET (11/15/86)
TECHNICAL NOTE: 309\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: AN ABSTRACT PROLOG INSTRUCTION SET\\
AUTHOR: DAVID H.D. WARREN\\
DATE: OCTOBER 1983\\[0.01in]
ABSTRACT: This report describes an abstract Prolog instruction set
suitable for software, firmware, or hardware implementation. The
instruction set is abstract in that certain details of its encoding
and implementation are left open, so that it may be realized in a
number of different forms. The forms that are contemplated are:
\begin{itemize}
\item Translation into a compact bytecode, with emulators written in
C (for maximum portability), Progol (a macrolanguage generating
machine code, for efficient software implementations as an
alternative to direct compilation on machines such as the
VAX), and VAX-730 microcode.
\item Compilation into the standard instructions of machines such as
the VAX or DECsystem-10/20.
\item Hardware (or firmware) emulation of the instruction set on a
specially designed Prolog processor.
\end{itemize}
The abstract machine described herein (new Prolog Engine) is a major
revision of the old Prolog Engine described in a previous document.
The new model overcomes certain difficulties in the old model, which
are discussed in a later section. The new model can be considered to
be a modification of the old model in which the stack contains
compiler-defined goals, called environments, instead of user-defined
goals. The environments correspond to some number of goals forming the
tail of a clause. The old model was developed primarily with a VAX-730
microcode implementation in mind. The new model has, in addition,
been influenced by hardware implementation considerations, but should
remain equally amenable to software or firmware implementation on
machines such as the VAX.\\
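The first of the contemplated forms, a compact bytecode run by a software
emulator, can be illustrated with a minimal Python sketch. The three
opcodes below are invented for illustration; the real instruction set
encodes Prolog unification and control, not arithmetic.

```python
# Minimal sketch of a bytecode dispatch loop, the kind of emulator the
# report envisions writing in C or microcode. The opcodes (PUSH, ADD,
# HALT) are invented for illustration only.
PUSH, ADD, HALT = 0, 1, 2

def run(code):
    """Fetch-decode-execute loop over a compact bytecode list."""
    pc, stack = 0, []
    while True:
        op = code[pc]
        if op == PUSH:            # operand follows the opcode
            stack.append(code[pc + 1])
            pc += 2
        elif op == ADD:           # combine the two topmost cells
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == HALT:
            return stack[-1]

print(run([PUSH, 2, PUSH, 3, ADD, HALT]))  # 5
```

A C or microcode emulator of the same shape replaces the `if` chain with
a jump table indexed by the opcode.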
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 310\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: OVERVIEW OF THE IMAGE UNDERSTANDING TESTBED\\
AUTHOR: ANDREW J. HANSON\\
DATE: OCTOBER 1983\\[0.01in]
ABSTRACT: The Image Understanding Testbed is a system of hardware and
software that is designed to facilitate the integration, testing, and
evaluation of implemented research concepts in machine vision. The
system was developed by the Artificial Intelligence Center of SRI
International under the joint sponsorship of the Defense Advanced
Research Projects Agency (DARPA) and the Defense Mapping Agency (DMA).
The primary purpose of the Image Understanding (IU) Testbed is to
provide a means for transferring technology from the DARPA-sponsored
IU research program to DMA and other organizations in the defense
community.
The approach taken to achieve this purpose has two components:
\begin{itemize}
\item The establishment of a uniform environment that will be as
compatible as possible with the environments of research centers at
universities participating in the IU program. Thus, organizations
obtaining copies of the testbed can receive new results of ongoing
research as they become available.
\item The acquisition, integration, testing, and evaluation of
selected scene analysis techniques that represent mature examples of
generic areas of research activity. These contributions from IU
program participants will allow organizations with testbed copies to
immediately begin investigating potential applications of IU
technology to problems in automated cartography and other areas of
scene analysis.
\end{itemize}
An important component of the DARPA IU research program is the
development of image-understanding techniques that could be applied to
automated cartography and military image interpretation tasks; this
work forms the principal focus of the testbed project. A number of
computer modules developed by participants in the IU program have been
transported to the uniform testbed environment as a first step in the
technology transfer process. These include systems written in UNIX C,
MAINSAIL, and FRANZ LISP. Capabilities of the computer programs
include segmentation, linear feature delineation, shape detection,
stereo reconstruction, and rule-based recognition of classes of
three-dimensional objects.\\
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 312\hfill PRICE: \$15.00\\[0.01in]
\noindent TITLE: PLANNING ENGLISH REFERRING EXPRESSIONS\\
AUTHOR: DOUGLAS APPELT\\
DATE: OCTOBER 1983\\[0.01in]
ABSTRACT: This paper describes a theory of language generation based
on planning. To illustrate the theory, the problem of planning
referring expressions is examined in detail. A theory based on
planning makes it possible for one to account for noun phrases that
refer, that inform the hearer of additional information, and that are
coordinated with the speaker's physical actions to clarify his
communicative intent. The theory is embodied in a computer system
called KAMP, which plans both physical and linguistic actions, given
a high-level description of the speaker's goals.\\
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 313\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: COMMUNICATION AND INTERACTION IN MULTI-AGENT PLANNING\\
AUTHOR: MICHAEL GEORGEFF\\
DATE: DECEMBER 9, 1983\\[0.01in]
ABSTRACT: A method for synthesizing multi-agent plans from simpler
single-agent plans is described. The idea is to insert communication
acts into the single-agent plans so that agents can synchronize
activities and avoid harmful interactions. Unlike most previous
planning systems, actions are represented by \underline{sequences} of states,
rather than as simple state change operators. This allows the
expression of more complex kinds of interaction than would otherwise
be possible. An efficient method of interaction and safety analysis
is then developed and used to identify critical regions in the plans.
An essential feature of the method is that the analysis is performed
without generating all possible interleavings of the plans, thus
avoiding a combinatorial explosion. Finally, communication primitives
are inserted into the plans and a supervisor process created to handle
synchronization.\\
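The final step, inserting communication primitives around an identified
critical region, can be sketched as follows. The step names and the
WAIT/SIGNAL acts are invented for illustration and are not the paper's
actual formalism; the sketch assumes the critical region is a contiguous
run of steps in one plan.

```python
# Hedged sketch: given a single-agent plan and a set of its steps known
# to interact harmfully with another agent's plan (a critical region),
# bracket the region with communication acts so that a supervisor
# process can serialize access to it.
def synchronize(plan, region):
    """Insert WAIT/SIGNAL acts around the contiguous critical region."""
    first = min(plan.index(s) for s in region)
    last = max(plan.index(s) for s in region)
    return plan[:first] + ["WAIT"] + plan[first:last + 1] + ["SIGNAL"] + plan[last + 1:]

plan_a = ["fetch_tool", "enter_bay", "weld", "leave_bay"]
safe_a = synchronize(plan_a, {"enter_bay", "weld", "leave_bay"})
print(safe_a)  # ['fetch_tool', 'WAIT', 'enter_bay', 'weld', 'leave_bay', 'SIGNAL']
```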
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 314\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: PROCEDURAL EXPERT SYSTEMS\\
AUTHORS: MICHAEL GEORGEFF and UMBERTO BONOLLO (MONASH U., AUSTRALIA)\\
DATE: DECEMBER 9, 1983\\[0.01in]
ABSTRACT: A scheme for explicitly representing and using expert
knowledge of a procedural kind is described. The scheme allows the
\underline{explicit} representation of both declarative and procedural knowledge
within a unified framework, yet retains all the desirable properties
of expert systems, such as modularity, explanatory capability, and
extensibility. It thus bridges the gap between procedural and
declarative languages, and allows formal algorithmic knowledge to be
uniformly integrated with heuristic declarative knowledge. A version
of the scheme has been fully implemented and applied to the domain of
automobile engine fault diagnosis.\\
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 315\hfill PRICE: \$16.00\\[0.01in]
\noindent TITLE: CHOOSING A BASIS FOR PERCEPTUAL SPACE\\
AUTHOR: STEPHEN T. BARNARD\\
DATE: JANUARY 3, 1984\\[0.01in]
ABSTRACT: If it is possible to interpret an image as a projection of
rectangular forms, there is a strong tendency for people to do so. In
effect, a mathematical basis for a vector space appropriate to the
world, rather than to the image, is selected. A computational
solution to this problem is presented. It works by backprojecting
image features into three-dimensional space, thereby generating
(potentially) all possible interpretations, and by selecting those
which are maximally orthogonal. In general, two solutions that
correspond to perceptual reversals are found. The problem of choosing
one of these is related to knowledge of verticality. A measure of
consistency of image features with a hypothetical solution is defined.
In conclusion, the model supports an information-theoretic
interpretation of the Gestalt view of perception.\\
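The selection of a maximally orthogonal triple of backprojected
directions can be sketched as below. The backprojection step itself is
omitted, and all candidate direction vectors are invented for
illustration; orthogonality is scored by the sum of squared pairwise dot
products, which is zero for a perfect basis.

```python
# Sketch of the "maximally orthogonal" selection step only, assuming
# candidate 3-D edge directions have already been backprojected from
# the image. The candidate vectors are invented unit directions.
import itertools
import math

def orthogonality_error(triple):
    """Sum of squared pairwise dot products; 0 for a perfect basis."""
    return sum(
        sum(x * y for x, y in zip(u, v)) ** 2
        for u, v in itertools.combinations(triple, 2)
    )

candidates = [
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
    (math.sqrt(0.5), math.sqrt(0.5), 0.0),  # a skewed alternative
]
best = min(itertools.combinations(candidates, 3), key=orthogonality_error)
print(best)  # the three axis-aligned directions win
```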
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 316\hfill PRICE: \$12.00\\[0.01in]
\noindent TITLE: GENERATING EXPERT ANSWERS THROUGH GOAL INFERENCE\\
AUTHOR: MARTHA E. POLLACK\\
DATE: OCTOBER 1983\\[0.01in]
ABSTRACT: Automated expert systems have adopted a restricted view in
which the advice-seeker is assumed always to know what advice he
needs, and always to express in his query an accurate, literal request
for that advice. In fact, people often need to consult with an expert
precisely because they don't know what it is they need to know. It is
a significant feature of human expertise to be able to deduce, from an
incomplete or inappropriate query, what advice is actually needed.
This paper develops a framework for enabling automated experts to
perform similar deductions, and thereby generate appropriate answers
to queries made to them.\\
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 317\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: THE SRI ARTIFICIAL INTELLIGENCE CENTER--A BRIEF HISTORY\\
AUTHOR: NILS J. NILSSON\\
DATE: JANUARY 24, 1984\\[0.01in]
ABSTRACT: Charles A. Rosen came to SRI in 1957. I arrived in 1961.
Between these dates, Charlie organized an Applied Physics Laboratory
and became interested in learning machines and self-organizing
systems. That interest launched a group that ultimately grew into a
major world center of artificial intelligence research--a center that
has endured twenty-five years of boom and bust in fashion, has
graduated over a hundred AI research professionals, and has
generated ideas and programs resulting in new products and companies
as well as scientific articles, books, and this particular collection
itself.
The SRI Artificial Intelligence Center has always been an extremely
cohesive group, even though it is associated with many contrasting
themes. Perhaps these very contrasts are responsible for its
vitality. It is a group of professional researchers, but visiting
Ph.D. candidates (mainly from Stanford University) have figured
prominently in its intellectual achievements. It is not part of a
university, yet its approach to AI has often been more academic and
basic than those used in some of the prominent university
laboratories. For many years a vocal group among its professionals
has strongly emphasized the role of logic and the centrality of
reasoning and declarative representation in AI, but it is also home to
many researchers who pursue other aspects of the discipline. Far more
people have left it (to pursue careers in industry) than are now part
of it, yet it is still about as large as it has ever been and retains
a more or less consistent character. It is an American research
group, supported largely by the Defense Department, but, from the
beginning, it has been a melting pot of nationalities.\\
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 318\hfill PRICE: \$15.00\\[0.01in]
\noindent TITLE: AN AI APPROACH TO INFORMATION FUSION\\
AUTHORS: THOMAS D. GARVEY and JOHN D. LOWRANCE\\
DATE: DECEMBER 1983\\[0.01in]
ABSTRACT: This paper discusses the use of selected artificial
intelligence (AI) techniques for integrating multisource information
in order to develop an understanding of an ongoing situation. The
approach takes an active, top-down view of the task, projecting a
situation description forward in time, determining gaps in the current
model, and tasking sensors to acquire data to fill the gaps.
Information derived from tasked sensors and other sources is combined
using new, non-Bayesian inference techniques.
This active approach seems critical to solving the problems posed by
the low emission signatures anticipated for near-future threats.
Simulation experiments lead to the conclusion that the utility of ESM
system operation in future conflicts will depend on how effectively
onboard sensing resources are managed by the system.
The view of AI that underlies the discussion is that of a technology
attempting to extend automation capabilities from the current
``replace the human's hands'' approach to that of replacing or
augmenting the human's cognitive and perceptual capabilities.
Technology transfer issues discussed in the presentation are the
primary motivation for highlighting this view. The paper will
conclude with a discussion of unresolved problems associated with the
introduction of AI technology into real-world military systems.\\
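The abstract does not name its non-Bayesian inference technique. One
standard non-Bayesian combination rule, Dempster's rule from
Dempster-Shafer theory, is sketched here as an illustrative stand-in,
with invented sensor reports; it is not claimed to be the authors'
actual method.

```python
# Dempster's rule of combination, one standard non-Bayesian way to
# fuse evidence from two sources. Mass functions are dicts mapping
# frozensets (subsets of the frame of discernment) to belief mass.
def combine(m1, m2):
    raw, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + x * y
            else:
                conflict += x * y   # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

# Two sensors reporting on whether a track is hostile or friendly;
# mass left on the whole frame represents uncommitted belief.
frame = frozenset({"hostile", "friendly"})
s1 = {frozenset({"hostile"}): 0.6, frame: 0.4}
s2 = {frozenset({"hostile"}): 0.5, frame: 0.5}
fused = combine(s1, s2)
print(round(fused[frozenset({"hostile"})], 2))  # 0.8
```

Note how two moderately confident, agreeing sources yield a higher fused
belief than either alone, while uncommitted mass shrinks.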
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 319\hfill PRICE: \$15.00\\[0.01in]
\noindent TITLE: BELIEF AND INCOMPLETENESS\\
AUTHOR: KURT KONOLIGE\\
DATE: JANUARY 11, 1984\\[0.01in]
ABSTRACT: Two artificially intelligent (AI) computer agents begin to
play a game of chess, and the following conversation ensues:
\begin{itemize}
\item S1: Do you know the rules of chess?
\item S2: Yes.
\item S1: Then you know whether White has a forced initial win
or not.
\item S2: Upon reflection, I realize that I must.
\item S1: Then there is no reason to play.
\item S2: No.
\end{itemize}
Both agents are state-of-the-art constructions, incorporating the
latest AI research in chess playing, natural-language understanding,
planning, etc. But because of the overwhelming combinatorics of
chess, neither they nor the fastest foreseeable computers would be
able to search the entire game tree to find out whether White has a
forced win. Why then do they come to such an odd conclusion about
their own knowledge of the game?
The chess scenario is an anecdotal example of the way inaccurate
cognitive models can lead to behavior that is less than intelligent in
artificial agents. In this case, the agents' model of belief is not
correct. They make the assumption that an agent actually knows all
the consequences of his beliefs. S1 knows that chess is a finite
game, and thus reasons that, in principle, knowing the rules of chess
is all that is required to figure out whether White has a forced
initial win. After learning that S2 does indeed know the rules of
chess, he comes to the erroneous conclusion that S2 also knows this
particular consequence of the rules. And S2 himself, reflecting on
his own knowledge in the same manner, arrives at the same conclusion,
even though in actual fact he could never carry out the computations
necessary to demonstrate it.\\
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 320\hfill PRICE: \$15.00\\[0.01in]
\noindent TITLE: A FORMAL THEORY OF KNOWLEDGE AND ACTION\\
AUTHOR: ROBERT C. MOORE\\
DATE: FEBRUARY 1984\\[0.01in]
ABSTRACT: Most work on planning and problem solving within the field
of artificial intelligence assumes that the agent has complete
knowledge of all relevant aspects of the problem domain and problem
situation. In the real world, however, planning and acting must
frequently be performed without complete knowledge. This imposes two
additional burdens on an intelligent agent trying to act effectively.
First, when the agent entertains a plan for achieving some goal, he
must consider not only whether the physical prerequisites of the plan
have been satisfied, but also whether he has all the information
necessary to carry out the plan. Second, he must be able to reason
about what he can do to obtain necessary information that he lacks.
In this paper, we present a theory of action in which these problems
are taken into account, showing how to formalize both the knowledge
prerequisites of action and the effects of action on knowledge.\\
---------------------------------------------------------------------------------------------------------------------------------\\
TECHNICAL NOTE: 321\hfill PRICE: \$10.00\\[0.01in]
\noindent TITLE: PROBABILISTIC LOGIC\\
AUTHOR: NILS J. NILSSON\\
DATE: FEBRUARY 6, 1984\\[0.01in]
ABSTRACT: Because many artificial intelligence applications require
the ability to deal with uncertain knowledge, it is important to seek
appropriate generalizations of logic for that case. We present here a
semantical generalization of logic in which the truth-values of
sentences are probability values (between 0 and 1). Our
generalization applies to any logical system for which the consistency
of a
finite set of sentences can be established. (Although we cannot
always establish the consistency of a finite set of sentences of
first-order logic, our method is usable in those cases in which we
can.) The method described in the present paper combines logic with
probability theory in such a way that probabilistic logical entailment
reduces to ordinary logical entailment when the probabilities of all
sentences are either 0 or 1.\\
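The reduction of probabilistic entailment to ordinary entailment at the
endpoints can be sketched in the possible-worlds view: a sentence's
probability is the total probability of the worlds in which it is true,
so premises held with probability 1 confine all probability mass to
worlds that satisfy them. The two-proposition example below is invented
for illustration.

```python
# Possible-worlds sketch: with premises p and p -> q both held at
# probability 1, every world carrying positive probability satisfies
# them, so q receives probability 1 as well -- probabilistic entailment
# reduces to ordinary entailment when probabilities are 0 or 1.
from itertools import product

worlds = list(product([False, True], repeat=2))         # truth values for (p, q)
premises = [lambda p, q: p, lambda p, q: (not p) or q]  # p, and p -> q
conclusion = lambda p, q: q

# Worlds consistent with both premises being certainly true:
consistent = [w for w in worlds if all(f(*w) for f in premises)]
assert all(conclusion(*w) for w in consistent)
print(consistent)  # [(True, True)] -- the only world left, so P(q) = 1
```

With intermediate premise probabilities, Nilsson's method instead bounds
the conclusion's probability by optimizing over distributions on these
same worlds.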
---------------------------------------------------------------------------------------------------------------------------------\\