[mod.techreports] st4.x tech reports

E1AR0002@SMUVM1.BITNET (11/15/86)

TECHNICAL NOTE:  280\hfill PRICE: \$16.00\\[0.01in]

\noindent TITLE:  FRACTAL-BASED DESCRIPTION OF NATURAL SCENES\\
AUTHOR:  ALEX P. PENTLAND\\
DATE:  FEBRUARY 1984\\[0.01in]

ABSTRACT: This paper addresses the problems of (1) representing
natural shapes such as mountains, trees, and clouds, and (2) computing
their description from image data. To solve these problems, we must
be able to relate natural surfaces to their images; this requires a
good model of natural surface shapes. Fractal functions are a good
choice for modeling 3-D natural surfaces because (1) many physical
processes produce a fractal surface shape, (2) fractals are widely
used as a graphics tool for generating natural-looking shapes, and
(3) a survey of natural imagery has shown that the 3-D fractal
surface model, transformed by the image formation process, furnishes
an accurate description of both textured and shaded image regions.

The 3-D fractal model provides a characterization of 3-D surfaces and
their images for which the appropriateness of the model is verifiable.
Furthermore, this characterization is stable over transformations of
scale and linear transforms of intensity.

The 3-D fractal model has been successfully applied to the problems
of (1) texture segmentation and classification, (2) estimation of 3-D
shape information, and (3) distinguishing between perceptually smooth
and perceptually textured surfaces in the scene.\\
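The abstract's second argument, that fractals are widely used as a graphics tool for generating natural-looking shapes, can be illustrated with the standard midpoint-displacement construction of an approximate fractional Brownian profile. This sketch is not from the technical note; the function name and the roughness parameter are illustrative:

```python
import random

def midpoint_displacement(n_levels, roughness=0.5, seed=0):
    # Build a 1-D fractal profile: repeatedly subdivide each segment
    # and perturb the midpoint by Gaussian noise whose amplitude
    # shrinks by a factor of `roughness` at each finer level.
    rng = random.Random(seed)
    pts = [0.0, 0.0]          # flat baseline: two endpoints
    scale = 1.0
    for _ in range(n_levels):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            nxt += [a, (a + b) / 2 + rng.gauss(0, scale)]
        nxt.append(pts[-1])
        pts = nxt
        scale *= roughness    # finer levels get smaller displacements
    return pts
```

For example, midpoint_displacement(8) yields a 257-point jagged profile reminiscent of a mountain skyline; smaller roughness values give smoother terrain.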
------------------------------------------------------------------------\\
TECHNICAL NOTE:  281\hfill PRICE: \$10.00\\[0.01in]

\noindent TITLE:  SENTENCE DISAMBIGUATION BY A SHIFT-REDUCE PARSING TECHNIQUE\\
AUTHOR:  STUART M. SHIEBER\\
DATE:  MARCH 1983\\[0.01in]

ABSTRACT: Native speakers of English show definite and consistent
preferences for certain readings of syntactically ambiguous sentences.
A user of a natural-language-processing system would naturally expect
it to reflect the same preferences. Thus, such systems must model in
some way the \underline{linguistic performance} as well as the
\underline{linguistic competence} of the native speaker. We have
developed a parsing algorithm--a variant of the LALR(1) shift-reduce
algorithm--that models the preference behavior of native speakers for
a range of syntactic preference phenomena reported in the
psycholinguistic literature, including the recent data on lexical
preferences. The algorithm yields the preferred parse
deterministically, without building multiple parse trees and choosing
among them. As a side effect, it displays appropriate behavior in
processing the much-discussed garden-path sentences. The parsing
algorithm has been implemented and has confirmed the feasibility of
our approach to the modeling of these phenomena.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  282\hfill PRICE: \$10.00\\[0.01in]

\noindent TITLE:  CAN DRAWING BE LIBERATED FROM THE VON NEUMANN STYLE?\\
AUTHOR:  FERNANDO C.N. PEREIRA\\
DATE:  JUNE 1983\\[0.01in]

ABSTRACT: Current graphics database tools give the user a view of
drawing that is too constrained by the low-level machine operations
used to implement the tools. A new approach to graphics databases is
proposed, based on the description of objects and their relationships
in the restricted form of logic embodied in the programming language
Prolog.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  283\hfill PRICE: \$10.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE:  FORMAL CONSTRAINTS ON METARULES\\
AUTHORS:  \= STUART M. SHIEBER, SUSAN U. STUCKY, HANS USZKOREIT, and\\
      \> JANE J. ROBINSON\\
DATE:  APRIL 1983\\[-0.15in]
\end{tabbing}

ABSTRACT: Metagrammatical formalisms that combine context-free phrase
structure rules and metarules (MPS grammars) allow concise statement
of generalizations about the syntax of natural languages.
Unconstrained MPS grammars, unfortunately, are not computationally
safe. We evaluate several proposals for constraining them, basing our
assessment on computational tractability and explanatory adequacy. We
show that none of them satisfies both criteria, and suggest new
directions for research on alternative metagrammatical formalisms.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  284\hfill PRICE: \$10.00\\[0.01in]

\noindent TITLE:  SEMANTICAL CONSIDERATIONS ON NONMONOTONIC LOGIC\\
AUTHOR:  ROBERT C. MOORE\\
DATE:  JUNE 1983\\[0.01in]

ABSTRACT:  Commonsense reasoning is nonmonotonic in the sense that
we often draw, on the basis of partial information, conclusions that
we later retract when we are given more complete information.  Some of
the most interesting products of recent attempts to formalize
nonmonotonic reasoning are the nonmonotonic logics of McDermott and
Doyle [McDermott and Doyle, 1980; McDermott, 1982].  These logics,
however, all have peculiarities that suggest they do not quite succeed
in capturing the intuitions that prompted their development.  In this
paper we reconstruct nonmonotonic logic as a model of an ideally
rational agent's reasoning about his own beliefs.  For the resulting
system, called \underline{autoepistemic logic}, we define an intuitively based
semantics for which we can show autoepistemic logic to be both sound
and complete.  We then compare autoepistemic logic with the approach
of McDermott and Doyle, showing how it avoids the peculiarities of
their nonmonotonic logic.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  285\hfill PRICE: \$10.00\\[0.01in]

\noindent TITLE:  A FRAMEWORK FOR PROCESSING PARTIALLY FREE WORD ORDER\\
AUTHOR:  HANS USZKOREIT\\
DATE:  MAY 1983\\[0.01in]

ABSTRACT: The partially free word order in German belongs to the
class of phenomena in natural language that require a close
interaction between syntax and pragmatics. Several competing
principles, which are based on syntactic and on discourse
information, determine the linear order of noun phrases. A solution
to problems of this sort is a prerequisite for high-quality language
generation. The linguistic framework of Generalized Phrase Structure
Grammar offers tools for dealing with word order variation. Some
slight modifications to the framework allow for an analysis of the
German data that incorporates just the right degree of interaction
between syntactic and pragmatic components and that can account for
conflicting ordering statements.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  286\hfill PRICE: \$10.00\\[0.01in]

\noindent TITLE:  THEORY RESOLUTION:  BUILDING IN NONEQUATIONAL THEORIES\\
AUTHOR:  MARK E. STICKEL\\
DATE:  MAY 1983\\[0.01in]

ABSTRACT: Theory resolution constitutes a set of complete procedures
for building nonequational theories into a resolution theorem-proving
program so that axioms of the theory need never be resolved upon.
Total theory resolution uses a decision procedure that is capable of
determining inconsistency of any set of clauses using predicates in
the theory. Partial theory resolution employs a weaker decision
procedure that can determine potential inconsistency of a pair of
literals. Applications include the building in of both mathematical
and special decision procedures, such as for the taxonomic
information furnished by a knowledge representation system.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  287\hfill PRICE: \$16.00\\[0.01in]

\noindent TITLE:  SHAPE FROM SHADING:  AN ASSESSMENT\\
AUTHOR:  GRAHAME B. SMITH\\
DATE:  MAY 1983\\[0.01in]

ABSTRACT: We review previous efforts to recover surface shape from
image irradiance in order to assess what can and cannot be
accomplished. We consider the informational requirements and
restrictions of these approaches. In dealing with the question of
what surface parameters can be recovered locally from image shading,
we show that, at most, shading determines relative surface curvature,
i.e., the ratio of surface curvature measured in orthogonal image
directions. The relationship between relative surface curvature and
the second derivatives of image irradiance is independent of other
scene parameters, but insufficient to determine surface shape. This
result places in perspective the difficulty encountered in previous
attempts to recover surface orientation from image shading.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  288\hfill PRICE: \$15.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE: \= THE GHOUGH GENERALIZED HOUGH TRANSFORM PACKAGE:\\
         \> DESCRIPTION AND EVALUATION\\
AUTHOR:  KENNETH I. LAWS\\
DATE:  DECEMBER 1982\\[-0.15in]
\end{tabbing}

ABSTRACT: GHOUGH is a computer program for detecting instances of a
given shape within an image. It may be used for cueing, counting, or
mensuration. GHOUGH can find instances that are displaced, rescaled,
rotated, or incomplete relative to the shape template. They are
detected by computing a generalized Hough transform of the image edge
elements. Each edge element votes for all those instances of the
shape that could contain it; the votes are tallied, and the
best-supported instances are reported as likely matches.
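The voting scheme described above can be sketched in a few lines. This is a minimal illustration of a generalized Hough transform for translation only, with no rotation, rescaling, or edge detection; it is not the Testbed implementation, and all names are hypothetical:

```python
from collections import Counter

def build_r_table(template_edges, template_dirs, ref):
    # R-table: for each quantized edge direction, the displacement
    # vectors from template edges with that direction to the shape's
    # chosen reference point.
    table = {}
    for (x, y), d in zip(template_edges, template_dirs):
        table.setdefault(d, []).append((ref[0] - x, ref[1] - y))
    return table

def generalized_hough(edge_points, edge_dirs, r_table):
    # Each image edge element votes for every reference-point location
    # consistent with its direction; well-supported locations are the
    # likely shape instances.
    votes = Counter()
    for (x, y), d in zip(edge_points, edge_dirs):
        for dx, dy in r_table.get(d, ()):
            votes[(x + dx, y + dy)] += 1
    return votes
```

A template translated in the image concentrates all of its votes on the translated reference point, which votes.most_common(1) then reports.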

GHOUGH was contributed to the DARPA Image Understanding Testbed at SRI
by the University of Rochester. This report summarizes applications
for which GHOUGH is suited, the history and nature of the algorithm,
details of the Testbed implementation, the manner in which GHOUGH is
invoked and controlled, the types of results that can be expected, and
suggestions for further development. The scientific contribution of
this technical note is the analysis of GHOUGH's parameter settings
and performance characteristics.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  289\hfill PRICE: \$15.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE: \= THE PHOENIX IMAGE SEGMENTATION SYSTEM: \\
         \> DESCRIPTION AND EVALUATION\\
AUTHOR:  KENNETH I. LAWS\\
DATE:  DECEMBER 1982\\[-0.15in]
\end{tabbing}

ABSTRACT: PHOENIX is a computer program for segmenting images into
homogeneous closed regions. It uses histogram analysis, thresholding,
and connected-components analysis to produce a partial segmentation,
then resegments each region until various stopping criteria are
satisfied. Its major contributions over other recursive segmenters
are a sophisticated control interface, optional use of more than one
histogram-dependent intensity threshold during tentative segmentation
of each region, and spatial analysis of resulting subregions as a
form of look-ahead for choosing between promising spectral features
at each step.
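The recursive resegmentation loop described above can be sketched as follows. This is a minimal illustration only: it splits a list of intensity values at a crude histogram valley and recurses, omitting PHOENIX's connected-components step, spatial look-ahead, and control interface; all names and parameter values are hypothetical:

```python
def histogram_valley(values, bins=8):
    # Crude valley finder: return a threshold at the lowest interior
    # bin of a coarse histogram, or None if no valley is found.
    lo, hi = min(values), max(values)
    if hi == lo:
        return None
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    valleys = [i for i in range(1, bins - 1)
               if counts[i] < max(counts[:i])
               and counts[i] < max(counts[i + 1:])]
    if not valleys:
        return None
    i = min(valleys, key=lambda j: counts[j])
    return lo + (i + 0.5) * (hi - lo) / bins

def segment(pixels, min_size=4):
    # Recursively split a region (here just a list of intensities) at
    # histogram valleys until no valley remains or regions get small.
    if len(pixels) < min_size:
        return [pixels]
    t = histogram_valley(pixels)
    if t is None:
        return [pixels]
    low = [p for p in pixels if p <= t]
    high = [p for p in pixels if p > t]
    if not low or not high:
        return [pixels]
    return segment(low, min_size) + segment(high, min_size)
```

A bimodal region of dark and bright pixels splits into two regions, while a uniform region is left intact, which is the stopping criterion in this sketch.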

PHOENIX was contributed to the DARPA Image Understanding Testbed at
SRI by Carnegie-Mellon University. This report summarizes
applications for which PHOENIX is suited, the history and nature of
the algorithm, details of the Testbed implementation, the manner in
which PHOENIX is invoked and controlled, the types of results that
can be expected, and suggestions for further development. Baseline
parameter sets are given for producing reasonable segmentations of
typical imagery.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  290\hfill PRICE: \$25.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE: \= APPLIED LOGIC--ITS USE AND IMPLEMENTATION AS A \\
         \> PROGRAMMING TOOL\\
AUTHOR:  DAVID H.D. WARREN\\
DATE:  JUNE 1983\\[-0.15in]
\end{tabbing}

ABSTRACT: The first part of the thesis explains from first principles
the concept of logic programming and its practical application in the
programming language Prolog. Prolog is a simple but powerful language
that encourages rapid, error-free programming and clear, readable,
concise programs. The basic computational mechanism is a
pattern-matching process (unification) operating on general record
structures (terms of logic).
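The pattern-matching mechanism mentioned above, unification of logic terms, can be sketched as follows. This is an illustrative toy, not Warren's implementation: variables are strings beginning with an uppercase letter, compound terms are tuples, and the occurs check is omitted for brevity:

```python
def is_var(t):
    # Variables are strings beginning with an uppercase letter.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    # Return a substitution making x and y equal, or None on failure.
    # Compound terms are tuples: (functor, arg1, ..., argN).
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}   # occurs check omitted for brevity
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying ('point', 'X', 2) with ('point', 1, 'Y') succeeds, binding X to 1 and Y to 2, while terms with different functors fail to unify.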

The ideas are illustrated by describing in detail one sizable Prolog
program which implements a simple compiler. The advantages and
practicability of using Prolog for real compiler implementation are
discussed.

The second part of the thesis describes techniques for implementing
Prolog efficiently. In particular, it is shown how to compile the
patterns involved in the matching process into instructions of a
low-level language. This idea has actually been implemented in a
compiler (written in Prolog) from Prolog to DECsystem-10 assembly
language. However, the principles involved are explained more
abstractly in terms of a Prolog Machine. The code generated is
comparable in speed with that produced by existing DEC10 Lisp
compilers. Comparison is possible since pure Lisp can be viewed as a
(rather restricted) subset of Prolog.

It is argued that structured data objects, such as lists and trees,
can be manipulated by pattern matching using a structure-sharing
representation as efficiently as by conventional selector and
constructor functions operating on linked records in heap storage.
Moreover, the pattern-matching formulation actually helps the
implementor to produce a better implementation.\\
------------------------------------------------------------------------\\
TECHNICAL NOTE:  291\hfill PRICE: \$10.00\\[0.01in]

\noindent TITLE:  DIRECT PARSING OF ID/LP GRAMMARS\\
AUTHOR:  STUART M. SHIEBER\\
DATE:  AUGUST 1983\\[0.01in]

ABSTRACT: The Immediate Dominance/Linear Precedence (ID/LP) formalism
is a recent extension of Generalized Phrase Structure Grammar (GPSG)
designed to perform some of the tasks previously assigned to
metarules--for example, modeling the word-order characteristics of
so-called free-word-order languages. It allows a simple specification
of classes of rules that differ only in constituent order. ID/LP
grammars (as well as metarule grammars) have been proposed for use in
parsing by expanding them into an equivalent context-free grammar. We
develop a parsing algorithm, based on the algorithm of Earley, for
parsing ID/LP grammars directly, circumventing the initial expansion
phase. A proof of correctness of the algorithm is supplied. We also
discuss some aspects of the time complexity of the algorithm and some
formal properties associated with ID/LP grammars and their
relationship to context-free grammars.\\
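The ID/LP idea itself, though not the paper's direct Earley-style algorithm, can be illustrated by the expansion step the paper circumvents: enumerating the linearizations of an ID rule's unordered right-hand side that satisfy the LP constraints. The names below are illustrative:

```python
from itertools import permutations

def admitted_orders(id_rhs, lp_constraints):
    # Linearizations of an unordered ID right-hand side that satisfy
    # every LP constraint (a, b), read as "a precedes b whenever both
    # occur". Each such order corresponds to one context-free rule in
    # the expanded grammar.
    for order in sorted(set(permutations(id_rhs))):
        if all(order.index(a) < order.index(b)
               for a, b in lp_constraints
               if a in order and b in order):
            yield order
```

For instance, an ID rule with right-hand side {V, NP, PP} under the single LP constraint V < NP expands to the three orderings in which V precedes NP; parsing such orders directly, without this enumeration, is the paper's contribution.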
------------------------------------------------------------------------\\
TECHNICAL NOTE:  292\hfill PRICE: \$10.00\\[-0.15in]
\begin{tabbing}
\noindent TITLE: \= PROVIDING A UNIFIED ACCOUNT OF DEFINITE NOUN\\
         \> PHRASES IN DISCOURSE\\
AUTHORS:  BARBARA J. GROSZ, ET AL.\\
DATE:  JUNE 1983\\[-0.15in]
\end{tabbing}

ABSTRACT: Linguistic theories typically assign various linguistic
phenomena to one of the categories \underline{syntactic},
\underline{semantic}, or \underline{pragmatic}, as if the phenomena
in each category were relatively independent of those in the others.
However, various phenomena in discourse do not seem to yield
comfortably to any account that is strictly a syntactic or semantic
or pragmatic one. This paper focuses on particular phenomena of this
sort--the use of various referring expressions such as definite noun
phrases and pronouns--and examines their interaction with mechanisms
used to maintain discourse coherence.

Even a casual survey of the literature on definite descriptions and
referring expressions reveals not only defects in the individual
accounts provided by theorists (from several different disciplines),
but also deep confusions about the roles that syntactic, semantic,
and pragmatic factors play in accounting for these phenomena. The
research we have undertaken is an attempt to sort out some of these
confusions and to create the basis for a theoretical framework that
can account for a variety of discourse phenomena in which all three
factors of language use interact. The major premise on which our
research depends is that the concepts necessary for an adequate
understanding of the phenomena in question are not exclusively either
syntactic or semantic or pragmatic.

The next section of this paper defines two levels of discourse
coherence and describes their roles in accounting for the use of
singular definite noun phrases. To illustrate the integration of
factors in explaining the uses of referring expressions, their use on
one of these levels, i.e., the local one, is discussed in Sections 3
and 4. This account requires introducing the notion of the centers
of a sentence in a discourse, a notion that cannot be defined in
terms of factors that are exclusively syntactic or semantic or
pragmatic. In Section 5, the interactions of the two levels with
these factors and their effects on the uses of referring expressions
in discourse are