nl-kr-request@CS.ROCHESTER.EDU (NL-KR Moderator Brad Miller) (04/26/88)
NL-KR Digest (4/25/88 18:49:50) Volume 4 Number 45
Today's Topics:
How Language Structures its Concepts
Talk on knowledge compilation
A Theory of Justified Reformulations
Encoding "Here" and "There" in the Visual Field
From CSLI Calendar, April 21, 3:25
Theoretical and Computational Issues in Lexical Semantics (TCILS)
Seminar - AI Revolving -- Alan Bawden (MIT)
A Parallel Model of Sentence Processing
Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------
Date: Thu, 14 Apr 88 14:11 EDT
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Lang. & Cognition Seminar
BBN Science Development Program
Language & Cognition Seminar Series
HOW LANGUAGE STRUCTURES ITS CONCEPTS: THE ROLE OF GRAMMAR
Leonard Talmy
Program in Cognitive Science
University of California, Berkeley
BBN Laboratories Inc.
10 Moulton Street
Large Conference Room, 2nd Floor
10:30 a.m., Wednesday, April 20, 1988
Abstract: A fundamental design feature of language is that it has two
subsystems, the open-class (lexical) and the closed-class (grammatical).
These subsystems perform complementary functions. In a sentence, the
open-class forms together contribute most of the *content* of the
total meaning expressed, while the closed-class forms together determine
the majority of its *structure*. Further, across the spectrum of
languages, all closed-class forms are under great semantic constraint:
they specify only certain concepts and categories of concepts, but not
others. These grammatical specifications, taken together, appear to
constitute the fundamental conceptual structuring system of language.
I explore the particular concepts and categories of concepts that
grammatical forms specify, the properties that these have in common
and that distinguish them from lexical specifications, the functions
served by this organization in language, and the relations of this
organization to the structuring systems of other cognitive domains such
as visual perception and reasoning. The greater issue, toward which this
study ultimately aims, is the general character of conceptual structure
in human cognition.
------------------------------
Date: Thu, 14 Apr 88 14:32 EDT
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Lang. & Cognition Seminar
BBN Science Development Program
Language & Cognition Seminar Series
PALENQUE: AN INTERACTIVE DISCOVERY-BASED LEARNING
EXPERIENCE FOR CHILDREN
Kathleen Wilson
Bank Street College
BBN Laboratories Inc.
10 Moulton Street
Large Conference Room, 2nd Floor
2:00 p.m., Thursday, April 21, 1988
Abstract: Educational technology designers have recently been exploring
the potential uses of new interactive video technologies and, in particular,
experimenting with a variety of structures and metaphors for the information
on a disk.
Palenque is a DVI (Digital Video Interactive) pilot application that uses
both spatial and thematic structures to provide students with a surrogate
travel experience through Palenque, an ancient Mayan site. It is designed to
be used with the second season of the Voyage of the Mimi project, an
interdisciplinary curriculum that includes broadcast TV shows, software and
classroom activities for grades 4 through 8.
The talk will include a description of the Mimi project (including a tape of
a TV excerpt), a discussion of design and pedagogical principles underlying
Palenque, and a description of its use in classrooms.
------------------------------
Date: Thu, 14 Apr 88 20:56 EDT
From: Chris Tong <ctong@lightning.rutgers.edu>
Subject: Talk on knowledge compilation
The following thesis proposal defense will be held at 10am, Mar. 29,
in Hill Center, room 423, Busch Campus, Rutgers University, New
Brunswick, NJ., and will be chaired by Chris Tong.
CONSTRAINT INCORPORATION
USING CONSTRAINED REFORMULATION
Wesley Braudaway
wes@aramis.rutgers.edu
ABSTRACT. The goal of this research is to develop knowledge
compilation techniques to produce a problem-solving system from a
declarative solution description. It has been shown that a
Generate-and-Test problem-solver can be compiled from a declarative
language that represents solutions as instances of a (hierarchically
organized) solution frame; the generator systematically constructs
instances of the solution frame, until one is found that meets all the
tests. However, this Generate-and-Test architecture is
computationally infeasible as a problem-solver for all but trivial
problems. Optimization techniques must be used to improve the
efficiency of the resulting problem-solving system. Test
Incorporation is one such optimization technique that moves testers,
which test the satisfaction of the problem constraints, back into the
generator sequence to provide early pruning.
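[Moderator's illustrative sketch, not Braudaway's system: a toy problem -- find digit triples that are strictly increasing and sum to 12 -- showing the naive Generate-and-Test architecture next to a version in which the "increasing" tester has been moved back into the generator sequence for early pruning.]

```python
from itertools import product

# Toy problem (illustrative only): digit triples, strictly
# increasing, summing to 12.
DOMAINS = [range(10)] * 3

def increasing(c):            # problem-constraint test 1
    return all(a < b for a, b in zip(c, c[1:]))

def sums_to_12(c):            # problem-constraint test 2
    return sum(c) == 12

def generate_and_test():
    """Naive architecture: construct every complete instance of the
    solution frame, then apply all the tests (1000 candidates)."""
    return [c for c in product(*DOMAINS)
            if increasing(c) and sums_to_12(c)]

def test_incorporated():
    """Test incorporation: the 'increasing' tester is applied to
    partial candidates inside the generator sequence, so one bad
    prefix prunes its whole subtree early."""
    solutions = []
    def extend(prefix):
        if len(prefix) == len(DOMAINS):
            if sums_to_12(prefix):
                solutions.append(tuple(prefix))
            return
        for v in DOMAINS[len(prefix)]:
            if not prefix or v > prefix[-1]:   # early pruning
                extend(prefix + [v])
    extend([])
    return solutions

assert generate_and_test() == test_incorporated()
```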
This proposal defines a special kind of test incorporation called
Constraint Incorporation. This technique modifies the generators so
they enumerate only those generator values that satisfy the problem
constraints defined by the tests. Because of this complete
incorporation, the tests defining the incorporated constraints can be
removed from the Generate-and-Test architecture. This results in a
significant increase in problem-solving efficiency over test
incorporation when a test cannot be partitioned into subtests that
each affect a single generator. Such cases seem to occur when a mismatch
exists between the language used to represent (and construct)
solutions and the language used to define the problem constraints. To
incorporate these constraints, the representations of solutions and
problem constraints should be shifted (i.e., reformulated) so as to
bridge the gap between them.
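[Moderator's illustrative sketch, not the proposal's system: constraint incorporation on a toy problem -- digit triples that are strictly increasing and sum to 12. The generators themselves enumerate only constraint-satisfying values, so the original tests can be removed from the architecture entirely.]

```python
def constraint_incorporated():
    """Constraint incorporation: every generated candidate already
    satisfies 'v1 < v2 < v3 <= 9' and 'v1 + v2 + v3 == 12', so no
    separate testers remain in the architecture."""
    solutions = []
    for v1 in range(10):
        # bounds derived from: v1 < v2 < v3 <= 9 and v1+v2+v3 == 12
        for v2 in range(max(v1 + 1, 3 - v1), (13 - v1) // 2):
            solutions.append((v1, v2, 12 - v1 - v2))
    return solutions

# Nothing is generated and then rejected: generation IS the search.
```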
One method for bridging the gap is to search the space of solution and
problem representations until incorporation is enabled. However,
because of the difficulties encountered (e.g., the space is large and
difficult to generate), an alternative method is proposed that will
constrain the reformulation process. This method incorporates
constraints by compiling an abstract solution description into a
problem-solver. By using an abstract solution description, the system
does not commit prematurely to a detailed and biased representation of
the solution description. The problem constraints are refined into
procedural specifications and merged to form a partial specification
of the problem-solver. The problem-solver is partial in that it only
generates those solution details mentioned in the constraints. In
this way, the compiler focuses on just those details of the
solution language that are relevant to incorporating the constraints.
The partial problem-solver is then extended into a complete one by
adding generators for the remaining details. Any such extension is
guaranteed to have successfully incorporated all the constraints.
This method has been applied to a house floorplanning domain, using
extensive paper traces. It is currently being implemented, and will be
applied to a second domain.
------------------------------
Date: Tue, 12 Apr 88 13:04 EDT
From: Mary E. Spollen <SPOLS%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: A Theory of Justified Reformulations
A THEORY OF JUSTIFIED REFORMULATIONS
Devika Subramanian
Department of Computer Science
Stanford University
Tuesday, April 19, 1988
Refreshments...4:00 p.m.
Lecture...4:15 p.m.
NE43-512A
ABSTRACT
Present-day systems, intelligent or otherwise, are limited by the
conceptualizations of the world given to them by their designers.
This research explores issues in the construction of adaptive
systems that can incrementally reformulate their conceptualizations
to achieve computational efficiency or descriptional adequacy.
In this talk, a special case of the reformulation problem is
presented: we reconceptualize a knowledge base in terms of new
abstract objects and relations in order to make the computation
of a given class of queries more efficient.
Automatic reformulation will not be possible unless a reformulator
can justify a shift in conceptualization. We present a new class of
meta-theoretical justifications for a reformulation, called
irrelevance explanations. A logical irrelevance explanation proves
that certain distinctions made in the formulation are not necessary
for the computation of a given class of problems. A computational
irrelevance explanation proves that some distinctions are not
useful with respect to a given problem solver for a given class of
problems. Inefficient formulations make irrelevant distinctions; the
irrelevance principle logically minimizes a formulation by
removing all facts and distinctions in it that are not needed for
the specified goals. The automation of the irrelevance principle
is demonstrated with the generation of abstractions from first
principles. We also describe the implementation of an irrelevance
reformulator and outline experimental results that confirm our theory.
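[Moderator's toy sketch of the logical-irrelevance idea; the predicate names and the dependency representation are invented for illustration, not taken from Subramanian's system. To minimize a formulation for a class of goal queries, keep only the predicates reachable from the goals and drop the rest as irrelevant.]

```python
# Hypothetical knowledge base: which predicates each predicate's
# definition depends on.
DEPENDS = {
    "grandparent": {"parent"},
    "parent": {"father", "mother"},
    "cousin": {"parent", "sibling"},
    "sibling": {"parent"},
}

def relevant(goals):
    """Predicates needed to answer the goal class; everything else
    is logically irrelevant and can be removed from the
    formulation."""
    keep, frontier = set(), set(goals)
    while frontier:
        p = frontier.pop()
        if p not in keep:
            keep.add(p)
            frontier |= DEPENDS.get(p, set())
    return keep

# Minimizing for 'grandparent' queries drops 'cousin' and 'sibling'.
assert relevant({"grandparent"}) == {"grandparent", "parent",
                                     "father", "mother"}
```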
Host: Prof. Gerald Jay Sussman
------------------------------
Date: Mon, 18 Apr 88 09:35 EDT
From: William J. Rapaport <rapaport@cs.Buffalo.EDU>
Subject: Encoding "Here" and "There" in the Visual Field
STATE UNIVERSITY OF NEW YORK AT BUFFALO
The Steering Committee of the
GRADUATE STUDIES AND RESEARCH INITIATIVE IN
COGNITIVE AND LINGUISTIC SCIENCES
PRESENTS
ZENON PYLYSHYN
Center for Cognitive Science
University of Western Ontario
ENCODING "HERE" AND "THERE" IN THE VISUAL FIELD:
A Sketch of the FINST Indexing Hypothesis and Its Implications
I introduce a distinction between encoding the location of a feature
within some frame of reference, and individuating or indexing a feature
so later processes can refer to and access it. A resource-limited
indexing mechanism called a FINST is posited for this purpose. FINSTs
have the property that they index features in a way that is (in most
cases) transparent to their retinal location, and hence "point to" scene
locations. The basic assumption is that no operations upon sets of
features can occur unless all the features are first FINSTed.
A number of implications of this hypothesis will be explored in this
talk, including its relevance to phenomena such as the spatial stability
of visual percepts, the ability to track several independently moving
targets in parallel, the ability to detect a class of spatial relations
requiring the use of "visual routines", and various mental imagery
phenomena. I will also discuss one of the main reasons for postulating
FINSTs: the possibility that such indexes might be used to bind
perceived locations to arguments in motor commands, thereby serving as a
step towards perceptual-motor coordination.
Monday, April 25, 1988
4:00 P.M.
110 Knox, Amherst Campus
^^^^^^^^
There will also be an informal evening discussion at Judy Duchan's home,
130 Jewett Parkway, at 8:00 P.M. Call Bill Rapaport (Dept. of Computer
Science, 636-3193 or 3180) for further information.
------------------------------
Date: Wed, 20 Apr 88 20:25 EDT
From: Emma Pease <emma@russell.stanford.edu>
Subject: From CSLI Calendar, April 21, 3:25
[Excerpted ...]
NEXT WEEK'S CSLI TINLUNCH
Reading: "Constraints, Meaning and Information"
by Ian Pratt, (then at) Princeton University
Discussion led by Keith Devlin
(devlin@csli.stanford.edu)
April 28
The paper claims to establish a fatal flaw in situation semantics as
developed in "Situations and Attitudes" by Barwise and Perry, arguing
that the meaning of a declarative sentence, whatever it is, cannot be
a constraint. As a mathematician trying to help build a theory around
the ideas developed in S&A, I am, needless to say, bothered by this
claim.
My own reading of the paper leads me to believe that Pratt bases his
argument on two basic misreadings of S&A. But maybe I am misreading
him. I am hoping that the linguists and philosophers at CSLI (but not
Koll Construction) will be able to help me out and reassure me that
all is not built on sand.
--------------
NEXT WEEK'S CSLI SEMINAR
Types and Tokens in Linguistics
Sylvain Bromberger
(sylvain@csli.stanford.edu)
April 28
This paper takes as its point of departure three widely -- and rightly
-- accepted truisms: (1) that linguistic theorizing rests on
information about types, that is, word types, phrase types, sentence
types, and the like; (2) that linguistics is an empirical science and
that the information about types on which it rests is empirical
information, that is, information obtained by attending with one's
senses to something -- normally tokens (utterances); (3) that the
facts that linguistics seeks to uncover -- for instance, facts about
the lexicon or the range of sentences in a given language -- follow,
in part at least, from facts about people's mental makeup. It (the
paper) seeks to reconcile (1) with (2) and with (3). Few, if any,
practitioners feel the need for such a reconciliation, but wide-eyed
philosophers like me who think (rightly) that types are abstract
entities, that is, nonspatial, nontemporal, unobservable, causally
impotent entities, have trouble seeing how information about such
entities can be obtained by attending to spatial, temporal, observable
entities, though they cannot deny that it can; and they have even
greater trouble seeing how features of minds (some misguided souls
would say brains) can have repercussions in the realm of abstract
entities.
The reconciliation proposed in the paper is based on two
conjectures. First, that tokens of a type (for instance all the
utterances of `cat', or all the utterances of `Mary sank a ship') form
what I call a "quasi-natural kind," a grouping like that formed by all
the samples of a chemical substance (for instance, all the samples of
mercury). Second, that tokens of different types form what I call
"categories," a grouping like that formed by the samples of different
chemical substances (mercury, water, gold, sulfuric acid, etc.). The
possibility of inferring facts about types from facts about tokens
follows from these two conjectures like day follows night. And so does
the conclusion that linguistics is grounded on mental realities. The
conjectures, if true, also reveal that the subject matter of
linguistics, like the subject matter of any natural science, is
defined by configurations of questions as well as by the makeup of the
world.
The paper is an essay in the philosophy of science as it applies
to linguistics.
------------------------------
Date: Wed, 20 Apr 88 20:42 EDT
From: James Pustejovsky <jamesp@brandeis.csnet>
Subject: Theoretical and Computational Issues in Lexical Semantics (TCILS)
THEORETICAL AND COMPUTATIONAL ISSUES IN LEXICAL SEMANTICS (TCILS)
A Workshop with Support from AAAI
Computer Science Department
Brandeis University
April 21-24, 1988
FINAL SCHEDULE
THURSDAY EVENING, April 21, 8:00 - 12:00, Welcoming Reception.
``A Cambridge House'', Cambridge. (617)-491-6300.
FRIDAY, April 22. Brandeis University, Usdan Student Center.
8:45-9:15 Coffee and Bagels
9:15-9:30 Opening Remarks, Pustejovsky
SESSION 1
9:30-10:00 Jackendoff ``X'-Semantics''
10:15-10:45 Talmy ``The Lexicalization of Aspect and Result''
11:00-11:30 Pustejovsky ``Structured Representations for the Lexicon''
11:45-2:00 LUNCH. Cold Buffet.
SESSION 2
2:00-2:30 Grimshaw ``On the Representation of Two Different Nominals''
2:45-3:15 Williams ``Nominals and Binding''
3:30-4:00 Ingria ``Adjectives, Nominals, and the Status of Arguments''
4:15-5:30 SPIRITS
Evening FREE
SATURDAY, April 23. Brandeis University, Usdan Student Center.
8:45 Coffee and Bagels
SESSION 3
9:15-9:45 Wilks ``Dictionary Texts as Tractable Resources''
10:00-10:30 Calzolari ``Deriving Lexical and World Knowledge''
10:45-11:00 Coffee and Pastries
SESSION 4
11:00-11:30 Hobbs ``Commonsense Knowledge and Lexical Semantics''
11:45-12:15 Sowa ``Lexical Inference vs. Commonsense Inference''
12:30-2:00 LUNCH. Cold Buffet.
SESSION 5
2:00-2:30 Zaenen ``Thematic Roles between Syntax and Semantics''
2:45-3:15 Rapoport ``Lexical Subordination''
3:30-3:45 Coffee and Cookies
SESSION 6
3:45-4:15 Palmer ``The Status of Verb Representations in Pundit''
4:30-5:00 Fawcett ``A Way to Model Probabilities in Relations''
5:15-6:00 SPIRITS
Evening FREE
SUNDAY, April 24, Brandeis University, Usdan Student Center.
8:45 Coffee and Bagels
SESSION 7
9:15-9:45 Kegl ``The Interface between Lexical Semantics and Syntax''
10:00-10:30 Tenny ``Aspectual Interface Hypothesis''
10:45-11:00 Coffee and Pastries
SESSION 8
11:00-11:30 Sondheimer ``How to Realize a Concept''
11:45-12:15 Nirenburg ``How Lexical Semantics Can Best Contribute''
12:30-1:00 Closing Remarks
Speaker List with Complete Titles
Nicoletta Calzolari ``Deriving Lexical and World Knowledge from
On-line Dictionaries''
Robin Fawcett ``A way to model probabilities in relations between main
verbs, participant roles and the semantic features of things''
Jane Grimshaw ``On the Representation of Two Different Kinds of
Nominals''
Jerry Hobbs ``Commonsense Knowledge and Lexical Semantics''
Robert Ingria ``Adjectives, Nominals, and the Status of Arguments'' (Response)
Ray Jackendoff ``X'-Semantics''
Judy Kegl ``The Interface between Lexical Semantics and Syntax''
Sergei Nirenburg ``How Lexical Semantics Can Best Contribute to Natural
Language Application'' (Response)
Martha Palmer ``The Status of Verb Representations in Pundit''
James Pustejovsky ``Structured Semantic Representations for the
Lexicon'' (Response)
T.R. Rapoport ``Lexical Subordination''
Norm Sondheimer ``How to Realize a Concept: Lexical Selection and the
Conceptual Network in Text Generation''
John Sowa ``Lexical Inference vs. Commonsense Inference''
Leonard Talmy ``The Lexicalization of Aspect and Result Typologically
Parallels that of Motion''
Carol Tenny ``Aspectual Interface Hypothesis and Lexical
Decomposition''
Yorick Wilks ``Dictionary texts as tractable resources for
computational semantics''
Edwin Williams ``Nominals and Binding'' (Response)
Annie Zaenen ``Thematic Roles between Syntax and Semantics''
------------------------------
Date: Mon, 25 Apr 88 12:06 EDT
From: Peter de Jong <DEJONG%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Seminar - AI Revolving -- Alan Bawden (MIT)
Thursday, 28 April 4:00pm Room: NE43, 8th floor Playroom
The Artificial Intelligence Lab
Revolving Seminar Series
Thinking About State
Alan Bawden (Alan@ai.mit.edu)
MIT AI Lab
It is generally agreed that the unrestricted use of state can make a
program hard to understand, hard to compile, and hard to execute, and that
these difficulties increase in the presence of parallel hardware. This
problem has led some to suggest that constructs that allow state should be
banished from programming languages. But state is also a very useful
phenomenon: some tasks are extremely difficult to accomplish without it,
and sometimes the most perspicuous expression of an algorithm is one that
makes use of state. Instead of outlawing state, we should be trying to
understand it, so that we can make better use of it.
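[Moderator's minimal illustration of the tension, not an example from the talk: the same running-total task written with and without mutable state.]

```python
def make_counter():
    """Stateful version: the closure relies on its own history, so
    two textually identical calls can return different values."""
    total = 0
    def add(x):
        nonlocal total
        total += x
        return total
    return add

def add_pure(total, x):
    """State-free version: the history is threaded explicitly as an
    argument, so equal inputs always give equal results."""
    return total + x

c = make_counter()
assert c(5) == 5 and c(5) == 10        # same call, different results
assert add_pure(0, 5) == add_pure(0, 5) == 5   # always repeatable
```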
In this talk I will propose a way of modeling systems in which the
phenomenon of state occurs. Using this model I will characterize those
systems in which some components of a system perceive other components as
having state. I will propose that systems that exhibit state-like
behavior are those systems that must rely on their own nonlocal structure
in order to function correctly, and I will make this notion of nonlocal
structure precise.
This characterization offers some new insights into why state seems to
cause the problems that it does. I will suggest how these insights might
lead us towards better ways of thinking about state, and make our
programming languages more expressive when we program with state.
------------------------------
Date: Thu, 14 Apr 88 17:52 EDT
From: KASH@OZ.AI.MIT.EDU
Subject: A Parallel Model of Sentence Processing
MIT Center for Cognitive Science
Parsing Seminar
A PARALLEL MODEL OF SENTENCE PROCESSING
Robin Clark
Department of Philosophy
Carnegie Mellon University
In this talk, I will describe the Constrained Parallel Parser (CPP)
currently under development at CMU. The model uses grammatical constraints
(e.g., Case theory and thematic theory) to constrain the hypotheses
considered by the processor while analyzing a string. As a result of the
interaction between grammatical constraints and constraints on memory used
by the processor, the CPP is subject to garden path effects. Furthermore,
the number of hypotheses considered by the CPP while processing will vary
depending on the interaction between lexical ambiguity and the constraints;
the model therefore predicts that the relative complexity of processing will
vary during the parse. I will relate garden path effects and relative
complexity to some of the relevant psycholinguistic literature.
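[Moderator's highly schematic sketch; the readings, ranking, and capacity bound are invented assumptions, not Clark's actual CPP. Hypotheses about a string are pursued in parallel, a preference ranking stands in for the grammatical constraints, and a memory bound caps how many survive; a garden path arises when the ultimately correct hypothesis is pruned before the disambiguating word arrives.]

```python
CAPACITY = 1   # severe memory constraint, for illustration

READINGS = {   # hypothetical lexical ambiguity
    "raced": ["main-verb", "reduced-relative"],
}

def rank(hypothesis):
    """Toy stand-in for grammatical constraints: prefer hypotheses
    with fewer reduced-relative readings."""
    return hypothesis.count("reduced-relative")

def parse(words):
    hypotheses = [()]
    for w in words:
        readings = READINGS.get(w, [w])
        # expand every live hypothesis with every reading, in parallel
        hypotheses = [h + (r,) for h in hypotheses for r in readings]
        # memory bound: keep only the best-ranked hypotheses
        hypotheses = sorted(hypotheses, key=rank)[:CAPACITY]
    return hypotheses

# The needed reduced-relative reading of 'raced' is pruned long
# before 'fell' arrives -- a garden path.
survivors = parse("the horse raced past the barn fell".split())
assert all("reduced-relative" not in h for h in survivors)
```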
Wednesday, April 20, 2:00 p.m.
Eighth Floor Playroom
Building NE43
MIT
------------------------------
End of NL-KR Digest
*******************