[net.ai] AIList Digest V3 #59

LAWS@SRI-AI.ARPA (05/06/85)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest             Monday, 6 May 1985       Volume 3 : Issue 59

Today's Topics:
  Seminars - Artificial Language Learning (SU) &
    Understanding Text with Diagrams (UTexas) &
    Semantics and Metaphysics (CSLI) &
    Diagram Understanding (SRI) &
    Simple Description of the World (CSLI) &
    Illocutionary Acts (UCB) &
    A Computational Model of Skill Acquisition (SU) &
    Marker-Passing during Problem Solving (UToronto)

----------------------------------------------------------------------

Date: Tue, 9 Apr 85 18:23:01 pst
From: gluck@SU-PSYCH (Mark Gluck)
Subject: Seminar - Artificial Language Learning (SU)

              Morphological & prosodic cues in the learning
                of a miniature phrase-structure language

                            RICHARD MEIER
                        (Stanford University)

      I will claim that the input to language learning is a grouped
and structured sequence of words and that learning operates most
successfully on such structures, and not on mere word strings.  After
briefly reviewing evidence for such groupings in natural language, I
will support this claim with three experiments in artificial language
learning.  These experiments allow rigorous control of the input to the
learner.  Prior work had argued that, in such experiments, adult subjects
can learn complex syntactic rules only with extensive semantic mediation.
In the current experiments, subjects fully learned complex aspects of
syntax if they viewed, or heard, sentences (paired with an uninformative
semantics) containing one of three grouping cues for constituent structure:
prosody, function words, or agreement suffixes on the words within a
constituent.  Absent such cues, subjects learned only limited aspects of
syntax.  These results suggest that, in natural languages, such grouping
cues may subserve syntax learning.

April 12th                      3:15pm               Jordan Hall; Rm. 100

------------------------------

Date: Wed, 10 Apr 85 13:24:21 cst
From: briggs@ut-sally.ARPA (Ted Briggs)
Subject: Seminar - Understanding Text with Diagrams (UTexas)


         Understanding Text with an Accompanying Diagram

                               by
                           Bill Bulko


                      noon  Friday April 12
                            PAI 5.60


We are investigating the mechanisms by which a physics problem
specified jointly by English text and graphics images can be
understood.  The investigation is guided by the study of the
following subproblems:

   (1)  What kinds of rules and knowledge would it take to understand
        the information contained in a picture model and a block of
        related English text?

   (2)  What kind of control structure is required?

   (3)  How can information contained in the picture but not in the
        text, and vice versa, be recognized and understood?  That is,
        how can coreference between text and a picture be handled?

------------------------------

Date: Wed 3 Apr 85 16:26:36-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Semantics and Metaphysics (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


           CSLI ACTIVITIES FOR *NEXT* THURSDAY, April 11, 1985


 ``Semantics for Natural Language:  Metaphysics for the Simple-minded?''

                        Chris Menzel, CSLI

      What, exactly, is the connection between semantics and metaphysics?
   A semantical theory gives an account of the meaning of certain
   expressions in natural language, and, intuitively, the meaning of an
   expression has to do with the connection between the expression (or an
   utterance of it) and the world.  Thus, a simple-minded view might be
   that (as far as it goes) a correct semantical theory ipso facto yields
   the sober metaphysical truth about what there is.
      To the contrary, implicit in much work in semantics is the idea
   that all we should expect of a good theory is that it be, in Keenan's
   terms, descriptively adequate: it should provide a theoretical
   structure which preserves our judgments of logical truth and
   entailment, never mind the question of the literal metaphysical
   details of the structure (e.g., that the denotations of singular terms
   are complex sets of sets rather than individuals).
      For next week's TINlunch I will provide a framework for discussion
   by laying out the simple-minded view and its chief rival in somewhat
   more detail.  Being rather simple-minded myself, I'll attempt to
   defend a reasonable version of the former.  As grist for both
   philosophical mills I will draw upon recent work in intensional logic,
   Montague grammar, generalized quantifiers, the semantics of plurals,
   and situation semantics.                             --Chris Menzel

------------------------------

Date: Mon 8 Apr 85 11:19:34-PST
From: PENTLAND@SRI-AI.ARPA
Subject: Seminar - Diagram Understanding (SRI)

Area P1 Talk --
WHERE: SRI Int'l Room EK242 (conference room)
WHEN: Tues April 9 at 2:30


                       DIAGRAM UNDERSTANDING:
      THE INTERSECTION OF COMPUTER GRAPHICS AND COMPUTER VISION

                          Fanya S. Montalvo
               MIT, Artificial Intelligence Laboratory

                               ABSTRACT

A problem common to Computer Vision and Computer Graphics is
identified.  It deals with the representation, acquisition, and
validation of symbolic descriptions for visual properties.  The
utility of treating this area as one is explained in terms of
providing the facility for diagrammatic conversations with systems.  I
call this area "Diagram Understanding", which is analogous to Natural
Language Understanding.  The recognition and generation of visual
objects are two sides of the same symbolic coin.  A paradigm for the
discovery of higher-level visual properties is introduced, and its
application to Computer Vision and Computer Graphics described.  The
notion of denotation is introduced in this context.  It is the map
between linguistic symbols and visual properties.  A method is
outlined for associating symbolic descriptions with visual properties
in such a way that human subjects can be brought into the loop in
order to validate (or specify) the denotation map.  Secondly, a way of
discovering a natural set of visual primitives is introduced.

------------------------------

Date: Wed 3 Apr 85 16:26:36-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Simple Description of the World (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


           CSLI ACTIVITIES FOR *NEXT* THURSDAY, April 11, 1985

             ``What if the World Were Really Quite Simple?''

                         Alex Pentland, CSLI


      One of the major stumbling blocks for efforts in AI has been the
   apparent overwhelming complexity of the natural world; for instance,
   when an AI program tries to decide on a course of action (or the
   meaning of a sentence) it is often defeated by the incredible number
   of alternatives to consider.  Results such as those of Tversky,
   however, argue that people are able to use characteristics of the
   current situation to somehow "index" directly into the two or three
   most likely alternatives, so that deductive reasoning per se plays a
   relatively minor role.
      How could people accomplish such indexing?  One possibility is that
   the structure of our environment is really quite a bit simpler than it
   appears on the surface, and that people are able to use this structure
   to constrain their reasoning much more tightly than is done in current
   AI research.
      Is it possible that the world is really relatively simple?  In
   forming a scientific theory we may trade the size and complexity of
   description against the amount of error.  Because modern scientific
   endeavors have placed great emphasis on increasingly accurate
   description, very little effort has gone toward discovering a grain
   size of description at which the world may be relatively simply
   described while still maintaining a useful level of accuracy.
      I will argue that such a simple description of the world is
   plausible, discuss progress in discovering such a descriptive
   vocabulary, and comment on how knowledge of such a vocabulary might
   have a profound impact on AI and psychology.         --Alex Pentland
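
As a purely hypothetical illustration of the trade-off sketched above
(description size versus error), the short Python fragment below scores
a few invented grain sizes by a two-part cost, namely bits to state the
description plus bits to encode the residual error, and prefers the
cheapest.  The numbers, labels, and scoring rule are assumptions made
for illustration only and are not part of the talk.

    # Hypothetical two-part cost: pay for the description, then for
    # the error it leaves unexplained.  All figures are invented.
    candidates = [
        # (grain size, description length in bits, residual error in bits)
        ("fine-grained",    5000,   10),
        ("medium-grained",   800,   60),
        ("coarse-grained",   120, 2000),
    ]

    def total_cost(description_bits, error_bits):
        return description_bits + error_bits

    for label, d, e in candidates:
        print(f"{label:15s} description={d:5d} error={e:5d} "
              f"total={total_cost(d, e):5d}")

    best = min(candidates, key=lambda c: total_cost(c[1], c[2]))
    print("preferred grain size:", best[0])  # medium-grained wins here

Under these made-up figures the medium grain size wins, which is the
sense in which a description can stay simple while keeping a useful
level of accuracy.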

------------------------------

Date: Wed, 24 Apr 85 17:34:14 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Illocutionary Acts (UCB)

               BERKELEY COGNITIVE SCIENCE PROGRAM
              Cognitive Science Seminar -- IDS 237B

      TIME:                Tuesday, April 30, 11 - 12:30
      PLACE:               240 Bechtel Engineering Center

SPEAKER:        Herbert H. Clark, Department of Psychology,
                Stanford University

TITLE:          ``Illocutionary acts, illocutionary performances''

     From John Austin on, theorists have said a good deal about what
it is to be a question, assertion, promise, or other illocutionary
act.  But in their characterizations they have generally assumed a
rather strong idealization about how illocutionary acts are performed.
Among other things, they have taken these four points for granted:
(1) An illocutionary act is a preplanned event.  (2) It is performed
by the speaker acting alone.  (3) The speaker acts with certain
definite intentions about affecting his addressee.  And (4) the
speaker discharges these intentions merely by issuing a sentence (or
sentence surrogate) in the right circumstances.  As with any
idealization, these assumptions aren't quite right.  Indeed, I will
document that illocutionary acts in conversation are not preplanned
events but processes that the participants may alter midcourse for
various purposes, and that they are accomplished by the speaker and
addressees acting together.  Once the traditional assumptions are
replaced by more realistic ones, we are led to quite a different
notion of illocutionary act.

     The view I will develop is that performing illocutionary acts in
conversation is a collaborative process between speaker and
addressees.  One of the goals of these participants is to establish
the mutual belief, roughly by the beginning of each new contribution,
that the addressees have understood the speaker's meaning well enough
for current purposes.  The speaker and addressees have systematic
linguistic techniques for reaching this goal.  In support of this view
I will report a study by Deanna Wilkes-Gibbs and myself on how
definite references get made in conversation, and another study by
Edward F. Schaefer and myself on what it is, more generally, to make
certain contributions to conversation.

------------------------------

Date: Thu, 25 Apr 85 05:01:59 pst
From: gluck@SU-PSYCH (Mark Gluck)
Subject: Seminar - A Computational Model of Skill Acquisition (SU)

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

                Psych. Dept. Friday Cognitive Seminar
April 26th                      3:15pm               Jordan Hall; Rm. 100

                      A Computational Model of
                           Skill Acquisition

                      KURT VAN LEHN (Xerox PARC)

   A theory will be presented that describes how people learn certain
procedural skills, such as the written algorithms of arithmetic and
algebra, from multi-lesson curricula.  There are two main hypotheses.
(1) Teachers enforce, perhaps unknowingly, certain constraints that
relate the structure of the procedure to the structure of the lesson
sequence, and moreover, students employ these constraints, perhaps
unknowingly, as they induce a procedure from the lesson sequence.  (2)
As students follow the procedure they have induced, they employ a
certain kind of meta-level problem solving to free themselves when their
interpretation of the procedure gets stuck.  The theory's predictions,
which are generated by a computer model of the putative learning and
problem solving processes, have been tested against error data from
several thousand students.  The usual irrefutability of computer
simulations of complex cognition has been avoided by a linguistic style
of argumentation that assigns empirical responsibility to individual
hypotheses.
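
The abstract's claim that students free themselves when their
interpretation of the procedure gets stuck can be pictured with a toy
example (not the model being presented): a column-subtraction
procedure learned without borrowing reaches an impasse whenever the
top digit is smaller than the bottom one, and a local patch such as
"subtract the smaller digit from the larger" produces a familiar
systematic error.  The Python sketch below is illustrative only; the
procedure, the repair, and the example are assumptions.

    # Toy sketch of an impasse plus a local repair.
    def subtract_columns(top, bottom, repair):
        """Column subtraction, right to left, with no borrowing rule."""
        top_digits = [int(d) for d in str(top)]
        bottom_digits = [int(d) for d in
                         str(bottom).rjust(len(top_digits), "0")]
        result = []
        for t, b in zip(reversed(top_digits), reversed(bottom_digits)):
            if t >= b:
                result.append(t - b)         # the learned rule applies
            else:
                result.append(repair(t, b))  # impasse: fall back on a patch
        return int("".join(str(d) for d in reversed(result)))

    smaller_from_larger = lambda t, b: b - t

    print(subtract_columns(304, 128, smaller_from_larger))  # 224, not 176

The point of the sketch is only that a fixed repair yields a
reproducible wrong answer, the kind of pattern an error corpus can
test.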

------------------------------

Date: Wed, 10 Apr 85 13:04:45 est
From: Voula Vanneli <voula%toronto.csnet@csnet-relay.arpa>
Subject: Seminar - Marker-Passing during Problem Solving (UToronto)


                   UNIVERSITY OF TORONTO
               DEPARTMENT OF COMPUTER SCIENCE
         (GB = Galbraith Bldg., 35 St. George St.)

ARTIFICIAL INTELLIGENCE SEMINAR - Wednesday, April 10, 4 pm,
GB 244


                        Jim Hendler
        Dept. of Computer Science, Brown University

   Studies of Marker Passing in Knowledge Representation
                and Problem Solving Systems.


A standard problem in Artificial Intelligence systems that do planning
or problem solving is called the "late-information, early-decision
paradox."  This occurs when the planner makes a choice as to which
action to consider before encountering information that could either
identify an optimal solution or present a contradiction.  Because the
decision is made in the absence of this information, it is often the
wrong one, leading to much needless processing.

In this talk I describe how the technique known as "marker-passing"
can be used by a problem-solver.  Marker-passing, which has been shown
in the past to be useful for such cognitive tasks as story
comprehension and word sense disambiguation, is a parallel,
non-deductive, "spreading activation" algorithm.  By combining this
technique with a planning system, the paradox described above can
often be circumvented.  The marker-passer can also be used by the
problem-solver during "meta-rule" invocation and for finding certain
inherent problems in plans.  An implementation of such a system is
discussed, as are the design "desiderata" for a marker-passer.
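
As a rough idea of what a marker-passer does, the Python sketch below
spreads markers breadth-first from two concepts in a toy semantic
network and reports the nodes reached from both; those intersections
are the candidate connections a planner might then inspect.  The
network, the node names, and the depth limit are invented for
illustration and are not taken from Hendler's system.

    # Minimal spreading-activation sketch over an invented network.
    from collections import deque

    network = {
        "buy-groceries": ["store", "money"],
        "store":         ["checkout", "money"],
        "money":         ["wallet"],
        "rob-bank":      ["money", "bank"],
        "bank":          ["money"],
        "checkout":      [],
        "wallet":        [],
    }

    def pass_markers(source, depth_limit=3):
        """Spread a marker outward from `source`, breadth first."""
        reached = {source: 0}
        frontier = deque([source])
        while frontier:
            node = frontier.popleft()
            if reached[node] == depth_limit:
                continue
            for neighbor in network.get(node, []):
                if neighbor not in reached:
                    reached[neighbor] = reached[node] + 1
                    frontier.append(neighbor)
        return reached

    def intersections(concept_a, concept_b):
        """Nodes marked from both sources: paths worth inspecting."""
        return sorted(set(pass_markers(concept_a)) &
                      set(pass_markers(concept_b)))

    print(intersections("buy-groceries", "rob-bank"))  # ['money', 'wallet']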

------------------------------

End of AIList Digest
********************