[net.ai] AIList Digest V3 #150

AIList-REQUEST@SRI-AI.ARPA (AIList Moderator Kenneth Laws) (10/21/85)

AIList Digest            Sunday, 20 Oct 1985      Volume 3 : Issue 150

Today's Topics:
  Seminars - Program Logics (UPenn) &
    Meaning, Information and Possibility (UCB) &
    Machine Learning and Knowledge Representation (NU) &
    Intelligent Mail Manipulation (MIT) &
    A Logic for Defeasible Rules (Buffalo) &
    Learning From Multiple Analogies (GTE) &
    Computational Discourse Analysis Using DEREDEC (MIT) &
    RESEARCHER and Patent Analogies (CMU)

----------------------------------------------------------------------

Date: Mon, 14 Oct 85 19:26 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Program Logics (UPenn)


    REASONING ABOUT PROGRAMS: CONCEPTUAL AND METHODOLOGICAL DISTINCTIONS

                   DANIEL LEIVANT, COMPUTER SCIENCE, CMU

                     3:00 pm Tuesday, October 15, 1985
                   216 Moore, University of Pennsylvania

Reasoning about programs can be done explicitly, in first-order or higher-order
mathematical theories, or implicitly, in modal logics of programs (Hoare Logic,
Dynamic  Logic...).  One wants the latter, but the former are better suited for
metamathematical analysis (semantics, calibration of proof-theoretic strength).
However, modal logics are interpretable in explicit theories, so we can eat the
cake and have it.

In particular, we can distinguish in modal logics of programs a purely  logical
component  and  an  analytical  component.  For example, Hoare's Logic captures
exactly   logical   reasoning   about   partial-correctness   assertions   over
WHILE-programs.    We  argue that this type of completeness is more informative
than relative completeness.
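
As a minimal sketch of the kind of object under discussion, here is a
partial-correctness assertion {P} S {Q} over a WHILE-program, spot-checked
in Python on a few inputs.  The program and the assertion are invented for
illustration and are not taken from the talk; partial correctness
constrains only those runs that terminate.

    def program(x):
        """An invented WHILE-program computing 1 + 2 + ... + x."""
        s, i = 0, 0
        while i < x:
            i += 1
            s += i
        return s

    def pre(x):                        # precondition P
        return x >= 0

    def post(x, s):                    # postcondition Q
        return s == x * (x + 1) // 2

    # spot-check the triple {P} program {Q} on a sample of inputs
    for x in range(10):
        if pre(x):
            assert post(x, program(x))
    print("assertion holds on the sampled inputs")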

------------------------------

Date: Wed, 16 Oct 85 14:22:50 PDT
From: admin@ucbcogsci.Berkeley.EDU (Cognitive Science Program)
Subject: Seminar - Meaning, Information and Possibility (UCB)

                      BERKELEY COGNITIVE SCIENCE PROGRAM
                     Cognitive Science Seminar - IDS 237A

                      Tuesday, October 22, 11:00 - 12:30
                        240 Bechtel Engineering Center
                 Discussion: 12:30 - 1:30 in 200 Building T-4

                   ``Meaning, Information and Possibility''
                                  L. A. Zadeh
                   Computer Science Division, U.C. Berkeley

        Our approach to the connection between meaning and  information
        is  in  the  spirit  of  the Carnap--Bar-Hillel theory of state
        descriptions.  However, our point of departure is  the  assump-
        tion that any proposition, p, may be expressed as a generalized
        assignment statement of the form X isr C, where X is a variable
        which  is  usually implicit in p, C is an elastic constraint on
        the values which X can take in a universe of discourse  U,  and
        the  suffix  r  in  the  copula  isr is a variable whose values
        define the role of C in relation to X.  The principal roles are
        those  in  which  r is d, in which case C is a disjunctive con-
        straint; and r is c, p and g, in which cases C is  conjunctive,
        probabilistic  and  granular,  respectively.   In the case of a
        disjunctive constraint, isd is written for short as is,  and  C
        plays the role of a graded possibility distribution which asso-
        ciates with each point  (or,  equivalently,  state-description)
        the  degree  to which it can be assigned as a value to X.  This
        possibility distribution, then, is interpreted as the  informa-
        tion  conveyed by p.  Based on this interpretation, we can con-
        struct a set of rules of inference which allow the  possibility
        distribution of a conclusion to be deduced from the possibility
        distributions of the premises.   In  general,  the  process  of
        inference  reduces to the solution of a nonlinear program.  The
        connection between the solution of a nonlinear program and  the
        traditional methods of deduction in first-order logic is
        explained and illustrated by examples.
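
To make the disjunctive case concrete, here is a small, hedged rendering
in Python (mine, not the speaker's): the constraint C induces a graded
possibility distribution over a finite universe, premises combine by
pointwise minimum, and the possibility of a conclusion is the maximum
over the points that satisfy it.  For such finite toy cases the nonlinear
program mentioned above collapses to these min/max computations.

    # toy universe of discourse U and two invented premises about X
    U = range(7)
    small = {u: max(0.0, 1 - u / 3) for u in U}           # "X is small"
    near3 = {u: max(0.0, 1 - abs(u - 3) / 2) for u in U}  # "X is around 3"

    # combined possibility distribution: pointwise minimum of the premises
    both = {u: min(small[u], near3[u]) for u in U}

    # possibility of the conclusion "X >= 2": maximum over satisfying points
    print(max(both[u] for u in U if u >= 2))    # 0.333...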

        ELSEWHERE ON CAMPUS

        William Clancey of Stanford University will speak on ``Heuristic
        Classification''  at  the SESAME Colloquium on Monday, Oct. 21,
        4:00pm, 2515 Tolman Hall.

        Ruth Maki of North Dakota State University will speak on ``Meta-
        comprehension: Knowing that you understand''  at the Cognitive
        Psychology Colloquium, Friday, October 25, 4:00pm, Beach Room,
        3105 Tolman Hall.

------------------------------

Date: Thu, 17 Oct 85 15:02 EDT
From: Carole D Hafner <HAFNER%northeastern.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Machine Learning and Knowledge Representation (NU)

                        Northeastern University
                College of Computer Science Colloquium

                      4p.m. Wednesday, October 30

          Brittleness, Tunnel Vision, Machine Learning and
                      Knowledge Representation

                          Prof. Steve Gallant
                        Northeastern University


A system is brittle if it fails when presented with slight deviations from
expected input.  This is a major problem for knowledge representation schemes,
and particularly for the expert systems that use them.

This talk defines the notion of Tunnel Vision and shows it to be a major
cause of brittleness.  As a consequence, it will be claimed that commonly
used schemes for machine learning and knowledge representation are pre-
disposed toward brittle behavior.  These include decision trees, frames,
and disjunctive normal form expressions.

Some systems which are free from tunnel vision will be described.
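
The abstract does not spell out what tunnel vision is, so the Python toy
below is only a generic illustration of brittleness, not of the systems in
the talk: a disjunctive-normal-form rule fails outright when one expected
feature is absent, while a weighted-evidence score degrades more
gracefully.  The features, weights, and threshold are invented.

    def dnf_diagnose(f):
        # (fever AND cough) OR (fever AND headache)
        return (f["fever"] and f["cough"]) or (f["fever"] and f["headache"])

    def weighted_diagnose(f, threshold=1.0):
        score = 1.2 * f["fever"] + 0.6 * f["cough"] + 0.6 * f["headache"]
        return score >= threshold

    # a slight deviation from the expected input pattern
    case = {"fever": True, "cough": False, "headache": False}
    print(dnf_diagnose(case))        # False: the rule gives up entirely
    print(weighted_diagnose(case))   # True:  partial evidence still counts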

Place: 405 Robinson Hall
       Northeastern University
       360 Huntington Ave.
       Boston MA

------------------------------

Date: Sun, 13 Oct 1985  16:53 EDT
From: Peter de Jong <DEJONG%MIT-OZ at MIT-MC.ARPA>
Reply-to: Cog-Sci-Request%MIT-OZ
Subject: Seminar - Intelligent Mail Manipulation (MIT)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


Thursday, October 17, 4:00pm   Room: NE43, 8th floor Playroom

                    The Artificial Intelligence Lab
                        Revolving Seminar Series


                         "The Information Lens:
           An Intelligent System for Finding, Filtering, and
                      Sorting Electronic Messages"


                            Thomas W. Malone

                     MIT Sloan School of Management



This talk will describe an intelligent system to help people share,
filter, and sort information communicated by computer-based messaging
systems.  The system exploits concepts from artificial intelligence such
as frames, production rules, and inheritance networks, but it avoids the
unsolved problems of natural language understanding by providing users
with a rich set of semi-structured message templates.  A consistent set
of "direct manipulation" editors simplifies the use of the system by
individuals, and an incremental enhancement path simplifies the adoption
of the system by groups.
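
As a rough sketch of the two ideas named above, semi-structured message
templates and rules over their fields, here is a toy Python fragment; the
template fields, rules, and actions are invented and do not describe the
actual Information Lens implementation.

    # one message instantiating a semi-structured template (invented)
    message = {
        "type":  "meeting announcement",
        "from":  "malone",
        "topic": "Information Lens",
        "date":  "Oct 17",
        "place": "NE43, 8th floor",
    }

    # production rules: a test on the structured fields and an action
    rules = [
        (lambda m: m["type"] == "meeting announcement", "file under Seminars"),
        (lambda m: m["from"] == "malone",               "mark as urgent"),
    ]

    def process(m):
        return [action for test, action in rules if test(m)]

    print(process(message))   # ['file under Seminars', 'mark as urgent']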

The talk will also include an overview of the other projects and
research goals in the Organizational Systems Laboratory at MIT.

------------------------------

Date: Fri, 18 Oct 85 08:30:03 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - A Logic for Defeasible Rules (Buffalo)

                                UNIVERSITY AT BUFFALO
                            STATE UNIVERSITY OF NEW YORK

                                    DEPARTMENT OF
                                  COMPUTER SCIENCE

                                     COLLOQUIUM

                                     DONALD NUTE

                        Advanced Computational Methods Center
                            and Department of Philosophy
                                University of Georgia

                            A LOGIC FOR DEFEASIBLE RULES

          Humans reason using defeasible and  sometimes  conflicting  rules
          like  `Matches  burn when struck' and `Wet things don't burn'.  A
          formal language for  representing  sentential  versions  of  such
          rules is presented together with a derivability relation for this
          language.  The resulting system, LDR, is non-monotonic.  Inspired
          by  work  in  conditional  logic,  the non-monotonic rules of LDR
          correspond  to  simple  subjunctive  and  `might'   conditionals.
          Chaining  of these rules is restricted in LDR just as the transi-
          tivity of the conditional is restricted  in  conditional  logics.
          Several notions of consistency and coherency are defined.  LDR is
          of special importance for research in automated reasoning,  since
          its  language is PROLOG-like and its derivability relation can be
          implemented in PROLOG.
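
LDR itself is PROLOG-like; purely to illustrate the behavior described
(one defeasible rule defeating another, and the resulting
non-monotonicity), here is a toy in Python with the rules, facts, and
defeat relation made up for the example.

    facts = {"match", "struck", "wet"}

    # invented defeasible rules: (name, preconditions, conclusion)
    rules = [
        ("r1", {"match", "struck"}, "burns"),
        ("r2", {"wet"},             "does not burn"),
    ]
    defeats = {("r2", "r1")}    # wet things don't burn, even if struck

    def conclusions(facts):
        fired = [r for r in rules if r[1] <= facts]
        names = {name for name, _, _ in fired}
        return {concl for name, _, concl in fired
                if not any((other, name) in defeats for other in names)}

    print(conclusions(facts))              # {'does not burn'}
    print(conclusions(facts - {"wet"}))    # {'burns'}  (non-monotonic)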

                             Thursday, November 7, 1985
                                      3:30 P.M.
                              Bell 337, Amherst Campus

             Wine and cheese will be served at 4:30 P.M., 224 Bell Hall

                    For further information, contact:

                                William J. Rapaport
                                Assistant Professor

Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260
(716) 636-3193, 3181
uucp:   ...{allegra,decvax,watmath}!sunybcs!rapaport
        ...{cmc12,hao,harpo}!seismo!rochester!rocksvax!sunybcs!rapaport
cs/arpanet:  rapaport%buffalo@csnet-relay

------------------------------

Date: Fri, 18 Oct 85 23:44:39 EDT
From: Bernard Silver <SILVER@MIT-MC.ARPA>
Subject: Seminar - Learning From Multiple Analogies (GTE)


                        GTE LABS INCORPORATED
                        MACHINE LEARNING SEMINAR

Title:                 Learning from Multiple Analogies

Speaker:                      Mark H. Burstein
                                 BBN Labs.

Date:                   Monday October 21, 10am

Place:                  GTE Labs
                        40 Sylvan Rd, Waltham MA 02254


Students learning about an unfamiliar subject under the guidance
of a teacher or textbook are often taught basic concepts by analogies
to things that they are more familiar with.  Although this seems to
be a very powerful form of instruction, the process by which students
make use of this kind of instruction has been little studied by AI
learning theorists.  A cognitive process model of how students make
use of such analogies will be presented.  The model was motivated by
examples of the behavior of several students who were tutored on the
programming language BASIC, and focuses in detail on the development
of knowledge about the concept of a program variable, and its use in
assignment statements.  It suggests how several analogies can be used
together to form new concepts where no one analogy would have been
sufficient.  Errors produced by reasoning from one analogy can
be corrected by another.

As an illustration of the main principles of the model, a computer
program, CARL, is presented that learns to use variables in BASIC
assignment statements.  While learning about variables, CARL generates
many of the same erroneous hypotheses seen in the recorded protocols
of students learning the same material given the same set of analogies.
The learning process results in a single target model that retains
some aspects of each of the analogies presented.
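
The abstract gives no implementation detail, so the Python toy below is
only a guess at the flavor of combining analogies: each analogy
contributes tentative properties of the concept "variable", later
analogies fill gaps left by earlier ones, and predictions contradicted by
observed program behavior are dropped.  The specific analogies and
properties are invented, not taken from CARL.

    box_analogy = {                 # "a variable is a box" (invented)
        "holds a value": True,
        "can hold several values at once": True,   # wrong for BASIC
    }
    label_analogy = {               # "a variable is a labelled slot"
        "holds a value": True,
        "a new value erases the old one": True,
    }
    observed = {"can hold several values at once": False}

    def merge(analogies, observed):
        target = {}
        for analogy in analogies:
            for prop, val in analogy.items():
                target.setdefault(prop, val)    # keep earlier predictions
        target.update(observed)                 # observation corrects errors
        return target

    print(merge([box_analogy, label_analogy], observed))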

For more information, contact Bernard Silver (617) 576-6212

------------------------------

Date: 11am 10/22/85
From: Alker@mc
Subject: Seminar - Computational Discourse Analysis Using DEREDEC (MIT)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


           Computational Discourse Analysis Using DEREDEC:
                  An Analysis of Balzac's Sarrasine


                Jaqueline Leon and Jean-Marie Marandin

             Centre National de la Recherche Scientifique
                            Paris, France


We present research in computational discourse analysis and discuss an
example drawn from Balzac's Sarrasine.  We use P. Plante's
DEREDEC programming system in this work because of its suitability for
natural language processing.  After a bottom-up syntactic parser for
French grammar produces a syntactic derivation, we perform pattern
matching on the output to achieve a linguistic and literary
interpretation.  We describe how we use these programs to capture two
different aspects of a text: the thematic segmentation and density.
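
DEREDEC's own output format is not reproduced here; as a crude stand-in,
the Python fragment below assumes a parser has already reduced each
sentence to its head nouns, then measures thematic density per sentence
and segments the text where that density drops to zero.  The toy data
only gestures at Sarrasine.

    parsed = [                          # head nouns per sentence (toy data)
        ["sculptor", "statue", "marble"],
        ["statue", "woman", "beauty"],
        ["party", "salon", "guests"],
        ["guests", "music", "salon"],
    ]
    theme = {"sculptor", "statue", "marble"}

    density = [len(theme & set(nouns)) / len(nouns) for nouns in parsed]

    segments, current = [], [0]
    for i in range(1, len(parsed)):
        if density[i] == 0 and density[i - 1] > 0:
            segments.append(current)    # theme exhausted: new segment
            current = []
        current.append(i)
    segments.append(current)

    print(density)     # [1.0, 0.33..., 0.0, 0.0]
    print(segments)    # [[0, 1], [2, 3]]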


Time: 11-12:30, Tuesday, October 22, 1985
Place: Millikan Room, E53-482
Host: Professor Hayward R. Alker, Jr., Department of Political Science, MIT

------------------------------

Date: 18 Oct 85 10:14:51 EDT
From: Jeanne.Bennardo@CMU-RI-ISL1
Subject: Seminar - RESEARCHER and Patent Analogies (CMU)

Topic:    Presentation of the RESEARCHER project.
Speaker:  John C. Akbari
Place:    DH3313
Date:     Wednesday, Oct. 23
Time:     10:00am - 11:00am

Speaker:
John C. Akbari is a Masters student at Columbia University's Department of
Computer Science.  He is interested in joining the Intelligent Systems
Laboratory's Phoenix project.  Below is a description of his artificial
intelligence research.

Both projects described below investigate different aspects of RESEARCHER, a
prototype intelligent information system being developed at Columbia
University under the direction of Professor Michael Lebowitz.  The domain of
investigation is disc drive patents.  The results of this research are being
implemented in LISP as components of RESEARCHER.

MS Thesis

                 Research involves generating "catalogue descriptions" of
                 hierarchical objects, determining salience as a function of
                 similarity between an instance of an object and the
                 prototype of the object.  This will be used in generating
                 information to be passed to a case grammar generator to
                 produce the actual text.  We hope to develop a method of
                 determining importance of static information (via "filtering
                 through" the prototype) relative to context.  We are studying
                 the interaction of structural, attributive, and functional
                 information on the quality of the description.  Further work
                 will investigate the need for different prototypes for
                 different users as an aspect of user modelling, so that a
                 patent lawyer would receive a different description from an
                 engineer, given the same instance.

                 Thesis advisor: Prof. Michael Lebowitz
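
A hedged sketch of one idea above, salience as deviation of an instance
from its prototype, written in Python; the disc-drive attributes and
values are invented for the example.

    # invented prototype and instance of a disc drive
    prototype = {"heads": 2, "platters": 1, "actuator": "stepper"}
    instance  = {"heads": 8, "platters": 4, "actuator": "stepper"}

    def salient(instance, prototype):
        # attributes that differ from the prototype are worth mentioning
        return {a: (instance[a], prototype[a])
                for a in instance if instance[a] != prototype[a]}

    for attr, (got, expected) in salient(instance, prototype).items():
        print(f"{attr}: {got} (prototype: {expected})")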

Natural language

                 We are enhancing RESEARCHER's parser to utilize syntactic
                 aspects of relations that cause focus of attention to shift
                 within sentences.  This involves modifying memory-based
                 parsing to determine when syntax cues are sufficiently
                 strong to override the need to search memory.

                 Supervisor: Prof. Michael Lebowitz
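
Again only as a guess at the stated trade-off, not the actual RESEARCHER
parser: strong syntactic cues shift the focus of attention directly, and
memory is searched only when no such cue applies.  The cue list and
memory contents are invented.

    SYNTAX_CUES = {"which", "that", "wherein"}       # invented cue list
    memory = {"arm": "positioning-arm", "disc": "disc-assembly"}

    def focus_of(tokens):
        focus = None
        for tok in tokens:
            if tok in SYNTAX_CUES:
                focus = "last-mentioned object"      # cue wins, skip memory
            elif tok in memory:
                focus = memory[tok]                  # otherwise search memory
        return focus

    print(focus_of("a disc drive having an arm which rotates".split()))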

------------------------------

End of AIList Digest
********************