[net.ai] AIList Digest V2 #160

LAWS@SRI-AI.ARPA (11/25/84)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest           Saturday, 24 Nov 1984     Volume 2 : Issue 160

Today's Topics:
  Plan Recognition,
  Hardware - Uses of Optical Disks,
  Linguistics - Language Simplification & Natural Languages,
  Seminars - Intention in Text Interpretation (Berkeley) &
    Cooperative Distributed Problem Solving (CMU) &
    A Shape Recognition Illusion (CMU)
----------------------------------------------------------------------

Date: 21 Nov 1984 15:55:02-EST
From: kushnier@NADC
Subject: Plan Recognition


                               WANTED

We are interested in any information, papers, reports, or titles of same
dealing with AI PLAN RECOGNITION that can be supplied to the government
at no cost (they made me say that!). We are presently involved in an
R&D effort requiring such information.

                                          Thanks in advance,

                                          Ron Kushnier
                                          Code 5023
                                          NAVAIRDEVCEN
                                          Warminster Pa. 18974

kushnier@nadc.arpa

------------------------------

Date: 19 Nov 84 16:55:09 EST
From: DIETZ@RUTGERS.ARPA
Subject: Are books obsolete?

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

Sony has recently introduced a portable compact optical disk player.
I hear they intend to market it as a microcomputer peripheral for
$300.  I'm not sure what its capacity will be, so I'll estimate it at
50 megabytes per side.  That's 25000 ASCII-coded 8 1/2 x 11 pages, or
1000 compressed page images, per side.  Disks cost about $10, for a
cost per word orders of magnitude less than books.
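
For what it's worth, here is that arithmetic spelled out as a quick
sketch (present-day Python; the bytes-per-page and words-per-page
figures are assumptions chosen to be consistent with the estimates
above, not Sony's specifications):

    # Back-of-envelope check of the capacity and cost estimates above.
    MB = 1000000
    side_bytes = 50 * MB                # assumed capacity per side

    ascii_page_bytes = 2000             # dense single-spaced ASCII page
    image_page_bytes = 50000            # compressed bitmap of one page

    ascii_pages = side_bytes // ascii_page_bytes    # -> 25000
    image_pages = side_bytes // image_page_bytes    # -> 1000

    words_per_page = 500
    cost_per_word = 10.0 / (ascii_pages * words_per_page)
    print(ascii_pages, image_pages, cost_per_word)  # 25000 1000 8e-07

Under these assumptions a $10 disk works out to somewhat less than a
ten-thousandth of a cent per word.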

Here's an excellent opportunity for those concerned with the social
impact of computer technology to demonstrate their wisdom.  What will
the effect be of such inexpensive read-only storage media?  How will
this technology affect the popularity of home computers?  What
features should a home computer have to fully exploit this technology?
How should text be stored on the disks?  What difference would
magneto-optical writable/erasable disks make?  How will this
technology affect ... [end of message -- KIL]

------------------------------

Date: Tue, 20 Nov 84 22:26:16 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Re: Language Simplification,  V2 #157

On Gillam's comments on simplification:

1. In the Southern U.S., there is a raising of the vowels: "pen" becomes
"pin." This results in homophony between the words "pen" and "pin."
Thus, in these dialects, the word "pin" becomes something like "peeun,"
with the vowel raised even more. The lesson is that an ostensible
simplification complicates the system further by requiring a
differentiation between certain phonological forms. This is an instance
of supposed regularity causing complication.

------------------------------

Date: Sun, 18 Nov 84 17:45:34 PST
From: "Dr. Michael G. Dyer" <dyer@UCLA-LOCUS.ARPA>
Subject: what language 'is' (?)


re:  what natural language 'is'

While it's fun to make up criteria and then use those criteria to judge
one natural language as 'superior' to another, or to decide that a given
NL has 'degenerated', etc., I don't really see this approach as leading
anywhere (except, perhaps, for 'phylogenetic' studies of language
'speciation', just as potsherds are examined in archeology for evidence
of cultural contacts...  We could also spend our time deciding which
culture is 'better' by various criteria, e.g. more weapons, less TV,
etc.).

It's also convenient to talk about natural language as if it's something
"on its own".  However, I view this attitude as scientifically
unhealthy, since it leads to an overemphasis on linguistic structure.
Surely the interesting questions about NL concern those cognitive
processes involved in getting from NL to thoughts in memory and back out
again to language.  These processes involve forming models of what the
speaker/listener knows, and applying world knowledge and context.  NL
structure plays only a small part in these overall processes, since the
main ones involve knowledge application, memory interactions, memory
search, inference, etc.

For example, consider the following story:

     "John wanted to see a movie.  He hopped on his bike
     and went to the drugstore and bought a paper.
     Then he went home and called the theater to get the
     exact time."

Now we could have said this any number of ways, e.g.:

     "John paid for a paper at the drugstore.  He'd gotten
     there on his bike.  Later, at home,  he used the number
     in the paper to call the theater,  since he wanted
     to see a movie and needed to know the exact time."

The reason we can handle such diverse versions -- in which the goals and
actions appear in different order -- is that we can RECONSTRUCT John's
complete plan for enjoying a movie from our general knowledge of what's
involved in selecting and getting to a movie.  It looks something like
this:

     enjoy movie
          need to know what's playing
           --> read newspaper (ie one way to find out)
                  need newspaper
                  --> get newspaper
                        possess newspaper
                           need $  to buy it (ie one way to get it)
                        need to be where it's sold
                           need way to get there
                             --> use bike (ie one way to travel)
          need to know time
            --> call theater (ie one way to find out)
                   need to know phone number
                     --> get # out of newspaper

          need to physically watch it
            need to be there
              --> drive there (ie one way to get there)
            need to know how to get there
               etc

We use our pre-existing knowledge (e.g.  of how people get to a movie of
their choice) to help us understand text about such things.  Once we've
formed a conceptual model of the planning involved (from our knowledge
of constraints and enablement on plans and goals), then we can put the
story 'in the right order' in our minds.
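
To make that concrete, here is a minimal sketch of the reconstructed
plan as a structure a program could build and traverse regardless of
the order in which the text mentions the steps.  (Present-day Python,
purely illustrative; the Goal class, its field names, and the exact
enablement links are illustrative assumptions, not any specific
parser's representation.)

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Goal:
        """A goal, an optional plan for achieving it, and enabling subgoals."""
        name: str
        plan: Optional[str] = None          # one known way to achieve the goal
        enablements: List["Goal"] = field(default_factory=list)

    enjoy_movie = Goal("enjoy movie", enablements=[
        Goal("know what's playing", plan="read newspaper", enablements=[
            Goal("possess newspaper", plan="buy it at the drugstore",
                 enablements=[Goal("have money"),
                              Goal("be where it's sold", plan="use bike")])]),
        Goal("know exact time", plan="call theater", enablements=[
            Goal("know phone number", plan="get number from newspaper")]),
        Goal("be at the theater", plan="drive there")])

    def explain(goal, depth=0):
        """Print the plan tree 'in the right order'."""
        line = "  " * depth + goal.name
        if goal.plan:
            line += "  --> " + goal.plan
        print(line)
        for sub in goal.enablements:
            explain(sub, depth + 1)

    explain(enjoy_movie)

The same structure gets instantiated whether the story mentions the
bike trip first or last; the ordering comes from the enablement links,
not from the text.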

In fact, the notion of goals, plans, and enablements should be universal
among all humans (the closest thing to a 'universal grammar', for people
who insist on talking about things in terms of 'grammars').  Given this
fact, EVERY natural language should allow sparse and somewhat
mixed-order renditions of plan-related stories.  Is this a feature,
then, of one or more NATURAL LANGUAGEs, or is it really a feature of
general INTELLIGENCE -- i.e. planning, inference, etc.?

Clearly the interesting problems here are:  how to represent goal/plan
knowledge, how this knowledge is referred to in a given language, and
how these knowledge sources interact to instantiate a representation of
what the reader knows after reading about John's movie trip.

(Of course, other types of text will involve other kinds of conceptual
constructs -- e.g. editorial text involves reasoning and beliefs).

Wittgenstein expressed the insight -- i.e. that natural languages are
fundamentally different from formal languages -- in terms of his notion
of "language games".  He argued that speakers are like the players of a
game, and to the extent that the players know the rules, they can do
all sorts of communication 'tricks' (since they know another player
can use HIS knowledge of the "game" to extract the most appropriate
meaning from an utterance, gesture, text...).  As a result, Wittgenstein
felt it was quite misguided to argue that formal languages are 'better'
because they're unambiguous.

Now this issue is reappearing in a slightly different guise as a number
of ancient natural(?) languages are offered as 'the answer' to our
representational problems, based on the claim that they are unambiguous.
Two favorites currently seem to be Sastric Sanskrit and a Bolivian
language called "Aymara".

(Quote from a news article in the LA Times, Nov. 7, 1984, p. 12:
"...  wisemen constructed the language [Aymara] from scratch, by
logical, premeditated design, as early as 4,000 years ago.")

I suspect ancient and exotic languages are being chosen since fewer
people know enough about them to dispute any claims made.  Of course
this isn't done on purpose:  it's simply that the better known NLs that
get proposed are more quickly discarded since more people will know, or
can find, counter-examples for each claim.

By the way, the kinds of discussions we have here at UCLA on NL are very
different from those I see on AIList.  Instead of arguing about what
language 'is' (i.e.  the definitional approach to science that Minsky and
others have criticized on earlier AILists), we try to represent ideas
(e.g.  "Religion is the opiate of the masses", "self-fulfilling
prophecy", "John congratulated Mary", etc) in terms of abstract
conceptual data structures, where the representation chosen is judged in
terms of its usefulness for inference, parsing, memory search, etc.
Discussions include how a conceptual parser would take such text and map
it into such constructs; how knowledge of these constructs and
inferential processes can aid in the parsing process; how the resulting
instantiated structures would be searched during:  Q/A, advice
giving, paraphrasing, summarization, translation, and so on.
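
For readers who haven't seen this style of work, here is a toy sketch
of what such a conceptual structure might look like for "John
congratulated Mary".  (An ad hoc, frame-like encoding in present-day
Python; the slot names are illustrative assumptions, not the UCLA
group's actual notation.)

    # Hypothetical frame-style encoding, meant only to suggest how a
    # conceptual parser's output could feed inference, Q/A, and paraphrase.
    congratulate_event = {
        "type": "speech-act",
        "actor": "John",
        "recipient": "Mary",
        "content": {"type": "positive-judgment",
                    "about": {"type": "achievement", "actor": "Mary"}},
        "inferences": ["Mary achieved some goal",
                       "John believes the achievement was good",
                       "John communicated that belief to Mary"],
    }

Whether this particular encoding is any good gets judged exactly as
described above: by how well it supports inference, parsing, memory
search, and the rest.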

It's fun to BS about NL, but I wouldn't want my students to think that
what appears on AIList (with a few exceptions) re: NL is the way NL
research should be conducted or specifies what the important research
issues in NL are.

I hope I haven't insulted anyone.  (If I have, then you know who you
are!)  I'm guessing that most readers out there actually agree with me.

------------------------------

Date: Wed, 21 Nov 84 14:02:39 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Intention in Text Interpretation (Berkeley)

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, November 27, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Walter Michaels and Steven Knapp, English
                Department, UC Berkeley

TITLE:          ``Against Theory''

ABSTRACT:       A discussion of the role of intention in the
                interpretation of text.  We argue that linguistic
                meaning is always intentional; that linguistic
                forms have no meaning independent of authorial
                intention; that interpretative disagreements are
                necessarily disagreements about what a particular
                author intended to say; and that recognizing the
                inescapability of intention has fatal consequences
                for all attempts to construct a theory of
                interpretation.

------------------------------

Date: 21 Nov 84 15:24:46 EST
From: Steven.Shafer@CMU-CS-IUS
Subject: Seminar - Cooperative Distributed Problem Solving (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Victor Lesser, from U. Mass., is coming to CMU on Tuesday to present
the AI Seminar.  He will be speaking about AI techniques for use on
distributed systems.  3:30 pm on Tuesday, November 27, in WeH 5409.


COOPERATIVE DISTRIBUTED PROBLEM SOLVING

   This work is part of a new research area that has recently
emerged in AI, called Distributed AI.  The area combines research
issues from distributed processing and AI by focusing on the
development of distributed networks of semi-autonomous nodes that
cooperate interactively to solve a single task.
   Our particular emphasis in this general research area has
been on how to design such problem-solving networks so that
they can function effectively even though processing nodes have
inconsistent and incomplete views of the data bases necessary for
their computations.  An example of the type of application that
this approach is suitable for is a distributed sensor network.
   This lecture will discuss our basic approach called Functionally-
Accurate Cooperative Problem-Solving, the need for sophisticated
network-wide control and its relationship to local node control, and
[end of message -- KIL]

------------------------------

Date: 21 November 1984 1639-EST
From: Cathy Hill@CMU-CS-A
Subject: Seminar - A Shape Recognition Illusion (CMU)

Speaker:  Geoff Hinton and Kevin Lang (CMU)
Title:    A Strange Property of Shape Recognition Networks

Date:     November 27, 1984
Time:     12 noon - 1:30 p.m.
Place:    Adamson Wing in Baker Hall

Abstract: We shall describe a parallel network that is capable of
          recognizing simple shapes in any orientation or position,
          and we will show that networks of this type are liable to
          make a strange kind of error when presented with several
          shapes followed by a backward mask.  The error
          involves perceiving one shape in the position of another.
          Anne Treisman has shown that people make errors of just
          this kind.

------------------------------

End of AIList Digest
********************