[net.ai] AIList Digest V2 #162

LAWS@SRI-AI.ARPA (11/29/84)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest           Wednesday, 28 Nov 1984    Volume 2 : Issue 162

Today's Topics:
  AI Tools - ML to Interlisp Translator & SYMBOLICS 3670 Software,
  Representation - Nonverbal Meaning Representation,
  Databases - Obsolete Books,
  Publicity - New Scientist AI Series,
  Brain Theory - PBS Series on the Brain & Minsky Quote,
  Linguistics - Language Simplification & Natural Language Study,
  Seminars - The Structures of Everyday Life  (MIT) &
    Language Behavior as Distributed Processing  (Stanford) &
    Full Abstraction and Semantic Equivalence  (MIT)
----------------------------------------------------------------------

Date: 27 Nov 84 12:54:44 EST
From: DIETZ@RUTGERS.ARPA
Subject: ML to Interlisp Translator Wanted

I'd like to get a translator from ML to Interlisp.  Does anyone have one?

Paul Dietz (dietz@rutgers)

------------------------------

Date: Tue, 27 Nov 84 12:59:42 pst
From: nick pizzi <pizzi%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: SYMBOLICS 3670 software

     Would anyone happen to know whether or not the SYMBOLICS machines
(specifically, the 3670) have PROLOG and/or C as available language
options?

     Furthermore, does the 3670 have any available software packages
for image processing (especially, symbolic image processing)?

     Thank you in advance for any information you might provide!

                                                Sincerely,
                                                nick pizzi

------------------------------

Date: Wed, 28 Nov 84 09:59:31 pst
From: Douglas young <young%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: Nonverbal meaning

  Is there anyone out there working on completely nonverbal meaning
representations of words and sentences?  Although I have been working
on this problem for a very substantial time, and have reached some
significant solutions (which I expect to have published during 1985 in
the form of a book, whose draft manuscript is already completed, and
in several papers), I have to date been unable to discover anyone else
working on this specific aspect of NLU.  However, it is impossible to
believe that there are no others working on it, and my newly acquired
membership in AIList appears to be an invaluable way of finding out
who is involved and where they are.  If you are working in this area,
or know of anyone who is, please send me a message (network address as
in the header) with a short note of what is being done, and include a
postal address; alternatively, write or call me.

      Douglas Young
      Dept. of Computer Science,
      University of Manitoba,
      Winnipeg,
      Manitoba, R3T 2N2
      Canada                  Tel: (204) 474 8366  (lab)
                                         474 8313  (messages)
 PS: Two original papers describing some of the principles of the
     techniques I employ were published in the medical literature
     during 1982-83, but they are largely out of date in almost every
     respect (except for some of the neurological arguments, which
     form the foundation of the principles), so I am not including
     their references here.

------------------------------

Date: Tue, 27 Nov 84 18:05:24 mst
From: jlg@LANL (Jim Giles)
Subject: obsolete books?

> Sony has recently introduced a portable compact optical disk player.
> I hear they intend to market it as a microcomputer peripheral for
> $300.  I'm not sure what its capacity will be, so I'll estimate it at
> 50 megabytes per side.  That's 25000 ascii coded 8 1/2x11 pages, or
> 1000 compressed page images, per side.  Disks cost about $10, for a
> cost per word orders of magnitude less than books.

The capacity of a normally formatted compact disc (audio people spell it
with a 'c') is about 600 megabytes, not counting the error-correcting
information.  That figure corresponds to about one hour of music sampled
on two 16-bit channels at 44.1 kHz.  Furthermore, some companies
are already demonstrating 'write once' disks with about 500 megabytes
for use as computer peripherals.  I've even seen one proposal for an
erasable disk using magneto-optical technology.
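
The 600-megabyte figure follows directly from those sampling parameters.
A quick sanity check (a sketch in Python, assuming exactly one hour of
audio and the estimates quoted above):

    # Capacity implied by one hour of stereo 16-bit audio at 44.1 kHz.
    sample_rate = 44100              # samples per second, per channel
    channels = 2                     # stereo
    bytes_per_sample = 2             # 16 bits
    seconds = 3600                   # one hour
    audio_bytes = sample_rate * channels * bytes_per_sample * seconds
    print(audio_bytes / 1e6)         # -> 635.04 megabytes, i.e. "about 600"

    # The quoted 50 MB / 25000 page estimate implies 2000 bytes per page:
    print(50e6 / 25000)              # -> 2000.0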

It has already been suggested that very cheap mass storage devices
will soon displace dictionaries, encyclopedias, catalogues, etc.
There has also been talk of software (such as spelling checkers) that
requires very large databases becoming either cheap or public domain.
I think it will be a while before books are replaced, though.  Nobody
wants to carry a video monitor in a briefcase just to catch up on
their favorite science fiction.  Besides, paperback books are still
cheaper than compact discs by a factor of 4 or more.

I'm holding off buying new drives for my home computer for a while.  This
new stuff seems to be worth waiting for.

------------------------------

Date: 27 Nov 84 17:00:07 EST
From: DIETZ@RUTGERS.ARPA
Subject: New Scientist AI Series

The British magazine New Scientist is running a three part series on AI.
The first article, in the Nov. 15 issue, has the title "AI is stark naked
from the ankles up".  It has some very interesting quotes from John McCarthy,
W. Bledsoe, Lewis Branscomb at IBM and others.  The article is critical
of the way AI has been oversold, of the quality (too low) and quantity
(too little) of AI research, and of the US reaction to the Japanese new
generation project, especially Feigenbaum and McCorduck's book.

------------------------------

Date: Wed 28 Nov 84 11:53:16-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: PBS Series on the Brain

The PBS series on the brain has focussed each week on specific neural
systems and their effects on behavior.  The last show concentrated on
hearing and speech centers, and had a particularly enlightening
example.  It showed a lawyer who had suffered damage to his hearing or
linguistic centers.  (Sorry, I don't remember exactly where.)  He
still had a normal vocabulary and could understand most sentences,
although slowly and with great difficulty.  He was unable to parse or
store function words, however.  When asked "A leopard was killed by a
lion.  Which died?", he was unable to answer.  (He also knew that he
had no way of determining the answer.)  When asked "My uncle's sister
..., is it a man or a woman?" he was similarly at a loss.

Another example was a woman who could not recognize faces, even when
she was presented with a picture of her interviewer and told who it
was.  She could describe the face in detail, but there was no flash
of recognition.  She lives in a world of strangers.

A previous show described various forms of amnesia, and the role of the
hippocampus in determining which events are to be stored in long-term
memory.  Or rather, in the conscious LTM.  One subject was repeatedly
trained on the Tower of Hanoi puzzle; each time it was completely
"new" to him, but he retained strategy skills learned in each session.

The question was raised why no one can remember events prior to the
age of five.  I suppose that we create a mental vocabulary during the
first years, and later record our experiences in terms of that
vocabulary.  (It would be awkward, wouldn't it, if the vocabulary
changed as we got older?  Memories would decay as we lost the ability
to decode them.)  This suggests that we might be unable to learn
concepts such as gravity, volume, and cooperation if we do not learn
them early enough.  I'm sure there must be evidence of such phenomena.

The last two shows in the series will be shown Saturday (in the San
Francisco area).

                                        -- Ken Laws

------------------------------

Date: Mon, 26 Nov 1984  03:27 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Re: Quote, V2 #161

I certainly have suggested that the human brain is a kludge, in the
sense that it consists of many complex mechanisms, accumulated over
the course of evolution, a lot of which are for correcting the bugs in
others.

However, this is not a useful quotation for public use, because
outside of engineering, the word "kludge" is not in the general
language.  There isn't even any synonym for it.  The closest phrase
might have been "Rube Goldberg device" -- but that, too, is falling
out of use.  Anyway, a Rube Goldberg device did not have the right
sense, because that cartoonist always drew machines which were
complicated serial devices with no loops and, hence, no way to correct
bugs.  My impression is that a "kludge" is a device that usually
works, not in accord with neat principles but because all or most of
its bugs have been fixed by adding ad hoc patches and accessories.

By the way, the general language has no term for "bug" either.
Programmers mean by "bug" the mechanism responsible for an error,
rather than the surface error itself.  The lack of an adequate word
for this suggests that our general culture does not consider it an
important concept.  It is no wonder, then, that our culture has so
many bugs.

------------------------------

Date: Mon, 26 Nov 84  8:20:27 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Language Simplification

On Frawley on Gillam on simplification:

You needn't go so far south for pen/pin homophony; it occurs in certain
midwestern dialects and, I believe, even in New Jersey, as merger pure and
simple.  And of course you are talking not about homophony but about shifted
contrast such that `pin' of your dialect is "homophonous" with `pen' of the
southern dialect.  (Is English `showed' "homophonous" with the French word
for `hot'?)

Phonological systems do change in the ways that you deny, as
witness for example the falling together of many vowels to i in modern
Greek (classical i, ei, oi, y, long e (eta), yi all become high front i),
and the merger of several Indo-European vowels in Sanskrit a.

I have not seen Gillam's comments (just joined the list), so let me say
too that languages do preserve systematic contrasts while shifting their
location, and that the observation about southern dialects of US English
is correct.  Whether the result of change is merger or relocated contrast
depends on sociological as well as physiological and psychoacoustic factors,
and no simple blanket statement fits all cases.

        Bruce Nevin

------------------------------

Date: Mon, 26 Nov 1984  03:12 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Re: Natural Language Study, V2 #160


Bravo, Dyer!  As you suggest, there is indeed much to learn from the
study of natural language -- but not about "natural language itself";
we can learn what kinds of manipulations and processes occur in the
under-mind with enough frequency and significance that it turns out to
be useful to signify them with surface language features.

For example, why do all languages have nouns and adjectives?  Because
the brain has some way to aggregate the aspects of "objects" and
retrieve these constellations of partial states of mind.  Why
adjectives?  To change particular properties of noun-objects.  Why put
adjectives near the nouns?  So that it is easy to recognize which
properties of what to modify.  Now, if we consider which surface
relations are easiest for machinery to recognize, the nearness of
words is surely among the easiest of all -- so we can expect that
human societies will find an important use for it.  Thus, if
adjective-noun relations are "universal" in human languages, it need
not be because of any mysterious syntactic apriori built into some
innate language-organ; it could be because that underlying cognitive
operation -- of modifying part of a representation without wrecking
the rest of it -- is a "cognitive universal".  Similarly, the study of
how pronouns work will give us clues about how we link together
different frames, scripts, plans, etc.
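
To see how cheaply machinery can exploit nearness, consider a toy
recognizer, sketched here in Python with a made-up four-word lexicon
(an illustration of the point only, not a serious proposal): adjacency
alone suffices to bind each adjective to the noun it modifies.

    # Toy sketch: adjacency alone binds adjectives to their nouns.
    LEXICON = {"big": "ADJ", "red": "ADJ", "dog": "NOUN", "barn": "NOUN"}

    def bind_modifiers(words):
        """Attach each pending adjective to the next noun encountered."""
        pairs, pending = [], []
        for w in words:
            tag = LEXICON.get(w)
            if tag == "ADJ":
                pending.append(w)
            elif tag == "NOUN":
                pairs.extend((adj, w) for adj in pending)
                pending = []
        return pairs

    print(bind_modifiers("big red dog".split()))
    # [('big', 'dog'), ('red', 'dog')] -- no grammar needed, only nearness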

All that is very fine.  We should indeed study languages.  But to
"define" them is wrong.  You define the things YOU invent; you study
the things that already exist.  Then, as in Mathematics, you can also
study the things you define.  But when one confuses
the two situations, as in the subjects of generative linguistics
or linguistic competence -- ah, a mind is a terrible thing to waste,
as today's natural language puts it.

------------------------------

Date: 27 Nov 1984 11:13-PST (Tuesday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Natural Language


        The reason it is important to study natural languages "on
their own" and to understand language degradation etc. is that
language influences how its speakers think.  This idea, commonly known
as the "Whorf hypothesis," has its correlate in computer languages and
in potential interlinguas.  The usual examples include AmerIndian
languages, which have little concept of time.
        If you have only Fortran to program in, many elegant
programming solutions simply will not present themselves.  The
creation of higher-level languages allows the programmer to use
complex data structures such as 'predicates' and 'lists' instead of
addresses.
        These higher-level data structures correspond to the concepts
available in a natural language.  A primitive language that exists
mainly for simple communication will not permit the kind of thinking
(programming) that a language with "higher-level" concepts (data
structures) does.
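
        The contrast can be made concrete.  Below is a crude sketch
(in Python, with invented details) of the same three-element list
built twice: once at the "address" level, where a cell is just a pair
of slots in flat memory, and once with the concept "list" available
directly.

    # Low level: a cons cell is two slots in a flat memory array,
    # and a "list" is only an address.  (An invented toy scheme.)
    memory, free = [None] * 100, 0

    def cons(value, next_addr):
        """Allocate a cell holding (value, next_addr); return its address."""
        global free
        memory[free], memory[free + 1] = value, next_addr
        free += 2
        return free - 2

    lst = cons(1, cons(2, cons(3, -1)))   # the list (1 2 3), built by hand

    # High level: the concept "list" is available directly.
    lst2 = [1, 2, 3]

The second way of thinking is simply unavailable to someone whose
language offers only the first.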
        In the same way that a conceptually rich language (like
Sanskrit) allows greater expression than Haitian Creole does, and
LISP allows greater expression than assembly does, Sastric Sanskrit
functions as an ideal interlingua because of the nature of its
high-level data structures (i.e., it is formal and yet allows the
expression of poetry and metaphor).  And in the same way that one
programming language is chosen over another for an application,
Sastric Sanskrit should be chosen (or at least evaluated) by those
doing work in Machine Translation.

Rick Briggs

------------------------------

Date: 25 Nov 1984  22:38 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - The Structures of Everyday Life  (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                    The Structures of Everyday Life

                              Phil Agre

             Wednesday, November 28; 4:00pm  8th floor playroom



Computation can provide an observation vocabulary for gathering
introspective evidence about all manner of everyday reasoning.  Although
this evidence is anecdotal and not scientific in any traditional sense, it
can provide strong constraints on the design of the central systems of
mind.  The method is cyclical: attempts to design mechanisms to account
for the phenomenology of everyday activity suggest new classes of episodes
to look out for, and puzzling anecdotes show up weaknesses in designs and
suggest improvements.

I have been applying this method particularly to the study of routines,
the frequently repeated and phenomenologically automatic rituals of which
most of daily life is made.  Some common routines in the lives of people
like me include choosing the day's clothes, making breakfast, selecting a
turnstile in the subway, listening to a familiar piece of music, beginning
and ending conversations, picking up a coffee mug, and opening the day's
mail.  It is not reasonable to view a routine as an automated series of
actions, since people understand what they're doing when carrying out
routine actions at least well enough to recover sensibly if things don't
proceed in a routine way.

I propose to account for the phenomenology of the development of mental
routines in terms of the different stages of processing that arise in the
interaction of a few fairly simple mechanisms.  These stages appear vaguely
to recapitulate the stages of development of cognition in children.

This talk corresponds roughly to my thesis proposal.



COMING SOON: Jonathan Rees [Dec 5], Alan Bawden [Dec 12]

------------------------------

Date: Tue, 27 Nov 1984  23:52 PST
From: KIPARSKY@SU-CSLI.ARPA
Subject: Seminar - Language Behavior as Distributed Processing 
         (Stanford)


Jeff Elman (Department of Linguistics, UCSD)
"Parallel  distributed  processing:   New  explanations  for
                        language behavior"

        Dec. 11, 1984, 11:00 A.M.
        Stanford University, Ventura Hall Conference Room

Abstract:

Many students of human behavior have assumed that it is fruitful to
think of the brain as a very powerful digital computer.  This metaphor
has had an enormous impact on explanations of language behavior.  In
this talk I will argue that the metaphor is incorrect, and that a
better understanding of language is gained by modelling language
behavior with parallel distributed processing (PDP) systems.  These
systems offer a more appropriate set of computational operations,
provide richer insights into behavior, and have greater biological
plausibility.

I will focus on three specific areas in which PDP models offer new
explanations for language behavior: (1) the ability to simulate
rule-guided behavior without explicit rules; (2) a mechanism for
analogical behavior; and (3) explanations for the effect of context on
interpretation and for dealing with variability in speech.

Results from a PDP model of speech perception will be presented.
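
[Point (1) is easy to illustrate in miniature.  The sketch below
(Python, with a made-up pattern set; it is not Elman's model) trains a
single linear unit by the delta rule.  The unit comes to behave as if
it followed the rule "respond to the first feature" and generalizes to
an unseen pattern, yet no such rule is stored anywhere; the regularity
lives only in the weights.]

    # Delta-rule training of one linear unit on a toy pattern set.
    # The hidden regularity: the target equals the first feature.
    patterns = [([1, 0, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 0), ([0, 0, 1], 0)]
    w = [0.0, 0.0, 0.0]
    rate = 0.1

    for _ in range(200):                       # repeated presentations
        for x, target in patterns:
            out = sum(wi * xi for wi, xi in zip(w, x))
            for i in range(3):                 # delta rule: nudge weights
                w[i] += rate * (target - out) * x[i]

    # Probe with the unseen pattern [1, 0, 0]:
    probe = sum(wi * xi for wi, xi in zip(w, [1, 0, 0]))
    print(round(probe))                        # -> 1: rule-like, with no rule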

------------------------------

Date: 27 November 1984 09:21-EST
From: Arline H. Benford <AH @ MIT-MC>
Subject: Seminar - Full Abstraction and Semantic Equivalence  (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]


       APPLIED MATHEMATICS AND THEORY OF COMPUTATION COLLOQUIUM


                  "FULL ABSTRACTION AND SEMANTIC EQUIVALENCE"

                                Ketan Mulmuley
                          Carnegie Mellon University


                       DATE:  TUESDAY, DECEMBER 4, 1984
                       TIME:  3:30PM  REFRESHMENTS
                              4:00PM  LECTURE
                      PLACE:  2-338

A denotational semantics is said to be fully abstract if denotations of two
language constructs are equal whenever these constructs are operationally
equivalent in all programming contexts and conversely.  Plotkin showed that the
classical model of continuous functions was not a fully abstract model of typed
lambda calculus with recursion.  We show that it is possible to construct a
fully abstract model of typed lambda calculus as a submodel of the classical
lattice theoretic model.
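
In symbols (a standard formulation, not taken from the abstract), full
abstraction says that denotational equality coincides with operational
equivalence in every program context:

    % Full abstraction in the standard form (our notation, not the
    % speaker's): denotational equality coincides with operational
    % equivalence under all program contexts C[-].
    \[
      [\![ M ]\!] = [\![ N ]\!]
      \quad\Longleftrightarrow\quad
      \forall C[\,].\ \bigl( C[M] \Downarrow v \iff C[N] \Downarrow v \bigr)
    \]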

The existence of "inclusive" predicates on semantic domains plays a key
role in establishing the semantic equivalence of operational and
denotational semantics.  We give a mechanizable theory for proving such
existence.  In fact, a theorem prover has been implemented which can
almost automatically prove the existence of most of the inclusive
predicates that arise in practice.


HOST:  Professor Michael Sipser

------------------------------

End of AIList Digest
********************