[net.ai] AIList Digest V3 #149

AIList-REQUEST@SRI-AI.ARPA (AIList Moderator Kenneth Laws) (10/19/85)

AIList Digest            Friday, 18 Oct 1985      Volume 3 : Issue 149

Today's Topics:
  Projects - University of Aberdeen & CSLI,
  Literature - New Complexity Journal,
  AI Tools - Lisp vs. Prolog,
  Opinion - AI Hype & Scaling Up,
  Cognition & Logic - Modus Ponens,
  Humor - Dognition

----------------------------------------------------------------------

Date: Thu 17 Oct 85 12:44:41-PDT
From: Derek Sleeman <SLEEMAN@SUMEX-AIM.ARPA>
Subject: University of Aberdeen Program

                UNIVERSITY of ABERDEEN
                Department of Computing Science


The University of Aberdeen is now making a sizeable commitment to
build a research group in Intelligent Systems/Cognitive Science.
Following the early work of Ted Elcock and his co-workers, the
research work of the Department has been effectively restricted to
databases. However, with the recent appointment of Derek Sleeman
to the faculty from summer 1986, it is anticipated that a sizeable
activity will be (re)established in AI.

        In particular, we are anxious to have a number of visitors at
any time, and funds have been set aside for this.  We would be
particularly interested to hear from people wishing to spend sabbaticals,
short-term research fellowships, etc. with us.

        Please contact Derek Sleeman at 415 497 3257 or SLEEMAN@SUMEX
for further details.

------------------------------

Date: Wed 16 Oct 85 17:12:46-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Projects

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                              CSLI PROJECTS

   The following is a list of CSLI projects and their coordinators.

    AFT Lexical Representation Theory.          Julius Moravcsik
        (AFT stands for Aitiational Frame Theory)
    Computational Models of Spoken Language.    Meg Withgott
    Discourse, Intention, and Action.           Phil Cohen
    Embedded Computation Group.                 Brian Smith (3 sub groups)
        sub 1: Research on Situated Automata.   Stan Rosenschein
        sub 2: Semantically Rational
               Computer Languages.              Curtis Abbott
        sub 3: Representation and Reasoning.    Brian Smith
    Finite State Morphology.                    Lauri Karttunen
    Foundations of Document Preparation.        David Levy
    Foundations of Grammar.                     Lauri Karttunen
    Grammatical Theory and Discourse
        Structures.                             Joan Bresnan
    Head-driven Phrase Structure Grammar.       Ivan Sag and Thomas Wasow
    Lexical Project.                            Annie Zaenen
    Linguistic Approaches to Computer
        Languages.                              Hans Uszkoreit
    Phonology and Phonetics.                    Paul Kiparsky
    Rational Agency.                            Michael Bratman
    Semantics of Computer Language.             Terry Winograd
    Situation Theory and Situation
        Semantics (STASS).                      Jon Barwise
    Visual Communication.                       Sandy Pentland

   In addition, there are some interproject working groups.  These
   include:

    Situated Engine Company.                    Jon Barwise and Brian Smith
    Representation and Modelling.               Brian Smith and Terry Winograd

------------------------------

Date: Wed 16 Oct 85 09:56:32-EDT
From: Susan A. Maser <MASER@COLUMBIA-20.ARPA>
Subject: NEW JOURNAL


                        JOURNAL OF COMPLEXITY

                            Academic Press

               Editor: J.F. Traub, Columbia University


                       FOUNDING EDITORIAL BOARD

                    K. Arrow, Stanford University
            G. Debreu, University of California, Berkeley
                    Z. Galil, Columbia University
                 L. Hurwicz, University of Minnesota
                J. Kadane, Carnegie-Mellon University
             R. Karp, University of California, Berkeley
                        S. Kirkpatrick, I.B.M.
                H.T. Kung, Carnegie-Mellon University
          M. Rabin, Harvard University and Hebrew University
             S. Smale, University of California, Berkeley
                         S. Winograd, I.B.M.
               S. Wolfram, Institute for Advanced Study
    H. Wozniakowski, Columbia University and University of Warsaw


 YOU ARE INVITED TO SUBMIT YOUR MAJOR RESEARCH PAPERS TO THE JOURNAL.
                  See below for further information.

Publication Information and Rates:
Volume 1 (1985), 2 issues, annual institutional subscription rates:
In the US and Canada: $60
All other countries: $68
Volume 2 (1986), 4 issues, annual institutional subscription rates:
In the US and Canada: $80
All other countries: $93

Send your subscription orders to:   Academic Press, Inc.
                                    1250 Sixth Avenue
                                    San Diego, CA 92101
                                    (619) 230-1840


Contents of Volume 1, Issue 1:

"A 71/60 Theorem for Bin Packing" by Michael R. Garey & David S. Johnson

"Monte-Carlo Algorithms for the Planar Multiterminal Network
 Reliability Problem" by Richard M. Karp & Michael Luby

"Memory Requirements for Balanced Computer Architectures" by H.T. Kung

"Optimal Algorithms for Image Understanding: Current Status and
 Future Plans" by D. Lee

"Approximation in a Continuous Model of Computing" by K. Mount & S. Reiter

"Quasi-GCD Computations" by Arnold Schonhage

"Complexity of Approximately Solved Problems" by J.F. Traub

"Average Case Optimality" by G.W. Wasilkowski

"A Survey of Information-Based Complexity" by H. Wozniakowski



                         SUBMISSION OF PAPERS

        The JOURNAL OF COMPLEXITY is a multidisciplinary journal which
covers complexity as broadly conceived and which publishes research
papers containing substantial mathematical results.

        In the area of computational complexity the focus is on
problems which are approximately solved and for which optimal
algorithms or lower bound results are available.  Papers which provide
major new algorithms or make important progress on upper bounds are
also welcome.  Papers which present average case or probabilistic
analyses are especially solicited.  Of particular interest are papers
involving distributed systems or parallel computers for which only
approximate solutions are available.

        The following is a partial list of topics for which
computational complexity results are of interest: applied mathematics,
approximate solution of hard problems, approximation theory, control
theory, decision theory, design of experiments, distributed computation,
image understanding, information theory, mathematical economics,
numerical analysis, parallel computation, prediction and estimation,
remote sensing, seismology, statistics, stochastic scheduling.

        In addition to computational complexity the following are
among the other complexity topics of interest: physical limits of
computation; chaotic behavior and strange attractors; complexity in
biological, physical, or artificial systems.

        Although the emphasis is on research papers, surveys or
bibliographies of special merit may also be published.

To receive a more complete set of authors' instructions (with format
specifications), or to submit a manuscript (four copies please),
write to:
                          J.F. Traub, Editor
                        JOURNAL OF COMPLEXITY
                    Department of Computer Science
                    450  Computer Science Building
                         Columbia University
                       New York, New York 10027

------------------------------

Date: Tue, 15 Oct 85 22:15 EDT
From: Hewitt@MIT-MC.ARPA
Subject: Lisp vs. Prolog (reply to Pereira)

I would like to reply to Fernando Pereira's message in which he wrote:

    It is a FACT that no practical Prolog system is written entirely
    in Lisp: Common, Inter or any other. Fast Prolog systems have
    been written for Lisp machines (Symbolics, Xerox, LMI) but their
    performance depends crucially on major microcode support (so
    much so that the Symbolics implementation, for example, requires
    additional microstore hardware to run Prolog). The reason for
    this is simple: No Lisp (nor C, for that matter...) provides the
    low-level tagged-pointer and stack operations that are critical
    to Prolog performance.

It seems to me that the above argument about Prolog not REALLY being
implemented in Lisp is just a quibble.  Lisp implementations from the
beginning have provided primitive procedures to manipulate the likes
of pointers, parts of pointers, invisible pointers, structures, and
stack frames.  Such primitive procedures are entirely within the spirit
and practice of Lisp.  Thus it is not surprising to see primitive
procedures in the Lisp implementations of interpreters and compilers
for Lisp, Micro-Planner, Pascal, Fortran, and Prolog.  Before now no
one wanted to claim that the interpreters and compilers for these
other languages were not written in "Lisp".  What changed?

On the other hand primitive procedures to manipulate pointers, parts
of pointers, invisible pointers, structures, and stack frames are
certainly NOT part of Prolog!  In FACT no one in the Prolog community
even professes to believe that they could EVER construct a
commercially viable (i.e. useful for applications) Common Lisp in
Prolog.
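
  [To make concrete what is at stake here, below is a minimal sketch in
  Python (the names are purely illustrative; this is a sketch of the
  technique, not of any particular system) of the Prolog inner loop
  Pereira refers to: dereferencing variable cells and trailing bindings
  so that they can be undone on backtracking.

      class Var:
          def __init__(self):
              self.ref = None                 # unbound until unified

      def deref(t):
          # Chase variable-to-variable bindings ("invisible pointers").
          while isinstance(t, Var) and t.ref is not None:
              t = t.ref
          return t

      def unify(a, b, trail):
          a, b = deref(a), deref(b)
          if a is b:
              return True
          if isinstance(a, Var):
              a.ref = b
              trail.append(a)                 # record binding for undo
              return True
          if isinstance(b, Var):
              return unify(b, a, trail)
          if isinstance(a, tuple) and isinstance(b, tuple) \
                 and len(a) == len(b):
              return all(unify(x, y, trail) for x, y in zip(a, b))
          return a == b                       # constants must match

      def undo(trail, mark):
          # Unwind bindings to a choice point (the stack operation).
          while len(trail) > mark:
              trail.pop().ref = None

      # Unify f(X, b) with f(a, Y), then backtrack.
      X, Y = Var(), Var()
      trail = []
      print(unify(("f", X, "b"), ("f", "a", Y), trail))  # True: X=a, Y=b
      undo(trail, 0)                          # X and Y are unbound again

  Every step above could equally be written with portable Lisp
  primitives; the microcode support at issue serves to make the
  dereference, bind, and trail operations fast, not to make them
  possible.]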

I certainly realize that interesting research has been done using
Planner-like and Prolog-like languages.  For example Terry Winograd
implemented a robot world simulation with limited natural language
interaction using Micro-Planner (the implementation by Sussman,
Winograd, and Charniak of the design that I published in IJCAI-69).
Subsequently Fernando did some interesting natural language research
using Prolog.

My chief concern is that some AILIST readers might be misled by
the recent spate of publicity about the "triumph" of Prolog over Lisp.
I simply want to point out that the emperor has no clothes.

------------------------------

Date: Thu, 10 Oct 85 11:03:00 GMT
From: gcj%qmc-ori.uucp@ucl-cs.arpa
Subject: AI hype

A comment from Vol 3 #128:
``Since AI, by definition, seeks to replicate areas of human cognitive
  competence...''
This should perhaps be read in the context of the general discussion
which has been taking place about `hype', but it is still slightly off
the mark in my opinion.
I suppose this all rests on what one means by human cognitive competence.
The thought processes which make us human are far removed from the cold
logic of algorithms, which are the basis for *all* computer software, AI
or otherwise.  There is an element in all human cognitive processes which
derives from the emotional part of our psyche.  We reach decisions not
only because we `know' that they are right, but also because we `feel'
them to be correct.  I really think that AI must be seen as an important
extension of the thinking process, a way of augmenting an expert's scope.

Gordon Joly     (now gcj%qmc-ori@ucl-cs.arpa)
                (formerly gcj%edxa@ucl-cs.arpa)

------------------------------

Date: Fri 18 Oct 85 10:13:10-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Scaling up AI solutions


>From: Gary Martins <GARY@SRI-CSL.ARPA>
>Subject: Scaling Up

>Mr. Wyland seems to think that finding problem solutions which "scale up"
>is a matter of manufacturing convenience, or something like that.  What
>he seems to overlook is that the property of scaling up (to realistic
>performance and behavior) is normally OUR ONLY GUARANTEE THAT THE
>"SOLUTION" DOES IN FACT EMBODY A CORRECT SET OF PRINCIPLES.  [...]


The problem of "scaling up" is not that our solutions do not work
in the real world, but that we do not have general, universal
solutions applicable to all AI problems.  This is because we only
understand *parts* of the problem at present.  We can design
solutions for the parts we understand, but cannot design the
universal solution until we understand *all* of the problem.
Binary vision modules provide sufficient power to be useful in
many robot assembly applications, and simple word recognizers
provide enough power to be useful in many speech control
applications.  These are useful, real-world solutions, but they are
not *universal* ones: they do not "scale up" into solutions to all
problems of robot assembly or of speech understanding, respectively.

I agree with you that scientific theories are proven in the lab
(or on the job) with real-world data.  The proof of the
engineering is in the working.  It is just that we have not yet
reached the level of understanding of intelligence that
Newton's Laws provided for mechanics.

Dave Wyland

------------------------------

Date: Tue 15 Oct 85 13:48:28-PDT
From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
Subject: modus ponens

Seems to me that McGee is the one guilty of faulty logic.  Consider the
following example:

    Suppose a class consists of three people, a 6 ft boy (Tom), a 5 ft girl
(Jane), and a 4 ft boy (John).  Do you believe the following statements?

    (1) If the tallest person in the class is a boy, then if the tallest is
        not Tom, the tallest will be John.
    (2) A boy is the tallest person in the class.
    (3) If the tallest person in the class is not Tom, then the tallest
        person in the class will be John.

 How many readers believe (1) and (2) imply the truth of (3)?
                                                 -  Mike
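
  [For readers who want to check the propositional reading mechanically:
  under the material conditional, (1) and (2) entail (3) outright, by
  modus ponens itself.  The intuitive pull against (3) is that if the
  tallest is not Tom, it should be Jane, at 5 ft, rather than John.
  A minimal sketch in Python (the variable names are illustrative) that
  searches for a propositional counterexample and finds none:

      from itertools import product

      def implies(p, q):
          # Material conditional: false only when p holds and q fails.
          return (not p) or q

      # B: "the tallest is a boy"; T: "the tallest is Tom";
      # J: "the tallest is John".
      counterexamples = [
          (B, T, J)
          for B, T, J in product([False, True], repeat=3)
          if implies(B, implies(not T, J))    # statement (1) holds
          and B                               # statement (2) holds
          and not implies(not T, J)           # statement (3) fails
      ]
      print(counterexamples)                  # prints []: none exist

  No assignment qualifies, so any escape must lie in how the English
  conditional differs from the material one.]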

------------------------------

Date: Thu, 17 Oct 85 21:22:26 pdt
From: cottrell@nprdc.arpa (Gary Cottrell)
Subject: Seminar - Parallel Dog Processing


                                       SEMINAR

                              Parallel Dog Processing:
                   Explorations in the Nanostructure of Dognition

                                Garrison W. Cottrell
                              Department of Dog Science
                Condominium Community College of Southern California


               Recent advances in neural network modelling have led to  its
          application  to  increasingly  more trivial domains.  A prominent
          example of this line of research has  been  the  creation  of  an
          entirely new discipline, Dognitive Science[1], bringing  together
          the  insights  of  the  previously  disparate fields of obedience
          training, letter carrying, and vivisection on such questions  as,
          "Why  are  dogs  so  dense?"  or,  "How many dogs does it take to
          change a lightbulb?"[2]

               This talk will focus on the first question.   Early  results
          suggest   that  the  answer  lies  in  the  fact  that  most  dog
          information processing occurs in their brains.   Converging  data
          from various fields (see, for example, "A vivisectionist approach
          to dog sense manipulation", Seligman, 1985) have shown that  this
          "wetware"  is  composed  of  a  massive  number  of  slow,  noisy
          switching elements, that are  too  highly  connected  to  form  a
          proper  circuit.  Further, they appear to be all trying to go off
          at the same time like  popcorn,  rather  than  proceeding  in  an
          orderly fashion.  Thus it is no surprise to science that they are
          dumb beasts.

               Further  impedance  to   intelligent   behavior   has   been
          discovered  by  learning  researchers.   They have found that the
          connections between the elements have  little  weights  on  them,
          slowing   them   down  even  more  and  interfering  with  normal
          processing. Indeed, as the dog grows, so do these weights,  until
          the processing elements are overloaded.  Thus it is now clear why
          you can't teach an old dog new  tricks,  and  also  explains  why
          elderly  dogs  tend  to  hang their heads.  Experience with young
          dogs appears to bear this out.  They seem  to  have  very  little
          weight  in  their  brains,  and  their behavior is thus much more
          laissez faire than older dogs.

               We have  applied  these  constraints  to  a  neural  network
          learning  model  of  the dog brain.  To model the noisy signal of
          the actual dog neurons, the units of the model are restricted  to
          communicate by barking to one another.  As these barks are passed
          from one unit to another, the weights on the units are  increased
          by  an amount proportional to the loudness of the bark.  Hence we
          term this learning mechanism bark propagation.  Since the weights
          only  increase,  just  as  in  the  normal  dog, at asymptote the
          network has only one stable state, which we  term  the  dead  dog
          state.   Our model is validated by the fact that many dogs appear
          to achieve this state while still breathing.  We will demonstrate
          a live simulation of our model at the talk.

          ____________________
             [1]A flood of researchers finding Cognitive Science  too  hard
          are switching to this exciting new area.  It appears that trivial
          results in this unknown field will beget journal papers and  TR's
          for several years before funding agencies and reviewers catch on.
             [2]Questions from the Philosophy of dognitive science (dogmat-
          ics),  such  as  "If a dog barks in the condo complex and I'm not
          there to hear it, why do the neighbors claim it makes  a  sound?"
          are beyond the scope of this talk.
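
          [For the dognitively inclined, a minimal simulation sketch in
          Python of bark propagation as described above; every parameter
          here is, of course, hypothetical.

              import random

              # Bark propagation: weights only ever increase, by an
              # amount proportional to the loudness of each bark, until
              # every unit saturates in the one stable state (the dead
              # dog state).

              N = 5                     # units (dog neurons)
              SATURATION = 10.0         # weight at which a unit gives up
              weights = [0.0] * N

              barks = 0
              while any(w < SATURATION for w in weights):
                  barks += 1
                  loudness = random.uniform(0.5, 2.0)   # noise, no signal
                  barker = random.randrange(N)          # popcorn firing
                  for unit in range(N):
                      if unit != barker:
                          weights[unit] += 0.1 * loudness

              print("dead dog state reached after", barks, "barks")

          At asymptote the network is, as promised, perfectly stable.]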

------------------------------

End of AIList Digest
********************