[mod.ai] AIList Digest V4 #3

AIList-REQUEST@SRI-AI.ARPA (AIList Moderator Kenneth Laws) (01/12/86)

AIList Digest            Sunday, 12 Jan 1986        Volume 4 : Issue 3

Today's Topics:
  Bindings - AI-Related Lists,
  Definition - Paradigm,
  Logic - New CSLI Reports,
  Reviews - Spang Robinson Report 2/1 &
    Rational Agency Seminars (CSLI)

----------------------------------------------------------------------

Date: Fri 10 Jan 86 12:18:09-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: AI-Related Lists


[...]

To add to your list of AIList-related lists, Info-1100@SUMEX and
Bug-1100@SUMEX are distribution lists (DLs) concerning the Xerox 1100-series
Lisp machines and Interlisp, and Info-TI-Explorer@SUMEX and
Bug-TI-Explorer@SUMEX are DLs concerning the TI Explorers and associated
software.

--Christopher

------------------------------

Date: Wed, 8 Jan 86 16:38:34 EST
From: Bruce Nevin <bnevin@bbncch.ARPA>
Subject: paradigm

The term paradigm was specialized in philosophy of science by Thomas Kuhn in
his 1962 book _The Structure of Scientific Revolutions_ and subsequent
works.  I would question whether AI is a mature enough
field to have a paradigm in the sense that Kuhn intends for a mature science.
Instead, there appears to be a fair selection of more or less divergent
examples/models/agendas for each area of investigation.  Many of these are
associated with the more prominent investigators in AI.


        Bruce Nevin
        bn@bbncch.arpa

        BBN Communications
        33 Moulton Street
        Cambridge, MA 02238
        (617) 497-3992

[Disclaimer:  my opinions may reflect those of many, but no one else
need take responsibility for them, including my employer.]

------------------------------

Date: Wed, 8 Jan 1986  19:37 EST
From: MINSKY%OZ.AI.MIT.EDU@MC.LCS.MIT.EDU
Subject: AIList Digest   V4 #2

about "paradigm" -- the dictionary is out of date because this word
now almost universally refers to the notion in Thomas Kuhn's
"Structure of Scientific Revolutions."  It seems to mean powerful and
influential idea, or something.

------------------------------

Date: Thu, 9 Jan 86 11:04:40 GMT
From: Mmaccall%cs.ucl.ac.uk@cs.ucl.ac.uk
Subject: Re: AI Paradigm

An approximate meaning for the word `paradigm' is `template'.

Gordon Joly
gcj%qmc-ori@ucl-cs.arpa

------------------------------

Date: Fri, 10 Jan 86 16:53:46 GMT
From: Mmaccall%cs.ucl.ac.uk@cs.ucl.ac.uk
Subject: Re: AI Paradigm

As an afterthought: the place where I first saw the term "paradigm"
was in "Games People Play" by Eric Berne. There he has a model of the
(transactional) relationship between two people, each with the three states
parent-adult-child. The two sets of states are then put side by side, with
parent above adult and adult above child, each state represented by a circle.
Lines are drawn to indicate which relationships are active in a given "game".
The Chambers 20th Century Dictionary, as well as the Random House, gives
the notion of "side by side". I hope this has some meaning for the "AI
Paradigm"!

Gordon Joly,
gcj%qmc-ori@ucl-cs.arpa

------------------------------

Date: Thu 9 Jan 86 12:09:33-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: Paradigm

Your dictionary is correct about "paradigm".   This word has been used
extensively in the AI literature in an incorrect way.  People use it
to mean "methodology" or "school of thought" or some such.
David

------------------------------

Date: Thu 9 Jan 86 15:29:34-PST
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: ai paradigm?

        If you have a paradigm, there's always a chance that you'll get a
paradigm shift, in which case people will fund your research for the next
20 years. On the other hand, if you say your example shifts, they'll think
you're fudging your data.

                        Mike

------------------------------

Date: Wed 8 Jan 86 16:53:32-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: New CSLI Reports on Logic


                            NEW CSLI REPORTS

     Report No. CSLI-85-41, ``Possible-world Semantics for Autoepistemic
   Logic'' by Robert C. Moore and Report No.  CSLI-85-42, ``Deduction
   with Many-Sorted Rewrite'' by Jose Meseguer and Joseph A. Goguen, have
   just been published.  These reports may be obtained by writing to
   Trudy Vizmanos, CSLI, Ventura Hall, Stanford, CA 94305 or
   Trudy@SU-CSLI.

------------------------------

Date: Fri, 10 Jan 86 17:28:42 cst
From: Laurence Leff <leff%smu.csnet@CSNET-RELAY.ARPA>
Subject: Spang Robinson Report, Volume 2 No 1


Summary of Spang Robinson Report, Volume 2 Number 1, January 1986
featuring AI Hardware

Vendors state that the biggest problem in marketing AI hardware
is educating both internal people and the marketplace.

An interview with a gentleman who evaluated AI-type machines for use
in developing software for silicon compilation research at Philips
Labs.

Discussion of various ways to enhance IBM PCs for AI (or other
development needs), and of the use of the Macintosh and Commodore's Amiga
for AI research.

C. J. Petrie of MCC described a system that parses text from a "how-to"
book into rules.

Interview with Dag Tellefsen of Glenwood Management, a venture
capitalist.  Glenwood has funded Natural Language Products and AION.

Kurzweil Applied Intelligence, which develops voice recognition hardware,
has signed a joint marketing agreement with FutureNet, which supplies
electronic engineering workstations.

Reasoning Systems has signed an agreement with Lockheed Missiles and
Space Corporation to develop knowledge-based systems for
communications.  (Reasoning Systems is involved in the commercialization
of some of the techniques from the University of Southern California work
on automating software development.  See the IEEE Transactions on
Software Engineering, November 1985 special issue on AI and Software
Engineering, for more information.)

"Logicware Inc. and Releations Ltd., both in Canada, have signed a
long-term agreement to design an Artificial Intelligence language
leading to a computer system which will emulate the thinking process
of the human brain.  It will be  the first AI language designed for
vector-processing by a super computer."

Composition Systems has released two Artificial Intelligence kits that
link VAX Lisp with such DEC products as FMS, RDB, GKS, and DECNET.

Review of the IEEE Computer Society Second Conference on Artificial
Intelligence.


------------------------------

Date: Wed 8 Jan 86 16:53:32-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Review - Rational Agency Seminars (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                          RATIONAL AGENCY GROUP
                        Summary of Fall 1985 Work

      The fall-quarter meetings of the Rational Agency Group (alias
   RatAg) have focused on the question: what must the architecture of a
   rational agent with serious resource limitations look like?  Our
   attempts to get at answers to this question have been of two kinds.
   One approach has been to consider problems in providing a coherent
   account of human rationality.  Specifically, we have discussed a
   number of philosophically motivated puzzles, such as the case of the
   Double Pinball Machine, and the problem of the Strategic Bomber,
   presented in a series of papers by Michael Bratman.  The second
   approach we have taken has been to do so-called robot psychology.
   Here, we have examined existing AI planning systems, such as the PRS
   system of Mike Georgeff and Amy Lansky, in an attempt to determine
   whether, and if so how, these systems embody principles of rationality.

   Both approaches have led to the consideration of similar issues:

   1) What primitive components must there be in an account of
      rationality?  From a philosophical perspective, this is
      equivalent to asking what the set of primitive mental states
      must be to describe human rationality; from an AI perspective,
      this is equivalent to asking what the set of primitive mental
      operators must be to build an artificial agent who behaves
      rationally.  We have agreed that the philosopher's traditional
      2-parameter model, containing just ``beliefs'' and ``desires'',
      is insufficient; we have further agreed that adding just a third
      parameter, say ``intentions'', is still not enough.  We are
      still considering whether a 4-parameter model, which includes a
      parameter we have sometimes called ``operant desires'', is
      sufficient.  These so-called operant desires are medial between
      intentions and desires: like the former (but not the latter), they
      control behavior in a rational agent; like the latter (and not the
      former), they need not be mutually consistent
      to satisfy the demands of rationality.  The term ``goal'', we
      discovered in passing, has been used at times to mean
      intentions, at times desires, at times operant desires, and at
      times other things; we have consequently banished it from our
      collective lexicon.
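      (A rough illustrative sketch of this four-parameter distinction
      appears after the list below.)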

   2) What are ``plans'', and how do they fit into a theory of
      rationality?  Can they be reduced to some configuration of
      other, primitive mental states, or must they also be introduced
      as a primitive?

   3) What are the combinatorial properties of these primitive
      components within a theory of rationality, i.e., how are they
      interrelated and how do they affect or control action?  We have
      considered, e.g., whether a rational agent can intend something
      without believing it will happen, or not intend something she
      believes will inevitably happen.  One set of answers to these
      questions that we have considered has come from the theory of
      plans and action being developed by Michael Bratman.  Another
      set has come from work that Phil Cohen has been doing with
      Hector Levesque, which involves explaining speech acts as a
      consequence of rationality.  These two theories diverge on many
      points: Cohen and Levesque, for instance, are committed to the
      view that if a rational agent believes something to be inevitable,
      he also intends it; Bratman takes the opposite view.  In recent
      meetings, interesting questions have arisen about whether there
      can be beliefs about the future that are `not' beliefs that
      something will inevitably happen, and, if so, whether
      concomitant intentions are guaranteed in a rational agent.
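
   As a rough illustration of the four-parameter distinction described in
   item 1 above, here is a minimal sketch in Python.  Everything in it (the
   names AgentState and consistent, the toy "negation" convention, and the
   example propositions) is invented for illustration only; it is not part
   of the RatAg group's work, and the consistency check is a deliberately
   crude stand-in.

   # Toy model: four primitive mental states as sets of propositions.
   # The asymmetry noted above: intentions control conduct and must be
   # mutually consistent; operant desires also control conduct but need
   # not be mutually consistent; plain desires need not be consistent.

   def consistent(props):
       # A set is treated as inconsistent if it contains both a
       # proposition and its negation, written "not <p>".
       return not any(("not " + p) in props for p in props)

   class AgentState:
       def __init__(self, beliefs, desires, intentions, operant_desires):
           self.beliefs = set(beliefs)
           self.desires = set(desires)                  # may conflict
           self.intentions = set(intentions)            # must not conflict
           self.operant_desires = set(operant_desires)  # may conflict

       def meets_consistency_demand(self):
           # Rationality, on this sketch, demands only consistent intentions.
           return consistent(self.intentions)

   # Conflicting desires and operant desires are tolerated here;
   # conflicting intentions are not.
   agent = AgentState(beliefs={"it is raining"},
                      desires={"stay dry", "not stay dry"},
                      intentions={"carry an umbrella"},
                      operant_desires={"finish the paper",
                                       "not finish the paper"})
   assert agent.meets_consistency_demand()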

      The RatAg group intends to begin the new quarter by considering how
   Cohen and Levesque's theory can handle the philosophical problems
   discussed in Bratman's work.  We will also be discussing the work of
   Hector-Neri Castaneda in part to explore the utility of Castaneda's
   distinction between propositions and practitions for our work on
   intention, belief and practical rationality.  Professor Castaneda will
   be giving a CSLI colloquium in the spring.
      RatAg participants this quarter have been Michael Bratman (project
   leader), Phil Cohen, Todd Davies, Mike Georgeff, David Israel, Kurt
   Konolige, Amy Lansky, and Martha Pollack.            --Martha Pollack

------------------------------

End of AIList Digest
********************