[net.ai] AIList Digest V2 #144

LAWS@SRI-AI.ARPA (11/12/84)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest           Wednesday, 24 Oct 1984    Volume 2 : Issue 144

Today's Topics:
  Courses - Decision Systems & Introductory AI,
  Journals - Annotated AI Journal List,
  Automatic Programming - Query,
  AI Tools - TI Lisp Machines & TEK AI Machine,
  Administrivia - Reformatting AIList Digest for UNIX,
  Humor - Request for Worst Algorithms,
  Seminars - Metaphor & Learning in Expert Systems &
      Representing Programs for Understanding
----------------------------------------------------------------------

Date: Tue 23 Oct 84 13:33:06-PDT
From: Samuel Holtzman <HOLTZMAN@SUMEX-AIM.ARPA>
Subject: Responses to Decision Systems course.

Several individuals have requested further information on the course
in decision systems I teach at Stanford (advertised in AILIST a few
weeks ago).  Some of the messages I received came from non-ARPANET
sites, and I have had trouble replying electronically.  I would
appreciate getting a message from anyone who has requested information
from me and has not yet received it.  Please include a US (paper) mail
address for my reply.

Thanks,
Sam Holtzman
(HOLTZMAN@SUMEX or P.O. Box 5405, Stanford, CA  94305)

------------------------------

Date: 22 Oct 1984 22:45:40 EDT
From: Lockheed Advanced Software Laboratory@USC-ISI.ARPA
Subject: Request for information

A local community college is considering adding an introductory course in
AI to its curriculum.  Evening courses would benefit a large community of
technical people interested in the subject.  The question is what the
benefit would be to first- and second-year students.

If anyone knows of any lower-division AI courses taught anywhere, could
you please drop me a line over the net?

Also, course descriptions for introductory AI classes, either lower- or
upper-division, would be appreciated.

Comments on the usefulness or practicality of such a course at this level
are also welcome.

                                Thank You,
                                Michael A. Moran
                                Lockheed Advanced Software Laboratory

                                address: HARTUNG@USC-ISI

------------------------------

Date: Tue, 23 Oct 84 11:34 CDT
From: Joseph_Hollingsworth <jeh%ti-eg.csnet@csnet-relay.arpa>
Subject: annotated ai journal list


I am interested in creating an annotated version of the AI-related journals
list that was published in AIList V1 N43.  I feel that this annotated list
would benefit those who do not have easy access to the journals mentioned in
the previously published list, but who feel that some of them may apply to
their work.

I solicit information about each journal in the following form (which I will
compile and release to AIList if enough interest is shown).

1) Journal Name
2) Subjective opinion of the type of articles that frequently appear in that
   journal (short paragraph or so).
3) Keywords and phrases that characterize the articles/journal (don't let
   formalized keyword lists hinder your imagination).
4) The type of scientist, engineer, technician, etc. that the journal
   would benefit.
5) Address of the journal for subscription correspondence (include the price
   too, if possible).

Please send this information to
Joe Hollingsworth at
  jeh%ti-eg@csnet-relay  (if you are on the ARPANET)
  jeh@ti-eg              (if you are on the CSNET; I am on the CSNET)


The following is the aforementioned list of journals:

AI Magazine
AISB Newsletter
Annual Review in Automatic Programming
Artificial Intelligence
Artificial Intelligence Report
Behavioral and Brain Sciences
Brain and Cognition
Brain and Language
Cognition
Cognition and Brain Theory
Cognitive Psychology
Cognitive Science
Communications of the ACM
Computational Linguistics
Computational Linguistics and Computer Languages
Computer Vision, Graphics, and Image Processing
Computing Reviews
Human Intelligence
IEEE Computer
IEEE Transactions on Pattern Analysis and Machine Intelligence
Intelligence
International Journal of Man Machine Studies
Journal of the ACM
Journal of the Association for the Study of Perception
New Generation Computing
Pattern Recognition
Robotics Age
Robotics Today
SIGART Newsletter
Speech Technology

------------------------------

Date: 23 October 1984 22:28-EDT
From: Herb Lin <LIN @ MIT-MC>
Subject: help needed on automatic programming information

I need some information on automatic programming.

1.  How complex a problem can current automatic programming systems
handle?  The preferred metric would be complexity as measured by the
number of lines of code that a good human programmer would use to
solve the same problem.

2.  How complex a problem will future automatic programming systems be
able to handle?  Same metric, please.  Of course, who can predict the
future?  More precisely, what do the most optimistic estimates
predict, and for what time scale?

3.  In 30 years (if anyone is brave enough to look that far ahead),
what will automatic programming be able to do?

Please provide citable sources if possible.

Many thanks.

------------------------------

Date: 22 Oct 1984 12:07:39-PDT
From: William Spears <spears@NRL-AIC>
Subject: TI Lisp machines


     The AI group at the Naval Surface Weapons Center is interested in the new
TI Lisp Machine. Does anyone have any detailed information about it? Thanks.

                                       "Always a Cosmic Cyclist"
                                        William Spears
                                        Code N35
                                        Naval Surface Weapons Center
                                        Dahlgren, VA 22448

------------------------------

Date: 22 Oct 84 08:10:32 EDT
From: Robert.Thibadeau@CMU-RI-VI
Subject: TEK AI Machine

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

I have good product literature on the Tektronix 4404 Artificial
Intelligence System (the workbook for their people).  This appears
to be a reasonable system which supports Franz Lisp, Prolog,
and Smalltalk-80.  It uses a 68010 with floating-point hardware
and comes standard with a 1024x1024 bitmap, 20 MB disk, floppy,
Centronics 16-bit port, RS-232, 3-button mouse, Ethernet interface,
1 MB of RAM, and a Unix OS.  The RAM can be upgraded by at least 1 more
MB, and a larger disk and streaming tape are available.  The major thing
is that the price (retail, without negotiation) is $14,950 complete.
It is apparently real, but I don't know this system first hand;
the product description is all I have.

------------------------------

Date: Sat, 20 Oct 84 23:10:53 edt
From: Douglas Stumberger <des%bostonu.csnet@csnet-relay.arpa>
Subject: reformatting AILIST digest for UNIX


        For those of you on Berkeley UNIX installations, there is a program
available which makes the slight modifications to the AIList digest needed
to put it in the correct format for "mail -f ...".  This lets you use the
UNIX mail system's functionality to maintain your AIList digest files.
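
  [For the curious: "mail -f" expects a standard mailbox file in which
  every message begins with a "From " line, whereas the digest separates
  its entries with a line of dashes.  The following Common Lisp fragment
  is only an illustrative sketch of that conversion, not the program
  offered here; the function names and the dummy sender/date are
  invented:

      (defun dashed-separator-p (line)
        "True for the lines of dashes that separate digest entries."
        (and (>= (length line) 20)
             (every (lambda (c) (char= c #\-)) line)))

      (defun digest-to-mbox (in-path out-path)
        "Copy a digest file, starting a new mailbox message at every
         separator line, so that the result can be read with mail -f."
        (let ((from-line "From ailist Wed Oct 24 00:00:00 1984"))
          (with-open-file (in in-path)
            (with-open-file (out out-path :direction :output
                                          :if-exists :supersede)
              (write-line from-line out)
              (loop for line = (read-line in nil nil)
                    while line
                    do (if (dashed-separator-p line)
                           (progn (terpri out)            ; blank line,
                                  (write-line from-line out)) ; new message
                           (write-line line out)))))))
  ]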

For a copy of the program, send mail to:

douglas stumberger
csnet:  des@bostonu

------------------------------

Date: Mon 22 Oct 84 10:30:00-PDT
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: worst algorithms as programming jokes

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

After reading the recent complaint(s) about those people who slow down the
system with their silly programs to sort a 150-element list, and after
talking with a friend, I came up with the following dumb idea:

A lot of emphasis is understandably put on good, efficient algorithms, but
couldn't we also learn from bad, terrible algorithms?  I have heard that Dan
Friedman at Indiana collects elegant LISP programs that he calls LISP poems.
To turn things upside down, how about LISP jokes (more generally, programming
jokes)?  I'm pretty sure most if not all programmers have some day (night)
burst into laughter on encountering an algorithm that is particularly dumb,
and funny for the same reason.

I don't know whether anyone ever collected badgorithms (sorry, that was the
worst name I could find), so I suggest that you bright guys send me your
favorite entries.

To qualify as a badgorithm, the following conditions should be met:
(if you don't like them, send me your suggestions for a better definition)

1. It *is* an algorithm in the sense described by Knuth Vol 1.
2. It *does* solve the problem it addresses. Entering the Knuth-Bendix
   algorithm as a badgorithm for binary addition is illegal (though I admit it
   is somewhat funny).
3. Though it solves the problem, it must do so in an essentially clumsy way.
   Adding loops to slow down the algorithm is cheating. In some sense a
   badgorithm should totally miss the right structure to approach the problem.
4. The hopeless off-the-track-ness of a badgorithm should be humorous for
   someone knowledgeable about the problem addressed.  We are not interested
   in mere also-ran algorithms, right?  Just being the second or third best
   algorithm for a problem is not enough to qualify (think of the "common
   sense" algorithm for finding a word in a text as opposed to the Boyer-Moore
   algorithm, or of the numerous ways to evaluate a polynomial as opposed to
   Horner's rule; there is nothing to laugh at in those cases, as the sketch
   after this list illustrates).  There is nothing funny in just being an
   O(n^(3/(pi^3)-1/e)) algorithm, I think.
5. It should be described in a simple, clear way.  Remember that the best
   jokes are the shortest ones.  I'm sure there are enough badgorithms for
   well-known problems (classical list manipulation, graph theory, arithmetic,
   cryptography, sorting, searching, etc.).  Please don't enter algorithms
   to solve NP problems unless you have good reasons to think they are
   interesting in our sense.
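
  [To make the contrast in condition 4 concrete, here is a small Common
  Lisp sketch (added for illustration; it is not part of the original
  message) of Horner's rule beside the obvious term-by-term evaluation.
  The latter is merely slower, not funny, so it would not qualify:

      (defun horner-eval (coeffs x)
        "Evaluate a polynomial; coefficients run from highest degree down."
        (reduce (lambda (acc c) (+ (* acc x) c)) coeffs :initial-value 0))

      (defun term-by-term-eval (coeffs x)
        "Evaluate the same polynomial by raising X to each power separately."
        (loop for c in (reverse coeffs)     ; lowest degree first
              for k from 0
              sum (* c (expt x k))))

      ;; (horner-eval '(2 -3 1) 4)        =>  21   ; 2x^2 - 3x + 1 at x = 4
      ;; (term-by-term-eval '(2 -3 1) 4)  =>  21   ; same answer, more work
  ]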




If anyone out there is willing to send me an entry, please send the following:

* a simple description of the problem (the name is enough if it's a well-known
  problem).
* a verbal description of the badgorithm if possible.
* a programmed version of the badgorithm (in LISP preferably).  This is not
  necessary if your verbal description makes it clear enough how to write
  such a program, but it would still be nice.
* a description of a good algorithm for the same problem in case most people
  are not expected to be familiar with one. Comparing this to the badgorithm
  should help us in seeing what's wrong with the latter, and I would say that
  this could have good educational value.


To start things, let me enter my favorite badgorithm (I call it "stupid-sort"):

* the problem is to sort a list according to some "order" predicate.
* well, that's easy: just generate all permutations of the list, and then
  check whether they are "order"ed.  Would you bet that someone in CS105
  actually uses this one?
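
  [Since the entry asks for a programmed version "in LISP preferably",
  here is a minimal Common Lisp rendering of the verbal description
  above; the helper names are illustrative and not part of the original
  message:

      (defun orderedp (list order)
        "True if LIST is already sorted under the ORDER predicate."
        (or (null list)
            (null (rest list))
            (and (funcall order (first list) (second list))
                 (orderedp (rest list) order))))

      (defun permutations (list)
        "Return a list of all permutations of LIST."
        (if (null list)
            (list nil)
            (loop for x in list
                  append (mapcar (lambda (p) (cons x p))
                                 (permutations (remove x list :count 1))))))

      (defun stupid-sort (list order)
        "Sort LIST by generating every permutation and keeping an ordered one."
        (find-if (lambda (p) (orderedp p order)) (permutations list)))

      ;; (stupid-sort '(3 1 2) #'<=)  =>  (1 2 3)
      ;; Use a non-strict ORDER such as #'<= if duplicates may occur.
  ]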

  [I once had to debug an early version of the BMD nonparametric
  package.  It found the min and max of a vector by sorting the
  elements ...   (Presumably most users would also request the
  median and other sort-related statistics.)  For a particularly
  slow sort routine see the Hacker's Dictionary definition of JOCK,
  quoted in Jon Bentley's April Programming Pearls in CACM.  -- KIL]


I understand perfectly that some people/organizations do not wish to have their
names associated with badgorithms, but please don't refrain from entering
something because of that. I swear that if you request it there will be no
trace of the origin of the entry if I ever compile a list of them for personal
or public use (you know, "name withheld by request" is the usual trick).

jean-luc

------------------------------

Date: 17 Oct 1984 16:25-EDT
From: Andrew Haas at BBNG.ARPA
Subject: Seminar - Metaphor

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Next week's BBN AI seminar is on Thursday, October 25th at 10:30
AM in the 3rd floor large conference room.  Bipin Indurkhya of
the University of Massachusetts at Amherst will speak on "A
Computational Theory of Metaphor Comprehension and Analogical
Reasoning".  Abstract follows.

   Though the pervasiveness and importance of metaphors in
natural languages are widely recognised, not much attention has
been given to them in the fields of Artificial Intelligence and
Computational Linguistics.  Broadly speaking, a metaphor can be
characterized as the application of terms belonging to a source
domain to describe a target domain.  A large class of such
metaphors is based on a structural analogy between the two domains.

   A computational model of metaphor comprehension proposed by
Carbonell requires an explicit representation of a mapping from
terms of the source domain to terms of the target domain.  In our
research we address the question of how one can characterize this
mapping in terms of knowledge of the source and target domains.

       In order to answer this question, we start from Gentner's
theory of Structure-Mapping.  We show limitations of Gentner's
theory and propose a theory of Constrained Semantic Transference
[CST] that allows part of the structure of the source domain to
be transferred to the target domain coherently.  We will then
introduce two recursive operators, called Augmentation and
Positing Symbols, that make it possible to create new structure
in the target domain constrained by the structure of the source
domain.

     We will show how CST captures several cognitive properties
of metaphors and then discuss its limitations with regard to
computability and finite representability.  If time permits, we
will use CST as a basis to develop a theory of Approximate
Semantic Transference which can be used to develop computational
models of the cognitive processes involved in metaphor
comprehension, metaphor generation, and analogical reasoning.

------------------------------

Date: Tue 23 Oct 84 10:45:51-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Learning in Expert Systems

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


DATE:        Friday, October 26, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Li-Min Fu
             Electrical Engineering

ABSTRACT:    LEARNING OBJECT-LEVEL AND META-LEVEL KNOWLEDGE IN EXPERT SYSTEMS

A high-performance expert system can be built by exploiting machine
learning techniques.  A learning method has been developed that is
capable of acquiring new diagnostic knowledge, in the form of rules,
from a case library.  The rules are designed to be used in a
MYCIN-like diagnostic system in which there is uncertainty about data
as well as about the strength of inference and in which the rules
chain together to infer complex hypotheses.  These features greatly
complicate the learning problem.
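
  [For readers unfamiliar with the "MYCIN-like" setting: both the data
  and the rules carry certainty factors, and independent rules for the
  same hypothesis combine.  The Common Lisp fragment below is only an
  illustrative sketch of that arithmetic (the 0.2 belief threshold and
  the combination formula are MYCIN's; the function names are invented,
  and none of this is taken from Fu's work):

      (defun premise-cf (condition-cfs)
        "The certainty of a conjunctive premise is its weakest condition."
        (reduce #'min condition-cfs))

      (defun apply-rule (rule-strength condition-cfs)
        "A rule contributes its strength scaled by its premise certainty,
         but only when the premise is believed (CF above 0.2)."
        (let ((p (premise-cf condition-cfs)))
          (if (> p 0.2) (* rule-strength p) 0.0)))

      (defun combine-cfs (cf1 cf2)
        "Combine evidence from two independent rules for one hypothesis
         (shown for the case where both certainty factors are positive)."
        (+ cf1 (* cf2 (- 1.0 cf1))))

      ;; Two rules of strength 0.8 and 0.6 with uncertain premises:
      ;; (combine-cfs (apply-rule 0.8 '(0.9 0.7)) (apply-rule 0.6 '(0.5)))
      ;;   =>  0.56 + 0.3 * (1 - 0.56)  =  0.692
  ]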

In machine learning, two issues that can't be overlooked are
efficiency and noise.  A subprogram, called "Condenser," is designed
to remove irrelevant features during learning and thus improve
efficiency.  It works well when the number of features used to
characterize training instances is large.  One way of removing noise
associated with a learned rule is to seek a state with minimal
prediction error.

Another subprogram has been developed to learn meta-rules which guide
the invocation of object-level rules and thus enhance the performance
of the expert system using the object-level rules.

An expert program called JAUNDICE, embodying all the ideas developed in
this work, has been built; it can diagnose the likely cause and mechanisms
of jaundice in a patient.  Experiments with JAUNDICE show that the
developed theory and method of learning are effective in a complex and
noisy environment where data may be inconsistent, incomplete, and
erroneous.

Paula

------------------------------

Date: Tue, 23 Oct 84 00:08:10 cdt
From: rajive@ut-sally.ARPA (Rajive Bagrodia)
Subject: Seminar - Representing Programs for Understanding

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

                      Graduate Brown Bag Seminar:

                Representing Programs For Understanding
                                  by
                              Aaron Temin

                         noon  Friday Oct. 26
                               PAI 3.36


        Automatic help systems would be much easier to generate than
        they are now if the same code used to create the executable
        version of a program could be used as the major database for
        the help system.  The desirable properties of such a program
        representation will be discussed.  An overview of MIRROR,
        our implementation of those properties, will be presented with
        an explanation of why MIRROR works.  It will also be argued
        that functional program representations are inadequate for the
        task.


If you are interested in receiving mail notifications of graduate brown bag
seminars in addition to the bboard notices, please send a note to
                            briggs@ut-sally

------------------------------

End of AIList Digest
********************