[net.ai] AIList Digest V3 #45

LAWS@SRI-AI.ARPA (04/03/85)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest           Wednesday, 3 Apr 1985      Volume 3 : Issue 45

Today's Topics:
  Administrivia - Short Vacation,
  Query - PRIME Installations,
  Games - GO,
  Expert Systems - Process Control & Database Systems,
  Opinion - Living Programs,
  Humor (if any) - Subjective C & Machine Forgetting,
  Seminars - Adaptive Algorithms (SU) &
    Generation of Expert Systems from Examples (Northeastern)
----------------------------------------------------------------------

Date: Sat 30 Mar 85 20:00:43-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Administrivia - Short Vacation

Expect about a ten-day interval between this digest and the next.
I'll be attending an SPIE conference, among other activities, and
I haven't yet trained my computer to issue the digest automatically.

                                        -- Ken Laws

------------------------------

Date: TUE 2 APR 85 1240 MEZ
From: U02F%CBEBDA3T.BITNET@WISCVM.ARPA  (Franklin A. Davis)
Subject: Any PRIME installations doing AI research?

We are interested in exchanging tools, compilers, & hints with any
PRIME users who see this note.  Our research includes computer
vision using a CAD model, robotics & collision detection, and
expert systems.

Please contact me directly.  Thanks.

-- FAD <U02F@CBEBDA3T.BITNET>
Institute for Informatics and Applied Mathematics
University of Bern, Switzerland

------------------------------

Date: 2 Apr 85 09:25:08 PST (Tuesday)
From: LLi.ES@XEROX.ARPA
Reply-to: LLi.ES@XEROX.ARPA
Subject: Re:  game of GO

Steve,

Lester Buck at shell!buck@ut-sally.arpa sent out a fairly extensive
bibliography on computer Go to Usenet net.games.go.  I only have a
hardcopy of that message dated 1/14/85.  If you can't get in touch with
him and can't get someone to retrieve that file from Usenet, I can make
a copy for you.

Leonard Li.

------------------------------

Date: Tue 2 Apr 85 15:30:17-CST
From: David Throop <AI.THROOP@UTEXAS-20.ARPA>
Subject: Small Expert Sys & Process Control

  I just returned from the AIChE conference & Petroleum Exposition in
Houston.  I saw a single-loop process controller from Foxboro there.  It's
advertised as the first AI technology to come out as a commercial process
control product.  It's a hardwired expert system for dynamically resetting
the loop control parameters.  It has seven rules.
  I'm currently taking an Expert System course in which the term project
involves building an ES with ~100 rules, and a seven-rule system doesn't seem
that impressive.  However, the competition at the show was not pooh-poohing
it, and it did seem to do some pretty good level control on the demo system
they'd rigged up.
  Research has focused on systems between 100 and 3000 rules.  But I remember
that, working in an engineering office, we often wrote 30-line FORTRAN
programs for one-shot calculations, throwing the code away after we'd
generated our one-time-use output.
  As we develop robust Expert System tools with ties to good editors and
explanation facilities we may find that there are many areas where the
control and decision process is best expressed (and developed) in an ES, even
though the knowledge in the program fits in fewer than a dozen rules.  These
will include systems where the "expert" knowledge is not expert at all, but
just well expressed in the production rule form.
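The kind of under-a-dozen-rule decision process described above can be
sketched in a few lines.  The rules below are hypothetical illustrations of
loop-tuning heuristics, not Foxboro's actual seven; the forward-chaining loop
assumes facts held in a dictionary:

```python
def run_rules(facts, rules):
    """Forward-chain: fire each applicable rule once, until quiescent."""
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, cond, action in rules:
            if name not in fired and cond(facts):
                action(facts)
                fired.add(name)
                changed = True
    return facts

# Hypothetical tuning rules in the spirit of the discussion above:
# damp the gain when the level oscillates, raise it when response is sluggish.
rules = [
    ("oscillating", lambda f: f["error_sign_changes"] > 3,
                    lambda f: f.update(gain=f["gain"] * 0.8)),
    ("sluggish",    lambda f: f["error_sign_changes"] == 0 and f["error"] > 0.1,
                    lambda f: f.update(gain=f["gain"] * 1.2)),
]

facts = run_rules({"gain": 1.0, "error": 0.02, "error_sign_changes": 5}, rules)
```

Even at this scale, the rule form keeps the control knowledge inspectable in
a way a one-shot FORTRAN program is not.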

  There was not much AI activity shown there.  The process industries are
very interested in the AI technology (the oil companies showed up in force at
AAAI 84) but almost nothing has been published, or come out as a product
package.  The most obvious areas of development are AI in process design
(engineering databases etc) and in process control.  What seems to be holding
things back?   The ES packages out there don't seem to be talking to any of
the engineering database formats already in place.  And there are some very
tough representational & computational problems to work on for process
control.   Expert systems would have a tough time matching the well developed
supervisory programs already in place out there.

------------------------------

Date: Tue 2 Apr 85 11:49:46-PST
From: MARCEL@SRI-AI.ARPA
Subject: Living Programs

It's no secret that what a body does for its living is sometimes obvious even
when the body is not working at its living. After an operating systems course
I was once caught behaving like a scheduler, starting longer jobs (a coffee pot)
earlier and at low priority, then time-sharing my CPU between my homework, my
TV and my brother. Obviously, that's a simplistic example; the effect can be
much more subtle. Programming brings with it:

1. an assumption that all problems are solvable (all bugs can be "fixed");
2. a belief that sharper programming "techniques" yield better programs sooner;
3. a conviction that reason is adequate for all aspects of the task;
4. a tendency to polarize issues (the program works or it doesn't; this
   algorithm is or is not faster than that);
5. a belief that every functional goal can be achieved by correct analysis
   of conditionals in branches and loops.

In short, software gives us a world which may be very complex, but in which
we can still feel a sense of control denied us in day-to-day life. After all,
we created the machinery in the first place. I contend that a good many of us
(eg myself until a while ago) take the assumption of control away with us from
our programs. This applies especially to "hackers", meaning people who spend
much of their time programming as opposed to doing conceptual design first. It
might apply less to AIers faced with the problem of human intelligence, though
we tend to believe that machines will be intelligent someday (read: the problem
is solvable, we have better languages now, and sufficient analysis will tell
us what intelligence is).

Question: am I right about the false sense of power; have I misdiagnosed the
cause; and how much is AI the projection par excellence of this illusion?

Marcel@SRI-AI    (M.Schoppers)

------------------------------

Date: 1 Apr 1985 09:22-EST
From: leff%smu.csnet@csnet-relay.arpa
Subject: Recent Articles - Expert Database Systems

From  the 1985 SIGMOD Conference Program, May 28 to May 31, 1985 Austin
Texas

May 30, 1985 4-5:30 3DP1 Panel: Expert Database Systems (Workshop
Review) Chairperson Larry Kerschberg

May 31, 1985 9:30 - 11:00
P. M. D. Gray "Efficient Prolog Access to CODASYL and FDM
Databases"
J. D. Ullman "Implementation of Logical Query Languages for Databases"

------------------------------

Date: 1 Apr 1985 16:05:34 EST (Monday)
From: PSN Development <psn@bbn-noc3.arpa>
Subject: subjective C

          [Forwarded by Susan Bernstein <slb@BBNCCP.ARPA>.]


            Subjective C, a new programming language.

Recently researchers in the computer language field have shown much
interest in subject oriented languages.  Subjective programming
languages draw upon concepts developed in the fields of subjective
probability and philosophical subjectivism to enrich the field of
programming semantics.  `Subjective C' is such a language based on the
programming language C.

Subjective C grew out of the AI concept of DWIM, or "do what I mean".
The subjective C compiler infers the mood of the author of the input
program based on clues contained in the comments in the program.  If no
comments (or verbose identifiers) are present, the programmer is judged
to have insufficiently thought out his problem, i.e.  to have
insufficiently specified the computation to be performed.  In this case
a subjective diagnostic is issued, depending on the compiler's own mood.
Assuming comments or other mood indicators are present, an amalgam of
inference techniques drawn from various reputed-to-be-successful expert
systems is used to infer the author's mood.

A trivial example of a mood revealing comment with accompanying program
text is the following:

        a = a - 1;      /* add one to a */

Too simple an analysis of the dichotomy between the apparent meaning of the
statement and the accompanying comment is that one of them is in error.  A
more insightful analysis is that this program should not be allowed to
work, even if no syntax errors occur in it.  Accordingly, the subjective
compiler should hang the system, thus inducing the programmer to quit
for the night.

More interesting cases occur when there is no conflict between program
text and commentary.  It is these cases where Subjective C is shown to
be a significantly richer language than normalcy.

Some examples of mood-implying comments found in actual programs are the
following:

        ; Here we do something perverse to the packet.  Beats me.

In this case, the comment reveals that the programmer does not care what
the code does, except that he wants it to be something that subsequent
programmers will be shocked by.  The compiler uses a variation of its
mood-inference techniques to generate code that is suitably perverse by
systematically generating actions and evaluating them against the
criteria it has synthesized.

        blt                     ; hold the mayo

The Subjective C compiler evaluates the indicatory content of this
comment to discern that the programmer is undoubtedly hungry.  Code will
be generated that will crash inexplicably, thus inducing the programmer
to go to the candy machine and pig out, which is what he wanted in the
first place.

Subjective C is neither a superset nor a subset of "normal" (if one can
apply the term) C, known in subjective parlance as normalcy.  However,
there is an extensive intersection, if meanings of programs are ignored.
The central thesis of research in the field of subjective languages is
that the meanings of programs are far more subtle than first appears to
the reader (or author).

Some examples of mood revealing comments in well known C programs
include the following:

        /* I've been powercoding much too much lately.  */ and,

        /* WARNING: modify the following at your own risk. */


Students of program complexity will be interested to note that the
algorithms used for mood inference are of greater complexity than
NP-complete, which is one of the first known practical applications of
this class of computations.  The exact characterization of this class of
problems is not yet fully explored, but some initial theoretical results
will be published by certain graduate students, real soon now, and no
later than next August when their fellowships run out.

The subjective C compiler, called "see", will be available (relatively)
shortly on all bbn unix systems.  Comments can be directed directly
to the compiler itself, in the usual fashion.

------------------------------

Date: 1 Apr 85 13:42:23 EST
From: CHOMICKI@RUTGERS.ARPA
Subject: Seminar - Machine Forgetting (Rutgers)


Jan Chomicki, a Ph.D. student in our department, has agreed to give a talk
about MACHINE FORGETTING.  His research interests in AI are pretty recent;
in fact, they date from this morning.  The talk starts at 2:50 pm and ends
at 1:30 pm. The place is Hill-402: if too many people arrive, we will move
into a smaller room. Considering many things, the topic of the talk among
others, you should not expect the speaker to be there. The abstract follows:

                Machine Forgetting

We argue that there is no learning without forgetting.
At least, by learning a man forgets how stupid he used to be.
Current research in Machine Learning, cf.(...), doesn't take
this phenomenon fully into account.

We develop a Theory of Forgetting Functor(TFF).
For a class of systems, called Sclerotic, forgetting is monotonic.
However, as our everyday experience indicates,
there also exist non-monotonic forgetting systems.
TFF is one of the variants of a more general Theory of Limited Resources (TLR).
Others include: Theory of Incompetence, Theory of Not Understanding etc.

We implemented a general program, the Forgetting Daemon, that makes any
other program forget about its original purpose, e.g. sorting numbers.
We conjecture that this program may provide a high degree of domain
independence in AI systems.
Take the expert system for diagnosing soybean diseases,
run it through the Forgetting Daemon and the expert system will totally
forget about soybeans.
However, it will also forget about everything else.
We plan to remove this problem in the second version of our system.
A facility of Selective Forgetting will be provided.
The user will define what to forget by means of production rules and/or
menus.

Methodologically, we see several avenues for further research:

1. Example- and Pattern-driven Forgetting.

2. Forgetting Without an Explanation and its relationship with Random
   Forgetting.

3. Forgetting Without a Trace vs. Reversible Forgetting.

4. Meta-forgetting: I forgot what I forgot what...

We would have formalized our concepts, but we are pretty certain that
some graduate student at MIT or Stanford has been working on it for a few
years already.

------------------------------

Date: Mon 1 Apr 85 14:17:46-PST
From: Carol Wright <WRIGHT@SUMEX-AIM.ARPA>
Subject: Seminar - Adaptive Algorithms (SU)

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


DATE:                   Friday, April 5, 1985
LOCATION:               Chemistry Gazebo, between Physical & Organic Chemistry
TIME:                   12:05

SPEAKER:                Lawrence J. (Dave) Davis, Ph.D.
                        Texas Instruments Computer Science Laboratory.
                        Knowledge Based Systems Branch. Dallas, Texas.

TITLE:                  Applying Adaptive Algorithms to Epistatic Domains



Abstract: In his 1975 book ADAPTATION IN NATURAL AND ARTIFICIAL
SYSTEMS, John Holland proposed a technique for carrying out search in
large solution spaces that is based on the process of natural
evolution.  Among the important points in the book is Holland's proof
that the search process can be greatly accelerated if certain sorts of
mutations (CROSSOVER mutations) are used.  Interest in probabilistic
search techniques, and the Holland techniques in particular, has grown
quite strong in the last two years.  The talk will begin by describing
procedures Holland and his students used in their early work, and then
will move to the topic of recent innovations.

Holland has shown that when adaptive algorithms are used to search
certain kinds of extremely large spaces, they will converge on a
"good" solution fairly quickly.  Such problem spaces are characterized
by a low degree of interaction between components of solutions.  A
host of classical search problems, however, are oriented toward
solutions that are highly interactive.  The talk will describe some
new techniques for applying adaptive algorithms to epistatic domains,
while retaining some of the strength of Holland's convergence proof.
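For readers unfamiliar with Holland's technique, here is a minimal sketch of
an adaptive (genetic) algorithm with single-point crossover and mutation over
bit strings.  The fitness function, population size, and rates are
illustrative assumptions, not details from the talk:

```python
import random

def genetic_search(fitness, n_bits=20, pop_size=30, generations=60,
                   crossover_rate=0.9, mutation_rate=0.01):
    """Toy Holland-style genetic algorithm over bit strings."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        total = sum(scores) or 1  # guard against an all-zero population

        def select():
            # Fitness-proportionate ("roulette wheel") selection.
            r = random.uniform(0, total)
            acc = 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return ind
            return pop[-1]

        next_pop = []
        while len(next_pop) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < crossover_rate:
                point = random.randrange(1, n_bits)  # single-point crossover
                a, b = a[:point] + b[point:], b[:point] + a[point:]
            for child in (a, b):
                next_pop.append([bit ^ (random.random() < mutation_rate)
                                 for bit in child])
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

# "One-max" demonstration: fitness is simply the number of 1 bits,
# a low-epistasis problem of the kind Holland's proof addresses.
best = genetic_search(sum)
```

One-max is exactly the low-interaction case the abstract contrasts with
epistatic domains, where a bit's contribution depends on the rest of the
string and simple crossover is far less effective.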

------------------------------

Date: Tue, 2 Apr 85 13:54 EST
From: Ramesh Astik <rampan%northeastern.csnet@csnet-relay.arpa>
Subject: Seminar - Generation of Expert Systems from Examples
         (Northeastern)


             AUTOMATIC GENERATION OF EXPERT SYSTEMS FROM EXAMPLES


                                 Steve Gallant

            Wednesday, April 10, 4 p.m.  Botolph Bldg, First Floor



             A process for generating an Expert System from Training
         Examples (and/or Rules) will be described.  The Knowledge
         Base for such a system principally consists of a Matrix of
         integers called a Learning Matrix.

             Given a set of Training Examples, a Learning Matrix may
         be generated by various means including the Pocket
         Algorithm (a modification of Perceptron Learning).  The
         resulting Learning Matrix is then combined with a Matrix
         Controlled Inference Engine (MACIE) to produce a true Expert
         System.

              This talk will focus on how MACIE interprets the
         Learning Matrix in order to perform forward chaining,
         backward chaining, and likelihood estimates.
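The pocket algorithm itself is easy to sketch: run ordinary perceptron
corrections, but keep "in the pocket" the best weight vector seen so far, so
the method degrades gracefully on non-separable data.  The code below is a
rough illustration under assumed conventions (labels in {-1, +1}, a bias term
folded into the weights), not Gallant's implementation:

```python
import random

def predict(w, x):
    """Threshold unit; the last weight is the bias."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1 if s >= 0 else -1

def pocket_train(examples, steps=200):
    """Pocket algorithm: perceptron updates on randomly chosen examples,
    keeping ("pocketing") the weights that score best on the training set."""
    dim = len(examples[0][0])
    w = [0.0] * (dim + 1)
    pocket, pocket_score = w[:], -1
    for _ in range(steps):
        x, y = random.choice(examples)
        if predict(w, x) != y:
            # Standard perceptron correction toward the misclassified example.
            w = [wi + y * xi for wi, xi in zip(w, x)] + [w[-1] + y]
        score = sum(predict(w, x) == y for x, y in examples)
        if score > pocket_score:
            pocket, pocket_score = w[:], score
    return pocket

# Hypothetical training examples: logical AND with labels in {-1, +1}.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w = pocket_train(data)
```

A matrix of such learned weight vectors, one row per conclusion, is the sort
of integer Learning Matrix an inference engine like MACIE could then chain
over.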


         Presented by:  The College of Computer Science
                        Northeastern University
                        Boston, Ma.  02115

         Information:  437-2462

------------------------------

End of AIList Digest
********************