[comp.ai.digest] Lenat's AM program

tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich) (10/21/87)

The exact reasons for the success of AM (and for its eventual failure
to continue making new discoveries) have not been established.  In
Lenat's dissertation, he speculated that the source of power was the
search heuristics, and that the eventual failure was caused by the
inability of the system to generate new heuristics.

Then, in a paper by Lenat and Brown, a different reason is given:
namely that the representation of concepts was the critical factor.
There is a close relationship between mathematical concepts and lisp,
so that mathematical concepts can be represented very concisely as
lisp functions.  Simple syntactic mutation operations, when applied to
these concise functions, yield other interesting mathematical
concepts.  In new domains, such as those tackled by Eurisko, it was
necessary to engineer the concept representation so that the concepts
were concisely representable.
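
To make the Lenat and Brown argument concrete, here is a toy sketch
(in Python, not AM's actual code, and with a representation invented
purely for illustration): a concept is written as a tiny s-expression,
and a single-symbol syntactic mutation of one interesting concept
(doubling) yields another (squaring).

    # Toy illustration only, not AM's code: a math concept is a tiny
    # s-expression in one variable, and a one-symbol syntactic mutation
    # turns one interesting concept (doubling) into another (squaring).

    def evaluate(expr, x):
        """Evaluate a small s-expression in the single variable 'x'."""
        if expr == 'x':
            return x
        if isinstance(expr, int):
            return expr
        op, *args = expr
        vals = [evaluate(a, x) for a in args]
        return {'+': vals[0] + vals[1], '*': vals[0] * vals[1]}[op]

    def mutate(expr, old_op, new_op):
        """Syntactic mutation: replace every occurrence of old_op."""
        if isinstance(expr, list):
            return [new_op if e == old_op else mutate(e, old_op, new_op)
                    for e in expr]
        return expr

    doubling = ['+', 'x', 'x']             # like (lambda (x) (+ x x))
    squaring = mutate(doubling, '+', '*')  # like (lambda (x) (* x x))

    print([evaluate(doubling, n) for n in range(5)])   # [0, 2, 4, 6, 8]
    print([evaluate(squaring, n) for n in range(5)])   # [0, 1, 4, 9, 16]

The force of their claim is that such surface-level mutations land on
interesting concepts only because the representation is so compressed;
applied to a verbose encoding of the same concepts, the same mutations
would mostly produce junk.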

Finally, in a paper published this year by Lenat and Feigenbaum, yet
another explanation of AM's (and Eurisko's) success and failure is
given: "The ultimate limitation was not what we expected (cpu time),
or hoped for (the need to learn new representations), but rather
something at once surprising and daunting: the need to have a massive
fraction of consensus reality already in the machine."

The problem with all of these explanations is that they have not been
subjected to rigorous experimental and analytical tests, so at the
present time, we still (more than ten years after AM) do not
understand why AM worked!

I have my own pet hypothesis, which I am currently subjecting to an
experimental test.  The hypothesis is this: AM succeeded because its
concept-creation operators generated a space that was dense in
interesting mathematical concepts.  This hypothesis contradicts each
of the preceding ones.  It claims that heuristics are not important
(i.e., a brute-force search using the concept-creation operators would
be only polynomially--not exponentially--more expensive).  It claims
that the internal representation of the concepts (as lisp functions)
was also unimportant (i.e., any other representation would work as
well, because mutation operators are very rarely used by AM).
Finally, it claims that additional world knowledge is irrelevant
(because AM succeeded without such knowledge).

There is already some evidence in favor of this hypothesis.  At CMU, a
student named Weimin Shen has developed a set of operators that can
rediscover many of AM's concepts.  The operators are applied in
brute-force fashion, and they discover addition, doubling, halving,
subtraction, multiplication, squaring, square roots, exponentiation,
division, logarithms, and iterated exponentiation.  All of these are
discovered without manipulating the internal representation of the
starting concepts.
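
For concreteness, here is a small brute-force sketch along these lines
(a hypothetical reconstruction in Python; the three transformations,
iterate, diagonalize, and invert, are my guess at the flavor of Shen's
operators, not his actual set).  Starting from the successor function
alone, the transformations are applied exhaustively for a few rounds,
and every generated function is checked against familiar arithmetic
concepts on small test inputs.

    # Brute-force sketch in the spirit of the operators described above
    # (a hypothetical reconstruction, not Shen's code).  Starting from
    # the successor function alone, three transformations (iterate,
    # diagonalize, invert) are applied exhaustively, and every generated
    # function is checked against familiar arithmetic concepts.

    def iterate_unary(f):
        """From unary f, build binary g(x, n): apply f to x, n times."""
        def g(x, n):
            for _ in range(n):
                x = f(x)
            return x
        return g

    def iterate_binary(g, base):
        """From binary g, build h(x, n): fold g over x, n times, from base."""
        def h(x, n):
            acc = base
            for _ in range(n):
                acc = g(acc, x)
            return acc
        return h

    def diagonalize(g):
        """From binary g, build unary f(x) = g(x, x)."""
        return lambda x: g(x, x)

    def invert(g):
        """From binary g, build h(y, x): least n with g(x, n) = y, else None."""
        def h(y, x):
            for n in range(y + 1):
                if g(x, n) == y:
                    return n
            return None
        return h

    successor = lambda x: x + 1

    # Reference concepts used to recognize what has been (re)discovered.
    known = {
        'addition':       lambda x, n: x + n,
        'doubling':       lambda x: 2 * x,
        'subtraction':    lambda y, x: y - x if y >= x else None,
        'multiplication': lambda x, n: x * n,
        'squaring':       lambda x: x ** 2,
        'exponentiation': lambda x, n: x ** n,
    }

    def agrees(f, ref, arity):
        """Compare f and ref on a grid of small inputs."""
        pairs = [(a, b) for a in range(1, 5) for b in range(1, 4)]
        try:
            if arity == 1:
                return all(f(a) == ref(a) for a in range(1, 5))
            return all(f(a, b) == ref(a, b) for a, b in pairs)
        except Exception:
            return False

    # Exhaustive generation: three rounds of applying every transformation
    # to every function produced so far.
    found = [('successor', successor, 1)]
    frontier = list(found)
    for _ in range(3):
        fresh = []
        for name, f, arity in frontier:
            if arity == 1:
                fresh.append(('iterate(%s)' % name, iterate_unary(f), 2))
            else:
                fresh.append(('iterate(%s,0)' % name, iterate_binary(f, 0), 2))
                fresh.append(('iterate(%s,1)' % name, iterate_binary(f, 1), 2))
                fresh.append(('diag(%s)' % name, diagonalize(f), 1))
                fresh.append(('invert(%s)' % name, invert(f), 2))
        found.extend(fresh)
        frontier = fresh

    for name, f, arity in found:
        for concept, ref in known.items():
            if arity == ref.__code__.co_argcount and agrees(f, ref, arity):
                print('%-30s rediscovers %s' % (name, concept))

Even this crude enumeration recognizes addition, doubling,
subtraction, multiplication, squaring, and exponentiation within three
rounds, with no heuristics, no representation engineering, and no
world knowledge, which is at least consistent with the density
hypothesis above.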

AM is a "success" of AI in the sense that interesting and novel
behavior was exhibited.  However, it is a methodological failure of
AI, because follow-up studies were not conducted to understand the
causes of AM's successes and failures.  AM is not unique in this regard.
Follow-up experimentation and analysis are critical to consolidating
our successes and extracting lessons for future research.  Let's get
to work!

Tom Dietterich
Department of Computer Science
Oregon State University
Corvallis, OR 97331
tgd@cs.orst.edu
OR tgd%cs.orst.edu@relay.cs.net


References:

Lenat, D. B. (1980).  AM: An artificial intelligence approach to
discovery in mathematics as heuristic search.  In Davis, R., and
Lenat, D. B., Knowledge-Based Systems in Artificial Intelligence.

Lenat, D. B., and Brown, J. S. (1984).  Why AM and EURISKO appear to
work.  Artificial Intelligence, 23(3), 269-294.

Lenat, D. B., and Feigenbaum, E. A. (1987).  On the thresholds of
knowledge.  In Proceedings of the Tenth International Joint Conference
on Artificial Intelligence (IJCAI-87), Milan.  Los Altos, CA: Morgan
Kaufmann.

Shen, W. (1987).  Functional transformations in AI discovery systems.
Technical Report CMU-CS-87-117, Department of Computer Science,
Carnegie-Mellon University.

dwt@umich.UUCP.UUCP (10/23/87)

In article <8710211650.AA18715@orstcs.CS.ORST.EDU> tgd@ORSTCS.CS.ORST.EDU 
(Tom Dietterich) writes:
>The exact reasons for the success of AM (and for its eventual failure
>to continue making new discoveries) have not been established. [...]
>
>The problem with all of these explanations is that they have not been
>subjected to rigorous experimental and analytical tests, so at the
>present time, we still (more than ten years after AM) do not
>understand why AM worked!

Some possible contributing reasons for this sort of difficulty in AI:
1) The practitioners of AI routinely lack access at the nuts-and-bolts level 
   to the products of others' work. (At a talk he gave here three years ago,
   Lenat said that he was preparing a distribution version of AM.  Has
   anyone heard whether it is available? I haven't.)   Perhaps widespread
   availability and use of Common Lisp will change this.  Perhaps not.
2) The supporting institutions (and most practitioners) have little 
   patience for anything as unexciting and 'unproductive' as slow, 
   painstaking post-mortems.
3) We still have no fruitful paradigm for intelligence and discovery.
4) We are still, for the most part, too insecure to discuss difficulties
   and failures in ways that enable others as well as ourselves to learn
   from them.  (See an article on the front page of the NYTimes book review
   two or three weeks ago for a review of a book claiming that twentieth-
   century science writing in general is fundamentally misleading in this 
   respect.)

   David West                dwt@zippy.eecs.umich.edu

tgd@ORSTCS.CS.ORST.EDU.UUCP (10/28/87)

David West (dwt@zippy.eecs.umich.edu) writes:

    Some possible contributing reasons for this sort of difficulty in AI:

    1) The practitioners of AI routinely lack access at the nuts-and-bolts level 
    to the products of others' work. (At a talk he gave here three years ago,
    Lenat said that he was preparing a distribution version of AM.  Has
    anyone heard whether it is available? I haven't.)   Perhaps widespread
    availability and use of Common Lisp will change this.  Perhaps not.

In the biological sciences, publication of an article reporting a new
clone obligates the author to provide that clone to other researchers
for non-commercial purposes.  I think we need a similar policy in
computer science.  Publication of a description of a system should
obligate the author to provide listings of the system (a running
system is probably too much to ask for) to other researchers on a
non-disclosure basis. 

    2) The supporting institutions (and most practitioners) have little 
    patience for anything as unexciting and 'unproductive' as slow, 
    painstaking post-mortems.
    3) We still have no fruitful paradigm for intelligence and discovery.
    4) We are still, for the most part, too insecure to discuss difficulties
    and failures in ways that enable others as well as ourselves to learn
    from them.  (See an article on the front page of the NYTimes book review
    two or three weeks ago for a review of a book claiming that twentieth-
    century science writing in general is fundamentally misleading in this 
    respect.)

I disagree with these other points.  I think the cause of the problem
is lack of methodological training for AI and CS researchers.  Anyone
could have reimplemented an approximation of AM based on the published
papers anytime in the past decade.  I think the fact that people are
now beginning to do this is a sign that AI is becoming
methodologically healthier.  A good example is the paper Planning for
Conjunctive Goals by D. Chapman in Artificial Intelligence, Vol 32,
No. 3, which provides a critical review and rational reconstruction of
the NOAH planning system.  I encourage all students who are looking
for dissertation projects to consider doing work of this kind.

--Tom

bruce@VANHALEN.RUTGERS.EDU (Shane Bruce) (10/30/87)

In article <774@orstcs.CS.ORST.EDU> tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich) 
writes:
>
>In the biological sciences, publication of an article reporting a new
>clone obligates the author to provide that clone to other researchers
>for non-commercial purposes.  I think we need a similar policy in
>computer science.  Publication of a description of a system should
>obligate the author to provide listings of the system (a running
>system is probably too much to ask for) to other researchers on a
>non-disclosure basis. 
>

The policy which you are advocating, while admirable, is not practical.  No
corporation involved in state-of-the-art AI research is going to allow
listings of their next product/internal tool to be made available to the
general scientific community, even on a non-disclosure basis.  Why should
they give away what they intend to sell?

A more practical solution would be for all articles to include a section
on implementation which, while not providing listings, would at least provide
enough information that the project could be duplicated by another competent
researcher in the field.


-- 
Shane Bruce
HOME: (201) 613-1285                WORK: (201) 932-4714
ARPA: bruce@paul.rutgers.edu
UUCP: {ames, cbosgd, harvard, moss}!rutgers!paul.rutgers.edu!bruce