[comp.ai.digest] AIList Digest V5 #171

hendler@BRILLIG.UMD.EDU.UUCP (07/07/87)

While I have some quibbles with Don N.'s long statement on AI vis-a-vis
(or vs.) science, I think he gets close to what I have long felt is a
key point -- that the move towards formalism in AI, while important to
the change of AI from a pre-science (alchemy was Drew McDermott's
term) to a science, is not enough.  For a field to make that
transition, an experimental methodology is needed.  In AI we have the
potential to decide what counts as experimentation (with
implementation being an important consideration) but have not really
made any serious strides in that direction.  When I publish work on
planning and claim ``my system makes better choices than <name of
favorite planning program>'s'' I cannot verify this other than by
showing some examples that my system handles but that <other>'s
can't.  But of course, there is no way of establishing that <other>
couldn't do examples mine can't, and so on.  Instead we end up forming
camps of belief (the standard proof methodology in AI) and arguing --
sometimes for the better, sometimes for the worse.
 While I have no solution for this, I think it is an important issue
for consideration, and I thank Don for provoking this discussion.

 -Jim Hendler

MINSKY@OZ.AI.MIT.EDU.UUCP (07/07/87)

At the end of that long and angry flame, I think D. Norman unwittingly
hit upon what made him so mad:

>  Gedanken experiments are not accepted methods in science: they are
>  simply suggestive for a source of ideas, not evidence at the end.

And that's just what AI has provided these last thirty years - a
source of ideas that were missing from psychology in the century
before.  Representation theories, planning procedures, heuristic
methods, hundreds of such.  The history of earlier psychology is rife
with "proved" hypotheses, few of which were worth a damn, and many of
which were refuted by Norman himself.  Now "cognitive psychology" -
which, I claim (and Norman will predictably deny: see there, a
testable hypothesis!), is largely based on AI theories and
experiments - is taking over at last, as a result of those
suggestions for ideas.

billmers@aiag.DEC.COM.UUCP (07/07/87)

Don Norman writes that "AI will contribute to the A, but will not
contribute to the I unless and until it becomes a science...".

Alas, since physics is a science and mathematics is not one, I guess the
latter cannot contribute to the former unless and until mathematicians
develop an appreciation for the experimental methods of science. Ironic
that throughout history mathematics has been called the queen of the
sciences (except, of course, by Prof. Norman).

Indeed, physics is a case in point. There are experimental physicists, but
there are also theoretical ones who formulate, postulate and hypothesize
about things they cannot measure or observe. Are these men not scientists?
And there are those who observe and measure that which has no theoretical
foundation (astrologers hypothesize about people's fortunes; would any
amount of experimentation turn astrology into a science?). I believe it is
the mix of theoretical underpinnings and scientific method that makes for
science. The line is not hard and fast.

By my definition, AI has the right attributes to make it a science. There
are theoretical underpinnings in several domains (cognitive science,
theory of computation, information theory, neurobiology...) and yes, even an
experimental nature. Researchers postulate theories (of representation, of
implementation), but virtually every Ph.D. thesis also builds a working
program to test the theory.

If AI researchers seem weak in the disciplines of the scientific method,
I submit it is because the phenomena they are trying to understand are
far more complex and elusive of definition than those of most sciences.
This is not a reason to deny AI the title of science, but rather a reason
to increase our efforts to understand the field. With this understanding
will come an increasingly visible scientific discipline.

jbn@GLACIER.STANFORD.EDU.UUCP (07/10/87)

In article <8707062225.AA18518@brillig.umd.edu> hendler@BRILLIG.UMD.EDU
(Jim Hendler) writes:

>When I publish work on planning and
>claim ``my system makes better choices than <name of favorite
>planning program>'s'' I cannot verify this other than by showing
>some examples that my system handles but that <other>'s can't.  But of
>course, there is no way of establishing that <other> couldn't do
>examples mine can't, and so on.  Instead we end up forming camps of
>belief (the standard proof methodology in AI) and arguing -- sometimes
>for the better, sometimes for the worse.

     Of course there's a way of "establishing that <other> couldn't do
examples mine can't, and so on."  You have somebody try the same problems on
both systems.  That's why you need to bring the work up to the point where
others can try your software and evaluate your work.  Others must repeat
your experiments and confirm your results.  That's how science is done.
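
     To make that concrete, here is a minimal sketch of such a
cross-evaluation harness, in Python.  The two planners and the problem
set are toy stand-ins I made up for illustration -- no real system is
represented -- but the shape of the experiment is the point: same
problems, both systems, results anyone can rerun.

# Minimal cross-evaluation harness: run the SAME problem set through
# two planning systems and tabulate which problems each one solves.
# planner_a, planner_b, and PROBLEMS are hypothetical stand-ins for
# two real planners and a shared benchmark suite.

def planner_a(problem: str) -> bool:
    """Toy stand-in: 'solves' any problem of ten words or fewer."""
    return len(problem.split()) <= 10

def planner_b(problem: str) -> bool:
    """Toy stand-in: 'solves' any problem that mentions blocks."""
    return "block" in problem

PROBLEMS = [
    "stack the red block on the blue block",
    "route the robot through three rooms to the charger",
    "swap two blocks using one free table position",
]

def evaluate(planners, problems):
    """Run every planner on every problem; return {name: [solved?, ...]}."""
    results = {name: [solve(p) for p in problems]
               for name, solve in planners}
    # One row per problem, so readers can see where each system fails.
    for i, problem in enumerate(problems):
        row = "  ".join(
            f"{name}: {'solved' if results[name][i] else 'failed'}"
            for name, _ in planners)
        print(f"{problem:55s} {row}")
    return results

if __name__ == "__main__":
    evaluate([("A", planner_a), ("B", planner_b)], PROBLEMS)

Run as-is, it prints one row per problem, so a third party can see at
a glance which problems each system handles -- and can extend PROBLEMS
with cases of their own to probe both systems symmetrically.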

     I work on planning myself.  But I'm not publishing yet.  My planning
system is connected to a robot and the plans generated are carried out in the
physical world.  This keeps me honest.  I have simple demos running now;
the first videotaping session was last month, and I expect to have more
interesting demos later this year.  Then I'll publish.  I'll also distribute
the code and the video.

     So shut up until you can demo.

						John Nagle