[comp.ai.digest] Gilding the Lemon

Laws@KL.SRI.COM.UUCP (10/29/87)

Tom Dietterich suggests that AI students should consider doing
critical reviews and rational reconstructions of previous AI
systems.  [There, isn't a paraphrase better than a lengthy
quotation?]  I wouldn't discourage such activities for those
who relish them, but I disagree that this is the best way for
AI to proceed AT THE PRESENT TIME.

Rigorous critical analysis is necessary in a mature field where
deep understanding is needed to avoid the false paths explored
by previous researchers.  I don't claim that shallow understanding
is preferable in AI, but I do claim that it is adequate.

AI should not be compared to current Biology or Psychology, but
to the heyday of mechanical invention epitomized by Edison.  We
do need the cognitive scientists and logicians, but progress in
AI is driven by the hackers and the graduate students who "don't
know any better" than to attempt the unreasonable.

Progress also comes from applications -- very seldom from theory.
The "neats" have been worrying for years (centuries?) about temporal
logics, but there has been more payoff from GPSS and SIMSCRIPT (and
SPICE and other simulation systems) than from all the debates over
consistent point and interval representations.  The applied systems
are ultimately limited by their ontologies, but they are useful up to
a point.  A distant point.
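
To make the contrast concrete: the core of a discrete-event simulator
is remarkably small.  The sketch below (in C, and purely illustrative --
neither GPSS nor SIMSCRIPT is actually built this way, and the failing
machine is an invented toy model) keeps a time-ordered agenda of
events.  Time is just a number attached to each event; no consistent
point-versus-interval semantics has to be settled before useful
answers come out.

/* Minimal discrete-event core -- an illustrative sketch only. */
#include <stdio.h>
#include <stdlib.h>

struct event {
    double time;                  /* when the event fires             */
    void (*action)(void);         /* what happens then                */
    struct event *next;           /* list kept sorted by time         */
};

static struct event *agenda = NULL;   /* pending events, time-ordered */
static double now = 0.0;              /* the simulation clock         */

/* Insert a callback to fire `delay` time units from now. */
static void schedule(double delay, void (*action)(void))
{
    struct event *e = malloc(sizeof *e), **p;
    e->time = now + delay;
    e->action = action;
    for (p = &agenda; *p && (*p)->time <= e->time; p = &(*p)->next)
        ;                         /* find the slot in time order      */
    e->next = *p;
    *p = e;
}

/* Pop and run events in time order until the clock passes `until`. */
static void run(double until)
{
    while (agenda && agenda->time <= until) {
        struct event *e = agenda;
        agenda = e->next;
        now = e->time;
        e->action();
        free(e);
    }
}

/* Toy model: a machine that alternately fails and is repaired. */
static void breakdown(void);

static void repair(void)
{
    printf("t=%5.1f: machine repaired\n", now);
    schedule(10.0, breakdown);    /* runs 10 units before next failure */
}

static void breakdown(void)
{
    printf("t=%5.1f: machine fails\n", now);
    schedule(2.0, repair);        /* repair takes 2 time units */
}

int main(void)
{
    schedule(10.0, breakdown);
    run(40.0);
    return 0;
}

Everything the real systems add -- queues, resources, random variates,
statistics gathering -- sits on top of a loop like this one, which is
exactly why they could be useful long before the ontological debates
were resolved.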

Most Ph.D. projects have the same flavor.  A student studies the
latest AI proceedings to get a nifty idea, tries to solve all the
world's problems from his new viewpoint, and ultimately runs into
limitations.  He publishes the interesting behaviors he was able
to generate and then goes on the lecture circuit looking for his
next employment.  The published thesis illuminates a new corner of
mankind's search space, provided that the thesis advisor properly
steered the student away from previously explored territory.

An advisor who advocates duplicating prior work is cutting his
students' chances of fame and fortune from the discovery of the
one true path.  It is always true that the published works can
be improved upon, but the original developer has already gotten
80% of the benefit with 20% of the work.  Why should the student
butt his head against the same problems that stopped the original
work (be they theoretical or practical problems) when he could
attach his name to an entirely new approach?  

I am not suggesting that "artificial intelligence" will ever be
achieved through one graduate student project or by any amount
of hacking.  We do need scientific rigor.  I am suggesting that we
must build hand-crank phonographs before inventing information
theory and we must study the properties of atoms before debating
quarks and strings.  Only when we have exploited or reached an impasse
on all of the promising approaches will there be a high probability
that critical review of already explored research will advance the
field faster than trying something new.


[Disclaimer:  The views expressed herein do not apply to my own
field of computer vision, where I'm highly suspicious of any youngster
trying to solve all our problems by ignoring the accumulated knowledge
of the last twenty years.  My own tendency is toward critical review
and selective integration of existing techniques.  But then, I'm not
looking for a hot new Ph.D. topic.]

					-- Ken Laws
-------

fishwick@fish.cis.ufl.EDU (Paul Fishwick) (10/30/87)

...From Ken Laws...
> Progress also comes from applications -- very seldom from theory.
> The "neats" have been worrying for years (centuries?) about temporal
> logics, but there has been more payoff from GPSS and SIMSCRIPT (and
> SPICE and other simulation systems) than from all the debates over
> consistent point and interval representations.  The applied systems
> are ultimately limited by their ontologies, but they are useful up to
> a point.  A distant point.

I'd like to make a couple of points here: both theory and practice are
essential to progress; however, too much of one without the other
creates an imbalance. As for the allusion to temporal logics and
interval representations, I think that Ken has made a valuable point.
Too often an AI researcher will write on a subject without referencing
non-AI literature which has a direct bearing on it. A case in point
is the reference to temporal representations: if one really wants to
know what researchers have done with concepts such as *time*,
*process*, and *event*, then one should seriously review work in
system modeling & control and in simulation theory and practice. In doing
my own research I am actively involved in both systems/simulation
methodology and AI methods so I found Ken's reference to GPSS and SPICE
most gratifying.

What I am suggesting is that AI researchers should directly reference
(and build upon) related work that has "non-philosophical" origins. Note
that I am not against philosophical inquiry in principle -- where would 
any of us be without it? The other direction is also important - namely,
that researchers in more established areas such as systems theory and
simulation should look at the AI work to see if "encoding a mental model"
might improve performance or model comprehensibility.

Paul Fishwick
University of Florida
INTERNET: fishwick@fish.cis.ufl.edu


-------

tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich) (11/02/87)

Ken Laws says
   ...progress in
   AI is driven by the hackers and the graduate students who "don't
   know any better" than to attempt the unreasonable.

I disagree strongly.  If you look at who is winning the Best Paper awards
at conferences, it is not grad students attempting the unreasonable.
It is seasoned researchers who are making the solid contributions.

I'm not advocating that everyone do rational reconstructions.  It
seems to me that AI research on a particular problem evolves through
several stages: (a) problem definition, (b) development of methods,
(c) careful definition and comparative study of the methods, (d)
identification of relationships among methods (e.g., tradeoffs, or
even understanding the entire space of methods relevant to a problem).

Different research methods are appropriate at different stages.
Problem definition (a) and initial method development (b) can be
accomplished by pursuing particular application problems, constructing
exploratory systems, etc.  Rational reconstructions and empirical
comparisons are appropriate for (c).  Mathematical analysis is
generally the best for (d).  In my opinion, the graduate students of
the past two decades have already done a great deal of (a) and (b), so
that we have lots of problems and methods out there that need further
study and comparison.  However, I'm sure there are other problems and
methods waiting to be discovered, so there is still a lot of room for
exploratory studies.  

--Tom Dietterich

tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich) (11/02/87)

Just a couple more points on this subject.

Ken Laws also says
	Progress also comes from applications -- very seldom from theory.

My description of research stages shows that progress comes from
different sources at different stages.  Applications are primarily
useful for identifying problems and understanding the important
issues.

It is particularly revealing that Ken is "highly suspicious
of any youngster trying to solve all our problems [in computer vision]
by ignoring the accumulated knowledge of the last twenty years."
Evidently, he feels that there is no accumulated knowledge in AI.
If that is true, it is perhaps because researchers have not studied
the exploratory forays of the past to isolate and consolidate the
knowledge gained.

--Tom Dietterich

mob@MEDIA-LAB.MEDIA.MIT.EDU (Mario O Bourgoin) (11/03/87)

In article <12346288066.15.LAWS@KL.SRI.Com> Ken Laws wonders why a
student should cover the same ground as another's thesis and
face the problems that stopped the original work.  His objection to
re-implementations is that they don't advance the field; they
consolidate it.  He is quick to add that he does not object to
consolidation but that he feels that AI must cover more of its
intellectual territory before it can be done effectively.
	I know of many good examples of significant progress achieved
in an area of AI through someone's efforts to re-implement and extend
the efforts of other researchers.  Tom Dietterich mentioned one when
he talked about David Chapman's work on conjunctive planning.  Work on
dependency-directed backtracking for search is another area.  AM and
its relatives are good examples in the field of automated discovery.
Research in Prolog certainly deserves mention.
	I believe that AI is more than just ready for consolidation: I
think it has been happening for a while, just not extensively or obviously.  I
love exploration and understand its place in development, but it isn't
the blind stab in the dark that one might gather from Ken's article.
I think he agrees, as he says:

	A student studies the latest AI proceedings to get a
	nifty idea, tries to solve all the world's problems
	from his new viewpoint, and ultimately runs into
	limitations.

	The irresponsible researcher is little better than a random
generator who sometimes remembers what he has done.  The repetitive
bureaucrat is less than a cow who rechews another's cud.  The AI
researcher learns both by exploring to extend the limits of his
experience and by consolidating to restructure what he already knows to
reflect what he has learned.
	In other fields, Master's students emphasize consolidation and
Ph.D. students emphasize exploration (creativity).  But at MIT, the AI
program is an interdisciplinary effort which offers only a doctorate,
and I don't know of an AI Master's elsewhere.  This has left the job of
consolidation to accomplished researchers who are as interested in
exploration as their students.  Maybe there would be a use for a more
conservative approach.

--Mario O. Bourgoin

To Ken: Yes, a paraphrase is better than a quote, since quoting
communicates that you are interested in what the other person said,
but not in what you understood of it.

smoliar@VAXA.ISI.EDU (Stephen Smoliar) (11/03/87)

In article <12346288066.15.LAWS@KL.SRI.Com> Laws@KL.SRI.COM (Ken Laws) writes:
>
>Progress also comes from applications -- very seldom from theory.

A very good point, indeed:  Bill Swartout and I were recently discussing
the issue of the respective contributions of engineering and science.
There is a "classical" view that science is responsible for those
fundamental principles without which engineering could "do its thing."
However, whence come those principles?  If we look at history, we see
that, in most fields, engineers are "doing their thing" long before
science has established those principles.  Of course things don't always
go as smoothly as one would like.  This pre-scientific stage of engineering
often involves sometimes-it-works-sometimes-it-doesn't experiences;  but
the engineering practices are still useful.  Often a major contribution
of the discovery of the underlying scientific principles is a better
understanding of WHEN "it doesn't work" and WHY that is so.  Then
engineering takes over again to determine what is to be done about
those situations in which things don't work.  At the risk of being
called on too broad a generality, I would like to posit that science
is concerned with the explanation of observed phenomena, while engineering
is concerned with achieving phenomena with certain desired properties.
From this point of view, engineering provides the very substance from
which scientific thought feeds.

I fear that what is lacking in the AI community is a respect for the
distinction between these two approaches.  A student is likely to get
a taste of both points of view in his education, but that does not
necessarily mean that he will develop an appreciation for the merits
of each or the ways in which they relate to each other.  As a consequence,
he may very quickly become channeled along a narrow path
involving the synthesis of some new artifact.  If he has any form
of success, then he assumes that all his thesis requires is that he
write up his results.

I hope there is some agreement that theses which arise from this process
are often "underwhelming" (to say the least).  There are usually rather
hefty tomes which devote significant page space to the twists and turns
in the path that leads to the student's achievement.  There is also usually
a rather heavy chapter which surveys the literature, so that the student
can demonstrate the front along which his work has advanced.  However,
such retrospective views tend to concentrate more on the artifacts of
the past than on the principles behind those artifacts.

Is it too much to ask that doctoral research in AI combine the elements
of both engineering and science?  I have nothing against that intensely
focused activity which leads up to a new artifact.  I just worry that
students tend to think the work is done once the artifact is achieved.
However, this is the completion of an engineering phase.  Frustrating
as it may sound, I do not think the doctoral student is done yet.  He
should now embark upon some fundamental portion of a scientific phase.
Now that he has something that works, he should investigate WHY it
works;  and THIS is where the literature search should have its true
value.  Given a set of hypothesized principles regarding the behavior
of his own artifact, how applicable are those principles to those
artifacts which have gone before?  Once such an investigation has been
pursued, the student can write a thesis which provides a balanced diet
of both engineering and science.

gary%roland@SDCSVAX.UCSD.EDU (Gary Cottrell) (11/03/87)

Note that the article Tom was referring to (David Chapman's "Planning
for Conjunctive Goals", AIJ 32 No. 3) is based on a MASTER's Thesis:
Even if Ken objects to PhD theses being rational reconstructions, he may
be less inclined to object to Master's theses in this vein. Of course,
this is probably equivalent to a PhD thesis at n-k other places, where
k is some small integer.

gary cottrell
cse dept
ucsd

gilbert@hci.hw.ac.UK (Gilbert Cockton) (11/05/87)

In article <12346288066.15.LAWS@KL.SRI.Com> Laws@KL.SRI.COM (Ken Laws) writes:
>......, but there has been more payoff from GPSS and SIMSCRIPT (and
>SPICE and other simulation systems)

e.g.?

>Most Ph.D. projects have the same flavor.  A student ...
>... publishes the interesting behaviors he was able to generate

e.g.?

> ... we must build hand-crank phonographs before inventing information
>theory and we must study the properties of atoms before debating
>quarks and strings.

Inadmissible until it can be established that such relationships exist
in the study of intelligence - there may be only information theory
and quarks, in which case you have to head right for them now.
Anything else is liable to be a social construct of limited generality.
Most work today in fact suggests that EVERYTHING is going to be a social
construct, even the quarks. Analogies with the physical world do not
necessarily hold for the mental world, any more than does animism for the
physical world.

>An advisor who advocates duplicating prior work is cutting his
>students' chances of fame and fortune from the discovery of the
>one true path.  ....  Why should the student
>work (be they theoretical or practical problems) when he could
>attach his name to an entirely new approach?  

The aim of PhD studies is to advance knowledge, not individuals.
This amounts to gross self-indulgence where I come from. I recognise
that most people in AI come from somewhere else though :-)

Perhaps there are no new approaches; perhaps the set of all imaginable
metaphysics, epistemologies, and ontologies is closed. In the History of
Ideas, one rarely sees anything with no similar antecedents. More
problematic for AI, the real shifts of thinkers like Machiavelli, Bacon,
Hume, Marx and Freud did not involve PhD studies centred on computer
programming. I really do think that the *ABSENCE* of a computer is more
likely to produce new approaches, as the computational paradigm
severely limits what you can do, just as the experimental paradigm of
psychology puts many areas of study beyond the pale. 
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert