[comp.ai.digest] Exciting work in AI

reiter@HARVARD.HARVARD.EDU (Ehud Reiter) (04/07/88)

I was recently asked (by a psychology graduate student) if there was
any work being done in AI which was widely thought to be exciting and
pointing the way to further progress.  Specifically, I was asked for work
which:
	1) Was highly thought of by at least 50% of the researchers in
the field.
	2) Was a positive contribution, not an analysis showing problems
in previous work.
	3) Was in AI as narrowly defined (i.e. not in robotics or vision)

I must admit that I was (somewhat embarrassingly) unable to think of
any such work.  All the things I could think of which have people excited
(ranging from non-monotonic logic to connectionism) seemed controversial
enough so that they could not be said to have the support of half of all
active AI researchers.

Any suggestions?  Please remember that I need things which are widely approved
of, not things which excite you personally.
						Ehud Reiter
						reiter@harvard.harvard.edu
						reiter@harvard	(BITNET,UUCP)

wray@nswitgould.OZ.AU (Wray Buntine) (04/25/88)

Ehud Reiter (V6#69) asked for the following (as summarised by Spencer Star):
>  Exciting work in AI.  The three criteria are:
>     1. Highly thought of by at least 50% in the field.
>     2. Positive contribution
>     3. Real AI

Spencer Star made a number of suggestions of "exciting" work.
I disagree with some of them.  I mention only one below.

>  Another area involves classification trees of the sort generated by
>  Quinlan's ID3 program.
Ross's original ID3 work (and the work usually reported in Machine Learning
overviews), along with much subsequent work by him and others (e.g. pruning),
actually fails the "real AI" test.  The same approach was independently
developed by a group of applied statisticians in the 70s and is well known:
	Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984)
	"Classification and Regression Trees", Wadsworth
Ross's more recent work does significantly improve on Breiman et al.'s work.
To my knowledge, however, it is not yet widely known.  Try looking in IJCAI-87.
His latest program is actually called C4 (heard of it?), and has been for
years.  I think it is closer to real AI (e.g. a concern for comprehensibility),
though it still has an applied statistics flavour.  Perhaps it fails the
"highly thought of by 50%" test.  Another year, maybe.

--------------
Wray Buntine
wray@nswitgould.oz
University of Technology, Sydney

stuart%warhol@ADS.COM (Stuart Crawford) (05/03/88)

Wray Buntine (wray@nswitgould.oz)  writes:
> Ross's original ID3 work (and the work usually reported in Machine Learning
> overviews), along with much subsequent work by him and others (e.g. pruning),
> actually fails the "real AI" test.  The same approach was independently
> developed by a group of applied statisticians in the 70s and is well known:
>       Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984)
>       "Classification and Regression Trees", Wadsworth
> Ross's more recent work does significantly improve on Breiman et al.'s work.
                          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

How?  If you mean his stuff on generating production rules from decision trees,
I think you're missing the point of CART.  It seems to me that simply
transforming decision trees into production rules is a rather uninteresting
exercise.  Quinlan tries to motivate the idea by suggesting that the generated
rules are an "improvement" over the induced tree because they are both easier
to interpret and more parsimonious.  I disagree that they are easier to
interpret, and they are more parsimonious only if your original induction
algorithm has not already pruned the tree.  Using production rule generation as
an alternative to tree pruning strikes me as the wrong approach.  I still feel
that CART is the induction procedure of choice because of the following:

1. generates parsimonious trees
2. handles noisy, incomplete data
3. has strong, well-understood asymptotic properties
4. allows user-defined priors and cost functions
5. delivers attribute-importance diagnostics
6. can induce rules incrementally
7. delivers low-bias, low-variance estimates of the misclassification rate

For references on 1-5, see Breiman et al. (1984), and for 6-7 see Crawford, S.,
"Extensions to the CART Algorithm", Proceedings of the Knowledge Acquisition
for Knowledge-Based Systems Workshop (1987).

I also find it somewhat curious that Buntine suggests Quinlan's most recent
work

> is closer to real AI (e.g. concern for comprehensibility),
> though it still has an applied statistics flavour.

I would suggest that the work has an applied statistics flavor because it is
attempting to solve an applied statistics problem.

--------------------------------------
Stuart Crawford
stuart@ads.com
Advanced Decision Systems
1500 Plymouth Street
Mountain View, CA 94043



AIList-REQUEST@AI.AI.MIT.EDU (AIList Moderator Nick Papadakis) (05/25/88)

Date: 17 May 88 13:19:24 GMT
From: babbage!reiter@husc6.harvard.edu  (Ehud Reiter)
Organization: Aiken Computation Lab Harvard, Cambridge, MA
Subject: Exciting work in AI

About a month ago, I posted a note asking if any "exciting" work existed
in AI which:
	1) Was highly thought of by at least 50% of AI researchers.
	2) Was a positive contribution, not an analysis showing problems
in previous work.
	3) Was in AI as narrowly defined (i.e. not in robotics or vision)

Well, I'm still looking.  I have received some suggestions, but almost
all of them have seemed problematic.  The most promising were Spencer
Star's suggestions for exciting work in machine learning (published in
a previous AIList, including Valiant's theoretical analyses, Quinlan's
decision trees, and explanation-based learning).  However, after
looking at some books and course syllabi in machine learning, I was
forced to conclude that Spencer's topics did not satisfy condition (1):
they had very little overlap with the topics in the books and syllabi
(which, incidentally, had very little overlap with each other).

So, I'm still looking for work which meets the above criteria, and hoping
to thereby convince my friend that there is some cohesion to AI.  If anyone
has suggestions, please send them to me!

					Ehud Reiter
					reiter@harvard	(ARPA,BITNET,UUCP)
					reiter@harvard.harvard.EDU  (new ARPA)