[net.ai] Clarifying my "AI Challenge"

DIETTERICH@SUMEX-AIM.ARPA (11/26/83)

From:  Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>

Although I've written three messages on this topic already, I guess
I've never really addressed Ralph Johnson's main question:

        My question, though, is whether AI is really going to change
        the world any more than the rest of computer science is
        already doing.  Are the great promises of AI going to be
        fulfilled?

My answer: I don't know.  I view "the great promises" as goals, not
promises.  If you are a physicalist and believe that human beings are
merely complex machines, then AI should in principle succeed.
However, I don't know if present AI approaches will turn out to be
successful.  Who knows?  Maybe the human brain is too complex to ever
be understood by the human brain.  That would be interesting to
demonstrate!

--Tom

rbanerji@sjuvax.UUCP (11/28/83)

I am ..allegra!astrovax!sjuvax!rbanerji.

	Although I have joined this fray before, this round so far has been
much more reasonable, so let me jump in. Breaking protocol for once: Hi,
Tom!
	I am reacting to Johnson, Helly and Dietterich. Who is KIL? I really
liked his technical evaluation of knowledge-based programming. It was
basically similar to what Tom said in defense of knowledge-based
programming, but KIL put it much more clearly.
	On one aspect, though, I have to agree with Johnson about expert
systems and hackery. About the only place an author makes any attempt to
explain the structure of the knowledge base(s) is in the handbook. But I
bet that as the structures are changed by later authors, for various
justified and unjustified reasons, the changes will not be clearly
explained except in vague terms.
	I do not accept Dietterich's explanation that AI papers are hard
to read because of terminology, or because what they are trying to do
is so hard. On the latter point, we do not expect that what they are
DOING be easy, just that HOW they are doing it be clearly explained,
and that the definition of clarity follow the lines set out in classical
scientific disciplines. I hope that the days are gone when AI was
considered some sort of superscience answerable to none. On the matter
of terminology, papers on (for example) algebraic topology have more
terminology than AI: terminology developed over a longer period of time.
But if one wants to and has the time, he can go back, back, back along
lines of reference and to textbooks and be assured he will find an
answer. In AI, about the only hope is to talk to the author and unravel
his answers carefully and patiently, and hope that somewhere along the
line one does not get "well, there is a hack there.. it is kind of long
and hard to explain: let me show you the overall effect".
	In other sciences, hard things are explained on the basis of
previously explained things. These explanation trees are much deeper
than in AI; but they are so strong and precise that climbing them may
be hard, but never hopeless.
	I agree with Helly that this lack is due to the fact that no
attempt has been made in AI to have workers start with a common basis in
science, or even in scientific methodology. It has suffered in the past
because of this. When the existing methods of data representation and
processing in theorem proving were found inefficient, the AI culture
developed this self-image that its needs were ahead of logic,
notwithstanding the fact that the techniques being used were representable
in logic, and that the reason for their seeming success was that they were
designed to achieve efficiency at the cost (often high) of flexibility.
Since then, those words have been "eaten": but at considerable cost. The
reason may well be that the critics of logic did not know enough logic to
see this. In some cases, their professors did, but never cared to explain
what the real difficulty in logic was. Or maybe they believed their own
propaganda.
	This lack of uniformity of background came out clearly when Tom said
that because of AI work people now clearly understood the difference between
a subset of a set and an element of a set. This difference has been well
known at least since early this century, if not earlier. If workers in AI
did not know it before, it is because of their reluctance to learn the
meaning of a term before using it. This has also often come from their
belief that precise definitions would rob their terms of their richness
(not realising that once they have interpreted their terms by a program,
they have a precise definition, only written in a much less comprehensible
way: set theorists never had any difficulty understanding the difference
between subsets and elements). If they were trained, they would know the
techniques that are used in science for defining terms.
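	For anyone who wants that distinction spelled out, the standard
textbook example fits in one line of set notation (nothing here is
specific to AI; it is ordinary set theory):

	% 1 is an element of {1,2}; the singleton {1} is a subset of
	% {1,2}, but it is not one of the elements of {1,2}.
	$1 \in \{1,2\}$, $\{1\} \subseteq \{1,2\}$, $\{1\} \notin \{1,2\}$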
	I disagree with Helly that Computer Science in general is
unscientific. There has always been a precise mathematical basis for
theorem proving (AI, actually) and for computation and complexity theory.
It is true, however, that the traditional techniques of experimental
research have not been used in AI at all: people have tried hard to use
them in software, but seem to be having difficulties.
	Would Helly disagree with me if I said that Newell and Simon's work
on computer modelling of psychological processes has been carried out
with at least the amount of scientific discipline that psychologists use?
I have always seen that work as one of the success stories in AI. And
at least some psychologists seem to agree.

	I agree with Tom that AI will have to keep going even if someone
proves that P=NP. The reason is that many AI problems are amenable to
N^2 methods already: except that N is too big. In this connection I have
a question, in case someone can tell me. I think Rabin has a theorem
that given any system of logic and any computable function, there is
a true statement which takes longer to prove than that function predicts.
What does this say about the relation between P and NP, if anything?
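	To put a rough number on "N is too big": take an N^2 method on a
search space of, say, N = 10^9 states (the space size and machine speed
below are round-number assumptions for illustration, not figures from
any particular system):

	% N^2 steps at an assumed 10^9 operations per second:
	$N^2 = (10^9)^2 = 10^{18}$ steps, and
	$10^{18} / 10^9 = 10^9$ seconds $\approx 32$ years.

So even a quadratic algorithm is no consolation when N itself is
astronomical; cutting N down is the real problem.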
	Too long already!