[comp.ai] Bad AI: A Clarification

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/30/88)

In article <1242@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) blurts:
>Mindless application of the computational paradigm to
>     a) problems which have not yielded to stronger methods
>     b) problems which no other paradigm has yet provided any understanding of.
This is poorly expressed and misleading.  Between "problems" and "which" insert
"concerning human existence".  As it stands, it looks like I want to withdraw
encouragement from ALL computer research.  Apologies to anyone who's taken this
seriously enough to follow up, or who was just annoyed (but you shouldn't be anyway).

Bad AI is research into human behaviour and reasoning, usually conducted by
mathematicians or computer scientists who are as well-qualified for the study
of humanity as is an archaeologist with a luminous watch for the study of
radiation (of course I understand radiation, I've got a luminous watch, 
haven't I? ;-))

AI research seems to fall into two groups:
	a) machine intelligence;
	b) simulation of human behaviour.
No problem with a), apart from the use of the now vacuous term "intelligence",
which psychometricians have failed miserably to pin down.  No problem with b)
if the researcher has a command of the study of humanity, hence the
respectability of computational modelling in psychology.  Also, mathematicians
and computer scientists have no handicaps, and many advantages, when the human
behaviour in b) is equation solving, symbolic mathematics, theorem proving and
configuring VAXes.  They are domain experts here.  Problems only arise when they
confuse their excellent and most ingenious programs with human reasoning.

   1) because maths and logic have little to do with normal everyday reasoning
      (i.e. most reasoning is not consciously mathematical, symbolic,
      denotational, or driven by inference rules).  Maths procedures are not
      equivalent to any human reasoning.  There is an overlap, but it's small.
   
   2) because they have no training in the difficulties involved in studying
      human behaviour, unlike professional psychologists, sociologists,
      political scientists and economists.  At best, they are informed amateurs,
      and it is sad that their research is funded when research in established
      disciplines is not.  Explaining this political phenomenon requires a simple
      appeal to the hype of "pure" AI and the gullibility of its sponsors, as
      well as to the honesty of established disciplines, which know that coming
      to understand ourselves is difficult and fraught with methodological
      problems.  Hence the appeal of the boy-scout enthusiasm of the LISP hacker.

So, the reason for not encouraging AI is twofold.  Firstly, any research which
does not address human reasoning directly is either pure computer science, or 
a domain application of computing. There is no need for a separate body of
research called AI (or cybernetics for that matter).  There are just
computational techniques.  Full stop.  It would be nice if they followed
good software engineering practices and structured development methods as
well.  Secondly, where research does address human reasoning directly, it
should be under the watchful eye of competent disciplines.  Neither mathematics
nor computer science is such a discipline.  Supporting "pure" AI research
by logic or LISP hackers makes as much sense as putting a group of historians,
anthropologists and linguists in charge of a fusion experiment.  The word is
"skill".  Research requires skill.  Research into humanity requires special
skills.  Computer scientists and mathematicians are not taught these skills.

When hardware was expensive, it made sense to concentrate research using
computational approaches to our behaviour.  The result was AI journals,
AI conferences, and a cosy AI community insulated from the intellectual
demands of the real human disciplines.  I hope, with MacLisp and all
the other cheap AI environments, that control of the computational
paradigm is lost by the technical experts and passes to those who
understand what it is to study ourselves.  AI will disappear, but the
work won't.  Indeed it will get better, and having to submit to an AI
conference, rather than to a psychology or related conference (for research
into ourselves) or a computing or application-area conference (for
machine 'intelligence'), will be a true reflection of the quality of the work.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/04/88)

In article <1299@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>  Research requires skill.  Research into humanity requires special
>skills.  Computer scientists and mathematicians are not taught these skills.
>
There is no questioning the premise of the first sentence.  I am even willing
to grant, further, that artificial intelligence (or at least aspects which
are of particular interest to me) may be regarded as "research into
humanity."  However, after that, Cockton's argument begins to fall apart.
Just what are those "special skills" which such research "requires"?  Does
anyone have them?  Does Cockton regard familiarity with the humanistic
literature as such a skill?  I suspect there could be some debate as to
whether or not an extensive literary background is a skill, particularly when
the main virtue of such knowledge is that it provides one with a history
of how one's predecessors have failed on similar tasks.  There is no doubt
that it is valuable to know that certain paths lead to dead ends;  but when
there are so many forks in the road, it is not always easy to determine WHICH
fork was the one which ultimately embodied the incorrect decision.

Perhaps I am misrepresenting Cockton by throwing too much weight on "being
well read."  In that case, he can set the record straight by doing a better
job of characterizing those skills which he feels computer scientists and
mathematicians lack.  Then he can tell us how many humanists have those
skills and have exercised them in the investigation of intelligence with
a discipline which he seems to think the AI community lacks.  Let him who
is without guilt cast the first stone, Mr. Cockton!  (While we're at it,
is your house made of glass, by any chance?)

One final note on bad AI.  I don't think there is anyone reading this
newsgroup who would doubt that there is bad AI.  However, in another
article, Cockton seems quite willing to admit (as most of us knew already)
that there is bad sociology, too.  One of the more perceptive writers on
social behavior, Theodore Sturgeon (who had the good sense to articulate
his views in the palatable form of science fiction), once observed that
90% of X is crud, for any value of X . . . that can be AI, sociology, or
classical music.  Bad AI is easy enough to find and even easier to pick
on.  Rather than biting the finger of the bad stuff, why not take the
time to look where the finger of the good stuff is really pointing?

jbn@glacier.STANFORD.EDU (John B. Nagle) (06/06/88)

      On this subject, one should read Drew McDermott's "Artificial Intelligence
meets Natural Stupidity" (ACM SIGART newsletter, #57, April 1976.)  His
comments are all too apt today.  

					John Nagle

jeff@aiva.ed.ac.uk (Jeff Dalton) (06/08/88)

In article <1299@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>AI research seems to fall into two groups:
>	a) machine intelligence;
>	b) simulation of human behaviour.
>No problem with a), apart from the use of the now vacuous term "intelligence",

But later you say:

>So, the reason for not encouraging AI is twofold.  Firstly, any research which
>does not address human reasoning directly is either pure computer science, or 
>a domain application of computing.

Vision?  Robotics?  Everything that uses computers can be called pure or
applied CS.  So what?

>There is no need for a separate body of
>research called AI (or cybernetics for that matter).  There are just
>computational techniques.  Full stop.

What happened to "machine intelligence"?  It *is* a separate (but
not totally separate) body of research.  What is the point of arguing
about which research areas deserve names of their own?

BTW, there's no *need* for many things we nonetheless think good.

>It would be nice if they followed good software engineering practices and
>structured development methods as well.

Are you trying to see how many insults can fit into one paragraph?

Are you really trying to oppose "bad AI" or are you opportunistically
using it to attack AI as a whole?  Why not criticise specific work you
think is flawed instead of making largely unsupported allegations in
an attempt to discredit the entire field?

kww@amethyst.ma.arizona.edu (K Watkins) (06/11/88)

In article <1336@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>
>I do not think there is a field of AI.  There is a strange combination
>of topic areas covered at IJCAI etc.  It's a historical accident, not
>an epistemic imperative.
>
Of what field(s) is such a statement false?  An inventive imagination can
regroup the topics of study and knowledge in a great many ways.  Indeed, it
might be very useful to do so more often.  (Then again, the cross-tabulating
chore of making sure we lose a minimum of understanding in the transition
would be enormous.)

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/11/88)

In article <1336@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>But when I read misanthropic views of Humanity in AI, I will reply.

Do you mean that all your wholesale railing against AI over the last
several weeks (and it HAS been pretty wholesale) is just a response
to "misanthropic views of Humanity?"  Perhaps we may have finally
penetrated to the root of the problem.  I wish to go on record as
observing that I have yet to read a paper on AI which has passed
through peer review and embodies any sense of misanthropy whatsoever,
and that includes all those conference proceedings which Mr. Cockton
wishes to take as his primary source of knowledge about the field.
There is certainly a lot of OBJECTIVITY, but I have never felt that
such objectivity could be confused with misanthropy.  As I said before,
stop biting the fingers long enough to look where they are pointing!

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/06/88)

In article <1337@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:

I don't have time to respond to all of your articles that respond
to mine, but will try to say something.  I suggested that you give
specific criticism of specific research, but you have declined to do
so.  That's unfortunate, because as it is most people are just going
to ignore you, having heard such unsupported attacks before.

>In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>>>It would be nice if they followed good software engineering practices and
>>>structured development methods as well.

>>Are you trying to see how many insults can fit into one paragraph?

>No.

OK, I'll accept that.  But if so, you failed to make your intention
clear.  And of course it *would* be nice if they, etc., but do you
know that "they" don't?  My experience is that appropriate software
engineering practices are followed in many cases.  That doesn't mean
they all use JSP (or equivalent), but then it's not always appropriate
to do so.

>No-one in UK HCI research, as far as I know, objects to the criticism
>that research methodologies are useless until they are integrated
>with existing system development approaches.

That no one objects is not a valid argument.  They might all be wrong.

>On software engineering too, HCI will have to deliver its
>goods according to established practices.  To achieve this, some HCI
>research must be done in Computer Science departments in collaboration
>with industry.  There is no other way of finishing off the research
>properly.

There is a difference between research and delivering goods that can
be used by industry.  It is not the case that all research must be
delivered in finished form to industry.  Of course, the needs of
industry, including its desire to follow established practices, are
important when research will be so delivered, but in other cases such
needs are not so significant.

We must also consider that the results of research are not always
embodied in software.

>You've either missed or forgotten a series of postings over the last
>two years about this problem in AI.

Or perhaps I don't agree with those postings, or perhaps I don't agree
with your view of the actual state of affairs.

>Project managers want to manage IKBS projects like existing projects.

Of course they do: that's what they know.  You've yet to give any
evidence that they're right and have nothing to learn.

>You must also not be talking to the same UK software houses as I am, as
>(parts of) UK industry feel that big IKBS projects are a recipe for
>burnt fingers, unless they can be managed like any other software project.

Big IKBS projects are risky regardless of how they're managed.  Part
of the problem is that AI research hasn't advanced far enough: it's
not just a question of applying some software engineering; and so
the difficulties with big IKBS projects are not necessarily evidence
that they must be managed like any other software project.

But this is all beside the point -- industrial IKBS projects and
AI research are not the same thing.

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/06/88)

In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>Are you really trying to oppose "bad AI" or are you opportunistically
>using it to attack AI as a whole?  Why not criticise specific work you
>think is flawed instead of making largely unsupported allegations in
>an attempt to discredit the entire field?

In article <1336@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>No, I've made it clear that I only object to the comfortable,
>protected privilege which AI gives to computational models of
>Humanity.

If that is so, why don't you confine your remarks to that instead of
attacking AI's existence as a discipline?

>Anything which could be called basic research is the province of other
>disciplines, who make more progress with less funding per investigation (no
>expensive workstations etc.).

Have you considered the costs of equipment in, say, Medicine or
Physics?

>I do not think there is a field of AI.  There is a strange combination
>of topic areas covered at IJCAI etc.  It's a historical accident, not
>an epistemic imperative.

So are the boundaries of the UK.  Does that mean it should not exist
as a country?

>My concern is with the study of Humanity and the images of Humanity
>created by AI in order to exist.  Picking on specific areas of work is
>irrelevant.

The question will then remain as to whether there is any work for
which your criticism is valid.  

>But when I read misanthropic views of Humanity in AI, I will reply.
>What's the problem?

Perhaps you will have a better idea of the problem if you consider
that "responding to misanthropic views of Humanity in AI" is not an
accurate description of what you do.

-- Jeff