[comp.ai.digest] Arguments against AI are arguments against human formalisms

NICK@AI.AI.MIT.EDU (Nick Papadakis) (05/27/88)

Date: Mon, 9 May 88 17:31 EDT
From: Jeff Dalton <mcvax!ukc!its63b!aiva!jeff@uunet.uu.net>
Organization: Dept. of AI, Univ. of Edinburgh, UK
Subject: Re: Arguments against AI are arguments against human formalisms
References: <368693.880430.MINSKY@AI.AI.MIT.EDU>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu

In article <1103@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
> BTW, Robots aren't AI. Robots are robots.

I'm reminded of the Lighthill report that caused a significant loss of
AI funding in the UK about 10 years ago.  The technique is to divide
up AI, attack some parts as useless or worse, and then say the others
are "not AI".  The claim then is that "AI", properly used, turns out
to encompass only things best not done at all.  All of the so-called
AI that's worth supporting (and funding) belongs to other disciplines
and so should be done there.

Another example of this approach can be found earlier in the message:

> Note how scholars like John Anderson restrict themselves to proper
> psychological data. I regard Anderson as a psychologist, not as an AI
> worker.

A problem with this attack is that it is not at all clear that
AI should be defined so narrowly as to exclude, for example, *all*
robotics.  That robots are robots does not preclude some of them
being programmed using AI techniques.  Nor would an artificial
intelligence embodied in a robot automatically fail to be AI.

The attack seeks to set the terms of debate so that the defenders
cannot win.  Any respectable result cited will turn out to be "not
AI".  Any argument that AI is possible will be sat on by something
like the following (from <1069@crete.cs.glasgow.ac.uk>):

   Before the 5th Generation scare, AI in the UK had been sat on for
   dodging too many methodological issues.  Whilst, like the AI pioneers,
   they "could see no reasons WHY NOT [add list of major controversial
   positions]", Lighthill could see no reasons WHY in their work.

In short, the burden of proof would be such that it could not be met.
The researcher who wanted to pursue AI would have to show the research
would succeed before undertaking it.

Fortunately, there is no good reason to accept the narrow definition
of AI, and anyone seeking to reject the normal use of the term should
accept the burden of proof.  AI is not confined to attempts at human-
level intelligence, passing the Turing test, or other similar things
now far beyond its reach.

Moreover, the actual argument against human-level AI, once we strip
away all the misdirection, makes claims that are at least questionable.

> The argument against AI depends on being able to use written language
> (physical symbol hypothesis) to represent the whole human and physical
> universe.  AI and any degree of literate-ignorance are incompatible.
> Humans, by contrast, may be ignorant in a literate sense, but
> knowledgeable in their activities.  AI fails as this unformalised
> knowledge is violated in formalisation, just as the Mona Lisa is
> indescribable.

The claim that AI requires zero "literate-ignorance", for example, is
far from proven, as is the implication that humans can call on
abilities of a kind completely inaccessible to machines.  For some
reasons to suppose that humans and machines are not on opposite sides
of some uncrossable line, see (again) Dennett's Elbow Room.