[comp.ai] STRONG AND WEAK AI

cam@aipna.ed.ac.uk (Chris Malcolm) (11/30/89)

It is often observed that the usual definitions of strong and weak AI,
while providing an excellent springboard for those with sarcastic
intentions, are rather difficult hats for AI researchers to wear. For
example, if asked "strong or weak?", AI researchers will sometimes answer
"neither", or "both", or, most often, explain what is silly about the
categories.

Can anyone offer a good set of definitions of the various stances in the
field to replace these rather worn scarecrows?
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

harnad@phoenix.Princeton.EDU (Stevan Harnad) (12/02/89)

Chris Malcolm asked for a definition:

Those who believe in Strong AI believe that thinking is computation
(i.e., symbol manipulation). Those who believe in Weak AI believe that
computation is a means of studying and testing theories of (among other
things) thinking, which need not be just computation (i.e., not just
symbol manipulation).

The typical error of believers in Strong AI is a misconstrual of
the Church-Turing Thesis: Whereas it may be true that every physical
process is "equivalent" to symbol manipulation, i.e., is simulable by
symbol manipulation, it is decidedly NOT true that every physical
process IS symbol manipulation. Flying, heating and transduction, for
example, are not. How does one fall into this error? By becoming lost
in the hermeneutic hall of mirrors created by the semantic
interpretations we cast onto symbol systems. We forget the difference
between what is merely INTERPRETABLE as X and what really IS X. We
confuse the medium with the message.
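
To make the simulation/identity distinction concrete, here is a minimal
sketch (Python, names invented purely for illustration, not drawn from the
references below): the loop is readily INTERPRETABLE as heating, since it
implements a crude discretized heat-diffusion update, but all the machine
actually does is rewrite numbers. Nothing gets warm.

    # Illustrative sketch only: a simulation of heating, not heating itself.
    def simulate_heating(temps, alpha=0.1, steps=100):
        """Explicit finite-difference update of a 1-D temperature profile."""
        temps = list(temps)
        for _ in range(steps):
            new = temps[:]
            for i in range(1, len(temps) - 1):
                # Symbol manipulation interpretable as heat flowing between cells.
                new[i] = temps[i] + alpha * (temps[i-1] - 2*temps[i] + temps[i+1])
            temps = new
        return temps

    # The output is a list of numbers we *read* as temperatures; the program
    # is "equivalent" to heating only under that interpretation.
    print(simulate_heating([0.0, 0.0, 100.0, 0.0, 0.0]))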

The chimpanzee language experiments (and, to a lesser degree, "Clever
Hans") fell into similar errors. Freudian interpretations of the
machinations of the unconscious and astrological interpretations of
what the heavens portend are more distant relatives...

References:

(1) Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
    and Theoretical Artificial Intelligence 1: 5 - 25.

(2) Harnad, S. (1990) The Symbol Grounding Problem. Physica D, in press.

(3) Harnad, S. (1990) Computational Hermeneutics. Social Epistemology,
    in press.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

arshad@lfcs.ed.ac.uk (Arshad Mahmood) (12/04/89)

In article <11870@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
>Chris Malcolm asked for a definition:
>
>Those who believe in Strong AI believe that thinking is computation
>(i.e., symbol manipulation). Those who believe in Weak AI believe that
>computation is a means of studying and testing theories of (among other
>things) thinking, which need not be just computation (i.e., not just
>symbol manipulation).

I suspect Chris already knew this! I thought his question was whether you
feel comfortable when asked which school you belong to, and if not, what
your response would be.

Chris was perhaps hinting at a hierarchy of possible definitions, where
each person can sit at the position at which they feel comfortable
(weak-AI, strong-AI, strong-AI without thermostats, ...).
There may well be such a hierarchy, but I have seen no evidence of it;
then again, I am a neo-Strong AIite (well, you have to be, among so many
disbelievers!!).

A. Mahmood
Laboratory for Foundations of Computer Science
Edinburgh University
Scotland

mike@cs.arizona.edu (Mike Coffin) (12/09/89)

From article <11870@phoenix.Princeton.EDU> (Stevan Harnad):
> The typical error of believers in Strong AI is a misconstrual of
> the Church-Turing Thesis: Whereas it may be true that every physical
> process is "equivalent" to symbol manipulation, i.e., is simulable by
> symbol manipulation, it is decidedly NOT true that every physical
> process IS symbol manipulation. Flying, heating and transduction, for
> example, are not.

Not unless we are living inside a simulation.  Since we have no
basis on which to dispute their physicality, we accept our perceptions
as ``reality.''  Just, I suppose, as an artificial intelligence living
in a (sub-)simulation on a Cray-9 would have no choice but to accept
the simulated flight to Istanbul to the AI conference as ``reality.''

-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

yamauchi@cs.rochester.edu (Brian Yamauchi) (12/10/89)

In article <16033@megaron.cs.arizona.edu> mike@cs.arizona.edu (Mike Coffin) writes:
>From article <11870@phoenix.Princeton.EDU> (Stevan Harnad):
>> The typical error of believers in Strong AI is a misconstrual of
>> the Church-Turing Thesis: Whereas it may be true that every physical
>> process is "equivalent" to symbol manipulation, i.e., is simulable by
>> symbol manipulation, it is decidedly NOT true that every physical
>> process IS symbol manipulation. Flying, heating and transduction, for
>> example, are not.
>
>Not unless we are living inside a simulation.  Since we have no
>basis on which to dispute their physicality, we accept our perceptions
>as ``reality.''  Just, I suppose, as an artificial intelligence living
>in a (sub-)simulation on a Cray-9 would have no choice but to accept
>the simulated flight to Istanbul to the AI conference as ``reality.''

Sure, but this misses the point.  The symbol manipulation associated
with flying in this simulated world would take place in the
*simulator* not in the AI program.  As far as the AI was concerned, it
would be taking actions in the real world -- actions which affect its
perceptions.
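
As a rough sketch of what I mean (Python, invented names, purely
illustrative): the world model and its update rules live in the simulator;
the agent program never touches them, it only trades percepts for actions,
exactly as it would against the real world.

    # Illustrative sketch only: the "flying" symbol manipulation is in the
    # simulator, not in the agent.
    class Simulator:
        """Owns the world model and does all the bookkeeping for 'flying'."""
        def __init__(self):
            self.altitude = 0.0

        def step(self, action):
            if action == "climb":
                self.altitude += 100.0          # the simulator's symbol manipulation
            return {"altitude": self.altitude}  # percept handed to the agent

    class Agent:
        """Sees only percepts; from its standpoint the flight is simply real."""
        def act(self, percept):
            return "climb" if percept["altitude"] < 10000.0 else "cruise"

    world, pilot = Simulator(), Agent()
    percept = world.step("cruise")
    for _ in range(5):
        percept = world.step(pilot.act(percept))
    print(percept)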

My complaint about most AI programs is not that the worlds are simulated,
but that the simulated worlds are often very unlike any type of
perceptual reality sensed by organic creatures.  It's a matter of
semantics to argue whether this is "intelligence", but I think it's
clear that if your entire world consisted of statements like block(a)
and on(a,b) without any sensory input, the type of "intelligence" you
would develop would be totally unlike any sort of human or animal
mentality.
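
To make that concrete (Python, invented names, purely illustrative): the
entire "perceptual reality" of a classical blocks-world program is a handful
of ground atoms and the operators that rewrite them, with nothing like a
sensory stream behind them.

    # Illustrative sketch only: the program's whole world is a few atoms of
    # the form block(a), on(a,b).
    world = {("block", "a"), ("block", "b"), ("on", "a", "b"), ("clear", "a")}

    def move_to_table(state, x, y):
        """Classical operator: take x off y and put it on the table."""
        assert ("on", x, y) in state and ("clear", x) in state
        return (state - {("on", x, y)}) | {("ontable", x), ("clear", y)}

    # Contrast with even the crudest simulated sensor -- say, an 8x8 grid of
    # raw intensity values -- which an agent would have to interpret for itself.
    sensor_frame = [[0.0] * 8 for _ in range(8)]

    print(move_to_table(world, "a", "b"))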

If you're interested in AI as advanced problem solving techniques,
then this is fine, but if you're interested in building fully
autonomous systems which can act in the real world, or if you're
interested in building systems which can model human intelligence,
then it's not fine.

It seems that one interesting approach to AI would be to use the
virtual reality systems which have recently been developed as an
environment for artificial creatures.  Then they would be living in a
simulated world, but one that was sophisticated enough to provide a
convincing illusion for *human* perceptions.

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________