[comp.ai] An Alternative to Strong and Weak AI

yamauchi@cs.rochester.edu (Brian Yamauchi) (12/04/89)

In article <11870@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
>Chris Malcolm asked for a definition:
>
>Those who believe in Strong AI believe that thinking is computation
>(i.e., symbol manipulation). Those who believe in Weak AI believe that
>computation is a means of studying and testing theories of (among other
>things) thinking, which need not be just computation (i.e., not just
>symbol manipulation).
>
>The typical error of believers in Strong AI is a misconstrual of
>the Church-Turing Thesis: Whereas it may be true that every physical
>process is "equivalent" to symbol manipulation, i.e., is simulable by
>symbol manipulation, it is decidedly NOT true that every physical
>process IS symbol manipulation. Flying, heating and transduction, for
>example, are not. How does one fall into this error? By becoming lost
>in the hermeneutic hall of mirrors created by the semantic
>interpretations we cast onto symbol systems. We forget the difference
>between what is merely INTERPRETABLE as X and what really IS X. We
>confuse the medium with the message.

I think this points out a need for a third class of AI research:
research directed toward building intelligent systems which takes
account of the need for an intelligent system to act in the real world
-- not just think about acting in Blocks World.  For example: the
work of Brooks and Moravec would fall into this category.

This type of research seems to be emerging under a number of different
names, in a number of different fields: behavior-based robotics,
mobile robotics, reactive systems, artificial life, artificial
creatures, cybernetics.

I think the term Artificial Creatures, coined by Rodney Brooks, is the
most descriptive.  Traditional AI deals with high-level cognitive
abilities; Artificial Life deals with abstract populations of
extremely simple organisms; Artificial Creatures deals with building
autonomous organisms of intermediate complexity, somewhere between
amoebas and logicians.
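The behaviour-based style Brooks advocates can be sketched in a few
lines -- this is a toy illustration with made-up sensor and action names,
not Brooks's actual subsumption implementation.  The idea is that simple
layers run in parallel and higher-priority layers subsume lower ones:

```python
# Toy sketch of a subsumption-style reactive controller.  All names
# (sensor keys, actions) are illustrative, not from Brooks's code.

def wander(sensors):
    # Lowest layer: always proposes moving forward.
    return "forward"

def avoid(sensors):
    # Higher layer: fires only when an obstacle is close, subsuming wander.
    if sensors.get("obstacle_distance", float("inf")) < 1.0:
        return "turn_left"
    return None  # defer to lower layers

LAYERS = [avoid, wander]  # ordered highest priority first

def control(sensors):
    # First layer with an opinion wins; no central world model or planner.
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
```

The point of the exercise is that competent behaviour emerges from the
layering itself, with no symbolic reasoning anywhere in the loop.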

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

cam@aipna.ed.ac.uk (Chris Malcolm) (12/09/89)

In article <1989Dec3.185506.22039@cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>In article <11870@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
>>Chris Malcolm asked for a definition:

>> [defn of strong and weak AI omitted]

>>The typical error of believers in Strong AI is a misconstrual of
>>the Church-Turing Thesis: Whereas it may be true that every physical
>>process is "equivalent" to symbol manipulation, i.e., is simulable by
>>symbol manipulation, it is decidedly NOT true that every physical
>>process IS symbol manipulation. Flying, heating and transduction, for
>>example, are not. How does one fall into this error? By becoming lost
>>in the hermeneutic hall of mirrors created by the semantic
>>interpretations we cast onto symbol systems. We forget the difference
>>between what is merely INTERPRETABLE as X and what really IS X. We
>>confuse the medium with the message.

>I think this points out a need for a third class of AI research:
>research directed toward building intelligent systems which takes
>account of the need for an intelligent system to act in the real world
>-- not just think about acting in Blocks World.  For example: the
>work of Brooks and Moravec would fall into this category.

I agree. 

>This type of research seems to be emerging under a number of different
>names, in a number of different fields: behavior-based robotics,
>mobile robotics, reactive systems, artificial life, artificial
>creatures, cybernetics.

I agree. I'm delivering a paper at the IAS2 conference in Amsterdam next
week on just this topic: "A new emerging paradigm in robotics", but in
order not to excite too much controversy I don't mention the last few
categories :-)

>I think the term Artificial Creatures, coined by Rodney Brooks, is the
>most descriptive.  Traditional AI deals with high-level cognitive
>abilities; Artificial Life deals with abstract populations of
>extremely simple organisms; Artificial Creatures deals with building
>autonomous organisms of intermediate complexity, somewhere between
>amoebas and logicians.

If you build artificial creatures like Brooks does, that is.  My game is
assembly robotics, a task whose logical complexity is (IMHO) beyond
smart reactive local decisions and calls for some kind of foresight or
planning, so my artificial creatures are assembly robots which have to
plan and then execute an assembly task in the real world.  This
involves a hybrid architecture, where a classical ideal-world planner
plans in terms which are derived from the behavioural capabilities of a
behaviour-based assembly agent. The assembly agent is designed with a
similar philosophy to that of Brooks, but since it is designed not only
to succeed in its tasks, but to present a suitable virtual world to the
planner, there is an extra constraint on the task modularisation. That
constraint is sometimes referred to as the symbol grounding problem.
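A minimal sketch of this hybrid arrangement, with all names hypothetical:
the planner's symbolic vocabulary is drawn directly from the behavioural
repertoire of the assembly agent, so every symbol the planner emits is
grounded in a behaviour the agent can actually execute.

```python
# Hypothetical sketch of a hybrid planner/agent architecture.  The
# behaviour names, parts, and helper functions are all illustrative.

# The agent's behavioural repertoire defines the planner's vocabulary:
# each symbol is grounded in a behaviour the agent can execute.
BEHAVIOURS = {
    "pick_up": lambda part: f"grasping {part}",
    "insert":  lambda part: f"inserting {part}",
}

def plan(assembly):
    # Ideal-world planner: emits only symbols the agent can ground.
    steps = []
    for part in assembly:
        steps.append(("pick_up", part))
        steps.append(("insert", part))
    return steps

def execute(steps):
    # Behaviour-based agent: each symbolic step dispatches to a behaviour.
    return [BEHAVIOURS[op](part) for op, part in steps]

trace = execute(plan(["peg", "washer"]))
```

The extra design constraint is visible here: the agent's behaviours must
be modularised so that the virtual world they present to the planner
(the keys of BEHAVIOURS) is one the planner can reason in.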

As you have recognised, this kind of research doesn't fit the Procrustean
strong/weak AI dichotomy. Hence my interest in new terms.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

woodruff@hpavla.HP.COM (Terry Woodruff) (12/09/89)

/ hpavla:comp.ai / arshad@lfcs.ed.ac.uk (Arshad Mahmood) /  3:22 pm  Dec  3, 1989 /
In article <11870@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
>Chris Malcolm asked for a definition:
>
>Those who believe in Strong AI believe that thinking is computation
>(i.e., symbol manipulation). Those who believe in Weak AI believe that
>computation is a means of studying and testing theories of (among other
>things) thinking, which need not be just computation (i.e., not just
>symbol manipulation).

I suspect Chris already knew this! I thought his question was: do you feel
comfortable if asked which school you belong to, and if not, what would
your response be?

Chris was perhaps hinting at a hierarchy of possible definitions, where
each person can sit at the position at which they feel comfortable
(weak AI, strong AI, strong AI without thermostats, ...).
There may well be such a hierarchy, but I have seen no evidence of it --
then again, I am a neo-Strong AIite (well, you have to be, among so many
disbelievers!!).

A. Mahmood
Laboratory for Foundations of Computer Science
Edinburgh University
Scotland
----------