[comp.ai.digest] Computer science as a subset of artificial intelligence

ray@BCSAIC.BOEING.COM (Ray Allis) (11/03/88)

In <1776@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:

>In a previous article, Ray Allis writes:
>>If AI is to make progress toward machines with common sense, we
>>should first rectify the preposterous inverted notion that AI is
>>somehow a subset of computer science,
>Nothing preposterous at all about this.  AI is about applications of
>computers,

I was disagreeing with that too-limited definition of AI.  *Computer
science* is about applications of computers; *AI* is about the creation
of intelligent artifacts.  I don't believe digital computers, or rather
physical symbol systems, can be intelligent.  It's more than difficult;
it's not possible.

>> or call the research something other than "artificial intelligence".
>Is this the real thrust of your argument?  Most people would agree;
>even Herb Simon doesn't like the term and says so in "The Sciences of
>the Artificial".

No, "I said what I meant, and I meant what I said".  The insistence that
"artificial intelligence research" is subsumed under computer science
is preposterous.  That position precludes the development of intelligent
machines.

>> Computer science has nothing whatever to say about much of what we call
>> intelligent behavior, particularly common sense.
>Only sociology has anything to do with either of these, so to
>place AI within CS is to lose nothing.

Only the goal.

>Intelligence is a value judgement, not a definable entity.

"Intelligence" is not a physical thing you can touch or put in a bottle,
like water or carbon dioxide.  "Intelligent" is an adjective, usually
modifying the noun "behavior", and it does describe something measurable:
a quality of behavior an organism displays in coping with its environment.
I think intelligent behavior is defined more objectively than, say, the
quality of an actor's performance in A Midsummer Night's Dream, which IS
a value judgement.

>                                Common sense is a labelling activity
>for beliefs which are assumed to be common within a (sub)culture.
>
>Such social constructs cannot have a machine embodiment, nor can any
>academic discipline except sociology sensibly address such woolly
>epiphenomena.  I do include cognitive psychology within this exclusion,
>as no sensible cognitive psychologist would use terms like common sense
>or intelligence.  The mental phenomena which are explored
>computationally by cognitive psychologists tend to be more basic and
>better defined aspects of individual behaviour.  The minute words like
>common sense and intelligence are used, the relevant discipline becomes
>the sociology of knowledge.

Common sense does not depend on social consensus.  I mean by common sense
those behaviors which nearly everyone acquires in the course of existence,
such as reluctance to put your hand into the fire.  I contend that symbol
systems in general, digital computers in particular, and therefore computer
science, are an inadequate basis for artifacts which "have common sense".

Formal logic is only one part of the human intellect; computer science is
about the mechanization of that part; AI is (or should be) about the
entire intellect.  Hence AI is something more ambitious than CS, not a
subcategory of it.  That's why I used the word "inverted".

>Gilbert Cockton, Department of Computing Science,  The University, Glasgow
>	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

Ray Allis
Boeing Computer Services, Seattle, Wa.
ray@boeing.com  bcsaic!ray 

ok@quintus.UUCP (Richard A. O'Keefe) (11/08/88)

In a previous article, Ray Allis writes:
>I was disagreeing with that too-limited definition of AI.  *Computer
>science* is about applications of computers; *AI* is about the creation
>of intelligent artifacts.  I don't believe digital computers, or rather
>physical symbol systems, can be intelligent.  It's more than difficult;
>it's not possible.

There being no other game in town, this implies that AI is impossible.
Let's face it, connectionist nets are rule-governed systems; anything a
connectionist net can do a collection of binary gates can do and vice
versa.  (Real neurons &c may be another story, or may not.)
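To make that equivalence concrete, here is a toy sketch (my own
illustration in C, not anything from this thread; the majority-vote
weights are an arbitrary assumption).  A single connectionist threshold
unit is simulated entirely by ordinary digital arithmetic, i.e. by a
rule-governed symbol system, and its input/output table is exactly that
of a logic gate:

    /* Toy sketch: one connectionist threshold unit simulated with
       ordinary digital arithmetic.  With these (illustrative) weights
       the unit computes a 2-of-3 majority gate. */
    #include <stdio.h>

    #define N_INPUTS 3

    /* The unit fires iff the weighted input sum exceeds the bias. */
    static int unit(const double w[], double bias, const int x[])
    {
        double sum = 0.0;
        int i;
        for (i = 0; i < N_INPUTS; i++)
            sum += w[i] * x[i];
        return sum > bias;      /* binary output, just like a gate */
    }

    int main(void)
    {
        double w[N_INPUTS] = { 1.0, 1.0, 1.0 };   /* arbitrary weights */
        double bias = 1.5;
        int a, b, c;

        /* Enumerate all inputs: the "net" reduces to a truth table. */
        for (a = 0; a <= 1; a++)
            for (b = 0; b <= 1; b++)
                for (c = 0; c <= 1; c++) {
                    int x[N_INPUTS];
                    x[0] = a; x[1] = b; x[2] = c;
                    printf("%d %d %d -> %d\n", a, b, c, unit(w, bias, x));
                }
        return 0;
    }

The other direction holds as well: the floating-point arithmetic above
is itself ultimately carried out by collections of binary gates.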

ray@ATC.BOEING.COM (Ray Allis) (11/18/88)

In <639@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:

>In a previous article, Ray Allis writes:
>>I was disagreeing with that too-limited definition of AI.  *Computer
>>science* is about applications of computers; *AI* is about the creation
>>of intelligent artifacts.  I don't believe digital computers, or rather
>>physical symbol systems, can be intelligent.  It's more than difficult;
>>it's not possible.
>
>There being no other game in town, this implies that AI is impossible.
>Let's face it, connectionist nets are rule-governed systems; anything a
>connectionist net can do a collection of binary gates can do and vice
>versa.  (Real neurons &c may be another story, or may not.)

But there ARE other games.  I don't believe AI is impossible.  I'm convinced
by my reading of the evidence that AI IS possible (i.e. artifacts that
think like people).  It's just that I don't think it can be done if methods
are arbitrarily limited to formal logic alone.  If by "connectionist net" you
are referring to networks of symbols, such as semantic nets, implemented on
digital computers, then, in that tiny domain, they may well all be
rule-governed systems, interchangeable with "a collection of binary gates".
Those are not the same as "neural nets" which are modelled after real
organisms' central nervous systems.  Real neurons do indeed appear to be
another story.  In their domain, rules should not be thought of as governing,
but rather as *describing* operations which are physical analogs and not
symbols.  To be repeatedly redundant: an organism's central nervous system
runs just fine without reference to explicit rules; rules DESCRIBE, to beings
who think with symbols (guess who), what happens anyway.  AI methodology must
deal with real objects and real events in addition to symbols and form.
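As a sketch of that describe-vs-govern distinction, consider a standard
leaky-integrator "neuron" (a textbook model of my own choosing, not
anything from the posts above; all the constants are illustrative).  The
update rule is our symbolic DESCRIPTION of a continuous physical process;
the real cell just integrates charge and fires, consulting nothing:

    /* Toy sketch: a leaky-integrator "neuron".  The difference
       equation below DESCRIBES the continuous physics; the cell
       itself runs without reference to any explicit rule. */
    #include <stdio.h>

    int main(void)
    {
        double v = 0.0;               /* membrane potential, arbitrary units */
        const double tau = 10.0;      /* decay time constant */
        const double dt = 1.0;        /* simulation time step */
        const double input = 1.2;     /* constant input current */
        const double threshold = 10.0;
        int t;

        for (t = 0; t < 100; t++) {
            v += dt * (input - v / tau);    /* leak toward equilibrium */
            if (v >= threshold) {
                printf("spike at t=%d\n", t);
                v = 0.0;                    /* reset after firing */
            }
        }
        return 0;
    }

The program, of course, is itself a symbol system; the point of the
sketch is only that the rule it embodies is a description of an analog
process, not something the process itself consults.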