[mod.ai] unlikely submission to the ai-list...

DAVIS@EMBL.BITNET.UUCP (12/18/86)

*********rambling around consciousness******************************************


There appear to me to be two utterly different, though related, meanings of
the word `consciousness', especially when used in the ai-domain. The
first refers to an individual's sense of its own `consciousness', whilst
the second refers to that which we ascribe to other apparently sentient
objects, mostly other humans. There tends to be an automatic assumption
that the two are necessarily related, and in some guises, of course, this
is connected with the problem of `other minds'. However, the distinction
runs to the core of ai, particularly in connection with the infamous
Turing test. I would like to illustrate that this is so, and point to at
least one possible consequence for ai as a `nuts-and-bolts' discipline.

Let us ignore (perhaps forever!) the origin of the internal sensation of
consciousness, and concentrate upon our ascription of this capacity to
other objects. This ascription is dependent upon our observation of
some object's behaviour, and, it could be argued, arises from our need to
rationalize and order the world as perceived. The ascription rests conditionally
upon an object exhibiting behaviour which is seen either to demand, or at
least to be commensurate with, our own feeling of `consciousness'. This in turn
requires a whole set of properties such as intentionality and intelligence.
As we note from everyday life, most humans fulfill these demands - their
behaviour appears purposeful, intelligent, self-conscious and so on.

However, turn now to an example which few would defend as a sentient
being: the ubiquitous and often excellent chess machine. Despite
our intellectual position being one of knowing that "this thing ain't nuthin'
but a blob of silicon", the reactions to, and more importantly, strategies
of play against, such machines rarely fit what one might (naively) expect
in the case of a complicated circuit. Instead, the machine is (publicly
or privately) acknowledged to be `trying to win'. It is `smart'. It doesn't
like to lose. It `fouls up' or comes up with a `brilliant move'.

Of course, all this chat from computer chess players is meaningless - nobody
*really* believes in the will of the machine. Yet, it is very instructive
in the following sense: in order to formulate sensible strategies with a
well designed machine, we ascribe intentionality to it (I owe this argument
to Daniel Dennett). That is to say, we use the fact that the machine behaves
*AS IF* it had intent, despite the fact that we know it has no such capacity.

A similar, though riskier, argument may be put forward for the reactions
of owners to their pets. I say riskier since the true status of sentience
in dogs, cats and the like is arguable.

This ascription of intentionality is not, I believe, a mistake merely on
the grounds that intentionality does not exist. It is an explanatory
construct which creates an arbitrary class (`intentional objects'), but
has no real existence in the world (either as an emergent or concrete
property). What the ascription does is to provide a powerful way of dealing
with the world - it lets us make successful predictions about well designed
objects (such as human beings). We cannot pretend that we really know
anything about why the somewhat loosely defined object called John invited
a similarly fluid Mary over for a meal, but we can make a lot of correct
prior judgements if we ascribe an intent to John...

So, back to nuts-and-bolts ai. As technicians sit in their nuts-and-bolts
laboratories, seeking the Josephson concurrent 5th generation hypercube
that will stroll through the Turing test, and into your lounge, workplace and
maybe even elsewhere, perhaps they should reflect upon their design
strategy. The accolade of appearing as `almost human' is a function of
the describer (aka: beauty is in the ......). Humans get special points
because they are exceedingly well designed, and hence our ascriptions
of intelligence, intentionality and consciousness do a very good job of
helping us to understand and interact with other people (they also seem
to work quite well with dogs...). But this is ONLY because we do what we do
exceedingly well, and what we do covers a very wide range of activities.

No computer that just tells the weather, just builds other computers,
or even just chats through a Turing interface will ever be regarded as we
regard other humans. Instead, it will get little more than the low-level
ascription of intentionality that chess machines demand of us if we are to
beat them. The assignments of consciousness, intelligence and intentionality
are all just higher points on the same scale, however.

To sum up - you can't build a `conscious' or an intelligent computer because
`consciousness' and `intelligence' are conceptual categories of description,
and not genuine properties. Current computers are not said to be `conscious'
because we are able to understand and predict their behaviour without
invoking such a category. Build us a computer as bewildering as a certain
leading US politician, and then maybe, just maybe, we may have to turn round
and say "hell, this thing really has a mind of its own...". But then again...

paul davis

bitnet/earn/netnorth: davis@embl
on the relay interchat: 'redcoat' (central european daytime)
by mail: european molecular biology laboratory
         postfach 10.2209
         meyerhofstrasse 1
         6900 heidelberg
         west germany/bundesrepublik deutschland