[net.misc] More on Searle and Artificial Intelligence

donn (11/29/82)

I want to correct some of the goofs in my previous message about John
Searle and AI, now that I actually have the original paper in hand.
The paper, by the way, appeared in THE BEHAVIORAL AND BRAIN SCIENCES,
v. 3, pp. 417-424 (that fixes one of the errors), and is reprinted in
Hofstadter and Dennett's book THE MIND'S I.  The provocative title is
'Minds, Brains, and Programs', deliberately reminiscent of J. R. Lucas'
notorious article 'Minds, Machines and Gödel'.  To summarize the
thrust of the article, Searle claims that modern AI work is a failure,
because computer programs are a priori incapable of 'understanding'.
'Understanding' is a property of machines, be they mechanical or
electronic or biological, which have 'intentionality'.
'Intentionality' is defined as "that feature of certain mental states
by which they are directed at or about objects and states of affairs in
the world.  Thus, beliefs, desires, and intentions are intentional
states; undirected forms of anxiety and depression are not" (p. 424).
As far as I can tell, intentionality cannot be attributed to computer
programs because the latter are formal systems; i.e. they address form,
not content.  The connection between a formal representation in a
program and a referent in the 'real world' exists in the mind of the
observer, not in the program, hence the program has no mind.

Searle gives his own tests of understanding in place of the Turing
Test.  An entity can be said to understand if it looks like a human
being or some other higher mammal.  The reasoning is inductive:  we
are human beings and higher mammals and we understand, hence it is
likely that other human beings and higher mammals also understand.  An
entity
can be said to fail to understand if it fails the first test and
furthermore if its behavior could be simulated by a human being
following a program, where the human being is willing to certify that
they do not understand what is being simulated.  It is irrelevant,
according to Searle, that human beings themselves may not satisfy the
second test.  The specific example that Searle uses is of an
English-literate human locked in a room who knows no written Chinese
but is given three sheets of Chinese characters with only written
English instructions for deriving a fourth sheet of Chinese.  The three
sheets of Chinese are respectively an AI program for analyzing stories,
a Chinese story, and some questions about the story designed to test
comprehension, while the English tells how to correlate Chinese
ideographs with one another, without giving away their content.
Although the monolingual English speaker may make exactly the same
responses to the comprehension test as a literate reader of Chinese,
they clearly cannot be said to understand Chinese.  Since the program
could equally
well run on a computer, a computer running this program does not
understand Chinese either.
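
Just to make the 'formal' point concrete, here is a minimal sketch of
the sort of symbol shuffling Searle has in mind, written in Python for
the sake of argument; the rule table and the romanized stand-ins for
Chinese ideographs are entirely my own invention, not anything from
Searle's paper or from Schank's actual story-understanding programs:

    # The 'rule book': each entry correlates one uninterpreted symbol
    # string (a question) with another (an answer), purely by shape.
    # Nothing in this table encodes what any string MEANS.
    RULES = {
        "ZHANGSAN CHI LE SHENME?": "ZHANGSAN CHI LE HANBAOBAO.",
        "TA ZAI NALI CHI DE?": "TA ZAI CANTING CHI DE.",
    }

    def answer(question):
        # Follow the rules: match the shape of the input and copy out
        # the shape paired with it.  No step consults meaning.
        return RULES.get(question, "")

Whether the lookup is done by a person in a room or by an interpreter
executing answer(), the formal operations are identical; that is all
Searle needs for his conclusion.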

Searle explicitly argues that one must be a dualist to accept the
claims of AI, despite the fact that AI workers usually decry
traditional dualism.
This dualism is Cartesian in the sense that it proposes that a program
may exist independently of its realization on a particular machine.
Searle is explicitly not a Cartesian dualist (this in spite of my
earlier remarks... sigh).  Thomas Natsoulas notes in his critique in
BBS that Searle's position can be interpreted as dualist, however,
simply because 'intentionality' is hard to nail down as being some
concrete property of brains.  In some ways it serves as the 'soul'
which inanimate machines lack.  Richard Rorty, while sympathizing with
Searle's lack of sympathy for AI, even suggests that "[i]f Searle's
present pre-Wittgensteinian attitude gains currency...  'the mental'
will regain its numinous Cartesian glow" (p. 446).  It is ironic that
John Eccles, a fervent dualist, writes in support of Searle's arguments
(though he rejects some of his assumptions).  Dualism in the philosophy of
mind is still a primary topic of confusion, I guess.

My first reaction to the article was to think of another, similar,
thought experiment.  Imagine that you kept a human being in a situation
that prevented them from learning anything about their environment, so
that they had no referents, no "understanding" of the outside world.
Suppose this human being was then exposed to a Chinese environment, so
that Chinese words could be associated with particular visual
experiences, with particular sounds or feelings.  This person has no
referents for what they are being shown, so essentially what they must
do is associate abstract symbols with abstract perceptions.  The
programmer inculcates the relations between abstract perceptual objects
and other abstract perceptual objects, and uses the abstract Chinese
symbols to communicate other relations between unperceived abstract
objects.  Does this person understand Chinese?  Well, of course,
because this is the process any child in China undergoes on their way
to maturity.  At what stage did 'understanding' arise, though?  The
child only knows the formal program that was taught by the
programmer/educator/parent...
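
To put my question in the same terms, the child's 'program' can be
modeled as nothing more than another association table; this sketch,
with its feature-tuple 'percepts', is again entirely my own invention:

    ASSOCIATIONS = {}   # percept -> symbol, built up by exposure

    def expose(percept, symbol):
        # The programmer/educator/parent pairs a perception with a word.
        ASSOCIATIONS[percept] = symbol

    def name_it(percept):
        # The learner 'names' a perception by pure table lookup.
        return ASSOCIATIONS.get(percept, "")

    expose(("four-legged", "flat-top"), "ZHUOZI")    # a table, say
    print(name_it(("four-legged", "flat-top")))      # prints ZHUOZI

Formally this is no different from the rule book above, which is just
what makes the question of where 'understanding' enters so awkward.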

I have not read Searle's remarks on the subject, but apparently his way
out of this ontological problem is to assume that perceptions are not
'formal' but somehow 'direct' (perhaps in the behaviorist sense,
although Searle would no doubt disagree).  One does not have
'representations' of things in the real world, rather we have
'presentations', which are a different kettle of fish.  A
'presentation' of a table differs from a 'representation' of a table in
that the former "is satisfied by the physical presence of a table there
where the table visually seems to be located" (Natsoulas, p. 441).
By itself this does not seem a very useful distinction, but I would
have to read Searle's 'The Intentionality of Intention and Action' in
Inquiry, v. 22, p. 253, to get the details.

I really do recommend the article and its accompanying commentary to
anyone interested in AI.  There are responses from many prominent
people including Robert Abelson, Daniel Dennett, John Eccles, Jerry
Fodor, Douglas Hofstadter, John McCarthy, Marvin Minsky, Zenon
Pylyshyn, and Roger Schank, as well as a rebuttal/comment by Searle.
Sorry to take so much space with this but I find it wonderful fuel for
flaming...

Donn Seeley  UCSD Chemistry Dept. RRCF  ucbvax!sdcsvax!sdchema!donn
             UCSD Linguistics Dept.     ucbvax!sdcsvax!sdamos!donn