[sci.philosophy.tech] Searle, Strong AI, and Chinese Rooms

jufier@daimi.aau.dk (Svend Jules Fjerdingstad) (11/12/90)

	The January 1990 issue of Scientific American featured two articles
about Artificial Intelligence: "Is the Brain's Mind a Computer Program?" by
John R. Searle, and "Could a Machine Think?" by Paul M. Churchland and Patricia
Smith Churchland. All three authors are professors of philosophy, which may
explain their poor general understanding of the properties of computers and
computer programs.

	In the September issue, Scientific American printed a number of
responses to the articles. Although I agree with most of the objections
stated in these letters, I feel that important points of criticism were
omitted.
	The following represents my response to the articles.
 
	I must confess that I'm not exactly impressed by the quality of the
arguments presented in the two articles. It seems to me that most of the
arguments are quite weak and remarkably simple to refute.
	In "Is the Brain's Mind a Computer Program?" professor of philosophy
John R. Searle attempts to show that even passing the famous Turing test does
not prove a computer program intelligent. He tells this story of a "Chinese
Room" in which a person ignorant of Chinese language manipulates Chinese
symbols according to the rules in a book. He correctly points out, that the
rule book is the "computer program", and that the person is the "computer",
merely executing the program. But then he wrongly concludes, that the _person_
satisfies the Turing test in spite of being ignorant of Chinese. Obviously, it
is the Chinese Room _as a whole_, which is able to pass the Turing test. And
the room itself certainly cannot be said to be ignorant of Chinese, as an
extensive knowledge must be present in the rule book, although the person in
the room has no means of accessing this knowledge. To him or her, the rules in
the book _seem_ entirely meaningless.
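	To make the program/interpreter mapping concrete, here is a
deliberately crude sketch in Python (my own illustration, not Searle's; the
entries are hypothetical placeholders). The "rule book" is nothing but a
lookup table, and the "person" applies it blindly. Whatever knowledge of
Chinese exists in this system plainly lives in the table, not in the
interpreter:

# The "rule book": all knowledge of Chinese resides in this table.
RULE_BOOK = {
    "ni hao": "ni hao ma?",        # the interpreter has no idea whether
    "wo hen hao": "hen hao!",      # these strings mean anything at all
}

def person_in_room(symbols):
    # Apply the rule book mechanically, understanding nothing.
    return RULE_BOOK.get(symbols, "qing zai shuo yibian")

print(person_in_room("ni hao"))    # the *system* answers sensibly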
	Unless speaking Chinese in a manner "indistinguishable from that of a
native Chinese speaker" requires no intelligence, the entire room must of
course be considered intelligent. It certainly behaves intelligently, and as
this cannot be ascribed to the person, the intelligence must be due to the
rule book, whether the rules are stored on paper or not.
	Searle tries to avoid this so-called Systems Reply by imagining yet
another (impossible) situation, in which the person in the room memorizes all
the rules for manipulating the symbols. His argument is nearly unbelievably
naive: "There is nothing in the 'system' that is not in me, and since I don't
understand Chinese, neither does the system."
	The consequences of this statement are absurd: If his statement is
correct, then he has proven nothing more than that it is possible for a
person ignorant of Chinese to pass the Turing test for speaking Chinese. All he
has done is reduce the requirements for speaking a language fluently. In
fact, he has simply made it impossible to determine whether a person
understands a given human language or not: If other humans are able to answer
in speech as though they understood what was being said, then surely all their
actions might also be the result of consulting a simple rule book.
	The entire concept of comprehension has been made hollow and
meaningless.

	Furthermore, if the Turing test is no longer a valid test for
intelligence, then it has become impossible to judge the degree of intelligence
present in a person. The concept of intelligence has also been made
meaningless, as the presence or absence of Searle's kind of intelligence in no
way influences an entity's behaviour. If it did, the Turing test could be used
to distinguish between the two.
	Of what use are Searle's concepts of comprehension and intelligence, if
they are not related to any events in this world?
	If Searle's statement were true, he would simply have made it clear
that conscious thought was unnecessary for all kinds of human behaviour!
	Which is probably the exact opposite of what he wanted to prove.

	But perhaps the key to the explanation is that Searle himself
actually does not understand English. Whenever he writes an article, he simply
consults a (lousy) book full of rules for writing articles against artificial
intelligence :-)
   
	Searle's entire attitude is ridiculous. He states that "a program
merely manipulates symbols, whereas a brain attaches meaning to them". But why
is it important that humans attach meaning to symbols? Does it matter? Of
course it does: it is exactly this attaching of meaning to symbols that allows
us to interpret the symbols as conveying a message, and allows us to understand
this message.
	Obviously, the fact that we are able to understand symbolically encoded
messages affects the way we interact with our environment. This is why a
computer would have to be able to understand human language, not merely
manipulate it, in order to pass the Turing test.
	Does our intelligence influence the way we behave? Of course it does.
If not, then what is intelligence? And why are we equipped with it, if it is
of no real, practical use?
	But if intelligence influences our behaviour, then it follows that a
computer would also have to be intelligent in order ever to pass the Turing
test.
	As the person in the Chinese Room was unable to understand the Chinese
signs entering the room, all of these qualities would have to be present in the
"rule book" in the Chinese Room, which would make it something far more
sophisticated than merely a book. I think this is where Searle really cheats.
By using a simple book full of rules, he is able to get away with his
argument, because it is obvious that a book could not understand Chinese, or
be intelligent. However, he has at no time demonstrated (or even made it
likely) that a book would indeed be sufficient for his project. His assumption
is, in fact, a reduction of intelligence to a set of simple rules that can
easily be formulated in a book.
	Searle gets things mixed up by assigning the name "book" to an entity
with properties that are fundamentally different from those of a book. This
is, in my opinion, the central error in the Chinese Room argument. Searle
confuses himself (and the Churchlands) by calling something a book that could
never be just a book of rules.

	Searle's attack on the Turing test is unfair and unfounded. He mistakes
his own lack of understanding for flaws in the Turing test.
	It is odd that the Churchlands, in their article "Could a Machine
Think?", fail to recognize the untenability of Searle's arguments against the
Turing test. They agree with Searle that "it is also very important how the
input-output function is achieved; it is important that the right sorts of
things be going on inside the artificial machine."
	Well, I certainly do not agree with that. That's just pure mysticism. A
black box must be considered intelligent if it acts intelligently. It is
ridiculous to define "conscious intelligence" in a way that makes it
impossible to measure, because it in no way affects its surroundings. With this
definition, we will never be able to determine whether conscious intelligence
is present or not in an object. All we can say is the following: "Is it human?
Ah, then it's intelligent! Not human? Well, then this seemingly intelligent
behaviour is not achieved the right (human) way. Therefore, it is not
intelligent!"
	Then we have defined intelligence in such a way that it can only occur
in humans. And by defining it this way, we have excluded all the most
impressive and important qualities of human intelligence.

	I am not at all impressed by these three philosophers' abilities to
reason: Searle also argues that simulating a process on a computer is very
different from the actual process, and that therefore, even if we could
simulate all the processes of the brain, we would still not have attained
artificial intelligence. However, his example, that you cannot make a car run
on a computer simulation of the oxidation of hydrocarbons in an engine, is
simply not relevant. Obviously, "a simulation of cognition" would process the
same genuine information as a brain, not just some symbolic substitute. And it
would process it in the same way as the brain. The simulation would be a
genuine "processor of information", just like the brain. Therefore the two
situations are not comparable, and the argument is invalid.
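	To see why the analogy fails, consider a toy Python illustration of my
own (not from any of the articles): a simulated engine does not burn fuel,
but a simulated adder really does add. The output of the "simulation" is
genuine information, not a mere symbol standing in for information.

# A full adder "simulated" in software still genuinely computes.
def full_adder(a, b, carry):
    s = a ^ b ^ carry
    c = (a & b) | (carry & (a ^ b))
    return s, c

# Chaining simulated full adders bit by bit really adds two numbers.
def add(x, y, bits=8):
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(5, 3))   # 8: genuine arithmetic, though "only" a simulation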

	Let me try to summarize why I think AI is possible. The definition of
intelligence I will use is the usual one, rather than Searle's: Intelligence is
the ability to interact with the environment in an "intelligent" way, that is,
in a way which shows comprehension of the workings of the environment.
	My argument goes like this: Only purely random events (if such exist at
all) are not governed by rules. And since intelligence is the quintessence of
non-randomness, rules for intelligent behaviour must exist, however complex
they may be.
	These rules are not the kind of rules to be found in Searle's rule
book. They are complex rules, which take into account all knowledge and memory
of past experiences, all emotions, the behaviour of the surroundings, et
cetera. Because of that, the rules are not exactly the same in all humans, but
no doubt we share a large proportion of them.
	Intelligent behaviour is in essence highly non-predictable. But this is
simply due to the complexity and multitude of the rules guiding intelligence.
It is certainly not the result of an absence of rules, as that would only lead
to random, and thereby non-intelligent, behaviour.
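	As a rough sketch of the difference (my own illustration, not from the
articles), compare Searle's flat rule book with rules that consult a memory
of past experience. Even a single piece of state makes the behaviour
history-dependent, and thereby far harder to predict from the outside:

# Rules that take memory into account, unlike a stateless lookup table.
class StatefulResponder:
    def __init__(self):
        self.memory = []                 # past experience shapes every rule

    def respond(self, utterance):
        self.memory.append(utterance)
        if len(self.memory) > 1 and utterance == self.memory[-2]:
            return "You already said that."  # behaviour depends on history
        return "Tell me more about '%s'." % utterance

r = StatefulResponder()
print(r.respond("hello"))   # "Tell me more about 'hello'."
print(r.respond("hello"))   # "You already said that."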
	As a consequence of the existence of such complex rules, artificial
intelligence is possible, as it is "simply" a matter of creating a machine
which is able to handle these complex rules and the enormous amount of memory
required. And it would be very unlike human beings not to achieve such a
machine one day.

	In his response in the September issue of Scientific American, Searle
writes: "It is a mistake to suppose that in opposing a computational
explanation of consciousness I am opposing a mechanical explanation." In my
opinion, this doesn't make sense at all. If there is a mechanical explanation,
then there has to be a computational explanation as well, because every
mechanical process can be described computationally.
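	As a toy illustration of that last claim (my own, in Python): take any
process governed by physical law, here a body in free fall, and the law
itself becomes an update rule that a computer can step through:

# A mechanical process described computationally: free fall via Euler steps.
g, dt = 9.81, 0.01              # gravitational acceleration, time step
position, velocity = 100.0, 0.0

for _ in range(100):            # simulate one second of free fall
    velocity += g * dt          # the law of physics as an update rule
    position -= velocity * dt

print("after 1 s: position = %.2f m, velocity = %.2f m/s"
      % (position, velocity))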
	Searle admits: "There must be a mechanical explanation for how the
brain processes produce consciousness because the brain is a system governed by
the laws of physics." Yes, and that is precisely the reason why artificial
intelligence is possible. "Consciousness" and "intelligence" are results of a
functioning brain. It is this functionality that we want to recreate. What
matters is the functionality itself, not how it is achieved. And if there is
a mechanical explanation of how the brain's information processing works,
then of course we can, in principle, recreate it as a computer program,
however complex it might have to be.

	Searle ends his response by stating that: "Any sane analysis has to
grant that the person in the Chinese room does not understand Chinese." Right,
I agree. I don't think Searle could find anybody who disagrees. However,
this is completely uninteresting. As Searle pointed out in his article, the
person is acting as a "computer", whereas it is the rule book that corresponds
to the computer _program_. So Searle has just proved that the _hardware_ of a
computer need not be intelligent in order for the computer _system_ to be
intelligent. But he certainly hasn't proved that the brain's mind could not be
the result of executing a computer program, which was his intention.

	Searle's article, and others like it, always makes me think of the
apparent paradox that the people most strongly opposed to the notion of
artificial intelligence are sometimes those who seem less well endowed with
natural intelligence :-)

--

Svend Jules Fjerdingstad, jufier@daimi.aau.dk
Computer Science Department, University of Aarhus
Ny Munkegade 116, DK-8000 Aarhus C, DENMARK

deichman@cod.NOSC.MIL (Shane D. Deichman) (11/14/90)

In his earlier posting, Svend makes some brilliant arguments in
support of a deterministic, non-free-will environment for human
existence.  Having deftly cast the arguments of both Searle and
the Churchlands aside, he resorts to an "If it exhibits the qualities
of intelligence then it IS intelligent" argument.  Is that to say
that human perceptions are infallible, and that what we
see and perceive actually IS?  Or does it imply that our perceptions,
while not always accurate, still elicit a deeper understanding
of a given phenomenon based on multiple repetitions?

The Chinese Room argument points out some deficiencies in the Turing
Test -- deficiencies which call upon the observer to take a deeper,
more profound look at what is meant by "understanding" and "knowledge."
Svend disregards the subconsciousness associated with cognition and
lucidity, and therefore begs the question.

Furthermore, he attacks the reasoning capacities of the Churchlands
(supposed "allies" in his campaign in support of Strong AI)
for failing to see this point he so astutely raises.  Perhaps, in
a stolid, deterministic world where emotions are bleak representations
of mere "sensory inputs," Svend's arguments would carry some
weight.  But in a world enriched by the subtleties of life, his
"intelligence" as a function of outward appearance is exceedingly
bland.

-shane

"the Ayatollah of Rock-and-Rollah"

JAHAYES@MIAMIU.BITNET (Josh Hayes) (11/14/90)

What needs defining here is "intelligence", because it seems that
Searle has his own definition which _de facto_ includes being a
human being, or at least an organic being; it's no surprise then
that no machine "intelligence" need apply....
 
Svend, on the other hand (I have a Colombian friend named Sven; it's
an odd name for that part of the world....where was I?) defines
intelligence as "that which appears intelligent" (I paraphrase,
but fairly, I think). This is a simple definition (though it begs
the question of how we determine what "appearing intelligent" is),
and is, I think, the relevant definition for the question of A.I.
 
I believe we want a pragmatic definition: what is the PURPOSE of
AI? We ostensibly design AIs to perform a task or tasks that we
assume to require a degree of intelligence; to the extent that they
carry them out well, are they not intelligent?
 
This all ties back to the emergent properties shtick (sorry). The
systems reply to Searle's CR analogy is entirely appropriate if we
regard "intelligence" as a property of a system as a whole which
cannot be said to reside in any particular component of that system.
It is the property of the "instruction book" and the "guy who
manipulates the symbols" AND the interaction between these
subsystems AND the interaction of that whole system with the outside
world (that speaks Chinese to the "Room"). As such, "intelligence"
may not be a very useful term, since it's so difficult to pin down,
and of course, since it's such a loaded term.
-------
Josh Hayes, Zoology Department, Miami University, Oxford OH 45056
voice: 513-529-1679      fax: 513-529-6900
jahayes@miamiu.bitnet, or jahayes@miamiu.acs.muohio.edu
"It is always wise to remember that it was the gods who put
 nipples on men, seeds in pomegranates, and priests in temples."

jufier@daimi.aau.dk (Svend Jules Fjerdingstad) (11/17/90)

deichman@cod.NOSC.MIL (Shane D. Deichman) writes:

>In his earlier posting, Svend makes some brilliant arguments in
>support of a deterministic, non-free will environment for human
>existence.  By deftly casting the arguments of both Searle and
>the Churchlands aside, he resorts to a "If it exhibits the qualities
>of intelligence then it IS intelligent" argument.  Is that to say
>that human perceptions are always infallible, and that what we
>see and perceive actually IS?  Or does it imply that our percep-
>tions, while not always accurate, still elicit a deeper understanding
>of a given phenomenon based on multiple repetitions?

No.

The point is this: If a human being "exhibits the qualities of intelligence"
(according to our (subjective) perception of such qualities), then we DO
(in normal everyday life) consider this human being to be intelligent.

Therefore, if some entity (be it a computer system or anything else) behaves
"intelligently", then we MUST also conclude that this entity has intelligence.

If we cannot consider a computer system intelligent EVEN THOUGH it behaves
intelligently, then we have redefined the concept of intelligence in such a
way as to make it completely unrelated to any behaviour that we can observe.
This means that any piece of dirt might indeed be considered intelligent, or,
alternatively, that it is impossible to conclude of any human being that
he or she is intelligent. This definition could, in fact, lead to a belief in
the non-existence of true intelligence, whether in humans or in computers.
(Except in me, of course :-))

In my opinion, this last definition of intelligence is absurd and useless.
Intelligence is the ability to BEHAVE intelligently. Nice definition, eh? :-)

The problem is that we cannot at the present time (and perhaps we never will
be able to) give a precise and exhaustive definition of intelligent behaviour.
Therefore the Turing test represents the brilliant solution of using one
intelligent system, human beings, to evaluate the possible degree of (verbal)
intelligence residing in some other supposedly intelligent system, a computer
system, for example.
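As a toy sketch of that protocol (my own, purely illustrative): a judge
converses blindly with a human and a machine and must guess which is which.
If the machine's answers are indistinguishable from the human's, the judge's
guesses stay at chance level, and the machine passes.

# Toy Turing test: identical transcripts leave the judge guessing.
import random

def human(msg):   return "Well... " + msg.lower()   # stand-in respondent
def machine(msg): return "Well... " + msg.lower()   # indistinguishable here

def trial(questions):
    hidden = random.sample([human, machine], 2)     # hide who is who
    transcripts = [[r(q) for r in hidden] for q in questions]
    # Identical transcripts give the judge nothing to go on: a coin flip.
    return random.choice(hidden) is machine

caught = sum(trial(["How are you?", "What is love?"]) for _ in range(1000))
print("judge identified the machine in", caught, "of 1000 trials")  # ~500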

>The Chinese Room argument points out some deficiencies in the Turing
>Test -- deficiencies which call upon the observer to take a deeper,
>more profound look at what is meant by "understanding" and "knowledge."
>Svend disregards the subconsciousness associated with cognition and
>lucidity, and therefore begs the question.

If subconsciousness is a prerequisite for intelligence, if it plays a role
in forming intelligent behaviour, then of course a computer system would have
to possess subconsciousness in order to pass the Turing test.

Anyway, IMHO the only deficiencies pointed out by the Chinese Room argument are
deficiencies in Searle's understanding of the Turing test :-)

If Searle's Chinese Room argument were valid, then all of you people out there
on the net might just be mindless machines looking up words in a dictionary.
But then, why do I bother writing this? Better stop now :-)

>Furthermore, he attacks the Churchlands (supposed "allies" in his 
>campaign in support of Strong AI) in their reasoning capacities
>for failing to see this point he so astutely raises.  Perhaps, in
>a stolid, deterministic world where emotions are bleak representa-
>tions of mere "sensory inputs," Svend's arguments would carry some
>weight.  But in a world enriched by the subtleties of life, his
>"intelligence" as a function of outward appearance is exceedingly
>bland.

Ah, I thought so. You ARE one of those :-)

>-shane
>"the Ayatollah of Rock-and-Rollah"

Svend
--

Svend Jules Fjerdingstad, jufier@daimi.aau.dk       |  "To love,
Computer Science Department, University of Aarhus   |     and to learn."
Ny Munkegade 116, DK-8000 Aarhus C, DENMARK         |

marky@caen.engin.umich.edu (Mark Anthony Young) (11/18/90)

In article <1990Nov16.161134.2845@daimi.aau.dk> jufier@daimi.aau.dk (Svend Jules Fjerdingstad) writes:
>deichman@cod.NOSC.MIL (Shane D. Deichman) writes:
>
>>The Chinese Room argument points out some deficiencies in the Turing
>>Test -- deficiencies which call upon the observer to take a deeper,
>>more profound look at what is meant by "understanding" and "knowledge."
>>Svend disregards the subconsciousness associated with cognition and
>>lucidity, and therefore begs the question.
>
>If subconsciousness is a prerequisite for intelligence, if it plays a role
>in forming intelligent behaviour, then of course a computer system would have
>to possess subconsciousness, in order to pass the Turing test.
>
I think this is a very important point, one that is ignored in the Chinese
room argument.  The CR argument goes like this:

  IF the Turing test is correct,
  AND a machine of such-and-such a type passes it,
  THEN that machine is intelligent.

  BUT, that type of machine can't be intelligent
  (it doesn't have the "right stuff")
  THEREFORE, the Turing test is not correct.

Implicit in this argument is that the offending machine will pass the Turing
test (otherwise the implication is invalid).  

It is possible that the Turing test is valid, and yet no machine will ever
pass it.  It is possible that any machine that passes the TT will be totally
unlike anything we now consider to be a computer.  Nevertheless, this would
not invalidate the test itself.
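One can even check the shape of this argument mechanically. Here is a small
sketch of my own (not part of the original posting): with T = "the Turing
test is correct", P = "the machine passes it", and I = "the machine is
intelligent", enumerating the truth table turns up an assignment where
Searle's premises hold and the machine never passes, yet the test is still
correct.

# Brute-force the truth table for: (T and P) -> I, together with "not I".
from itertools import product

for T, P, I in product([False, True], repeat=3):
    premise1 = (not (T and P)) or I     # if the test is right and the
    premise2 = not I                    # machine passes, it is intelligent
    if premise1 and premise2 and not P and T:
        print("T=%s P=%s I=%s: premises hold, machine fails, test correct"
              % (T, P, I))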

...mark young