[comp.ai.philosophy] Searle, Strong AI, and Chinese Rooms

jufier@daimi.aau.dk (Svend Jules Fjerdingstad) (11/13/90)

	The January 1990 issue of Scientific American featured two articles
about Artificial Intelligence: "Is the Brain's Mind a Computer Program?" by
John R. Searle, and "Could a Machine Think?" by Paul M. Churchland and Patricia
Smith Churchland. All three authors are professors of philosophy, which may
explain their poor general understanding of the properties of computers and
computer programs.

	In the September issue Scientific American printed a number of
responses to the articles. Although I agree with most of the objections to the
articles stated in these letters, I feel that important points of criticism were
omitted.
	The following represents my response to the articles.
 
	I must confess that I'm not exactly impressed by the quality of the
arguments presented in the two articles. It seems to me that most of the
arguments are quite weak and remarkably simple to refute.
	In "Is the Brain's Mind a Computer Program?" professor of philosophy
John R. Searle attempts to show that even passing the famous Turing test does
not prove a computer program intelligent. He tells this story of a "Chinese
Room" in which a person ignorant of Chinese language manipulates Chinese
symbols according to the rules in a book. He correctly points out that the
rule book is the "computer program", and that the person is the "computer",
merely executing the program. But then he wrongly concludes that the _person_
satisfies the Turing test in spite of being ignorant of Chinese. Obviously, it
is the Chinese Room _as a whole_ that is able to pass the Turing test. And
the room itself certainly cannot be said to be ignorant of Chinese, as
extensive knowledge must be present in the rule book, even though the person in
the room has no means of accessing this knowledge. To him or her, the rules in
the book _seem_ entirely meaningless.
	Unless speaking Chinese in a manner "indistinguishable from those of a
native Chinese speaker" does not require intelligence, the entire room must of
course be considered intelligent. It certainly behaves intelligently, and as
this cannot be ascribed to the person, the intelligence must be due to the rule
book, whether the rules are stored on paper or not.
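	To make this point concrete, here is a toy sketch of the room as a
program (Python, purely an illustration; a real rule book would of course have
to be astronomically larger and keep track of context, memory and so on):

    # Toy "Chinese Room": the operator is a dumb loop, the rule book is a table.
    # Whatever little "knowledge of Chinese" exists here lives in RULE_BOOK,
    # none of it in operate(), which just matches squiggles and copies out squoggles.
    RULE_BOOK = {
        "ni hao ma?": "wo hen hao, xiexie.",          # "How are you?" -> "Fine, thanks."
        "ni dong zhongwen ma?": "dangran, wo dong.",  # "Do you understand Chinese?" -> "Of course I do."
    }

    def operate(symbols_in):
        # The person in the room: look the squiggles up, hand the squoggles out.
        return RULE_BOOK.get(symbols_in, "qing zai shuo yi bian?")  # default: "please say that again?"

    print(operate("ni hao ma?"))

	The interesting question is then where the understanding sits: not in
operate(), which is as ignorant as Searle's operator, but in the table it
consults.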
	Searle tries to avoid this so-called Systems Reply by imagining yet
another (impossible) situation, in which the person in the room memorizes all
the rules for manipulating the symbols. His argument is nearly unbelievably
naive: "There is nothing in the 'system' that is not in me, and since I don't
understand Chinese, neither does the system."
	The consequences of this statement are absurd: If his statement is
correct, then he has proven nothing more than that it should be possible for a
person ignorant of Chinese to pass the Turing test for speaking Chinese. All he
has done is reduce the requirements for speaking a language fluently. In
fact, he has simply made it impossible to determine whether a person
understands a given human language or not: If other humans are able to answer
in speech as though they understood what was being said, then surely all their
actions might also be the result of consulting a simple rule book.
	The entire concept of comprehension has been made hollow and without
meaning.

	Furthermore, if the Turing test is no longer a valid test for
intelligence, then it has become impossible to judge the degree of intelligence
present in a person. The concept of intelligence has also been made
meaningless, as the presence or absence of Searle's kind of intelligence in no
way influences an entity's behaviour. If it did, the Turing test could be used
to distinguish between the two.  
	Of what use are Searle's concepts of comprehension and intelligence, if
they are not related to any events of this world?
	If Searle's statement were true, he would simply have made it clear
that conscious thought was unnecessary for all kinds of human behaviour!
	Which is probably the exact opposite of what he wanted to prove.

	But perhaps the key to the explanation could be that Searle himself
actually does not understand English. Whenever he writes an article, he simply
consults a (lousy) book full of rules for writing articles against artificial
intelligence :-)
   
	The entire attitude of Searle is ridiculous. He states that "a program
merely manipulates symbols, whereas a brain attaches meaning to them". But why
is it important that humans attach meaning to symbols? Does it matter? Of
course it does: it is exactly this attaching of meaning to symbols that allows us
to interpret the symbols as conveying a message, and allows us to understand
this message.
	Obviously, the fact that we are able to understand symbolically encoded
messages affects the way we interact with our environment. That is why a
computer would have to be able to understand human language, not merely
manipulate it, in order to pass the Turing test.
	Does our intelligence influence the way we behave? Of course it does.
If not, then what is intelligence? And why are we equipped with it, if it is
of no real, practical use?
	But if intelligence influences our behaviour, then it follows that a
computer would also have to be intelligent if it is ever to pass the Turing test.
	As the person in the Chinese Room is unable to understand the Chinese
signs entering the room, all of this understanding and intelligence would have
to be present in the "rule book" in the Chinese Room, which would make it
something far more sophisticated than merely a book. I think this is where
Searle really cheats. By using a simple book full of rules, he is able to carry
his argument along, because it is obvious that a book could not understand
Chinese, or be intelligent. However, he has at no time demonstrated (or even
made likely) that a book would indeed be sufficient for his project. His
assumption is, in fact, a reduction of intelligence to a set of simple rules
that can easily be formulated in a book.
	Searle gets things mixed up by assigning the name "book" to an entity
with properties that are fundamentally different from those of a book. This
is, in my opinion, the central error in the Chinese Room argument. Searle
confuses himself (and the Churchlands) by calling something a book that could
never be just a book of rules.

	Searle's attack on the Turing test is unfair and unfounded. He mistakes
his own lack of understanding for flaws in the Turing test.
	It is odd that the Churchlands in their article "Could a Machine
Think?" fail to recognize the untenability of Searle's arguments against the
Turing test. They agree with Searle that "it is also very important how the
input-output function is achieved; it is important that the right sorts of
things be going on inside the artificial machine."
	Well, I certainly do not agree with that. That's just pure mysticism. A
black box must be considered intelligent if it acts intelligently. It is
ridiculous to define "conscious intelligence" in a way that makes it
impossible to measure, because it in no way affects its surroundings. With this
definition, we won't ever be able to determine whether conscious intelligence
is present in an object or not. All we can say is the following: "Is it human?
Ah, then it's intelligent! Not human? Well, then this seemingly intelligent
behaviour is not achieved the right (human) way. Therefore, it is not
intelligent!"
	Then we have defined intelligence in such a way that it can only occur
in humans. And by defining it this way, we have excluded all the most
impressive and important qualities of human intelligence.

	I am not at all impressed by these three philosophers' abilities to
reason: Searle also argues that simulating a process on a computer is very
different from the actual process, and that therefore even if we could simulate
all the processes of the brain, we should still not have attained artificial
intelligence. However, his example, that you cannot make a car run on a
computer simulation of the oxidation of hydrocarbons in an engine, is really
not relevant. Obviously, "a simulation of cognition" would process the same
genuine information as a brain, not just some symbolic substitute. And it would
process it in the same way as the brain. The simulation would be a genuine
"processor of information", just like the brain. Therefore the two situations
are not comparable, and the argument is invalid.

	Let me try to summarize why I think AI is possible. The definition of
intelligence I will use is a more usual one than Searle's: Intelligence is
the ability to interact with the environment in an "intelligent" way, that is,
in a way which shows comprehension of the workings of the environment.
	My argument goes like this: Only purely random events (if such exist at
all) are not governed by rules. And since intelligence is the quintessence of
non-randomness, rules for intelligent behaviour must exist, however complex
they may be.
	These rules are not the kind of rules to be found in Searle's rule
book. These are complex rules, which take into account all knowledge and memory
of past experiences, all emotions, the behaviour of the surroundings, et
cetera. Because of that, the rules are not the same in all humans, but no doubt
we share a large proportion of them.
	Intelligent behaviour is in essence highly unpredictable. But this is
simply due to the complexity and multitude of the rules guiding intelligence.
It is certainly not the result of an absence of rules, as that would only lead
to random, and thereby non-intelligent, behaviour.
	As a consequence of the existence of such complex rules, artificial
intelligence is possible, as it is "simply" a matter of creating a machine
that is able to handle these complex rules and the enormous amount of memory
required. And it would be very unlike human beings if we were not to build
such a machine one day.

	In his response in the September issue of Scientific American, Searle
writes: "It is a mistake to suppose that in opposing a computational
explanation of consciousness I am opposing a mechanical explanation." In my
opinion, this doesn't make sense at all. If there is a mechanical explanation,
then there has to be a computational explanation as well, because every
mechanical process can be described computationally.
	Searle admits: "There must be a mechanical explanation for how the
brain processes produce consciousness because the brain is a system governed by
the laws of physics." Yes, and that is precisely the reason why artificial
intelligence is possible. "Consciousness" and "intelligence" are results of a
functioning brain. It is this functionality that we want to recreate.
What matters is the functionality itself, not how it is achieved. And if there
is a mechanical explanation of how the brain's information processing works,
then of course we can, in principle, recreate it as a computer program, however
complex it might have to be.

	Searle ends his response by stating that "Any sane analysis has to
grant that the person in the Chinese room does not understand Chinese." Right,
I agree. I don't think Searle could find anybody disagreeing. However,
this is completely uninteresting. As Searle pointed out in his article, the
person is acting as a "computer", whereas it is the rule book that corresponds
to the computer _program_. So Searle has just proved that the _hardware_ of a
computer need not be intelligent in order for the computer _system_ to be
intelligent. But he certainly hasn't proved that the brain's mind could not be
the result of executing a computer program, as was his intention.

	Searle's article, and others like it, always makes me think of the
apparent paradox, that the people most strongly opposed to the notion of
artificial intelligence are sometimes those, who seem less well endowed with
natural intelligence :-)

--

Svend Jules Fjerdingstad, jufier@daimi.aau.dk
Computer Science Department, University of Aarhus
Ny Munkegade 116, DK-8000 Aarhus C, DENMARK

deichman@cod.NOSC.MIL (Shane D. Deichman) (11/14/90)

In his earlier posting, Svend makes some brilliant arguments in
support of a deterministic, non-free will environment for human
existence.  By deftly casting the arguments of both Searle and
the Churchlands aside, he resorts to a "If it exhibits the qualities
of intelligence then it IS intelligent" argument.  Is that to say
that human perceptions are always infallible, and that what we
see and perceive actually IS?  Or does it imply that our percep-
tions, while not always accurate, still elicit a deeper understanding
of a given phenomenon based on multiple repetitions?

The Chinese Room argument points out some deficiencies in the Turing
Test -- deficiencies which call upon the observer to take a deeper,
more profound look at what is meant by "understanding" and "knowledge."
Svend disregards the subconsciousness associated with cognition and
lucidity, and therefore begs the question.

Furthermore, he attacks the Churchlands (supposed "allies" in his 
campaign in support of Strong AI) in their reasoning capacities
for failing to see this point he so astutely raises.  Perhaps, in
a stolid, deterministic world where emotions are bleak representa-
tions of mere "sensory inputs," Svend's arguments would carry some
weight.  But in a world enriched by the subtleties of life, his
"intelligence" as a function of outward appearance is exceedingly
bland.

-shane

"the Ayatollah of Rock-and-Rollah"

JAHAYES@MIAMIU.BITNET (Josh Hayes) (11/14/90)

What needs defining here is "intelligence", because it seems that
Searle has his own definition which _de facto_ includes being a
human being, or at least an organic being; it's no surprise then
that no machine "intelligence" need apply....
 
Sven, on the other hand (I have a Colombian friend named Sven; it's
an odd name for that part of the world....where was I?) defines
intelligence as "that which appears intelligent" (I paraphrase,
but I think, fairly). This is a simple definition (though it begs
the question of how we determine what "appearing intelligent" is),
and is, I think, the relevant definition to the question of A.I.
 
I believe we want a pragmatic definition: what is the PURPOSE of
AI? We ostensibly design AIs to perform a task or tasks that we
assume to require a degree of intelligence; to the extent that they
carry them out well, are they not intelligent?
 
This all ties back to the emergent properties shtick (sorry). The
systems reply to Searle's CR analogy is entirely appropriate if we
regard "intelligence" as a property of a system as a whole which
cannot be said to reside in any particular component of that system.
It is the property of the "instruction book" and the "guy who
manipulates the symbols" AND the interaction between these sub-
systems AND the interaction of that whole system with the outside
world (that speaks Chinese to the "Room"). As such, "intelligence"
may not be a very useful term, since it's so difficult to pin down,
and of course, since it's such a loaded term.
-------
Josh Hayes, Zoology Department, Miami University, Oxford OH 45056
voice: 513-529-1679      fax: 513-529-6900
jahayes@miamiu.bitnet, or jahayes@miamiu.acs.muohio.edu
"It is always wise to remember that it was the gods who put
 nipples on men, seeds in pomegranates, and priests in temples."

vic@corona.Solbourne.COM (Vic Schoenberg) (11/16/90)

I am enjoying this revisiting of the Chinese Room as much as ever this
time around, but once again I feel we are having all the fun at Searle's
expense. Typical of the Searle bashing is this conclusion to the posting
by Svend Jules Fjerdingstad:

>         Searle's article, and others like it, always makes me think of the
> apparent paradox, that the people most strongly opposed to the notion of
> artificial intelligence are sometimes those, who seem less well endowed with
> natural intelligence :-)

I suppose it's possible that AI researchers are smarter than philosophers,
but there are other possibilities. For example, Searle may understand
the issues differently, or he may impose different criteria on a
satisfactory reply. In the case of the question of whether passing the
Turing Test in and of itself assures that a system understands a natural
language, I think both these factors are involved.

Recall that the very purpose of the Turing Test is to establish an operational
test for intelligence, bypassing any attempt to agree on the definition of
what intelligence is, or what it means to understand a language. With the
Turing Test, we have a mathematician's attempt to bypass these sticky
questions of philosophy. It isn't surprising that a philosopher should
be unamused. To a philosopher of mind, this end run around the main
issues of the day isn't acceptable. Searle isn't satisfied with an
operational definition of intelligence because this doesn't address the
issues of subjectivity, qualia, the problem of other minds, and so
forth that are central to the human experience and constitute the core
unsolved problems of this area of philosophic study.

>	Searle tries to avoid this so-called Systems Reply by imagining yet
> another (impossible) situation, in which the person in the room memorizes all
> the rules for manipulating the symbols. His argument is nearly unbelievably
> naive: "There is nothing in the 'system' that is not in me, and since I don't
> understand Chinese, neither does the system."
>	The consequences of this statement are absurd: If his statement is
> correct, then he has proven nothing more, than that it should be possible
> for a person ignorant of Chinese to pass the Turing test for speaking
> Chinese. 

This is one of the points Searle wished to establish, that the Turing
Test is inadequate.

Searle is often accused of dualism or even mysticism, but he doesn't 
consider himself as either. If anyone is taking a leap of faith here,
it is the AI advocates. I doubt if any of them think a radio
understands speech, or a television enjoys sitcoms, or a computer reads
the email that passes through it and forms opinions on its quality. 
But we think that with the right wiring and right programs it will
suddenly become conscious and have beliefs. 

Searle doesn't deny that material entities can have such properties, but
he suggests that something in the brain is making possible these
subjective experiences which humans have, and the something that does
this, whatever it is, is quite beyond anything computer scientists have
created or proposed. 

I think he has a valid point, and I wish we could address the problems
of qualia, other minds, and the subjective experiences of humans and
other intelligent agents instead of belittling him and the issues he
raises.

--

Vic Schoenberg  	vic@Solbourne.COM
303/678-4603		...!{uunet,boulder,sun}!stan!vic

marky@caen.engin.umich.edu (Mark Anthony Young) (11/16/90)

In article <1990Nov15.204949.12075@Solbourne.COM> vic@corona.Solbourne.COM (Vic Schoenberg) writes:
> [In some other article someone else writes:]
>>	Searle tries to avoid this so-called Systems Reply by imagining yet
>> another (impossible) situation, in which the person in the room memorizes all
>> the rules for manipulating the symbols. His argument is nearly unbelievably
>> naive: "There is nothing in the 'system' that is not in me, and since I don't
>> understand Chinese, neither does the system."
>>	The consequences of this statement are absurd: If his statement is
>> correct, then he has proven nothing more, than that it should be possible
>> for a person ignorant of Chinese to pass the Turing test for speaking
>> Chinese. 
>
>This is one of the points Searle wished to establish, that the Turing
>Test is inadequate.
>
My interpretation of Searle's reply here is that it should be possible for
someone who doesn't understand Chinese to speak it in a way indistinguishable
from someone who does understand, simply by memorizing the rules from the
Chinese room.  Of course, if we were carrying on a conversation with someone
in Chinese, and that person claimed not to understand the language, we would
hardly believe him.  If Searle persisted in claiming that he didn't understand
Chinese, in spite of carrying on perfectly fluent conversation therein, we 
would question his sanity before his understanding.  Thus Searle's claim that
the system does not understand seems far-fetched.

So the Turing test is only inadequate as a theory of understanding (can it
even be called a theory of understanding?).  It is perfectly adequate as a
test of understanding.

>Recall that the very purpose of the Turing Test is to establish an operational
>test for intelligence, bypassing any attempt to agree on the definition of
>what intelligence is, or what it means to understand a language. With the
>Turing Test, we have a mathematician's attempt to bypass these sticky
>questions of philosophy. It isn't surprising that a philosopher should
>be unamused. To a philosopher of mind, this end run around the main
>issues of the day isn't acceptable. Searle isn't satisfied with an
>operational definition of intelligence because this doesn't address the
>issues of subjectivity, qualia, the problem of other minds, and so
>forth that are central to the human experience and constitute the core
>unsolved problems of this area of philosophic study.

Since the Turing test doesn't address these issues, and was never meant to,
isn't it irrelevant to them?  Turing wasn't trying to help us understand
what understanding is, only to help us recognise it.  Why does Searle spend
so much time and effort criticising something that has no bearing on what
he's interested in?  

...mark young

jufier@daimi.aau.dk (Svend Jules Fjerdingstad) (11/17/90)

deichman@cod.NOSC.MIL (Shane D. Deichman) writes:

>In his earlier posting, Svend makes some brilliant arguments in
>support of a deterministic, non-free will environment for human
>existence.  By deftly casting the arguments of both Searle and
>the Churchlands aside, he resorts to a "If it exhibits the qualities
>of intelligence then it IS intelligent" argument.  Is that to say
>that human perceptions are always infallible, and that what we
>see and perceive actually IS?  Or does it imply that our percep-
>tions, while not always accurate, still elicit a deeper understanding
>of a given phenomenon based on multiple repetitions?

No.

The point is this: If a human being "exhibits the qualities of intelligence"
(according to our (subjective) perception of such qualities), then we DO
(in normal every-day life) consider this human being to be intelligent.

Therefore if some entity (be it a computer system or anything else) behaves
"intelligently", then we MUST also conclude that this entity has intelligence.

If we cannot consider a computer system intelligent EVEN THOUGH it behaves
intelligently, then we have redefined the concept of intelligence in such a
way as to make it completely unrelated to any behaviour that we can observe.
This means that any piece of dirt might indeed be considered intelligent, or
alternatively, that it is impossible to conclude of any human being that
he or she is intelligent. This definition could, in fact, lead to a belief in
the non-existence of true intelligence, whether in humans or in computers.
(Except in me, of course :-))

In my opinion, this last definition of intelligence is absurd and useless.
Intelligence is the ability to BEHAVE intelligently. Nice definition, eh :-)

The problem is that we cannot at the present time (and perhaps we never will
be able to) give a precise and exhaustive definition of intelligent behaviour.
Therefore the Turing test represents the brilliant solution of using one
intelligent system, human beings, to evaluate the possible degree of (verbal)
intelligence residing in some other supposedly intelligent system, a computer
system, for example.

>The Chinese Room argument points out some deficiencies in the Turing
>Test -- deficiencies which call upon the observer to take a deeper,
>more profound look at what is meant by "understanding" and "knowledge."
>Svend disregards the subconsciousness associated with cognition and
>lucidity, and therefore begs the question.

If subconsciousness is a prerequisite for intelligence, if it plays a role
in forming intelligent behaviour, then of course a computer system would have
to possess subconsciousness, in order to pass the Turing test.

Anyway, IMHO the only deficiencies pointed out by the Chinese Room argument are
deficiencies in Searle's understanding of the Turing test :-)

If Searle's Chinese Room argument were valid, then all of you people out there
on the net might just be mindless machines looking up words in a dictionary.
But then, why do I bother writing this? Better stop now :-)

>Furthermore, he attacks the Churchlands (supposed "allies" in his 
>campaign in support of Strong AI) in their reasoning capacities
>for failing to see this point he so astutely raises.  Perhaps, in
>a stolid, deterministic world where emotions are bleak representa-
>tions of mere "sensory inputs," Svend's arguments would carry some
>weight.  But in a world enriched by the subtleties of life, his
>"intelligence" as a function of outward appearance is exceedingly
>bland.

Ah, I thought so. You ARE one of those :-)

>-shane
>"the Ayatollah of Rock-and-Rollah"

Svend
--

Svend Jules Fjerdingstad, jufier@daimi.aau.dk       |  "To love,
Computer Science Department, University of Aarhus   |     and to learn."
Ny Munkegade 116, DK-8000 Aarhus C, DENMARK         |

mcdermott-drew@cs.yale.edu (Drew McDermott) (11/17/90)

   In article <1990Nov15.204949.12075@Solbourne.COM>
   vic@corona.Solbourne.COM (Vic Schoenberg) writes:

   Searle isn't satisfied with an
   >operational definition of intelligence because this doesn't address the
   >issues of subjectivity, qualia, the problem of other minds, and so
   >forth that are central to the human experience and constitute the core
   >unsolved problems of this area of philosophic study.
   >
   >Searle is often accused of dualism or even mysticism, but he doesn't 
   >consider himself as either. If anyone is taking a leap of faith here,
   >it is the AI advocates. I doubt if any of them think a radio
   >understands speech, or a television enjoys sitcoms, or a computer reads
   >the email that passes through it and forms opinions on its quality. 
   >But we think that with the right wiring and right programs it will
   >suddenly become conscious and have beliefs. 

If you delete the word "suddenly," then of course you're right: Our
operating assumption is that with the "right ... programs, it will be
... conscious."  

   >Searle doesn't deny that material entities can have such properties, but
   >he suggests that something in the brain is making possible these
   >subjective experiences which humans have, and the something that does
   >this, whatever it is, is quite beyond anything computer scientists have
   >created or proposed. 

   >I think he has a valid point, and I wish we could address the problems
   >of qualia, other minds, and the subjective experiences of humans and
   >other intelligent agents instead of belittling him and the issues he
   >raises.
   >

I agree entirely (although "quite beyond" seems a little strong), ....

   >--
   >
   >Vic Schoenberg  	vic@Solbourne.COM
   >303/678-4603		...!{uunet,boulder,sun}!stan!vic

.... however, it would be nice to have a decisive refutation of
Searle.  And here it is: Searle's argument takes the form of a
Gedanken experiment.  Such an experiment resembles a real experiment
in that one starts with a prediction and at some point it gets refuted
or confirmed.  Obviously, this makes sense only if the experiment
causes two theories to interact in surprising ways.  E.g., Einstein
imagined what light would look like if you were traveling at the speed
of light, thus exposing basic contradictions among existing physical
theories.  

Now, the question for Searle is: Exactly what prediction would
cognitive science (or "Strong AI") make about the Chinese Room
situation?  He always talks as if the theory would predict that the
squiggle-manipulator would come to understand Chinese.  But I doubt
anyone would agree to abide by that prediction.  Instead, the
prediction would be that (with the "right programs" again), a virtual
person would come into existence that did understand Chinese.  This is
the virtual person you are communicating with via squiggles and
squoggles.  This prediction may seem crazy to scoffers at AI, but
merely seeming crazy is not sufficient for a hypothesis to be refuted
in a Gedanken experiment.  One actually has to arrive at a
contradiction of existing theories.  And of course we're ludicrously
underequipped with useful theories in this area.

Anyway, the key principle here is that those espousing the theory
being critiqued, and not those making the critique, get to say what
the theory predicts.

                                             -- Drew McDermott

cw2k+@andrew.cmu.edu (Christopher L. Welles) (11/17/90)

In <1990Nov15.204949.12075@Solbourne.COM> Vic Schoenberg says:
In response to Svend Fjerdingstad
>>       The consequences of this statement are absurd: If his statement is
>> correct, then he has proven nothing more, than that it should be possible
> >for a person ignorant of Chinese to pass the Turing test for speaking
> >Chinese. 
>
>This is one of the points Searle wished to establish, that the Turing
>Test is inadequate.

It appears you've missed the whole point of what Svend was trying to
say.  The fact that a part of the system (a logical part, not a
physical part) does not understand does not mean that the system as a
whole does not understand.  It is not actually the person, as Svend had
said, that passes the Turing test, but the system as a whole, rules and
all.  The place Svend made a mistake was in assuming that what he meant
would be understood.

It seems those people who actually understand the systems reply take it
for granted that others do.  As for myself, it's hard to imagine not
understanding the systems reply.  It just seems obvious.  So obvious, in
fact, that it is difficult to communicate the reasoning behind it.  It
seems to be a conceptual leap of sorts.

Let me try to explain the concept once again.  Let us take you for
example.  When photons of light strike your eye, a complex series of
chemical reactions takes place, resulting in pulses being sent along the
nerve.  If we traced these pulses throughout the whole brain, they would
still only be pulses, or at least would result in specific physical
reactions.  We can define all those physical reactions as rules.
Throughout the whole brain, you can follow all the pulses, all the
chemical reactions, and they won't mean a thing to you.  It's only the
system itself that sees any meaning.  It doesn't even realize they are
pulses!  The two views of what's going on are from completely different
viewpoints.

If you still don't understand, I'm at a loss.  I'm strongly inclined to
take the viewpoint that Svend did:  "that the people most strongly
opposed to the notion of artificial intelligence are sometimes those,
who seem less well endowed with natural intelligence."  I just don't
know how to state it more clearly!

I'm curious though.  There is the question of why some people seem to
understand the Systems Reply, while others have no concept of what it
means.  Could programming experience have something to do with it?  It
just seems to be that those people who understand the systems reply are
those very same people who understand why a computer can be built out of
toilet paper and rocks.
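
To see why, here is a minimal sketch of such a machine (Python, illustration
only): a tape, a head, a state, and a fixed rule table.  Nothing in it cares
whether the tape is silicon, a roll of toilet paper, or a line of rocks on the
floor; the computation is the same.

    # A minimal Turing-style machine.  The substrate is irrelevant: the tape
    # could be paper squares and the head position a pebble.
    RULES = {
        # (state, symbol) -> (symbol to write, head move, next state)
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", " "): (" ",  0, "halt"),
    }

    def run(tape, state="flip", head=0):
        cells = list(tape) + [" "]            # a blank cell marks the end
        while state != "halt":
            symbol = cells[head]
            write, move, state = RULES[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells).rstrip()

    print(run("010011"))                       # -> 101100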

			<<<<< Chris >>>>>

G.Joly@cs.ucl.ac.uk (Gordon Joly) (11/18/90)

Why Chinese? Is this a Red Herring (or racism)? Why not French,
Serbo-Croat or Swedish?  The fact that Chinese uses pictograms rather
than letters is irrelevant to the arguments.

Gordon Joly                                       +44 71 387 7050 ext 3716
InterNet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

marky@caen.engin.umich.edu (Mark Anthony Young) (11/18/90)

In article <1990Nov16.161134.2845@daimi.aau.dk> jufier@daimi.aau.dk (Svend Jules Fjerdingstad) writes:
>deichman@cod.NOSC.MIL (Shane D. Deichman) writes:
>
>>The Chinese Room argument points out some deficiencies in the Turing
>>Test -- deficiencies which call upon the observer to take a deeper,
>>more profound look at what is meant by "understanding" and "knowledge."
>>Svend disregards the subconsciousness associated with cognition and
>>lucidity, and therefore begs the question.
>
>If subconsciousness is a prerequisite for intelligence, if it plays a role
>in forming intelligent behaviour, then of course a computer system would have
>to possess subconsciousness, in order to pass the Turing test.
>
I think this is a very important point, one that is ignored in the Chinese
room argument.  The CR argument goes like this:

  IF the Turing test is correct,
  AND a machine of such-and-such a type passes it,
  THEN that machine is intelligent.

  BUT, that type of machine can't be intelligent
  (it doesn't have the "right stuff")
  THEREFORE, the Turing test is not correct.

Implicit in this argument is that the offending machine will pass the Turing
test (otherwise the implication is invalid).  

It is possible that the Turing test is valid, and yet no machine will ever
pass it.  It is possible that any machine that passes the TT will be totally
unlike anything we now consider to be a computer.  Nevertheless, this would
not invalidate the test itself.

...mark young

thornley@cs.umn.edu (David H. Thornley) (11/20/90)

In article <1990Nov15.204949.12075@Solbourne.COM> vic@corona.Solbourne.COM (Vic Schoenberg) writes:
>
>I suppose it's possible that AI researchers are smarter than philosophers,
>but there are other possibilities. For example, Searle may understand
>the issues differently, or he may impose different criteria on a
>satisfactory reply. In the case of the question of whether passing the
>Turing Test in and of itself assures that a system understands a natural
>language, I think both these factors are involved.
>
Certainly, certainly.  It seems to me that Alan Turing was interested
in testing for the existence of intelligence, while Searle is
interested in the nature of intelligence.  To give a gravitational
analogy, Turing is calculating possible observed planetary orbits
based on Newtonian theory to see if planets might follow them,
while Searle is studying the curvature of space, and why it
happens.

>Recall that the very purpose of the Turing Test is to establish an operational
>test for intelligence, bypassing any attempt to agree on the definition of
>what intelligence is, or what it means to understand a language. With the
>Turing Test, we have a mathematician's attempt to bypass these sticky
>questions of philosophy. It isn't surprising that a philosopher should
>be unamused. To a philosopher of mind, this end run around the main
>issues of the day isn't acceptable. Searle isn't satisfied with an
>operational definition of intelligence because this doesn't address the
>issues of subjectivity, qualia, the problem of other minds, and so
>forth that are central to the human experience and constitute the core
>unsolved problems of this area of philosophic study.
>

Frankly, Alan Turing didn't write his little paper to amuse philosophers.
He was trying to come up with an operational definition that people could
use, if and when anybody declared that a machine was intelligent.

The technique of establishing an operational definition for something
you don't understand is very common.  I've seen it applied to gravity,
memory, and a host of other things.  What makes such a criterion useful
is not its theoretical basis, or an ability to capture all members of
a class, but that it establishes some set of instances (in this case,
hypothetical intelligent computers) that we can observe and reason from.

At the very least, it has the virtue of being somewhat objective.  Consider
this Searle person (how did he get into this discussion? :-).  He keeps
saying that brains think.  How does he establish that?  Has he ever
observed a brain removed from the rest of its body, and determined that
it thinks?  How does he know that anybody but himself thinks, if he
is willing to consider that behavior indicating thought may proceed from
another source?

Speaking personally, I don't know that I have a brain.  I have hard
stuff in my head, which corresponds to descriptions and pictures I
have seen of "skulls."  I am assured from many quarters that skulls
of humans (which class I seem to fall into - consider appearance,
physical capabilities, and the fact that medical techniques based
on humans seem to work on me, not to mention genealogical evidence)
contain brains.  I am further assured that electrodes placed upon
my scalp have detected electrical activity consistent with sleeping
and waking (whether this is from an alleged brain or not I do not
know).  Furthermore, I am told that various parts of science, now
somewhat united as cognitive science, tell me that brains are the
source of various functions whose description resembles that which
I experience as "thought."  Therefore, there is one entity which I
know thinks, and I have no more than strongly suggestive evidence
that that entity possesses a brain.

Were I therefore to construct the appropriate thought experiment, I
could conclude that there is a possibility that you cannot build
anything capable of thinking with organic materials, but you need
a computer.  If, on the other hand, I grant that intelligent behavior
indicates intelligence, the thought experiment becomes too much like
Descartes' malevolent demon to be even vaguely plausible.

Therefore, I will start taking Searle more seriously when he provides
some sort of criterion of thought that is not ultimately based on
the Turing Test, or when he gives an understandable difference
between the cognitive powers of a human and the proper computer
programs running on an appropriate machine.  (Searle is correct in
that programs don't think; actually, programs don't do anything.
It is the system running the programs that does something.)
I will take the "causal powers" argument seriously when I find
out what "causal powers" are and why computers (*not* programs,
see above) don't have them.  I will take the "symbol grounding"
argument seriously when somebody shows how to test for it, in
a system-dependent way, in a way not dependent on behavior.
(If you ask me what an apple is, what kind of apples I like,
and to pick an apple out of a fruit basket, how do you know I
am referring to apples, and not thinking I am playing some sort
of chess game or discussing the stock market in some weird code?
The possibility that a computer is doing this plays a major role
in the Scientific American article.)

DHT

mark@adler.philosophie.uni-stuttgart.de (Mark Johnson) (11/21/90)

Drew McDermott makes the interesting claim that in Searle's 
Chinese Room, we wind up communicating with a "virtual person".
This raises all sorts of interesting questions, like "What is
a virtual person?", and "How is it that real people like you
and me might be able to communicate with them?"  Presumably
such "virtual people" must be "instantiated" somehow on
real entities, and maybe they even need to be "connected" to
the "real world" in some way to be "real virtual people"?

Maybe all of this sounds crazy to you A.I.'ers --- of course
the mind is a computer, what else could it be?  (But remember
during the last century people thought the brain was like a
steam engine, controlled by governors and what not: there is
a definite tendency to view the brain as the most complicated
machine around).

Actually, what I really wanted to do here is point out the relationship
between Searle's Chinese Room argument and some recent issues in the
semantics of natural language.  There are two basic approaches to
N.L. semantics.  The first tries to understand N.L.U. solely in terms
of symbol processing; if only we can come up with the right representations
and algorithms we will be able to explain natural language understanding.
The second claims that one must focus on the fact that natural language
expressions are *about* something: that when I say "The sun is shining"
I'm not just telling you about my internal psychological state, but
also about things external to me: the relationship between the sun,
clouds, and where I happen to be sitting, etc.  These people take the
*situatedness* of natural language to be perhaps its most important
property.  If these people are right, and if intelligence is like
language, then it's not just the abstract representations and algorithms
used by an entity that make it intelligent, but crucially the way
in which these representations are grounded in (related to) external reality.

Mark Johnson

G.Joly@cs.ucl.ac.uk (Gordon Joly) (11/23/90)

I said
> Why Chinese? Is this a Red Herring (or racism)? Why not French,
> Serbo-Croat or Swedish?  The fact that Chinese uses pictograms rather
> than letters is irrelevant to the arguments.

Somebody suggested that this revealed my own xenophobia. Another said
that Searle's thrust was to present a language that was very foreign
in every sense, that is in syntax, idiom, grammar and so on.

In response, I would like to suggest "The BSL Room". The operator in the
room cannot speak British Sign Language but has a method of reading
the signs and giving answers back in BSL.

Sign is a language in its own right. It is not a one-to-one mapping
onto, say, English. BSL and ASL (American SL) are different languages.

Note also that Belgian Sign is used by both the Flemish and French
communities.

Gordon Joly                                       +44 71 387 7050 ext 3716
InterNet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

cam@aipna.ed.ac.uk (Chris Malcolm) (11/26/90)

In article <MARK.90Nov21124612@adler.philosophie.uni-stuttgart.de> mark@adler.philosophie.uni-stuttgart.de (Mark Johnson) writes:
>
>Drew McDermott makes the interesting claim that in Searle's 
>Chinese Room, we wind up communicating with a "virtual person".
>This raises all sorts of interesting questions, like "What is
>a virtual person?", and "How is it that real people like you
>and me might be able to communicate with them?"  Presumably
>such "virtual people" must be "instantiated" somehow on
>real entities, and maybe they even need to be "connected" to
>the "real world" in some way to be "real virtual people"?

The analogy is with "virtual machine". Some machines have a level of
description of their functioning which is independent of the technology
used to implement the operations of that level. This lets you port
software around different computers: you only have to re-implement the
virtual machine the stuff runs on. 
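
A toy illustration of that level of description (Python, and only an
illustration): the same "virtual machine" program executed on two deliberately
different underlying technologies.  At the virtual-machine level the behaviour
is identical; only what lies beneath it differs.

    # One program at the virtual-machine level ...
    PROGRAM = [("push", 2), ("push", 3), ("add",), ("show",)]

    def run_on_ints(program):
        # ... implemented on host A, where the stack holds machine integers,
        stack = []
        for op, *args in program:
            if op == "push": stack.append(args[0])
            elif op == "add": stack.append(stack.pop() + stack.pop())
            elif op == "show": print(stack[-1])

    def run_on_tallies(program):
        # ... and on host B, where numbers are strings of tally marks.
        stack = []
        for op, *args in program:
            if op == "push": stack.append("|" * args[0])
            elif op == "add": stack.append(stack.pop() + stack.pop())
            elif op == "show": print(len(stack[-1]))

    run_on_ints(PROGRAM)      # prints 5
    run_on_tallies(PROGRAM)   # prints 5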

Is there such a level of description  of human mental functioning?
Another way of asking that: is cognition computation? Possible answers
are:

	1. Impossible.

	2. Well, in fact people aren't built like that, but they could be.

	3. Yes.

Much AI and cognitive science has presumed one of the latter two answers
and is still making entertaining progress. 

So, if the answer is "yes", then you and I are in fact virtual people. If
the answer is "well ...", then you and I are probably indistinguishable
from virtual people. It's just that the concept only has theoretical
interest until you find a way of breaking the thing apart at some
virtual machine interface.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

smoliar@vaxa.isi.edu (Stephen Smoliar) (11/27/90)

In article <1990Nov19.191925.28285@cs.umn.edu> thornley@cs.umn.edu (David H.
Thornley) writes:
>
>Frankly, Alan Turing didn't write his little paper to amuse philosophers.
>He was trying to come up with an operational definition that people could
>use, if and when anybody declared that a machine was intelligent.
>
Having now read several of David's contributions, I am not about to accuse him
of not having read "Computing Machinery and Intelligence."  He knows enough of
the details of the story to convince me that he has read the paper at least
once, if not several times.  However, the above paragraph indicates, to me
at least, that he may not have received Turing's message.  Therefore, I would
like to try to clear up a couple of points.

First of all, for those who do not know this already, Turing's "little paper"
was published in MIND.  He may not have been interested in amusing
philosophers, but he certainly considered them the primary audience
for his observations.  (Remember that Turing spent quite a few hours
in discussion with Wittgenstein during his Cambridge days, so his thoughts
about mind date back to before his work on breaking codes or building computing
machines.)

A more important point, however, is that nowhere in this paper does Turing talk
about operational definitions.  He begins with the question, "Can machines
think?"  The first thing he does is dismiss this question on the grounds that
it bites off more than any sensible thinker can chew.  THEN he poses the
scenario of the "imitation game."  The purpose of posing the scenario is
to ask whether or not a machine could play it.  He argues that this question
is more tractable than his original question and then proceeds to discuss how
one might ultimately build such a machine.

Thus, we are now quite some distance from anything remotely resembling any sort
of definition (operational or otherwise) for intelligence.  Unfortunately,
there now seems to be a flood of philosophers of mind who want to read more
into Turing's paper than he ever intended to write.  The "imitation game" was
nothing more than an engineering decision to pull thought away from (possibly)
fruitless speculation and direct it towards something more concrete.  Of
course, many of us have anecdotes about how some implementation of ELIZA
managed to play the "imitation game" successfully.  All this means is that
we have probably now come far enough to think about scenarios more
sophisticated than Turing's original suggestion.  This seems like
an excellent thing to do.  Turing introduced the "imitation game"
to discourage philosophers from idle speculation.  Those philosophers
now seem to be rushing back to those nebulous words like "think" and
"intelligence" again.  All this means is that it is time to invent a
new scenario, more challenging than the imitation game, which can allow
us to return to more concrete issues again.

=========================================================================

USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

G.Joly@cs.ucl.ac.uk (Gordon Joly) (12/01/90)

Thought experiments are OK, but how long would it take to process one
question in the Chinese Room, by hand and in real time?

The age of the Universe? Or less time than that?
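
A back-of-envelope guess (every number below is an assumption, not a
measurement): say the rule book needs something like 10**14 elementary rule
applications to produce one reply (roughly the order of synaptic events a
brain gets through in a few seconds), and say the operator manages one
application per second, around the clock.

    # Pure back-of-envelope arithmetic; both inputs are guesses.
    applications_per_reply = 1e14      # assumed size of the computation
    applications_per_second = 1.0      # assumed speed of the human operator

    seconds = applications_per_reply / applications_per_second
    years = seconds / (3600 * 24 * 365)
    print(f"{years:.1e} years per reply")      # about 3.2e+06 years

Millions of years per reply, then: comfortably less than the age of the
Universe, but hopeless in real time.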

Gordon Joly                                       +44 71 387 7050 ext 3716
InterNet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

thornley@cs.umn.edu (David H. Thornley) (12/01/90)

In article <15798@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>In article <1990Nov19.191925.28285@cs.umn.edu> thornley@cs.umn.edu (David H.
>Thornley) writes:
>>
>>Frankly, Alan Turing didn't write his little paper to amuse philosophers.
>>He was trying to come up with an operational definition that people could
>>use, if and when anybody declared that a machine was intelligent.
>>
>[Polite notification that Smoliar wishes to disagree with me]
>
>First of all, for those who do not know this already, Turing's "little paper"
>was published in MIND.  He may not have been interested in amusing
>philosophers, but he certainly considered them the primary audience
>for his observations.  (Remember that Turing spent quite a few hours
>in discussion with Wittgenstein during his Cambridge days, so his thoughts
>about mind date back to before his work on breaking codes or building computing
>machines.)

Guess I should have put the smiley on that comment.
>
>A more important point, however, is that nowhere in this paper does Turing talk
>about operational definitions.  He begins with the question, "Can machines
>think?"  The first thing he does is dismiss this question on the grounds that
>it bites off more than any sensible thinker can chew.  THEN he poses the
>scenario of the "imitation game."  The purpose of posing the scenario is
>to ask whether or not a machine could play it.  He argues that this question
>is more tractable than his original question and then proceeds to discuss how
>one might ultimately build such a machine.
>
Here's how I have read the paper.  First, Turing points out the difficulty
of answering the question, "Can machines think?"  He discusses the male-
female "imitation game," then switches to the human-computer game, suggesting
that a digital computer is a good machine to use.

He then suggests that, in about 2000 AD, machines will exist (with gigabyte
storage) such that they will fool many people much of the time, and also
says that he thinks that, by this time, it will be possible to "speak of
machines thinking without expecting to be contradicted."  My reasoning
from this is that Turing thinks that his test is somewhat connected with
the basic question, "Can machines think?"

Turing then proceeds to discuss nine separate possible objections, or,
as he calls them, "opinions opposed to my own."  He is not clear about
which of his own opinions they are opposed to, but some, particularly
number 4, _The_Argument_From_Consciousness_, do seem to argue that a
machine that can imitate a human sufficiently can be said to think or 
understand or something vaguely like that.

This is why I interpret Turing's paper as supporting the common notion
of the "Turing Test."

>Thus, we are now quite some distance from anything remotely resembling any sort
>of definition (operational or otherwise) for intelligence.  Unfortunately,
>there now seems to be a flood of philosophers of mind who want to read more
>into Turing's paper than he ever intended to write.  The "imitation game" was
>nothing more than an engineering decision to pull thought away from (possibly)
>fruitless speculation and direct it towards something more concrete.  Of
>course, many of us have anecdotes about how some implementation of ELIZA
>managed to play the "imitation game" successfully.  All this means is that
>we have probably now come far enough to think about scenarios more
>sophisticated than Turing's original suggestion.  This seems like
>an excellent thing to do.  Turing introduced the "imitation game"
>to discourage philosophers from idle speculation.  Those philosophers
>now seem to be rushing back to those nebulous words like "think" and
>"intelligence" again.  All this means is that it is time to invent a
>new scenario, more challenging than the imitation game, which can allow
>us to return to more concrete issues again.
>
I don't think the Turing test has been outdated yet; for one thing, I
have not seen anything reliably win the "imitation game" yet, and I do
not expect to see a winner by 2000.  I think the problem of machine
"intelligence" is less tractable than Turing thought.

I do believe that we will eventually produce machines that can pass the
Turing test (in the sense that one believes one's mortgage will be
sold to an out-of-state outfit with bad record-keeping, not in the sense
that one believes in God), and I am sure that people, when interacting
with these machines, will believe they are intelligent, and capable of
thinking and understanding.

I have no anecdotes about Eliza et al. playing the imitation game, just
stories about Eliza being mistaken for human (*not* the same thing).
Nor do I think we need something more challenging than the imitation
game, since we haven't come near making a good player in forty years,
even with storage greatly exceeding the gigawhatever quantities
Turing wrote of.

DHT

smoliar@vaxa.isi.edu (Stephen Smoliar) (12/02/90)

In article <1990Nov30.231103.17041@cs.umn.edu> thornley@cs.umn.edu (David H.
Thornley) writes:
>
>He then suggests that, in about 2000 AD, machines will exist (with gigabyte
>storage) such that they will fool many people much of the time, and also
>says that he thinks that, by this time, it will be possible to "speak of
>machines thinking without expecting to be contradicted."  My reasoning
>from this is that Turing thinks that his test is somewhat connected with
>the basic question, "Can machines think?"
>
"Somewhat" is a well-chosen word.  I still think that David's reading is not
quite on the mark.  To make my case, I would like to provide a bit more context
for his quotation from Turing:

	I believe that in about fifty years' time it will be possible
	to programme computers, with a storage capacity of about 10**9,
	to make them play the imitation game so well that an average
	interrogator will not have more than 70 per cent chance of
	making the right identification after five minutes of questioning.
	The original question, "Can machines think?" I believe to be too
	meaningless to deserve discussion.  Nevertheless I believe that
	at the end of the century the use of words and general educated
	opinion will have altered so much that one will be able to speak
	of machines thinking without expecting to be contradicted.

Thus, having told his story about the imitation game, Turing still dismisses
the prospect of pondering his original question as basically a waste of time.
As I have said before, I am inclined to agree with him, leaving the question
to philosophers while the engineers go off and try to do something useful.

On the other hand, David is quite right that a computer which is mistaken for
a human is not necessarily a "winner" at Turing's original imitation game.  I
would guess, however, that he would have been content to accept an example of
ELIZA being mistaken for a human as a reasonable alternative solution to his
original problem.  After all, the scenario is not that different:  Humans are
communicating through typewriters, and the question is one of whether or not
a computer could be successfully substituted for a human.  (One of the factors
Turing probably did not count on was a tendency of people who use computers too
much to start talking like them, thus giving the computer an added edge on
winning the game!)

=========================================================================

USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

marky@caen.engin.umich.edu (Mark Anthony Young) (12/02/90)

In article <15878@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>
>On the other hand, David is quite right that a computer which is mistaken for
>a human is not necessarily a "winner" at Turing's original imitation game.  I
>would guess, however, that he would have been content to accept an example of
>ELIZA being confused for a human as a reasonable alternative solution to his
>original problem.  After all, the scenario is not that different:  Humans are
>communicating through typewriters, and the question is one of whether or not
>a computer could be successfully substituted for a human.  
>
While it's largely fruitless to argue about what someone would or would not
have been content with, I have to disagree with the statement that ELIZA's 
being taken for human constitutes a "reasonable alternative solution".  While
the surface structure may be similar to the Turing Test, ELIZA's "test" is
missing the two most important parts:

	(1) Direct comparison of human and non-human.  The subject must be 
		aware that it is possible s/he is talking with a non-human.
		Otherwise the natural assumption is that one is talking to 
		a human (this is the assumption we all make on the net).  
		Only when it becomes usual to talk with non-humans will this 
		assumption go away.  

		The direct comparison is important because research shows
		that raising suspicions about lying doesn't increase 
		accuracy in detecting lies, it only makes people more 
		suspicious of everyone (Toris & DePaulo, JPSP 47-5, 1985).  
		By always having one truth-teller and one liar, the
		interviewer can concentrate on differences between the
		interviewees, and our measurements will thus be more
		meaningful.

	(2) The non-human must be able to fool a significant proportion of
		people, not simply a few here and there.  There will always
		be a part of the population that has no idea how to tell
		a (simple but clever) computer simulation from the real
		thing.  These people will be reduced to guessing, and the
		non-human will get half of them simply by chance.

		When we say significant proportion, we must have some
		comparable task carried out by humans (known intelligence).
		The humans' rate of success here sets the base rate
		against which the non-human is measured.  Turing
		suggested the Imitation Game (man pretending to be a
		woman) as a comparable task for humans.
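To make the base-rate point concrete, here is a rough sketch in Python (my
own illustration, not taken from any of the experiments discussed in this
thread; the numbers and the pooled two-proportion z statistic are only for
the example) of how a machine's rate of fooling interrogators might be
compared against the human base rate from Turing's man-imitating-woman game:

    # Sketch only: compare a machine's "pass" rate against a human base
    # rate using a pooled two-proportion z statistic.  All figures are
    # hypothetical.
    from math import sqrt

    def two_proportion_z(pass_a, n_a, pass_b, n_b):
        """z statistic for the difference between two pass rates."""
        p_a, p_b = float(pass_a) / n_a, float(pass_b) / n_b
        p_pool = float(pass_a + pass_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1.0 / n_a + 1.0 / n_b))
        return (p_a - p_b) / se

    # Hypothetical data: the machine fooled 18 of 50 interrogators;
    # human imitators in the base-rate task fooled 15 of 50.
    z = two_proportion_z(18, 50, 15, 50)
    print("difference in pass rates, z = %.2f" % z)  # about 0.64: no evidence
                                                     # the machine beats the base rate

Unless the machine's rate is clearly above what human imitators manage,
declaring that it has "passed" tells us very little.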

I have often heard it said that ELIZA passed the Turing Test (or a version
thereof).  I've heard two stories describing this amazing feat.  In one a
person insisted that there must be someone on "the other side," otherwise
who were they talking to?  The other story involved a person who asked
someone else to leave as the conversation with ELIZA was getting personal.
In the second case, it's not even clear that ELIZA was mistaken for a 
person.  In the first, the best explanation seems to be a flat refusal to
believe that anything but a person was even possible.
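It is worth remembering just how little machinery ELIZA needs to produce
these effects.  The following toy fragment (my own sketch in Python, not
Weizenbaum's actual DOCTOR script) shows the keyword-and-reflection trick
that does most of the work:

    import re

    # A handful of rewrite rules in the spirit of ELIZA; the real script
    # is larger, but the principle is the same.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel %s?"),
        (re.compile(r"i am (.*)", re.I),   "How long have you been %s?"),
        (re.compile(r"my (.*)", re.I),     "Tell me more about your %s."),
    ]

    def reflect(text):
        # Swap first-person words for second-person ones.
        return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template % reflect(match.group(1))
        return "Please go on."   # content-free fallback

    print(respond("I feel nobody listens to me"))
    # -> Why do you feel nobody listens to you?

A subject who does not know that a machine is even a possibility, and who
has no human to compare it with, can read a great deal into responses
produced this way.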

Tests with PARRY come closer to the Turing test (though still not there).  
The only version I've seen in an actual journal (sorry, I can't remember
where) involved having psychiatrists rate transcripts for degree of
paranoia.  PARRY did rather well, scoring "mildly paranoid" in its "low"
setting and "very paranoid" in its "high" setting, nicely bracketing the
actual paranoids used as controls.  Psychiatrists were not told, however,
that they might be reading a transcript generated by computer.  

Apparently there was a later version of this experiment in which the
psychiatrists were able to interview PARRY, and were actually told that
it might be a computer on the other end.  If anyone has any information
on this experiment I'd be interested in seeing it.  I'd be particularly
interested in knowing whether any base rate measures were taken, and, if
so, how PARRY compared to humans in their task.

...mark young

thornley@cs.umn.edu (David H. Thornley) (12/05/90)

In article <15878@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>In article <1990Nov30.231103.17041@cs.umn.edu> thornley@cs.umn.edu (David H.
>Thornley) writes:
>>
>>[Discussion of exactly what Turing wrote.]
>
>[More discussion, including a longer quotation.]
>
>Thus, having told his story about the imitation game, Turing still dismisses
>the prospect of pondering his original question as basically a waste of time.
>As I have said before, I am inclined to agree with him, leaving the question
>to philosophers while the engineers go off and try to do something useful.

If you mean that you'd be impressed at a system that would pass the
Turing test, and wouldn't start arguing that it isn't "really"
understanding (Turing did consider that question in his paper),
we can agree.  Trying to figure out exactly what Turing meant is
difficult and somewhat pointless.  I still maintain that it is
an operational definition for intelligence, and question any
definition of intelligence that disagrees with it.  (What is a
simulation of intelligence?  What is an image of a bright light?
Did I ever tell you about the time I was frazzled at work, and
asked myself how I would solve a problem if I could actually
concentrate on it, and got the right answer? :-)

More seriously, if Turing was claiming the question, "Can machines
think?" was simply a waste of time, why did he discuss objections
like "It isn't really conscious," or "It doesn't have a sense of humor/
sense of ethics/enjoyment of hot fudge sundaes?"
>
>On the other hand, David is quite right that a computer which is mistaken for
>a human is not necessarily a "winner" at Turing's original imitation game.  I
>would guess, however, that he would have been content to accept an example of
>ELIZA being confused for a human as a reasonable alternative solution to his
>original problem.  After all, the scenario is not that different:  Humans are
>communicating through typewriters, and the question is one of whether or not
>a computer could be successfully substituted for a human.  (One of the factors
>Turing probably did not count on was a tendency of people who use computers too
>much to start talking like them, thus giving the computer an added edge on
>winning the game!)
>
I don't think it would have satisfied Turing, and it certainly doesn't
satisfy me.  I require (a) that the interrogator know that he or she
may be communicating with a computer, and (b) that the interrogator
have a real human to compare the computer with.  Also, I require that
the human used for comparison be adult, intelligent, educated, literate
in the language used for the experiment, and fully able to use the
communication mechanism in use.  Any other specifications, it seems to
me, allow too many abuses.  (Then there was the time I ran into an
inter-terminal talk program, and assumed it was something like
Eliza for about five back-and-forth messages.  Live and maybe learn.)

DHT

kohout@drinkme.cs.umd.edu (Robert Kohout) (12/06/90)

In article <15878@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>Thus, having told his story about the imitation game, Turing still dismisses
>the prospect of pondering his original question as basically a waste of time.
>As I have said before, I am inclined to agree with him, leaving the question
>to philosophers while the engineers go off and try to do something useful.
>

I agree wholeheartedly, especially insofar as it applies to this Chinese
Room business. Leaving aside for a moment my objections to his "proof",
let us accept Searle's gedanken experiment as valid. Of what practical
importance is it? He is telling us that, even if you build a machine
that can pass the Turing Test, it still won't "think". This sounds
like an issue to be debated in Star Trek: The Next Generation, not
here, not now.

If, on the other hand, Searle were trying to show that digital systems
alone will not be capable of passing the Turing Test, I would be much
more concerned. This is, I believe, one of Steve Harnad's basic
tenets. "Intelligence", whatever it may be, may simply not be computable.

I know that some of you will want to object: "But the brain is just
a big finite state machine". That is a conjecture, and an old one
which has been largely discredited. Neurons are cells, real analog
devices, some of whose behaviors can be characterized as digital.
It is by no means certain that they are, in fact, strictly digital
in nature.

Some of you may even want to go further. That is, you may say
"At the fundamental level, matter is discreet, so at least in theory
we should be able to model the way it behaves." Again, this is
incorrect, on two counts. First, modern physics is somewhat confused
about the nature of "fundamental" particles: they behave as both waves
and particles. They are discreet, but also analog. Secondly, theory
also tells us that we will never (as in NEVER) be able to model the
world, or even a single brain, at the level of fundamental particles.
So we might as well adopt a sort of Heisenberg uncertainty principle of
our own and assume that, theoretically, we cannot model the behavior of
a complex material system at the level of fundamental particles.

Simple objections aside, the question remains: is "intelligence" 
computable? We immediately face the problem of having to define
intelligence. To simplify then, and bring this back 'round to Searle:
is it possible for a digital computer to pass the linguistic Turing
Test? That is, can the Chinese Room itself ever be built? Now
THAT is, to me, a more substantive and pertinent question, one that
needs to be addressed by philosopher and engineer alike.

		- Bob Kohout

cam@aipna.ed.ac.uk (Chris Malcolm) (12/10/90)

In article <28345@mimsy.umd.edu> kohout@drinkme.cs.umd.edu (Robert Kohout) writes:

>"At the fundamental level, matter is discreet, ...

A reference to Heisenberg's Uncertainty Principle? Or, more generally,
the idea that Knowing How It All Works is fundamentally beyond our
punny minds?
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   +44 31 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK             DoD #205

rapaport@acsu.buffalo.edu (William J. Rapaport) (12/12/90)

In article <MARK.90Nov21124612@adler.philosophie.uni-stuttgart.de> mark@adler.philosophie.uni-stuttgart.de (Mark Johnson) writes:
>
>Drew McDermott makes the interesting claim that in Searle's 
>Chinese Room, we wind up communicating with a "virtual person".

I may have missed McDermott's posting, but the notion of a virtual person
was discussed in a paper presented at the American Philosophical
Association Central Division meetings last April:

Cole, David J. (1990), ``Artificial Intelligence and Personal Identity,''
paper presented at the Colloquium on AI, American Philosophical
Association Central Division, New Orleans, 27 April 1990.

Cole can be contacted at phil@ub.d.umn.edu (he's in the Phil. Dept. at
Univ. of Minnesota/Duluth).

My reply is available in LaTeXable form by emailing me at
rapaport@cs.buffalo.edu, or in hardcopy from Sally Elder, Dept. of
Computer Science, SUNY Buffalo, Buffalo, NY 14260; ask for:

Rapaport, William J. (1990), ``Computer Processes and Virtual Persons:
Comments on Cole's `Artificial Intelligence and Personal Identity',''
Technical Report 90-13 (Buffalo:  SUNY Buffalo Dept. of Computer Science,
May 1990).
 
			William J. Rapaport
			Associate Professor of Computer Science
			Center for Cognitive Science

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo		 ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260	 ||uucp: {rutgers,uunet}!cs.buffalo.edu!rapaport
(716) 636-3193, 3180     ||fax:  (716) 636-3464

G.Joly@cs.ucl.ac.uk (Gordon Joly) (12/12/90)

In article   <3634@aipna.ed.ac.uk>, cam@aipna.ed.ac.uk (Chris Malcolm) writes

< In article <28345@mimsy.umd.edu> kohout@drinkme.cs.umd.edu (Robert Kohout) writes:
< 
< >"At the fundamental level, matter is discreet, ...
< 
< A reference to Heisenberg's Uncertainty Principle? Or, more generally,
< the idea that Knowing How It All Works is fundamentally beyond our
< punny minds?
< --
< Chris Malcolm    cam@uk.ac.ed.aipna   +44 31 667 1011 x2550
< Department of Artificial Intelligence, Edinburgh University
< 5 Forrest Hill, Edinburgh, EH1 2QL, UK             DoD #205

It would be interesting to know if space-time is discrete or
continuous, before we start on the hard problems like matter and
energy. Stephen Hawking gave a paper, many years ago, where he
calculated the Winding number of space-time to be about unity, so
space-time had holes in it. He christened this space-time "foam". More
recently, Chris Isham has been looking at the space-time manifold and
considering "quantum topology" of the "continuum" of space-time. If
space were discrete "in reality", then pseudo-Riemannian manifolds
would be poor models (being differentiable). Having discrete
space-time would make numerical experiments a tad easier.

But I digress...

Gordon Joly                                       +44 71 387 7050 ext 3716
InterNet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT