[comp.ai] Chinese Room by Shannon and McCarthy from 1956

daryl@oravax.UUCP (Steven Daryl McCullough) (02/01/90)

In article <2891@bingvaxu.cc.binghamton.edu>, cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes: (quoting Shannon and McCarthy)
> 
> "A disadvantage of the Turing definition of thinking is that it is
> possible, in principle, to design a machine with a complete set of
> arbitrarily chosen responses to all possible input stimuli.  Such a
> machine, in a sense, for any given input situation (including past
> history) merely looks up in a 'dictionary' the appropriate response. 
> With a suitable dictionary, such a machine would surely satisfy Turing's
> definition, but does not reflect our usual intuitive concept of
> thinking.  This suggests that a more fundamental definition must involve
> something relating to the manner in which the machine arrives at its
> responses -- something which corresponds to differentiating between a
> person who solves a problem by thinking it out and one who has
> previously memorized the answer".

I'm not certain that it is important to differentiate between these
two cases. The usual reasons that we worry when someone is using the
"wrong" method to solve problems are (1) there may come a time when he
is faced with a problem that is not in the set he memorized, and (2)
the correct method is important in its own right, since it teaches
general principles which will be useful in similar problems.

In both cases, the worry is that, although the problem-solver got the
right answer, there will eventually come a time when his
problem-solving performance will not be as good as someone who learned
the correct method. When it is clear that future performance is not
affected, no one cares (usually) whether answers are memorized or not.
Test yourself: if I ask you "What is 6 times 7?" do you figure it out,
starting from the definition of multiplication, or do you recite a
memorized answer?
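
To make the contrast concrete, here is a minimal sketch in Python; the
function names and the table range are mine, purely for illustration:

    # Illustrative sketch only.  Strategy 1: "think it out" --
    # multiplication computed from its definition as repeated addition.
    def multiply_by_definition(a, b):
        total = 0
        for _ in range(b):
            total += a
        return total

    # Strategy 2: "recite a memorized answer" -- look the product up
    # in a table built ahead of time.
    TIMES_TABLE = {(a, b): a * b for a in range(13) for b in range(13)}

    def multiply_by_rote(a, b):
        return TIMES_TABLE[(a, b)]    # fails outside the memorized range

    print(multiply_by_definition(6, 7))   # 42
    print(multiply_by_rote(6, 7))         # 42 -- same answer either way

Both routes print 42; the difference in method only shows up on inputs
that fall outside the table.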

If a machine passes the Turing test, then by the definition of
passing, there is *no* performance difference between it and someone
who can *really* think. So why should we care *how* it does the
thinking?

There is also a very practical side to this question: a lookup table
for all possible input histories would be absolutely enormous! Consider
the assignment "Read this 10,000 word essay and write a report on it."
Assuming the essayist has a very small vocabulary of 1000 words, there
are still

     1000^10,000

possible essays. A table of all possibilities would have many more
entries than there are atoms in our galaxy. For this reason, an actual
machine which could pass the Turing test would *have* to do something
more intelligent than a table lookup. 
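
A quick back-of-the-envelope check, sketched in Python (the figure of
roughly 10^68 atoms in our galaxy is a standard rough estimate):

    import math

    # Number of distinct 10,000-word essays over a 1,000-word
    # vocabulary: 1000^10,000 = 10^30,000.
    essays = 1000 ** 10_000
    print(math.log10(essays))        # ~30000, i.e. about 10^30,000 entries

    atoms_in_galaxy = 10 ** 68       # rough estimate, for comparison
    print(essays > atoms_in_galaxy)  # True, by tens of thousands of
                                     # orders of magnitude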

> O------------------------------------------------------------------------->
> | Cliff Joslyn, Cybernetician at Large,cjoslyn@bingvaxu.cc.binghamton.edu 
> | Systems Science, SUNY Binghamton, Box 1070, Binghamton NY 13901, USA 
> V All the world is biscuit shaped. . .

. . . Sure. And I've got one, two, three, four, five
senses working all the time.

Daryl McCullough, Odyssey Research Associates, Ithaca, NY
oravax.uucp!daryl@cu-arpa.cs.cornell.edu

 

cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) (02/02/90)

In article <1307@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
>Test yourself: if I ask you "What is 6 times 7?" do you figure it out,
>starting from the definition of multiplication, or do you recite a
>memorized answer?

Actually, I've got that memorized.  And so that is a mental performance
that doesn't rely on my intelligence, because I could train a rat to do
it.  Not everything I do that is mental is also intelligent, just as
nothing my cat does, a lot of which is mental, is intelligent.

>If a machine passes the Turing test, then by the definition of
>passing, there is *no* performance difference between it and someone
>who can *really* think. So why should we care *how* it does the
>thinking?

Now that pragmatic point is significant: Shannon does not say that we
could *tell* that the machine is not intelligent.  *For all we know* it
might be.  But that is an observation about the poverty of evidence and
inductive inference, which applies to any scientific decision we might
make, not strictly about the intelligence of systems.

I have always said that the Chinese room (and Shannon's observation)
shows that the TTT is a *necessary*, but not a *sufficient* condition
for intelligence.

>There is also a very practical side to this question: a lookup table
>for all possible input histories would be absolutely enormous! 

Another important pragmatic problem: construction of the Chinese room is
physically impossible.  Question: in passing the TTT, does the system
have to respond *as quickly* as a human?

-- 
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjoslyn@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Box 1070, Binghamton NY 13901, USA
V All the world is biscuit shaped. . .

hougen@umn-cs.cs.umn.edu (Dean Hougen) (02/02/90)

In article <2891@bingvaxu.cc.binghamton.edu>,
 cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
Shannon and McCarthy say:

>"A disadvantage of the Turing definition of thinking is that it is
>possible, in principle, to design a machine with a complete set of
>arbitrarily chosen responses to all possible input stimuli. ... "

Perhaps a disadvantage *in principle*, but the "Turing definition of 
thinking" (I think this is a bad phrasing of what Turing was up to)
was intended to answer the real-world question, "Can machines think?"
Supposing the real world to be different from the way it really is, in order
to object to Turing, seems quite silly, IMHO.  (For those who haven't seen it:
the construction of the set of responses for a real-world machine would
have to start at some time and end at some time - therefore the machine
would only be ready to respond to some finite set of input stimuli, and
would fail the test badly if the questioner strayed outside that set.)
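
A toy sketch of that failure mode, in Python; the response table here is
made up, but any table fixed before the test starts has the same problem:

    # Made-up table: a lookup "thinker" in the Shannon/McCarthy sense,
    # with all responses chosen before the test begins.
    RESPONSES = {
        "hello": "Hi there!",
        "what is 6 times 7?": "42.",
        "how are you?": "Fine, thanks.  And you?",
    }

    def canned_reply(stimulus):
        # Only stimuli anticipated at construction time get an answer.
        return RESPONSES.get(stimulus.strip().lower())

    print(canned_reply("Hello"))                     # "Hi there!"
    print(canned_reply("Describe your childhood."))  # None -- the
                                                     # questioner strayed

A real table would also have to key on the entire conversation history
so far, which is what makes its size astronomical.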

Dean Hougen
--
"And all you touch and all you see
 Is all your life will ever be."  - Pink Floyd

hwajin@ganges.wrs.com (Hwa Jin Bae) (02/02/90)

I do remember Searle citing Claude Shannon's work in some of his papers,
but none of those citations were coherent (like most of his writings) or even
showed understanding.  I cannot help but marvel at his career, which is entirely
based on this Chinese Room nonsense.  The sheer volume of responses and
arguments generated since its first appearance in his 1981 (?) paper
boggles the mind.  Douglas Hofstadter summed it up best in _The Mind's I_
when he commented on the Chinese Room -- one must understand before one can
criticize.

--
Hwa Jin Bae, Wind River Systems, Emeryville CA
hwajin@wrs.com  (uunet!wrs!hwajin)

hougen@umn-cs.cs.umn.edu (Dean Hougen) (02/03/90)

In article <HWAJIN.90Feb1191721@ganges.wrs.com> hwajin@ganges.wrs.com (Hwa Jin Bae) writes:
>I cannot help but marvel at his career, which is entirely
>based on this Chinese Room nonsense.  The sheer volume of responses and
>arguments generated since its first appearance in his 1981 (?) paper
>boggles the mind. 
>Hwa Jin Bae, Wind River Systems, Emeryville CA

This is especially mind-boggling (though surely not marvelous, at least
not in the most widely used sense of the word) when one considers that
along with the original article came a number of replies from various
authors, and one of them (the one by Haugeland, if I remember correctly)
completely destroys Searle's argument.  And Searle's reply to this
criticism completely misses the boat.  It's rather like having someone
give a proof that 2=1, having the error in the proof accompany the proof,
and yet having the proof repeated and discussed for years.  Part of the
problem may be that the criticisms (peer review) of the article are often
ignored (because, for example, in the class in which Searle's paper is
brought up, the instructor doesn't photocopy this part).  They shouldn't
be.  There is much more to be learned from the peer review of the article
than from all that Searle has written on the Chinese room.

Dean Hougen
--
"Say something once, why say it again?" - Talking Heads

cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) (02/03/90)

In article <HWAJIN.90Feb1191721@ganges.wrs.com> hwajin@ganges.wrs.com (Hwa Jin Bae) writes:
>I cannot help but marvel at his career, which is entirely
>based on this Chinese Room nonsense.  

My own impression is that the Chinese room argument is a correct and
obviously simple rebuttal to the claim that the TTT is both a necessary
and sufficient condition for something to be intelligent.  I doubt that
this is what Turing claimed, however, and it's hardly a stopping point
for discussion in cognitive science.  Rather, it's a small, natural, and
naive starting point in any discussion about machine intelligence. 

As to Searle's personal success with the idea, it's probably just that
his Gedanken experiment is nicely held in the mind in a "cute" manner,
and it's very clever.  Again, Shannon summarized the position in that
one paragraph, yet he wasn't famous for it. 
-- 
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjoslyn@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Box 1070, Binghamton NY 13901, USA
V All the world is biscuit shaped. . .

sloan@cs.washington.edu (Kenneth Sloan) (02/04/90)

In article <2903@bingvaxu.cc.binghamton.edu> cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>...
>As to Searle's personal success with the idea, it's probably just that
>his Gedanken experiment is nicely held in the mind in a "cute" manner,
>and it's very clever.  Again, Shannon summarized the position in that
>one paragraph, yet he wasn't famous for it. 

Somehow, Shannon and McCarthy managed to get quite famous enough,
anyway...and for somewhat more substantial contributions.  I don't think they
felt the need to make a career out of a one paragraph straw-man idea.

0) "Those who don't read the literature are doomed to rewrite it"

1) "Those who do read the literature, but have nothing to add, are grateful
for those in class 0)"

-Ken Sloan

blenko-tom@CS.YALE.EDU (Tom Blenko) (02/05/90)

|
|Somehow, Shannon and McCarthy managed to get quite famous enough,
|anyway...and for somewhat more substantial contributions.  I don't think they
|felt the need to make a career out of a one paragraph straw-man idea.
|
|0) "Those who don't read the literature are doomed to rewrite it"
|
|1) "Those who do read the literature, but have nothing to add, are grateful
|for those in class 0)"

... appeared as an echo of previous slurs on Searle's professional
contributions.

This is certainly ironic.

First, Searle has had quite a distinguished and productive career as a
philosopher independent of anything he has written about AI.

Second, it is evident to anyone who has done some homework where
Searle's concerns originated. Indeed, he himself adopted the
functionalist view of meaning early on, and moved to a different view
subsequently.  He has even gone to the trouble of writing a book to
discuss the latter point of view!


I think that conduct in this newsgroup of late has been disgraceful.  I
strongly doubt that several respondents have read the article(s) in
question, and it is quite clear that most have failed to understand
what they say (never mind arguments about whether the position
presented is correct or defensible). And now it has moved from the
level of sophistry contributed by the uninformed to the higher plane of
attacks on Searle's professional standing mounted by the profoundly
ignorant.  I think the readership is entitled to some relief.

While I doubt that this was its intent, I sometimes believe Searle's
Minds, Brains, and Programs has become an intelligence test for the AI
community. This is unfortunate.

	Tom

hwajin@ganges.wrs.com (Hwa Jin Bae) (02/07/90)

In article <14266@cs.yale.edu> blenko-tom@CS.YALE.EDU (Tom Blenko) writes:
   First, Searle has had quite a distinguished and productive career as a
   philosopher independent of anything he has written about AI.

No one is making fun of his career as a philosopher.  Everyone knows that he
studied philosophy and was a Rhodes scholar and has written several books
on various subjects, etc.  This doesn't mean that he knows what he's talking
about when it comes to AI or computers.  His simple-minded and ill-motivated
attack [he used to say that he was out to get those who waste valuable
grant money on "useless" studies] on the connectionist and parallel approaches
to AI is yet more proof that he's just what he has been all along --
a hack.

   Second, it is evident to anyone who has done some homework where
   Searle's concerns originated. Indeed, he himself adopted the
   functionalist view of meaning early on, and moved to a different view
   subsequently.  He has even gone to the trouble of writing a book to
   discuss the latter point of view!

He's not the only one who's done this -- there are numerous examples of
the same story -- people who started out as skeptics but turned into
true believers and talk ad nauseam about their revelations.  What's the point?
Does this change of position make his ignorant criticisms any more valid?

   I think that conduct in this newsgroup of late has been disgraceful.  I
   strongly doubt that several respondants have read the article(s) in
   question, and it is quite clear that most have failed to understand
   what they say (never mind arguments about whether the position
   presented is correct or defensible). And now it has moved from the
   level of sophistry contributed by the uninformed to the higher plane of
   attacks on Searle's professional standing mounted by the profoundly
   ignorant.  I think the readership is entitled to some relief.

You certainly have a very negative view of the readership in this newsgroup.
On the contrary, I strongly believe that everyone in this newsgroup has 
at one time or another (if not in the Jan issue of Scientific American) read
Searle's Chinese Room blather and is probably by now sick and tired of its
retarded rhetoric.  After all, he's been dwelling on it since 1981 without
ever adequately responding to his critics [I sincerely think that he doesn't
even understand some of the best arguments presented to him over the years
since he first brought up this Chinese Room nonsense; he simply chooses
to pick on the less elegant/powerful arguments and declines to comment on the
rest -- as he did in his Scientific American article, where he doesn't even
attempt to properly address the criticisms in the accompanying article in the
same issue of the magazine, the luminous room article.]  Hofstadter and Dennett
have shattered the Chinese Room on every point long ago.  Go look them up
on your bookshelves.

   While I doubt that this was its intent, I sometimes believe Searle's
   Minds, Brains, and Programs has become an intelligence test for the AI
   community. This is unfortunate.

This is truly laughable.  Most of us believe that the responses to Searle's
Chinese Room have become a litmus test for its critics.

hwajin
--
Hwa Jin Bae, Wind River Systems, Emeryville CA
hwajin@wrs.com  (uunet!wrs!hwajin)

xerox@cs.vu.nl (J. A. Durieux) (02/07/90)

In article <HWAJIN.90Feb6121201@ganges.wrs.com>,
	hwajin@ganges.wrs.com (Hwa Jin Bae) writes:

>Hofstadter and Dennett have shattered the Chinese Room on every point
>long ago.  Go look them up on your bookshelves.

Strange.  Whenever I reread their stuff I get more convinced that
they haven't understood what Searle is talking about at all.
Possibly because they are so "immersed" in their position that, to them,
operationality is all there is to being.
Searle doesn't seem to be a good defender of his own cause; I
think one has to have thought about his points of view beforehand in
order to have them "resonate", and to understand them.  (I don't
feel able to state them better, by the way.)
I think the class of people that doesn't think there is a
fundamental difference between "thinking" and "understanding" is
not going to feel that resonance.

My opinion is that behaviour simply has too small a bandwidth to
distinguish understanding systems from non-understanding
systems *in principle*.  Compare the "reduced Turing test": an
observer puts his hand into either of two holes, and gets hit by
a stone when he does so.  If the observer is unable to find out
behind which hole is the human with a stone, and behind which a
robot with a stone ...
[I think this comes from Hofstadter, by the way]

If I (coolly and rationally) decide to play being angry, and do
so convincingly, is "my whole system" in some sense angry?

						Biep.

radford@ai.toronto.edu (Radford Neal) (02/08/90)

In article <5319@star.cs.vu.nl> xerox@cs.vu.nl (J. A. Durieux) writes:

> I think one has to have thought about [ Searle's ] points of view before 
> in order to have them "resonate", and to understand them. (I don't
> feel able to state them better, by the way.) ...

> My opinion is that behaviour simply has too small a bandwidth to
> distinguish understanding systems from non-understanding
> systems *in principle*...

The problem with this argument is that it is too powerful. If you
accept it, you must also abandon any beliefs you may have that 
_other people_ have minds. 

I've tried to understand the Chinese Room argument, and failed. It 
seems to be based on a simple refusal to understand basic technical
and/or philosophical points. This may seem implausible, given that Searle 
is supposedly competent, but I have no better hypothesis. The topic
seems to induce nonsense all around, as with the "refutation" that 
conventional programs can't understand, but neural networks might.

Let us suppose that a machine is constructed that can at least mimic
all human intellectual and emotional behaviour. Whether this is possible
is an empirical question, but Searle appears willing to hypothesize that
it is. Will people consider the machine to be a "person", endowed with
attributes such as intelligence and morality? This too is an empirical
question. If they've had long conversations with it, heard it describe
its hopes and fears, had it help them with their personal problems, etc.
I think most people would consider it a person, but some might not, if
they knew how it was implemented. Finally, one might ask whether one
_should_ consider it a person. This is a moral question, similar to 
that of whether one should consider members of other races to be people.
There is nothing logically inconsistent in Searle answering this question
in the negative, but once it is seen in this light, the argument loses
all force for those who do not share his prejudices.

    Radford Neal

nolanj@ccvax.ucd.ie (James Nolan) (02/08/90)

In article <5319@star.cs.vu.nl>, xerox@cs.vu.nl (J. A. Durieux) writes:

> I think the class of people that doesn't think there is a
> fundamental difference between "thinking" and "understanding" is
> not going to feel that resonance.
> 
> 						Biep.
What exactly is the difference between thinking and understanding? I know
we might be getting into a pedantic discussion about the meaning of words,
which will be exacerbated by the fact that we don't share a common first
language (correct me if I'm wrong).  Basically, my problem is that I
don't see this fundamental difference you talk about.

cash@convex.com (Peter Cash) (02/08/90)

In article <90Feb7.120434est.6602@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:

>I've tried to understand the Chinese Room argument, and failed. It 
>seems to be based on a simple refusal to understand basic technical
>and/or philosophical points.
>...
>Let us suppose that a machine is constructed that can at least mimic
>all human intellectual and emotional behaviour. Whether this is possible
>is an empirical question, but Searle appears willing to hypothesize that
>it is. Will people consider the machine to be a "person", endowed with
>attributes such as intelligence and morality? This too is an empirical
>question. If they've had long conversations with it, heard it describe
>its hopes and fears, had it help them with their personal problems, etc.
>I think most people would consider it a person, but some might not, if
>they knew how it was implemented. Finally, one might ask whether one
>_should_ consider it a person. This is a moral question, similar to 
>that of whether one should consider members of other races to be people.
>There is nothing logically inconsistent in Searle answering this question
>in the negative, but once it is seen in this light, the argument loses
>all force for those who do not share his prejudices.

Your remarks are very sensible, and I am in much closer agreement with you
than I am with Searle.  But I have to rise to Searle's defense to the
extent of saying that his mistakes are not quite as simple as you make out.

Remember, Searle is a philosopher.  He has certain philosophical hypotheses
about what constitutes the essence of a thinking being.  He believes that
there is something altogether special about thinking beings, and that this
special thing cannot--even in principle--be shared by any machine or
program.  From what he says, I gather that this "specialness" centers
around the way humans use language.  He thinks that there is a "semantic
content" to the things we say, and that any system that uses mere rules to
manipulate language does not and cannot use language in this way.  

Therefore, Searle would say that the question "Should we consider [the
hypothetical machine that imitates a human] a person?" is not primarily a
moral one.  Instead, he would say that it revolves around a point of fact:
does the machine's conversation carry "semantic content"?

I believe that this talk about "semantic content" is nothing more than a
modern rephrasing of the old philosophical jargon about "consciousness".
Furthermore, I believe that the nature of "semantic content" will prove
just as elusive as was "consciousness": you just can't take it to the bank.
(Of course, to prove that I am correct would take a sizeable philosophical 
paper, and is not the sort of thing one can undertake in a net posting.) 

 


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             |      Die Welt ist alles, was Zerfall ist.     |
Peter Cash   |       (apologies to Ludwig Wittgenstein)      |    cash@convex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~