[comp.ai] Chinese rooms?

root@thespot.UUCP (Postmaster) (06/30/90)

Pardon my ignorance, I just started reading this newsgroup... But, what 
are these 'Chinese Rooms' people keep talking about?

Thanks,


djg


Send any e-mail to...              |   "Even a broken watch is right
         root@thespot              |    twice a day."
!iuvax!ndmath!nstar!thespot!root   |                  -- Bazooka Joe

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (07/02/90)

In article <DPNJL2w161w@thespot.UUCP> root@thespot.UUCP (Postmaster) writes:
>Pardon my ignorance, I just started reading this newsgroup... But, what 
>are these 'Chinese Rooms' people keep talking about?

Good question - most of us talking about them don't really know what they
are either!

The 'Chinese Room' is an 'experiment' proposed by Searle in his paper
'Minds, Brains, and Programs' (I'm pretty sure that's the name of the paper,
but I may stand corrected - everyone nowadays refers to it simply as the
'Chinese Room Paper').  In it, Searle says that you can put a man in a 
room with input and output slots, and lots of tables, translation books,
etc. of Chinese characters (the man in the room is a non-Chinese speaker).

On receiving Chinese characters as input, the man would look up conversion
tables that would tell him what Chinese characters to translate the input
into, then put that translation into the output slot.

The contention is that even though conversation performed in Chinese
characters looks like a real conversation, there's no 'understanding' on
the part of the room.  You could put in a phrase (in Chinese) that says
'What do you think of Searle's paper?', and the response could be 'I think
it's a crock'.  The man in the room only did symbol manipulations (like
a computer), so no understanding of the conversation is taking place.
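The "symbol manipulation" the man performs can be sketched as a toy lookup table in Python (the table entries and matching scheme here are invented for illustration; a real room would need vastly more rules):

```python
# Toy sketch of the room's rule-following: a table mapping input
# symbol strings to output symbol strings. The "operator" matches
# strings purely by form; nothing in here models understanding.
# (All entries are made up for illustration.)

RULE_BOOK = {
    "what do you think of searle's paper?": "i think it's a crock",
    "hello": "hello to you too",
}

def chinese_room(input_symbols: str) -> str:
    """Return the rule book's output for the input, matching by
    shape alone (here, exact string match after normalization)."""
    key = input_symbols.strip().lower()
    return RULE_BOOK.get(key, "i don't follow you")

print(chinese_room("What do you think of Searle's paper?"))
# prints: i think it's a crock
```

The point of the sketch is that the function produces a sensible-looking reply without any representation of what the symbols mean.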

Searle's point is that the brain is different from things like computers.
It has some fundamental physical property in it not found in silicon chips
or vacuum tubes that allows for 'causative powers' to arise, leading to
'understanding'.

There's been a lot of controversy about this paper over the years, and
here's why I think that is (no doubt, many would disagree with some of
my points).  While on the surface it sounds good, many of the arguments and
assumptions Searle makes have such big holes you could drive a truck through
them.  The idea that building such a room is possible is one of them.  In
order to respond to ANY phrase about ANYTHING, it would seem that the room
would need to be able to learn (e.g., a discussion about a new concept
in physics).  So either you couldn't make the room do this in the first
place (making the premise for the argument invalid), or you could, but only
if the room did have causative powers.

Searle is also very loose with his language.  No definition is given
of such important terms as 'understanding' or 'causative power'.  It
could be argued that there's no firm definition of these phenomena in the
AI community, but I think if you're going to argue that something can or
can't have a property, you should at least give a local definition of it
for the purposes of discussion.

There's a lot more that could be said about the paper, but I've said enough.
I'd recommend reading it, as it seems to keep coming up in discussions on
the net.  It seems to me that most people disagree with Searle's arguments,
if not his conclusions.  The crux of the contention, I think, is that there
is still so little known about such things as 'causative powers' and what
makes us think that, at the moment, any position can be taken on the subject
and a reasonable-sounding argument made for it.

- Jim Ruehlin

hougen@cs.umn.edu (Dean Hougen) (07/03/90)

In article <3446@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>In article <DPNJL2w161w@thespot.UUCP> root@thespot.UUCP (Postmaster) writes:
>>Pardon my ignorance, I just started reading this newsgroup... But, what 
>>are these 'Chinese Rooms' people keep talking about?
>
>Good question - most of us talking about them don't really know what they
>are either!
>
>The 'Chinese Room' is an 'experiment' proposed by Searle in his paper
>'Minds, Brains, and Programs' (I'm pretty sure that's the name of the paper,
>but I may stand corrected - everyone nowadays refers to it simply as the
>'Chinese Room Paper').  In it, Searle says that you can put a man in a 
>room with input and output slots, and lots of tables, translation books,
                                                       ^^^^^^^^^^^
>etc. of Chinese characters (the man in the room is a non-Chinese speaker).
>
>On receiving Chinese characters as input, the man would look up conversion
>tables that would tell him what Chinese characters to translate the input
                                                       ^^^^^^^^^
>into, then put that translation into the output slot.
>
>The contention is that even though conversation performed in Chinese
>characters looks like a real conversation, there's no 'understanding' on
>the part of the room.  You could put in a phrase (in Chinese) that says
>'What do you think of Searle's paper?', and the response could be 'I think
>it's a crock'.  The man in the room only did symbol manipulations (like
>a computer), so no understanding of the conversation is taking place.
>
>- Jim Ruehlin

To the original poster:  Don't let this description confuse you.  The
Chinese Room Argument is *NOT* about translations between languages.
The conversation carried out by the room is carried out entirely in 
Chinese.

I see a lot of time and effort wasted by people arguing against
misconceptions about what the Chinese Room Argument is (such as the
argument that no exact translations can be made, so the room cannot
exist).  If you are really interested, look up the original article
(preferably - and be sure to read the peer commentary) or at least his
recent (within the last twelve months) Scientific American article.


Dean Hougen
--
"They say you're stupid, that you haven't got a brain."  - Oingo Boingo