[comp.ai.philosophy] Intelligent Chinese Rooms

fnwlr1@acad3.fai.alaska.edu (RUTHERFORD WALTER L) (11/22/90)

  I have been following the Chinese Room arguments for a while now and
it keeps reminding me of another question.  If a man sees a squirrel
on a tree and walks around the tree trying to get behind the squirrel,
but the squirrel (not wanting to be caught) also circles around the
tree keeping the man always in sight (by always being on the same side
of the tree as the man) - Did he actually walk around the squirrel?
Another way to look at it is if you try to walk around me, but I keep
turning in a circle so that I always face you - Have you actually
walked around me? The answer to each question appears to be "That
all depends on your definition (of 'around' and 'intelligence')."
  The problem with the Chinese Room argument is that NOBODY has defined
intelligence. The Chinese Room is intelligent in the same way that a
student who takes a quiz with all of the answers written on his sleeve
is intelligent. He may score 100% yet understand nothing. Is that
intelligence?
The problem is magnified, because the Chinese Room test is more specific
than a Turing Test. My test for detecting intelligence would force the
'Testee' to prove that it had learned something - a feat that I can't
imagine from a Chinese Room.  For example, if I told the CR "Today I
would like to teach you English" (or any other code that it doesn't
already "know"), it is soon GOING to fail my test! Because it doesn't
truly UNDERSTAND Chinese, I can't truly teach it anything beyond its
hardwired knowledge.
  IMHO the true test of intelligence is how the 'Testee' deals with
novel situations.  The CR fails my test because it just has an enormous
crib sheet that it is using to pass a specific predefined test - push
it outside of its artificial boundaries and it will show you how little
it understands (just like the cheating student).
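
  To make the crib-sheet picture concrete, here is a toy sketch in
Python (my own construction for illustration - the table and the test
inputs are invented, and this is not anyone's actual program):

    # A toy "crib sheet" room: it answers by table lookup alone,
    # so there is nothing in it that could ever learn a new code.
    CRIB_SHEET = {
        "How are you?": "Fine, thank you.",
        "What color is the sky?": "Blue.",
    }

    def room_reply(question):
        # Novel input falls straight through - no understanding to draw on.
        return CRIB_SHEET.get(question, "???")

    print(room_reply("What color is the sky?"))          # "Blue."
    print(room_reply("Today I will teach you English"))  # "???"

Push it one question past its table and the emptiness shows.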

  Because I'm trying to keep this short there is a lot I haven't
covered, so please keep the flames low - I'm sure all of the hand
waving in my arguments will fan the flames high enough!  :-)


---------------------------------------------------------------------
      Walter Rutherford
       P.O. Box 83273          \ /    Computers are NOT intelligent;
   Fairbanks, Alaska 99708    - X -
                               / \      they just think they are!
 fnwlr1@acad3.fai.alaska.edu
---------------------------------------------------------------------

thornley@cs.umn.edu (David H. Thornley) (11/25/90)

In article <1990Nov22.012416.4493@hayes.ims.alaska.edu> fnwlr1@acad3.fai.alaska.edu writes:
>
>[Discussion of the lack of definition of "intelligence"]
>  The problem with the Chinese Room argument is that NOBODY has defined
>intelligence. The Chinese Room is intelligent in the same way that a
>student who takes a quiz with all of the answers written on his sleeve
>is intelligent. He may score 100% yet understand nothing. Is that
>intelligence?
>The problem is magnified, because the Chinese Room test is more specific
>than a Turing Test. My test for detecting intelligence would force the
>'Testee' to prove that it had learned something - a feat that I can't
>imagine from a Chinese Room.  For example, if I told the CR "Today I
>would like to teach you English" (or any other code that it doesn't
>already "know"), it is soon GOING to fail my test! Because it doesn't
>truly UNDERSTAND Chinese, I can't truly teach it anything beyond its
>hardwired knowledge.


Sorry, this isn't a Chinese Room.  Searle defined the Chinese Room
as implementing a program sufficient to pass the Turing Test.

Consider a situation where an interrogator has a connection to a human
and a computer running Program X.  If the interrogator cannot tell the
difference, then Program X has passed the Turing Test.  (I am
simplifying somewhat, but this is the essence of the test.)
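
(A toy sketch of that setup, purely for illustration - the respondent
functions below are invented stand-ins, not any real Program X:)

    # Toy Turing Test harness: the interrogator exchanges text with
    # two hidden channels and must say which one is the machine.
    import random

    def human(question):
        return input("[relay to the human] " + question + "\n> ")

    def program_x(question):
        return "That's an interesting question."   # stand-in for Program X

    def turing_test(questions):
        channels = [("human", human), ("machine", program_x)]
        random.shuffle(channels)    # the interrogator cannot see the labels
        for q in questions:
            for i, (_, respond) in enumerate(channels):
                print("Channel %d: %s" % (i, respond(q)))
        guess = int(input("Which channel is the machine (0 or 1)? "))
        if channels[guess][0] == "machine":
            print("Caught it.")
        else:
            print("Wrong guess - Program X passes this round.")

    turing_test(["What did you have for breakfast?"])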

Searle is not attempting to prove that no program can ever pass the
Turing Test.  What he is attempting to do is prove that the test is
insufficient to establish intelligence.

To do this, Searle claims that the hypothetical Program X can be
implemented as a human blindly following a set of specific rules.
In fact, this is true, provided that the human is extremely accurate
and meticulous, and provided we are willing to wait a *long* time
for an answer.  (I assume that Program X is long and complicated,
since it must imitate so many aspects of human nature.)  (It may be
worth noting that I do *not* believe that any human could memorize
Program X and "run" it in his/her head, knowing what I do about
the nature of computer programs and human memory capacity.  As a
skilled programmer, I absolutely need paper to hand-run short computer
programs, and I don't believe anyone could be sufficiently many
orders of magnitude better than I am at this task to run Program X
from memory.  This means that, as far as I can tell, Searle has based
his refutation of the systems argument on a false statement.)

Anyway, it is clear that the Chinese Room, if it exists, is fully
capable of learning, and of displaying every other sign of understanding
that we can derive from conversation.  If not, it isn't a Chinese Room.

It would be perfectly reasonable to argue that no program could ever
pass the Turing Test, although I don't remember seeing any such
arguments recently.  It is also reasonable to argue that a program
could pass the Turing Test and not be intelligent, although I have
not found any such arguments convincing.  However, I have seen
lots of straw Turing Tests out there (I think Searle is using one)
and I don't want the discussion further confused with straw Chinese
Rooms.

DHT