[mod.ai] Searle's Chinese Room

Laws@SRI-STRIPE.ARPA.UUCP (07/14/86)

There is a lengthy rebuttal to Searle's Chinese Room argument
as the cover story in the latest Abacus.  Dr. Rappaport claims
that human understanding (of Chinese or anything else) is different
from machine understanding but that both are implementations of
an abstract concept, "Understanding".  I find this weak on three
counts:

  1) Any two related concepts share a central core; defining this as the
  abstract concept of which each is an implementation is suspect.  Try
  to define "chair" or "game" by intersecting the definitions of class
  members and you will end up with inconsistent or empty abstractions.

  2) Saying that machines are capable of "machine understanding", and
  hence of "Understanding", takes the heart out of the argument.  Anyone
  would agree that a computer can "understand" Chinese (or arithmetic)
  in a mechanical sense, but that does not advance us toward agreement
  on whether computers can be intelligent.  The issue now becomes "Can
  machines be given 'human' understanding?"  The question is difficult
  even to state in this framework.

  3) Searle's challenge needn't have been ducked in this manner.  I
  believe the resolution of the Chinese Room paradox is that, although
  Searle does not understand Chinese, Searle plus his hypothetical
  algorithm for answering Chinese queries would constitute a >>system<<
  that does understand Chinese.  The Room understands, even though
  neither Searle nor his written instruction set understands.  By
  analogy, I would say that Searle understands English even though his
  brain circuitry (or homunculus or other wetware) does not.

I have not read the literature surrounding Searle's argument, but I
do not believe this Abacus article has the final word.

					-- Ken Laws