[comp.ai] Reply to Harnad on Chinese Room

kck@g.gp.cs.cmu.edu (Karl Kluge) (02/22/89)

> From: harnad@elbereth.rutgers.edu (Stevan Harnad)
> 
> [My own approach, in "Minds, Machines and Searle," has been to argue
> for replacing the mere Teletype version of the Turing Test with the
> full Robotic version -- the Total Turing Test (TTT), calling for all of
> our capacities, linguistic and nonlinguistic (e.g., sensorimotor). It
> turns out that because at least some of the functions of the system
> that successfully passes the TTT would have to be nonsymbolic, Searle
> couldn't simulate them, and hence the system would be immune to the
> Chinese Room Argument. 
> The other deep implication of Searle's Argument is that it
> points out why a purely symbolic approach to mind-modeling is a
> nonstarter. It's useful to know that... It suggests you should try
> something else instead. I in turn describe an alternative hybrid
> approach to grounding symbolic representations bottom-up in nonsymbolic
> (analog and categorical) representations.]

I do computer vision, so I don't really find this controversial. A couple of
tangential points...

First, I doubt that even hard-core supporters of the Physical Symbol System
Hypothesis (PSSH) would deny that some form of perception, such as vision, is
necessary for a system to be intelligent. We know human brains can
potentially house minds. We also know that if you raise an infant in a dark
room with no companionship or human interaction for the first 7 or 8 years
of its life, it is unlikely that its brain will ever develop a fully human
mind. Similarly, there is no reason to believe that a PSS "raised in a
dark room" could ever possess a mind. This is, of course, a completely
different issue from whether a PSS that got symbolic inputs from a partly
analog vision module could have a mind.

Second, it's been a while since I've read Turing's papers, but I doubt he
would have considered the use of teletypes (as opposed to, say, fax
machines) central to the point of the Turing Test -- in other words, it's
unclear to me that he intended the test to be purely linguistic (though this
may be a failure of memory on my part).

Third, although I read your TTT posts, I don't recall why you think motor
skills are important. There are a great many people who can produce only a
small, discrete (often binary) set of controlled motor actions (Stephen
Hawking comes to mind), yet no one doubts that they have minds.

> " No one cares what Searle does or doesn't understand when he is simulating a
> " physical symbol system capable of passing a written Turing Test in Chinese.
> " Period. If what Searle calls Strong AI is true I still wouldn't expect
> " Searle to understand Chinese in the Chinese room. 
> 
> First of all, SOME people apparently do care about this -- care enough
> to engage in some rather strained arguments to the effect that Searle
> (or "something/someone") IS understanding in the Chinese Room (see some
> of the postings that have appeared on this topic lately, and my replies
> to them). 

You're right, of course. What I meant to say was not that no one cared
(clearly people do, or we wouldn't be having this discussion), but rather
that it is immaterial whether Searle understands Chinese in the Chinese
Room. You'll see why below.

> Not to care is either...(2) it is not to care
> about the inconsistency of claiming that the computer understands but
> Searle doesn't, even though he is doing exactly the same thing the
> computer is doing!

AHA! Here we have the problem. Strong AI does not imply that the computer
possesses understanding of Chinese in the Chinese Room any more than it
implies that Searle does. Strong AI does not imply that either Searle or the
computer *ought* to possess understanding of Chinese in the Chinese Room.

The understanding is *not* (by the hypothesis of Strong AI) a property
possessed by the box by virtue of its emulating some Understanding Turing
Machine (UTM). The understanding is a property possessed by the UTM by
virtue of its being instantiated and executing on some box, whether Searle
or a VAX. John Haugeland made what seems to be this point in defining GOFAI
in a talk he gave at Pitt. 

Since the truth of the Strong AI hypothesis does not imply that Searle ought
to understand Chinese in the Chinese Room, Searle's failure to understand
Chinese in the Chinese Room cannot be sufficient to show that Strong AI is
false.

Now, one can argue that I haven't properly characterized Strong AI here.
That may well be the case, since I don't work on "Strong AI". I leave it to
older, wiser, and more involved defenders of Strong AI to correct or confirm
my characterization of what Strong AI implies.

Karl Kluge (kck@g.cs.cmu.edu) 
These opinions are mine. I have no idea how well they match the opinions of
DARPA, Allen Newell, or the School of Computer Science...
