[comp.ai] Reply to Harnad re:Chinese Room

kck@g.gp.cs.cmu.edu (Karl Kluge) (02/20/89)

> From: harnad@elbereth.rutgers.edu (Stevan Harnad)
> Searle's argument is simple but deep. Its simplicity has
> led a lot of people who have not understood the deeper point it is
> making into irrelevancies of their own creation. To show it to be
> incorrect you must first understand it.

What deeper point? It appears to be nothing but a form of vitalism -- brains
have these mysterious "causal powers" without which understanding is not
possible. It looks to me like the normal sort of confusion one might expect
from someone not used to layered systems in which each layer interprets/runs
the layer above it. 
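
To make the layered-systems point concrete, here is a minimal sketch (my
own illustration -- the rule table and symbol names are made-up
placeholders, not anything from Searle's paper) of a lower layer that
mechanically runs rules handed down from the layer above it:

    # A lower layer that blindly applies a rule table supplied by the
    # layer above it. Nothing in lower_layer() is "about" what the
    # symbols mean; the entries below are made-up placeholders.
    RULE_TABLE = {
        "input-symbol-1": "output-symbol-1",
        "input-symbol-2": "output-symbol-2",
    }

    def lower_layer(symbol: str) -> str:
        """Look up and emit the matching output symbol, nothing more."""
        return RULE_TABLE.get(symbol, "no-rule")

    print(lower_layer("input-symbol-1"))  # correct I/O behavior, no "meaning" inside

The layer doing the running is exactly as ignorant of the rules' content
as Searle is of Chinese; whatever "understanding" there is belongs to the
system as a whole, not to the interpreter underneath.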

Further, Searle engages in gratuitous non sequiturs when he says things like
"For example, my stomach has a level of description where it does
information processing, and it instantiates any number of computer programs
(true -- ed.), but I take it we do not want to say that it has any
understanding (also true -- ed.). Yet if we accept the systems reply, it is
hard to see how we avoid saying that stomach, heart, liver, etc. are all
understanding subsystems, since there is no principled way to distinguish
the motivation for saying the Chinese subsystem understands from saying that
the stomach understands (yes there is -- we have posited that the I/O
behavior of the Chinese system passes the Turing Test, we have never posited
that wrt the information processing description of the stomach -- ed.)." 
There's something "deep" here, all right, but it's not the philosophy.

> Understanding is what is "+" of Searle (and you) with respect to
> English, and "-" with respect to Searle (and you, and the computer
> running the program he's executing) with respect to Chinese. 

No one cares what Searle does or doesn't understand when he is simulating a
physical symbol system capable of passing a written Turing Test in Chinese.
Period. If what Searle calls Strong AI is true, I still wouldn't expect
Searle to understand Chinese in the Chinese Room.
If

    Searle-doesn't-understand-Chinese-in-the-Chinese-Room --> ~(Strong AI),

then it must be true that

    (Strong AI) --> ~(Searle-doesn't-understand-Chinese-in-the-Chinese-Room).

Unfortunately, that isn't true. Therefore, Searle not understanding Chinese
in the Chinese Room is not sufficient to disprove Strong AI. I may be an
ideologically blinded AI fanatic (death to the heretical unbelievers!), but
I'm still capable of applying the law of the contrapositive.
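
Just to nail that step down: the equivalence itself is trivially
checkable. Here is a brute-force truth-table check (my own illustration;
P stands for "Searle doesn't understand Chinese in the Chinese Room" and
Q for "~(Strong AI)"):

    # Verify that (P -> Q) is equivalent to (~Q -> ~P) -- the law of the
    # contrapositive -- over every truth assignment to P and Q.
    from itertools import product

    def implies(a, b):
        # Material implication: a -> b
        return (not a) or b

    assert all(
        implies(p, q) == implies(not q, not p)
        for p, q in product([True, False], repeat=2)
    )
    print("contrapositive equivalence holds for all four assignments")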

> [This is the negative note on which Searle's Argument ended in 1980;
> not to leave it at that, let me add that in "Minds, Machines and
> Searle" (1989) I've tried to take it further in a positive direction,
> showing that it's only the symbolic approach to modeling the mind
> that's vulnerable to Searle's Argument; nonsymbolic and hybrid
> symbolic/nonsymbolic models are not. 

Does Searle agree with you? It would certainly seem that he anticipates
this sort of argument in the paper reprinted in "Mind Design" when he
discusses "The Brain Simulator Reply". You have to have those mysterious
"causal powers" that neurons have. 

Karl Kluge (kck@g.cs.cmu.edu)

-- 

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/21/89)

kck@g.gp.cs.cmu.edu (Karl Kluge) of Carnegie-Mellon University, CS/RI
writes:

" What deeper point? [Searle's position] appears to be nothing but a
" form of vitalism -- brains have these mysterious "causal powers"
" without which understanding is not possible. It looks to me like the
" normal sort of confusion one might expect from someone not used to
" layered systems in which each layer interprets/runs the layer above it...

The deeper point is that Searle's view is NOT vitalism but a reductio
of a particular KIND of (putative) model of the mind: The kind
advocated by "Strong AI." Searle has said repeatedly that he's not
claiming that only brains could have the requisite causal powers, just
that brains clearly DO and symbolic models (as shown by his Chinese
Room Argument) clearly [sic] DON'T. It would require a grasp of this
deeper point to realize why hand-waving about "layered systems" is NOT
a satisfactory reply to this; it just misses the point and begs the
question yet again. To show that symbolic models have mental powers
(e.g., "interpreting") you can't just wave your hand and baptise them with it.

[My own approach, in "Minds, Machines and Searle," has been to argue
for replacing the mere Teletype version of the Turing Test with the
full Robotic version -- the Total Turing Test (TTT), calling for all of
our capacities, linguistic and nonlinguistic (e.g., sensorimotor). It
turns out that because at least some of the functions of the system
that successfully passes the TTT would have to be nonsymbolic, Searle
couldn't simulate them, and hence the system would be immune to the
Chinese Room Argument. This would not, of course, "prove" (or even give
empirical support of the usual kind for the hypothesis) that the system
actually had mental powers, but it would capture as many of the causal
powers of the mind or the brain as we could ever expect to capture
empirically. The other deep implication of Searle's Argument is that it
points out why a purely symbolic approach to mind-modeling is a
nonstarter. It's useful to know that... It suggests you should try
something else instead. I in turn describe an alternative hybrid
approach to grounding symbolic representations bottom-up in nonsymbolic
(analog and categorical) representations.]

" No one cares what Searle does or doesn't understand when he is simulating a
" physical symbol system capable of passing a written Turing Test in Chinese.
" Period. If what Searle calls Strong AI is true I still wouldn't expect
" Searle to understand Chinese in the Chinese room. 

First of all, SOME people apparently do care about this -- care enough
to engage in some rather strained arguments to the effect that Searle
(or "something/someone") IS understanding in the Chinese Room (see some
of the postings that have appeared on this topic lately, and my replies
to them). Not to care is either (1) not to care whether AI can capture
understanding (which is fine, but then why should mind-modelers be
discussing it with such modelers at all, any more than with auto
mechanics? and why do such modelers persist in using words like
"understanding" to describe their models?) or (2) it is not to care
about the inconsistency of claiming that the computer understands but
Searle doesn't, even though he is doing exactly the same thing the
computer is doing!

" Does Searle agree with you [about hybrid symbolic/nonsymbolic models]?
" [In his response to] "The Brain Simulator Reply" [he says].. [y]ou have
" to have those mysterious "causal powers" that neurons have.

I am holding out for a model that captures all of the brain's OBJECTIVE
powers, namely, its TTT powers, trusting that the subjective ones will
piggy-back on them, or if not, accepting that we can never hope to be
the wiser. Searle is holding out for something that captures ALL of the
brain's powers, objective and subjective. We both agree on the essential
point here, however, which is that symbolic models don't have the
requisite powers.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771