[comp.ai] Question on the Chinese Room argument

kck@g.gp.cs.cmu.edu (Karl Kluge) (02/27/89)

> From: gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
> Subject: Re: Question on Chinese Room Argument
> 
> In article <3305@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg 
> Lee) writes:
> >used, I have a limited sympathy with the argument.  When we
> >know the mechanism behind the behavior, we don't usually speak
> >of 'understanding'.
> Hence the linguistic fact in English (and French, German, Chinese, etc.? -
> comments please) that no mechanical process can possess understanding.
> It is a central feature of "understanding" that mechanical processes are not
> involved.

Huh? Brains understand. Brains are physical objects obeying physical laws.
Either we regress into dualism or we acknowledge that understanding arises
from the physical interactions of the parts of the brain. Even Searle
accepts that.

> The same is true of the new and creative meanings developed within the AI
> subculture.  If a computer system has understanding, then where does it lie?

If AI is correct, it lies in the interaction of the parts of certain kinds of
systems. Resistance to that notion in no way constitutes disproof of it. If
brains have understanding, then where does it lie?

> Mine's in that still small voice within - why do AI types have to disown
> theirs? Why insist on being 'scientific' when it's quite clear that you
> can't be on these issues?

1) I doubt many "AI types" disown that small voice; they simply refuse to
accept that it is something which must forever remain sacred, mysterious,
and beyond human comprehension. There is more awe to be felt in contemplating
the Universe from a position of comprehension of its functioning than from
a position of fear and ignorance.

2) The fact that something is clear to you is hardly compelling evidence.
AI (or more generally, information processing/computational models of
cognition) will stand or fall the way any other scientific hypothesis stands
or falls -- by how well such models explain the phenomena under study, not
by how well or poorly they match people's philosophical preconceptions of
how the world must be, or what "understanding" is, or what the moral and
ethical consequences of their being true would be. At the moment, information
processing models seem to be doing quite well.

> Gilbert Cockton, Department of Computing Science,  The University, Glasgow

Karl Kluge (kck@g.cs.cmu.edu)