[comp.ai] Searle again

utility@quiche.cs.mcgill.ca (Ronald BODKIN) (01/11/90)

(More on the Chinese Room)
	One thing I'm curious about is why everyone ignores the book
with the instructions in it when it comes to understanding.  If I have
a computer that "understands" Chinese, with a CPU and a memory, and I
sever the two, there is no way that either alone would understand.  Likewise,
Searle provides the processor and the book provides the memory.  The
system understands, and if it seems weird to have understanding disembodied
in a system like this, then it's also weird to have the processing severed.
If Searle could MEMORIZE these rules, it probably still wouldn't be
enough, because he is used to understanding in a very different manner of
operation (i.e. it's hard to make a plant grow like a frog does, but
both of them grow) -- and moreover his understanding of Chinese is
entirely isolated from the context of his usual knowledge,
so it doesn't seem like understanding at all to a more advanced creature
(then again, compared to a university math student, one can argue
convincingly that a grade 2 student learning multiplication doesn't
"understand" multiplication, and more importantly, IF the math
student at university had that little understanding, he wouldn't really
consider himself to understand at all).  So in some sense, the claim
that Searle doesn't understand is partly a manifestation of our OWN
inability to imagine a person operating like a computer and understanding.
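The processor/rulebook split above can be sketched as a toy program (purely a hypothetical illustration -- the lookup table stands in for the book of rules, and the tiny executor for Searle blindly following it; the entries are placeholders, not real Chinese processing):

```python
# Toy "Chinese Room": the rulebook is pure data and the executor is pure
# mechanism; neither part alone produces the conversational behavior.
# These rules are hypothetical placeholders, not real Chinese processing.

RULEBOOK = {
    "你好": "你好!",          # greeting -> greeting
    "你会说中文吗?": "会.",    # "Do you speak Chinese?" -> "Yes."
}

def execute(symbols, rulebook):
    """Match the input symbols against the rulebook with no grasp of meaning."""
    return rulebook.get(symbols, "请再说一遍.")  # fallback: "Please say it again."

print(execute("你好", RULEBOOK))  # the SYSTEM responds; the executor only
                                  # manipulates symbols it never interprets
```

Sever the two -- run `execute` without `RULEBOOK`, or read `RULEBOOK` without the executor -- and no response is produced, which is the point of the analogy.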
	As for the point about computers which fake understanding so
well that they pass a Turing test, my question is: why does it even
matter if they don't "understand", when they can respond appropriately
in every case?  Or how about a computer that is so intelligent it ALWAYS
wins (i.e. fools the examiner) -- even if the examiner tries
"reverse psychology" on it?  Such a machine might start talking about us only
simulating computers, but I'm straying from my point.  Essentially, my
point is this: if a book had the instructions for a process which
enabled people to act intelligently enough to pass a Turing
test, and could also give a prescription for how to save a man's
life which wasn't otherwise available, I'd be willing to bet that people
would be happy to execute that process -- and that the process (or at
least many processes complex enough to be interesting) cannot be
executed correctly without understanding.
			Ron

muttiah@cs.purdue.EDU (Ranjan Samuel Muttiah) (01/11/90)

In article <1953@quiche.cs.mcgill.ca> utility@quiche.cs.mcgill.ca (Ronald BODKIN) writes:
>(More on the Chinese Room)
>	One thing I'm curious about is why everyone ignores the book
>with the instructions in it when it comes to understanding.  If I have
>a computer that "understands" Chinese, with a CPU and a memory, and I
>sever the two, there is no way that either alone would understand.  Likewise,
>Searle provides the processor and the book provides the memory.  The
>system understands, and if it seems weird to have understanding disembodied
>in a system like this, then it's also weird to have the processing severed.

For a more provocative reading on this, check out:

		Mind and Brain: The Many-Faceted Problems
		Ed. Sir John Eccles.

		- Pay close attention to the chapter by
		J. Pringle (Oxford zoologist) and the commentary
		by B. Josephson (Cambridge physicist -- yes, the
		Josephson junction man).

kmcentee@Apple.COM (Kevin McEntee) (01/12/90)

In article <1953@quiche.cs.mcgill.ca> utility@quiche.cs.mcgill.ca (Ronald BODKIN) writes:
>(More on the Chinese Room)
>So in some sense, the claim
>that Searle doesn't understand is partly a manifestation of our OWN
>inability to imagine a person operating like a computer and understanding.
>			Ron

I have to agree here.  I only hope that AI research does not limit itself
by recognizing only those intelligent artifacts that are analogous to presently
accepted biological brains.  This criterion of intelligence -- analogical
behavior -- might close our eyes to a radical discovery of machine intelligence.

Kevin
kmcentee@apple.com