[comp.ai] Chinese Rooms and slippery concepts

vanroy@bellatrix.uucp (01/15/90)

Gentlemen,
  I've been following the Searle Chinese Room argument on comp.ai
with some interest.  But I'm puzzled by all the vagueness.  Please
correct me if I'm wrong, but to me the Chinese Room behaves just like
a real person, and that's good enough (a toy version of the Room is
sketched below).  Why go to such lengths to introduce fuzzy words
like "understands" and "mind" and so forth?  Using fuzzy words
guarantees that nobody will ever convince anyone who holds another
opinion.
  In addition, someone on the net gave the example of a child being born
and learning to live in the world to show that semantics can arise from syntax.
Unless the child is hardwired with semantics at birth, this seems a valid
argument.  Since all of our senses receive only syntactic input from the
outside world, and we seem to build up an internal model out of it, where
does that leave Searle's axiom that syntax by itself is not sufficient for
semantics?  (A toy sketch of model-building from raw symbols follows.)
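
As a toy illustration of that (my own toy, nothing to do with real
neurons): the program below receives nothing but a stream of
uninterpreted symbols, yet it builds an internal model of which
symbol tends to follow which.  Whatever "expectations" it ends up
with come entirely from the syntactic structure of its input:

/* Toy "semantics from syntax": the program sees only uninterpreted
 * bytes; its whole internal model is a table of which symbol tends
 * to follow which, built up from the raw input stream. */
#include <stdio.h>

#define NSYM 256
static long bigram[NSYM][NSYM];  /* bigram[a][b]: times b followed a */

int main(void)
{
    int prev = -1, c;

    /* Build the internal model from the raw symbol stream on stdin. */
    while ((c = getchar()) != EOF) {
        if (prev >= 0)
            bigram[prev][c]++;
        prev = c;
    }

    /* The model now "expects" certain symbols after others, without
     * ever having been told what any symbol means. */
    for (int a = 'a'; a <= 'z'; a++) {
        int best = 0;
        for (int b = 1; b < NSYM; b++)
            if (bigram[a][b] > bigram[a][best])
                best = b;
        if (bigram[a][best] > 0)
            printf("after '%c' expect '%c'\n", a, best);
    }
    return 0;
}

Run it over any large English text and it will report, for example,
that 'q' is followed by 'u' -- a little piece of structure extracted
from pure syntax.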
  Assuming that the child is born with hardwired semantics is not good enough
either.  Where does that semantics come from?  Does evolution gradually put
semantics into living beings?  At the Big Bang, did the
universe start out with semantics (some meaning, symbols) or not?  Where is
the inconsistency in assuming that the universe started without any "meaning"?
The argument starts to look like religion.
  A book which gives some indirect insight into part of this is "Eye,
Brain, and Vision" by David Hubel, from the Scientific American Library.
It gives a good discussion of the first few layers of vision processing
in the brain.
There's no mystery, just lots of hardware!  The brain has a nontrivial
problem to solve & it tackles it head-on, throwing enough hardware at
it to get the job done.  The design of vision processing in the brain
is quite clever & there are nice tricks to reduce the amount of
hardware, but nothing mysterious is involved (see the sketch below).
Why shouldn't the rest of the brain be
built in this way too?  Why introduce slippery concepts like "mind" and
"understands"?

Confusedly yours,
	Peter Van Roy
	vanroy@ernie.berkeley.edu

polito@husc4.HARVARD.EDU (Jessica Polito) (01/16/90)

In article <33668@ucbvax.BERKELEY.EDU> vanroy@bellatrix.uucp () writes:
>Gentlemen,

sorry to fill the ai group up with this, but just a point -- can the 
gratuitous sexism please be left out?  i know that most of the readers/
posters may be male, but we're not all.. thank you...
--maya
polito@husc4.harvard.edu

cam@aipna.ed.ac.uk (Chris Malcolm) (01/16/90)

In article <33668@ucbvax.BERKELEY.EDU> vanroy@bellatrix.uucp () writes:

>  I've been following the Searle Chinese Room argument on comp.ai
>with some interest.  But I'm puzzled by all the vagueness.
                              ^^^^^^^            ^^^^^^^^^
 ...

>Why introduce slippery concepts like "mind" and
>"understands"?

Ok. You rephrase your objection without introducing slippery concepts
like "puzzled" and "vagueness", and I'll see what I can do.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (01/16/90)

In article <33668@ucbvax.BERKELEY.EDU>, vanroy@bellatrix.uucp writes:
> Gentlemen, [..]
>   In addition, someone on the net gave the example of a child being born
>and learning to live in the world to show that semantics can arise from syntax.
[..]

That was me, and you summarise my argument excellently.
-- 
...........................................................................
Andrew Palfreyman	andrew@dtg.nsc.com	Albania before April!