[comp.ai] Chinese Room & Whirlpools

darren@cs.city.ac.uk (Darren Whobrey) (01/30/90)

  Despite being somewhat off the mark, Searle's Chinese Room problem does raise
some interesting points: firstly, that the room does not understand Chinese and,
secondly, that non-self-referential algorithms cannot imbue their symbols with
internal meaning. From these he concludes that computers can never understand or
think, and that strong AI is, well, lacking.
  Before we make such bold claims, don't we have to define what we mean by
'understanding' and 'consciousness'? What is it in our minds that understands,
or is conscious? If you introspect on this for a few moments you'll no doubt
start thinking in circles, or decide that you don't really understand anything
at all. What if we have a self-referential algorithm that imparts meaning to
its symbols in an internally consistent manner? 'Consciousness' is deliberately
programmed, or modelled, via a self-referential algorithm, and is not some
quirk arising from a very complex system, as some would suggest.
In Searle's terms, the mind-model that the algorithm simulates 'understands'
or is 'conscious'. Not the room, nor the person in the room, nor the rules they
are following, but the model itself understands, has internal meaning, etc.
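  To make 'internal meaning' a little more concrete, here is a toy C sketch of
roughly what I have in mind: a closed set of symbols whose only 'definitions'
are their relations to one another, with nothing pointing outside the system.
The names and structure are made up for illustration only.

    /* Toy sketch: symbols whose semantics are purely internal -- each
     * symbol is "defined" only by which other symbols it relates to. */
    #include <stdio.h>

    #define NSYM 3

    struct symbol {
        const char *name;
        int related[NSYM];      /* 1 if related to symbol j, else 0 */
    };

    static struct symbol sym[NSYM] = {
        { "A", {0, 1, 1} },
        { "B", {1, 0, 1} },
        { "C", {1, 1, 0} },
    };

    int main(void)
    {
        int i, j;
        for (i = 0; i < NSYM; i++) {
            printf("%s means: ", sym[i].name);
            for (j = 0; j < NSYM; j++)
                if (sym[i].related[j])
                    printf("related-to(%s) ", sym[j].name);
            printf("\n");
        }
        return 0;
    }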

The Whirlpool in the Room
-------------------------
  Suppose we have a whirlpool in the room and our little helper (Searle, say)
is standing in its eye. Now this whirlpool is rather special, for it's made
from ping-pong balls, which for our purposes are symbolic of molecules
(water, nitrogen, whatever). In order to maintain the whirlpool our helper
must apply some rules to each ball: specifically, move each with a certain
velocity. What prevents him from getting wet (however you want to define
getting wet by ping-pong balls)? Suppose further that these ping-pong balls
are pretty sly; they've been listening to Searle's argument that syntactically
manipulated symbols cannot have meaning, so all at once they think to
themselves, 'Hey, we're not a whirlpool and thus shouldn't be whizzing around
this person'. Immediately they collapse in upon our helper, drenching him to
the skin.
  The above scenario could just as easily have been carried out on a computer
of any sort, as Searle reminds us. The point, though, is firstly that the
model the algorithm is simulating is what's important, i.e. the whirlpool, and
not the ping-pong balls, the helper, the rules he's following, or the room.
Secondly, the room and its contents have their own internal semantics.
Compare the whirlpool to consciousness: it's derived from rules applied to
symbols whose meaning stems from their interaction and relation to
each other.
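  For the curious, the helper's job boils down to something like the toy C
fragment below. It's only a sketch with made-up numbers, but note that the rule
is applied blindly to each ball, while the whirlpool itself exists only in the
pattern the whole collection traces out, not in any single ball or in the rule.

    /* Toy sketch: apply the helper's rule to each ping-pong ball. */
    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define NBALLS 8
    #define STEPS  5

    int main(void)
    {
        double theta[NBALLS], radius = 1.0, omega = 0.2;
        int i, t;

        for (i = 0; i < NBALLS; i++)        /* spread the balls evenly around the eye */
            theta[i] = 2.0 * M_PI * i / NBALLS;

        for (t = 0; t < STEPS; t++) {
            for (i = 0; i < NBALLS; i++)
                theta[i] += omega;          /* the rule: a fixed angular velocity about the eye */
            printf("step %d: ball 0 at (%.2f, %.2f)\n",
                   t, radius * cos(theta[0]), radius * sin(theta[0]));
        }
        return 0;
    }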
  I know the above scenario isn't air-tight, but I think it's amusing anyway.
Basically, we have to consider self-referential systems with either direct
feedback or some decaying recursive feedback process if we are ever to
create artificially conscious machines. This is what gives the system meaning.
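  A minimal sketch of the sort of decaying feedback I mean (again, purely
illustrative, with arbitrary constants): the system's current state is partly
a function of its own earlier states, attenuated by a decay factor.

    /* Toy sketch: a state that feeds back into its own update with decay. */
    #include <stdio.h>

    int main(void)
    {
        double state = 0.0, decay = 0.8, input = 1.0;
        int t;

        for (t = 0; t < 10; t++) {
            state = input + decay * state;  /* new state refers back to the old state */
            printf("t=%d state=%.4f\n", t, state);
        }
        return 0;
    }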

Darren Whobrey,                      e-mail: Janet: darren@uk.ac.city.cs
City University,                     God was satisfied with his own work,
London.                              and that is fatal.     Butler, 1912.