[comp.ai] Chinese rooms and other rooms

park@usceast.cs.scarolina.edu (Kihong Park) (04/11/90)

Sorry to barge in on the argument like this. Not to beat around the bush,
I would like to offer this comment. A better understanding of the high-level
aspects of "intelligent" phenomena, or for that matter any solid
understanding of this elusive property which we so cherish, is definitely
called for.

But can an understanding be reached by thought experiments and contemplation
alone? Don't get me wrong. For what it's worth, I do consider myself a
theoretician and do revel in this aspect of investigation. But could it be
that our approach is wrong? At least, it may be the slipperiest side of 
the mountain we wish to climb. Why not try more pragmatic approaches? 

Instead of trying to understand what it means to understand, we could try
to direct the effort at understanding less "abstract" phenomena. For
instance, to my mind, the second most amazing aspect of "intelligence"
is its low-level nature. What we conventionally consider "intelligent"
phenomena (inductive/deductive reasoning, acute self-awareness, abstract
symbolic manipulation one step removed from its physical basis, etc.) are
traits that we would for the most part attribute to this select class of
beings called humans.

But what about the commonalities we share with our "lesser" beings? Your dog
surely wouldn't know how to play chess, but it knows how to walk/run without
bumping into things, it can distinguish different people and objects, it
has "feelings" in the sense that it knows what it likes, and it can also
be trained to do various tricks. What about a bee? It solves mind-boggling
navigational problems, it knows how to build "bee-homes", and bees abide
by a sort of hierarchical social structure, etc.

Surely, these things require some aspect of "intelligence". A fundamental
substrate upon which higher forms of "intelligent" phenomena are perhaps
based? We are prone to zoom in on the most interesting aspect. But is the
correct theory inferable from the information conveyed by the tip of
the iceberg alone? I for one have gotten the feeling that it's not. I
think this is the wrong track. Have three decades of symbol-manipulation
oriented AI research provided us with any fundamental insights into this
phenomenon called "intelligence"?

The most basic questions still stare us right in the face. I am not trying to
make an all-out pitch for neural network type approaches. I am just saying
that whatever computational model one uses, the "right" questions should be
asked and dealt with. A pretty presumptuous statement, but that's how I view
it. I don't think the problem of understanding what it means to understand is
solvable at this point in time, when we lack even a scant understanding
of the substrate processes. I may be wrong. I may be crazy. Sounds like
lyrics to a tune...

Kihong Park. (park@cs.scarolina.edu)