[mod.ai] AIList Digest V4 #166

PHayes@SRI-KL (Pat Hayes) (07/15/86)

re: Searle's Chinese room
There has been by now an ENORMOUS amount of discussion of this argument, far
more than it deserves.  For a start, check out the BBS (Behavioral and Brain
Sciences) treatment surrounding the original paper, with all the commentaries
and replies.
Searle's position is quite coherent and rational, and ultimately
whether or not he is right will have to be decided empirically, I
believe.  This is not to say that all his arguments are good, but
that's a different question. He thinks that whatever it is about the
brain ( or perhaps the whole organism ) that gives it the power of
intentional thought will be something biological, so no mechanical or
electronic device will really be able to *think about* the
world in the way we can.  An artificial brain might be able to; it's
not a matter of natural vs. artificial, notice.  And it's just possible
that some other kind of hardware might support intentional thinking,
although he believes not; but certainly it can't be done by a
representationalist machine, whose behavior is at best a simulation of
thought ( and which, he believes, will never in fact be a successful
simulation ).  Part of this position is that the behavior of a system
is no guide to whether or not it is *really* thinking.  If his closest
friend died, and an autopsy revealed, to Searle's great surprise, that
he had been a computational robot all his life, then Searle would say
that the man hadn't been aware of anything all along. The 'Turing test'
is quite unconvincing to Searle.
This intellectual position is quite consistent and impregnable to argument.
It turns ultimately on an almost legal point: if a robot behaves
'intelligently', is that enough reason to attribute 'intelligence'
to it? ( Substitute your favorite psychological predicate. ) Turing and his
successors say yes, Searle says no.  I think all we can do is agree to 
disagree for the time being.  When the robots get to be more convincing, let's
come back and ask him again ( or send one of them to do it ).
Pat Hayes
-------

JMYERS@SRI-STRIPE.ARPA (John Myers) (07/17/86)

I do not believe a concept of self is required for perception of objects.
Concepts needed for the perception of objects include temporal consistency,
directional location, and differentiation; semantic labeling (i.e., "meaning"
or "naming") is also useful.  None of these require the concept of a self
who is doing the perceiving.
  The robots I work with have no concept of self, and yet they are quite
successful at perceiving objects in the world, constructing an internal world
model of the objects, and manipulating these objects using the model.  (Note
that a "world model" consists of objects' locations and names--temporal
consistency is assumed, and differentiation is implicit.  Superior world
models include spatial extent and perceptual features.)  I would argue that
they are moving by "reflex"--without true understanding of the "meaning" of
their motions--but they certainly are able to truly perceive the world around
them.  I believe lower-level life-forms (such as amoebas, perhaps ants) work
in the same manner.  When such-and-such happens in the world, run program FOO,
which makes certain motions with the effectors, which (happen to) result in
"useful things" getting accomplished.
  I think this describes what most of consciousness is:  (1) being able to
perceive things in the environment, (2) knowing the meaning of these things,
and (3) being able to respond in an appropriate manner.  Notice that all of 
these concepts are vague; different degrees of (1), (2), and (3) represent
different degrees of consciousness.
  Self-consciousness is more than consciousness.
  The concept of self is not required for conscious beings, and it certainly
is not required for perception.
						John Myers
-------