[mod.ai] Searle and understanding

cugini@NBS-VMS.ARPA ("CUGINI, JOHN") (07/15/86)

This is in response to recent discussion about whether AI systems
can/will understand things as humans do.  Searle's Chinese room
example suggests the extent to which the implementation of a formal
system may or may not understand something.  Here's another,
perhaps simpler, example that's been discussed on the philosophy
list.

Imagine we are visited by ETS - an extra-terrestrial scientist.
He knows all the science we do plus a lot more - quarks, 
quantum mechanics, neurobiology, you-name-it.  Being smart,
he quickly learns our language and studies our (pitifully
primitive) biology, so he knows about how we perceive as well.
But, like all of his species, he's totally color-blind.

Now, making the common assumption that color-knowledge cannot
be conveyed verbally or symbolically, does ETS "understand"
the concept of yellow?

I think the example shows that there are two related meanings
of "understanding".  Certainly, in a formal, scientific sense, 
ETS knows (understands-1) as much about yellow as anyone - all
the associated wavelengths, retinal reactions, brain-states,
etc.  He can use this concept in formal systems, manipulate it,
etc. But *something* is missing - ETS doesn't know
(understand-2) "what it's like to see yellow", to borrow/bend
Nagel's phrase.

It's this "what it's like to be a subject experiencing X" that
eludes capture (I suppose) by AI systems.  And I think the
point of the Chinese room example is the same - the system as
a whole *does* understand-1 Chinese, but doesn't understand-2
Chinese.
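
To make the distinction concrete, here is a toy sketch (in Python,
purely illustrative; the romanized phrases and the rule book are
invented for this example and don't pretend to do justice to Searle's
scenario) of a system that produces plausible-looking replies by
formal rule-matching alone:

    # Rule-governed symbol manipulation, Chinese-room style, in miniature.
    # The rule book pairs input strings with output strings; whoever (or
    # whatever) applies it need not interpret either side.
    RULE_BOOK = {
        "ni hao ma?": "wo hen hao, xie xie.",
        "ni jiao shenme mingzi?": "wo jiao Xiaoming.",
    }

    def chinese_room(symbols):
        """Return whatever reply the rule book dictates for the input."""
        # Pure pattern matching; nothing here attaches meaning to the strings.
        return RULE_BOOK.get(symbols, "dui bu qi, wo bu dong.")

    print(chinese_room("ni hao ma?"))   # looks like conversation from outside

Judged from outside, the replies can pass for conversation; there is
no claim that anything in the loop experiences what it is saying.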

To put the point a bit more poignantly: what systems understand-2
pain?  Would you really feel as guilty kicking a very sophisticated
robot as kicking a cat?  I think it's the ambiguity between these
two senses of understanding that underlies a lot of the debate.
They correspond somewhat to Dennett's "program-receptive" and
"program-resistant" properties of consciousness.

As far as I can see, the lack of understanding-2 in artificial
systems poses no particular barrier to their performance.
E.g., no doubt we could build a machine which in fact would
correctly label colors - but that is not a reason to suppose
that it's *conscious* of colors, as we and some animals are.
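
To put a trivial face on that claim, here is a sketch of such a
color-labeling machine (Python, entirely hypothetical; the wavelength
bands are only rough textbook values):

    # Label a visible wavelength (in nanometers) with a color name.
    # Correct labeling is just a table lookup, not an experience of color.
    COLOR_BANDS = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]

    def label_color(wavelength_nm):
        """Return a color name for a visible wavelength, else 'not visible'."""
        for low, high, name in COLOR_BANDS:
            if low <= wavelength_nm < high:
                return name
        return "not visible"

    print(label_color(580))   # prints "yellow", with no one home to see it

Such a device emits the right labels, but nothing in it is conscious
of yellow any more than a thermostat is conscious of warmth.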

Nonetheless, *even if there are no performance implications*,
there is a real something-or-other we have going on inside us
that does not go on inside Chinese rooms, robots, etc., and no
one knows how even to begin to address the replication of this
understanding-2 (if indeed anyone wants to bother).

John Cugini <Cugini@NBS-VMS> 
------

eyal@wisdom.BITNET (Eyal mozes) (07/24/86)

> I think the example shows that there are two related meanings
> of "understanding".  Certainly, in a formal, scientific sense,
> ETS knows (understands-1) as much about yellow as anyone - all
> the associated wavelengths, retinal reactions, brain-states,
> etc.  He can use this concept in formal systems, manipulate it,
> etc. But *something* is missing - ETS doesn't know
> (understand-2) "what it's like to see yellow", to borrow/bend
> Nagel's phrase.
>
> It's this "what it's like to be a subject experiencing X" that
> eludes capture (I suppose) by AI systems.  And I think the
> point of the Chinese room example is the same - the system as
> a whole *does* understand-1 Chinese, but doesn't understand-2
> Chinese.

No, I think you're missing Searle's point.

What you call "understanding-2" is applicable only to a very small
class of concepts - to concepts of sensory qualities, which can't be
conveyed verbally. For the concept of a color, you don't even have to
postulate ETS; any color-blind person with a fair knowledge of physical
optics (and I happen to be such a person) has "understanding-1", but
not "understanding-2", of the concept; I know the conditions which
cause other people to see that color, I can reason about it, but I
don't know what it feels like to see it. But for concepts which don't
directly involve sensory qualities (for example, for understanding a
language) there can be only "understanding-1".

Now, Searle's point is that this "understanding-1" (such as a native
Chinese speaker's understanding of the Chinese language, or my
understanding of
colors) involves intentionality; it does not consist of manipulating
uninterpreted symbols by formal rules. That is why he denies that a
computer program can have it.

Those who think Searle sees something "magical" in human understanding
also miss his point. Quite to the contrary, he regards understanding as
a completely natural phenomenon, which, like all natural phenomena,
depends on specific material causes. To quote from his paper "Minds,
Brains and Programs": "Whatever else intentionality is, it is a
biological phenomenon, and it is as likely to be as causally dependent
on the specific biochemistry of its origins as lactation,
photosynthesis, or any other biological phenomena. No one would suppose
that we could produce milk and sugar by running a computer simulation
of the formal sequences in lactation and photosynthesis, but where the
mind is concerned many people are willing to believe in such a miracle
because of a deep and abiding dualism: the mind they suppose is a
matter of formal processes and is independent of quite specific
material causes in the way that milk and sugar are not."

        Eyal Mozes

        BITNET:                         eyal@wisdom
        CSNET and ARPA:                 eyal%wisdom.bitnet@wiscvm.ARPA
        UUCP:                           ..!ucbvax!eyal%wisdom.bitnet

Newman.pasa@XEROX.COM.UUCP (07/25/86)

Eyal Mozes quotes from Searle to explain how Searle thinks about human
understanding and its biological nature. I had seen that passage of
Searle's before, and I think that this is a major part of my problem
with Searle. He accepts the biological nature of thought and mind, yet
cannot accept the proposition that a computer can reproduce the
necessary features of either. I can see no reason to believe that
Searle's position is correct and, more importantly, many reasons to
think it is incorrect.

Searle uses milk and sugar to illustrate his point. I think that this is
a terrible comparison because milk and sugar are physical products of
biological processes while thought and mind are not. I also think that
Searle's attack on grounds of dualism is rather unfair. Even Searle must
agree that there are physical things and non-physical things in the
world (e.g., Volkswagens and numbers), and that milk and sugar are members
of the first class while thought and mind are members of the second.
Moreover, Searle's position apparently demands that there be features
of thought and mind that depend on features of the very low-level
biological processes that produce them. What evidence is there that
such features exist?  I don't see how features of the neurotransmitters
(for example) could have an effect at any level other than their own,
particularly since any one biochemical event is unlikely to have a
large effect (my understanding is that large numbers of biochemical
events must occur in concert for anything to be apparent at higher
levels).

Admittedly there is as little evidence for my position as there is for
Searle's, but I think that there is more evidence against Searle than
there is against me. One last point, paraphrasing John Haugeland's
comment in "Artificial Intelligence: The Very Idea": that brains are
merely symbol processors is a hypothesis and nothing more, until more
solid proof comes along.

>>Dave