[mod.ai] Intelligence and Representation

cugini@NBS-VMS.ARPA ("CUGINI, JOHN") (09/25/86)

This is in response to some points raised by Charles Kalish -
allow me a somewhat lengthy re-quotation to set the stage:

    I think that Dennett (see "Brainstorms") is right in that intentions
    are something we ascribe to systems and not something that is built in
    or a part of that system.  The problem then becomes justifying the use
    of intentional descriptions for a machine; i.e., how can I justify my
    claim that "the computer wants to take the opponent's queen" when the
    skeptic responds that all that is happening is that the X procedure
    has returned a value which causes the Y procedure to move piece A to
    board position Q?...
    
    I think the crucial issue in this question is how much (or whether)
    the computer understands.  The problem with systems now is that it is
    too easy to say that the computer doesn't understand anything; it's
    just manipulating markers.  That is, any understanding is just
    conventional -- we pretend that variable A means the Red Queen, but it
    only means that to us (observers), not to the computer.  ...
    
    [Pirron's] idea is that you want to ground the computer's use of
    symbols in some non-symbolic experience....  
    
    One is looking for pre-symbolic, biological constraints; something
    like Rosch's theory of basic levels of conceptualization.  ....
    
    The other point is that maybe we do have to stay within this symbolic
    "prison-house" after all; even the biological concepts are still
    represented, not actual (no food in the brain, just neuron firings).
    The thing here is that, even though you could look into a person's
    brain and, say, pick out the neural representation of a horse, to the
    person with the open skull that's not a representation; it constitutes
    a horse, it is a horse (from the point of view of the neural system).
    And that's what's different about people and computers. ...

These seem to me the right sorts of questions to be asking - here's a stab
at a partial answer.

We should start with a clear notion of "representation" - what does it mean
to say that the word "rock" represents a rock, or that a picture of a rock
represents a rock, or that a Lisp symbol represents a chess piece?

I think Dennett would agree that X represents Y only relative to some
contextual language (very broadly construed as any halfway-coherent
set of correspondence rules), and, ideally, in the presence of an
interpreter.  E.g., "rock" means rock in English to English-speakers;
opp-queen means opponent's queen in the mini-language set up by the
chess-playing program, as understood by the author.  To see the point a
bit more, consider the word "rock" neatly typed out on a piece of paper
in a universe in which the English language does not and never will exist.
Or consider a computer running a chess-playing program (maybe against
another machine, if you like) in a universe devoid of conscious entities.
I would contend that such entities do not represent anything.

So, roughly, representation is a 4-place relation:
R(representer,     represented,       language,            interpreter)
  "rock"           a rock             English              people
  picture of rock  a rock             visual similarity    people,
                                                           maybe some animals
  ...
and so on.
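
Just to make the four places concrete, here is how one might spell the
relation out in Lisp.  This is only a sketch of the bookkeeping - the
structure and every name in it are my own invention, not anybody's
actual program - and note the irony: the tuples below mean something
only because you and I are around to read them.

    ;; A sketch only: a "representation" taken as nothing over and above
    ;; a 4-tuple relating a representer to a represented thing, relative
    ;; to a language and an interpreter.  All names invented for
    ;; illustration.
    (defstruct representation
      representer       ; e.g. the word "rock", or the symbol OPP-QUEEN
      represented       ; e.g. an actual rock, or the opponent's queen
      language          ; e.g. English, or the program's mini-language
      interpreter)      ; e.g. English-speakers, or the program's author

    (defvar *examples*
      (list (make-representation
              :representer "rock"
              :represented 'a-rock
              :language    'english
              :interpreter 'people)
            (make-representation
              :representer 'opp-queen
              :represented 'opponents-queen
              :language    'chess-mini-language
              :interpreter 'program-author)))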

Now... what seems to me to be different about people and computers is that
in the case of computers, meaning is derivative and conventional, whereas
for people it seems intrinsic and natural.  (Huh?)  I.e., Searle's point is
well taken that even after we get the chess-playing program running, it
is still we who must be around to impute meaning to the opp-queen Lisp
symbol.  And furthermore, the symbol could just as easily have been
queen-of-opponent.  So for the four places of the representation relation
to get filled out, to ground the flying symbols, we still need people
to "watch" the two machines.  By contrast, two humans can have a perfectly
valid game of chess all by themselves, even if they're Adam and Eve.
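
The arbitrariness of the symbol is easy to exhibit.  The fragment below
is a toy of my own devising, not anybody's actual chess program, but it
matches the skeptic's description: the X procedure returns a value
which causes the Y procedure to move the piece, and nothing in the
machine cares what we call that value.

    ;; Toy fragment, invented for illustration.
    (defun choose-capture ()            ; the skeptic's "X procedure"
      'opp-queen)                       ; could as easily be QUEEN-OF-OPPONENT

    (defun move-piece (target)          ; the skeptic's "Y procedure"
      (format t "moving piece A to the square of ~A~%" target))

    (move-piece (choose-capture))
    ;; Rename OPP-QUEEN to QUEEN-OF-OPPONENT throughout and the machine's
    ;; behavior is unchanged; only the English gloss *we* attach to the
    ;; symbol differs.  The machine itself never fills the interpreter slot.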

Now people certainly make use of conventional symbol systems (like
English, frinstance) as well as natural ones.  But other representers
in our heads (like the perception of a horse, however encoded
neurally) seem to *intrinsically* represent.  I.e., for the
representation relation, if "my perception of a horse" is the
representer, and the horse out there in the field is the represented
thing, the language seems to be a "special", natural one, namely
the-language-of-normal-veridical-perception.  (BTW, it's *not* the
case, as stated in Charles's original posting, that the perception
simply is the horse - we are *not* different from computers with
respect to the-use-of-internal-things-to-represent-external-things.)
Further, it doesn't seem to make much sense at all to speak of an
"interpreter".  If *I* see a horse, it seems a bit schizophrenic to
think of another part of myself as having to interpret that
perception.  In any event, note that any such interpretation would be
self-interpretation.

So people seem to be autonomous interpreters in a way that computers
are not (at least not yet).  In Dennett's terminology, it seems that
I (and you) have the authority to adopt an intentional stance towards
various things (chess-playing machines, ailist readers, etc.),
*including* ourselves - certainly computers do not yet have this
"authority" to designate other things, much less themselves,
as intentional subjects.

Please treat the above as speculation, not as some kind of air-tight
argument (no danger of that anyway, right?)

John Cugini <Cugini@NBS-VMS> 
------