[mod.ai] intelligent machines

"charles_kalish.EdServices"@XEROX.COM (09/22/86)

In his message, Peter Pirron sets out what he believes to be the
necessary attributes of a machine that would deserve to be called
intelligent.  From my experience, I think that his intuitions about what
it would take for a machine to be intelligent are, by and large, pretty
widely shared and, as far as I'm concerned, pretty accurate.  Where we
differ, though, is in how these intuitions apply to designing and
demonstrating machine intelligence.

Pirron writes: "There is the phenomenon of intentionality and motivation
in man that finds no direct correspondent phenomenon in the computer."  I
think it's true that we wouldn't call anything intelligent that we didn't
believe had intentions (after all, "intelligent" is an intentional
ascription).  But I think that Dennett (see "Brainstorms") is right that
intentions are something we ascribe to systems and not something that is
built in or a part of those systems.  The problem then becomes justifying
the use of intentional descriptions for a machine; i.e., how can I
justify my claim that "the computer wants to take the opponent's queen"
when the skeptic responds that all that is happening is that the X
procedure has returned a value which causes the Y procedure to move
piece A to board position Q?
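
(Just to make the skeptic's story concrete, here is a rough sketch, in
Python, of the sort of purely mechanical description he has in mind.
The move representation and piece values are invented for the example;
nothing in it "wants" the queen, it just returns the move whose number
comes out largest.)

    # A toy move selector, described purely mechanically: evaluate()
    # returns a number for each candidate move, and choose_move()
    # returns the move with the largest number.
    PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3,
                    "rook": 5, "queen": 9, None: 0}

    def evaluate(move):
        # A move is a (destination_square, captured_piece) pair --
        # a made-up representation, just for illustration.
        _square, captured = move
        return PIECE_VALUES[captured]

    def choose_move(legal_moves):
        # "The X procedure returns a value which causes the Y
        # procedure to move piece A to board position Q."
        return max(legal_moves, key=evaluate)

    # Prints ("d8", "queen"): the intentional description says it
    # "wants" the queen; the mechanical one says 9 was the biggest value.
    print(choose_move([("e4", None), ("d5", "pawn"), ("d8", "queen")]))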

I think the crucial issue in this question is how much (or whether) the
computer understands.  The problem with systems now is that it is too
easy to say that the computer doesn't understand anything, it's just
manipulating markers.  That is, any understanding is just
conventional -- we pretend that variable A means the Red Queen, but it
only means that to us (observers), not to the computer.  How then could
we ever get something to mean anything to a computer?  Some people (I'm
thinking of Searle) would say you can't: computers can't have semantics
for the symbols they process.  I found this issue in Pirron's message,
where he says:
"Real "understanding" of natural language however needs not only
linguistic competence but also sensory processing and recognition
abilities (visual, acoustical).  Language normally refers to objects
which we first experience by sensory input and then name."  The
idea is that you want to ground the computer's use of symbols in some
non-symbolic experience.

Unfortunately, the solution proposed by Pirron:
"The constructivistic theory of human learning of language by Paul
Lorenzen and O. Schwemmer (Erlanger Schule) assumes a "demonstration
act" (Zeigehandlung) constituting a fundamental element of man (child)
learning language.  Without this empirical fundament of language you
will never leave the hermeneutic circle, which drove former philosophers
into despair."
(not having read these people, I presume they mean something like
pointing at a rabbit and saying "rabbit") has been shown by Quine (see
"Word and Object") to keep you well within the circle.  But these
arguments are about people, not computers, and we do (at least) feel
that the symbols we use and communicate with are rooted in something
non-symbolic.  I can see two directions from this.

One is looking for pre-symbolic, biological constraints; something like
Rosch's theory of basic levels of conceptualization.  Biologically
relevant, innate concepts, like mother, food, emotions, etc., would
provide the grounding for complex concepts.  Unfortunately, a computer
doesn't have an evolutionary history which would generate innate
concepts -- everything it's got is symbolic.  We'd have to say that no
matter how good a computer got, it wouldn't really understand.

The other point is that maybe we do have to stay within this symbolic
"prison-house" after all; even the biological concepts are still
represented, not actual (no food in the brain, just neuron firings).  The
thing here is that, even though you could look into a person's brain
and, say, pick out the neural representation of a horse, to the person
with the open skull that's not a representation; it constitutes a horse,
it is a horse (from the point of view of the neural system).  And that's
what's different about people and computers.  We credit people with a
point of view, and from that point of view the symbols used in
processing are not symbolic at all, but real.  Why do people have a
point of view and not computers?  Computers can make reports of their
internal states, probably better than we can.  I think that Nagel has
hit it on the head (in "What Is It Like to Be a Bat?", which I saw in
"The Mind's I") with his notion of "it is (or is not) like something to
be that thing."  So it is like something to be a person and presumably
is not like something to be a computer.  For a machine to be intelligent
and truly understand, it must be like something to be that machine.  Only
then can we credit that machine with a point of view and stop looking at
the symbols it uses as "mere" symbols.  Those symbols will have content
from the machine's point of view.  Now, how does it get to be like
something to be a machine?  I don't know, but I know it has a lot more
to do with the Turing test than with what kind of memory organization or
search algorithms the machine uses.

Sorry if this is incoherent, but it's not a paper so I'm not going to
proof it.  I'd also like to comment on the claim that:
  "I would claim, that the conviction mentioned above {that machines
can't equal humans}, however philosophical or sophisticated it may be
justified, is only the "RATIONALIZATION" .. of understandable but
irrational and normally unconscious existential fears and need of human
beings"
but this message is too long anyway.  Suffice it to say that one can
find a nasty Freudian interpretation of any point.

I'd appreciate hearing any comments on the above ramblings.

-Chuck

ARPA: chuck.edservices@Xerox.COM