[comp.ai] I concede confusion

kck@g.gp.cs.cmu.edu (Karl Kluge) (03/21/89)

I tried to take this to email, but the path addrmap gave me for Roel W.
didn't work. Hopefully the subject line will get caught by people's kill
files if they don't want to follow this....

It's unclear to me what question or objection R. W. is raising. The first
post in the series says:

> How then could a universal TM (i.e. a computer) fed with a program which can
> answer questions in Chinese ever come to "know" the denotation of the
> symbols it is manipulating? The outcome of its computation is invariant
> under changes of denotation of the symbols it manipulates; the people
> programming the UTM may change the denotation of symbol xyz from chair to
> table or to anything else, without it making the slightest difference to the
> computation.

What do you mean by "knowing the denotations of the symbols it manipulates",
and why do you consider this relevant to the question of whether a TM can
"think"?

If "know" really means "know" here, then I'll have to confess that neither I
nor anyone in AI has an answer. People have been arguing over epistemology
for 2500+ years, and we still can't prove that *we* "know" anything, let
alone whether a Turing machine can "know" something. This isn't to trash
philosophy, just to acknowledge its limitations.

If what you meant to ask is how a TM can represent the denotation of its
symbols, then (if I'm not mistaken -- I may well be, as formal logic isn't
my area) this is the symbol-grounding problem. Several answers have been
offered (similar to Steve Harnad's), usually based on the correlation
between the distal stimuli that are the denoted objects and the proximal
stimuli that are generated in the sensors of a robot driven by a TM. Your
reply, however, seems to indicate that this concerns the operational
semantics of the symbols rather than the denotations.

I still find the statement "the people programming the UTM may change the
denotation of symbol(s)..." somewhat odd. If the symbol is an input or
output symbol (let's restrict ourselves to a TM with text I/O for a second),
then the denotations of these "public" symbols are not up to the programmers
(they are, at least approximately, fixed by social agreement). If the symbol
is a "private" internal symbol, then I fail to see that the programmers are
in a privileged position to "decide" what that symbol's denotation is. You seem
to accept that when you say

> However, for
> symbol-manipulating system like a computer, I agree with the following:
> 
> >There are no social conventions, and
> >hence no privileged denotations for the system's "private" symbols.

*********************

> What you confuse is knowing a denotation of a symbol and agreeing with other
> people about what the denotation of the symbol is (during the discourse).

I think many people (Deconstructionists come to mind) would deny the
possibility of anyone "knowing" the denotations of the words they use other
than by the apparent agreement of other people in discourse. Once again, I
sense that the word you want to use is "represent" rather than "know". Is
this the case? Part of the problem is that we have technical words like
"represent" that we don't necessarily want to apply to minds, while we have
natural language terms like "know" that we aren't sure it makes sense to
apply to formal systems. 

Further, you seem to be happy punting the issue of whether people "know" the
denotations of the symbols they use, yet you seem to feel that a TM's
"knowing" the denotations of its symbols is relevant to whether a TM can
think. If we can agree that people can "think" without "knowing" the
denotations of the symbols they use (regardless of whether people are
"symbols all the way down"), then why is the ability of a TM to "know" the
denotations of its symbols relevant to the question of whether it can think?

> After showing that symbols can be realized physically, and that their
> operational semantics can be realized physically in a causal process, you
> remark that 
> 
> >The denotations of the symbols have no corresponding reality.
> 
> Correct, but I don't see what that has to do with the problem of whether a
> symbol-manipulation process can think. Unless, of course, you assume that 
> our brains realize thinking by realizing a symbol-manipulating process.

I'm trying not to assume anything about the brain for a moment. The point I
was trying to make is that "denotation" seems to be an arbitrary property of
the symbols in an instantiated, executing formal system, unlike the syntax
and operational semantics of the symbols. I don't understand your concern
about the system's "knowing" this arbitrary property, when it can (by
hypothesis) do all the right things in terms of interacting with the world
without "knowing"/representing this property.

Put differently, there are two questions we'd like to answer:

1) Can a TM (attached to appropriate I/O devices) pass appropriate tests
(the LTT or TTT or pick your favorite) that would make us comfortable saying
that the TM could simulate a mind (an empirical question), and

2) Would such a machine actually have a mind (a philosophical question whose
answer is not open to verification or disproof, since the answer you get
will depend on the assumptions you start with).

As a scientist, I find (1) interesting. As a human, I find (2) interesting,
but irrelevant to my work. I don't understand why you feel that whether a TM
can "know" the denotations of its symbols is related to the answer to (2).

Searle purports to have proven that the answer to (2) is no. I don't buy his
argument because I don't believe that the truth of what he calls "Strong AI"
would imply that he ought, while simulating a TM that understands Chinese,
understand Chinese in the way he understands English. Remember, not all of
what is going on in Searle's mind is causally directed by the TM he is
simulating, so the presence of extraneous thoughts (like "I don't understand
Chinese") in his mind (serially or in parallel with the simulation) doesn't
necessarily tell us anything. No one defending Searle has chosen to address
this problem.

As an afterthought, Steve Harnad offered a definition of symbolic AI. I'd
like to offer John Haugeland's definition (he calls it GOFAI, Good
Old-Fashioned AI). This is taken from my notes on a talk he gave:
**************************************************************
a) Thinking is symbolic
   i) Symbols have semantic content which pertains to aspects of the situation
   ii) Symbols have a syntax
   iii) Meanings of atomic symbols are essentially arbitrary

b) Process of thinking is regular or inference-like
   i) Rules expressed in terms of syntax of composition of symbols.
      Rules need not be explicit anywhere.
   ii) Rules suffice to constrain manipulation in ways that keep thought
       rational, have it make sense (example: valid inference rules).
   iii) Need not be explicitly coded.

c) Thinking can be mechanized
   i) Rules built into causal structure of the physical machine.
   ii & iii) Missing in notes
   iv) Note that all that is required is that the machine be structured
       to carry out the manipulation -- medium independent (as opposed
       to Searle's "causal powers" argument).
**************************************************************
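As a minimal sketch of point (c) -- my own toy example, not anything from the
talk -- here is a machine whose "rules" are nothing but a lookup table over
the syntax of the symbols; any medium that realizes the table realizes the
same machine.

    def run_tm(tape, table, state="start"):
        # One-tape Turing machine given as a transition table:
        #   (state, symbol) -> (new_state, symbol_to_write, move)
        tape = list(tape) + ["_"]          # "_" marks the blank cell
        head = 0
        while state != "halt":
            state, tape[head], move = table[(state, tape[head])]
            head += 1 if move == "R" else -1
        return "".join(tape).rstrip("_")

    # A table that swaps a's and b's; the atomic symbols are arbitrary (a-iii).
    flip = {
        ("start", "a"): ("start", "b", "R"),
        ("start", "b"): ("start", "a", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }

    print(run_tm("abba", flip))   # -> baab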

The issue of rule-based vs. rule-following has been brought up. The classic
example is "The path of the planets can be described by a set of partial
differential equations, but the planets don't solve partial differential
equations as they orbit the Sun." Isn't the same point just as true of a
digital computer (which, after all, is just an analog electrical circuit at
bottom)? Suppose that I want to solve the set of differential equations that
describe the planets' paths, and choose to use the real planets as an analog
computer to do it -- wouldn't we feel comfortable then saying that the
planets were solving the equations in the same way that we say the analog
electrical circuit is executing a set of rules?
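For the digital half of that comparison, here is the sort of rule the machine
would be stepping through (toy units and a made-up orbit, nothing like real
ephemeris data) -- a crude Euler integration of the two-body equations of
motion:

    GM = 1.0                       # gravitational parameter, arbitrary units
    x, y = 1.0, 0.0                # position
    vx, vy = 0.0, 1.0              # velocity (roughly a circular orbit)
    dt = 0.001

    for _ in range(10000):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -GM * x / r3, -GM * y / r3      # a = -GM r / |r|^3
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt

    print(x, y)    # still roughly on the unit circle after ~1.6 orbits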

Regarding the Hawking analogy -- there are two (related) questions
1) What kinds of interaction with the world does it take to develop a mind
in an object capable of having one (feral children, while possessing fully
human brains, generally never develop fully human cognitive abilities), and
2) What kinds of interaction with the world do we want to see in order to be
convinced that an object has a mind.

On the output side, it seems to be the case (for instance, in children born
with serious Cerebral Palsy) that someone who can make only a small,
discrete (often binary) set of controlled movements can develop normal
cognitive abilities, and can demonstrate that they have minds. Therefore, I
don't see that the ability to perform the rich motor tasks a "normal" (I
can't think of a more PC term right now, I apologize for any offense) person
can engage in is relevant to developing/proving one has a mind.

On the input side, Helen Keller provides an upper bound on how much one can
reduce human sensory bandwidth and develop a "normal" mind. The lower bound
is unknown.

Karl Kluge (kck@g.cs.cmu.edu)

"There is no practical difference between a very, very busy server and a
dead server. They're like advisors that way." -- John Ousterhout
-- 

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/22/89)

From article <4532@pt.cs.cmu.edu>, by kck@g.gp.cs.cmu.edu (Karl Kluge):
" ...
" Regarding the Hawking analogy -- there are two (related) questions
" 1) What kinds of interaction with the world does it take to develop a mind
" in an obect capable of having one (feral children, while possessing fully
" human brains, generally never develop fully human cognitive abilities), and
" 2) What kinds of interaction with the world do we want to see in order to be
" convinced that an object has a mind.
" 
" On the output side, it seems to be the case (for instance, in children born
" with serious Cerebral Palsy) that someone who can make only a small,
" discrete (often binary) set of controlled movements can develop normal
" cognitive abilities, and can demonstrate that they have minds. Therefore, I
" don't see that the ability to perform the rich motor tasks a "normal" (I
" can't think of a more PC term right now, I apologize for any offense) person
" can engage in is relevent to developing/proving one has a mind.

Yes.  So while we're parsing questions, let's separate:

1) The evolutionary influence of motor and sense mechanisms on human
   thinking abilities, and

2) the current-day connection between motor and sense mechanisms
   and human thinking abilities.

When we stick to normal cases, it's easy to mistake 1) for 2).  Some
speculation about the evolution of human language connects it with the
development of a tongue and other articulatory facilities that can make
a rich variety of sounds in a controlled way.  That's as may be.  But
some people are born without tongues.  They learn to understand and
speak language.  They don't speak well, but they can do it.

I'm not sure just what Harnad's continual references to "transducer
and effector surfaces" mean, but if the general idea is that
human thought/understanding is somehow crucially dependent on
human sense and motor mechanisms, then, plausible as that
speculation is, I think the evidence is against it as a
synchronic hypothesis.

		Greg, lee@uhccux.uhcc.hawaii.edu