[comp.ai] Denotations in the Chinese Room

kck@g.gp.cs.cmu.edu (Karl Kluge) (03/14/89)

From: roelw@cs.vu.nl
Subject: Chinese room argument

> 2. Put the UTM in a box equipped with video cameras, wheels and a motor and
> let it drive in the street. Again there is no difference to its computation
> if you interpret "green" as red etc. (You may put any other
> symbol-manipulating device in a box with the same result.)

Fine. Let's suppose that the presence of a red traffic light facing the box
in the camera's field of view causes the symbol "xyz" to be generated, while
the presence of a green traffic light doesn't. Let's suppose that the
behavior of the UTM in driving the box is perfectly satisfactory -- it avoids
accidents, follows traffic rules, etc. Let's pretend that the programmer
decides to change the denotation of the symbol "xyz" from "there is a red
traffic light facing me" to "there is a green traffic light facing me" (me =
the box), and does this without touching the box to change its program or
wiring. Fine. The box still drives in the same way as it did before the
programmer did this. The programmer has chosen to do something bizarre.  The
games formal logicians play allow the programmer to do this. That doesn't
make what the programmer has done sensible, since the formal games logicians
play are exactly that, and you can't get more out of them conceptually than
went into them in the first place.
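
To make that concrete, here is a toy sketch of the sort of rule the box might
be running (the function DRIVE-STEP and the symbols PERCEPTS, BRAKE, and
PROCEED are mine, invented purely for illustration; XYZ is the symbol from
your setup). The only place the "denotation" of XYZ appears is in a comment,
and you can rewrite that comment from "red" to "green" without changing a
single state transition:

;; Toy sketch -- hypothetical names.  The camera front end is assumed to
;; emit the symbol XYZ whenever a red traffic light faces the box.  The
;; driving rule consults only the symbol itself.

(defun drive-step (percepts)
  ;; XYZ "denotes" a red light facing me -- or a green one, if the
  ;; programmer later decides to say so.  Either way:
  (if (member 'xyz percepts)
      'brake
      'proceed))

;; (drive-step '(xyz))  =>  BRAKE
;; (drive-step '())     =>  PROCEED

The box brakes when XYZ shows up and drives on when it doesn't, whatever
anyone declares XYZ to denote.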

> The general result is that a symbol-manipulating device is not affected by
> the denotation you or anyone else gives to the symbols it manipulates.

If you are talking about the process by which strings of output symbols get
produced, this is true but irrelevant. The process by which *you* generate
the strings of symbols that form your posts (regardless of whether your mind
is describable by a formal system) is not affected by the denotation I or
anyone else gives to the symbols you produce. That is not a demonstration
that you do not "know" the denotations of those words, yet you want us to
believe that it is a demonstration that a UTM does not (and cannot) "know"
the denotations of its symbols.

> 3. Talk with someone, using the word "green" to mean red and "red" to mean
> green, without the other person's knowing about this change in denotation.
> S/he quickly will find something weird about your conversation;...

And if you persist in acting as though I mean green when I say "red", I will
quickly find something weird about the conversation. And if you persist in
acting as though a robot means green when it says "red" *simply because you
programmed the robot, and despite everything the robot says and does to
prove that it means red when it says "red" (picking the right crayon from a
box, etc)*, then I will also quickly find something odd about *your*
conversation -- not the robot's conversation.

> What this shows, I think, is that the denotation of public symbols is
> publicly known and that if you make a private change in what is
> conventionally denoted by a symbol, you will get social problems.
> Conventions about what the denotations of symbols are, are not (merely) in
> the head of one individual but are social institutions. 

First, let's acknowledge the distinction (popping up explicitly for the
first time) between the "public" symbols produced by a system (the words
sent along the teletype to the subject in the Turing Test, for instance),
and it's "private" symbols (gensyms produced by LISP, for instance). Neither
the programmer or the system can reasonably insist on ignoring the social
conventions as to the denotations of the system's "public" symbols -- if the
program insists on describing the American flag as "red, white, and cream
cheese", then there is something odd about the program; and if the
programmer insists that the program means "orange" when it says the word
"blue" in describing the American flag as "red, white, and blue", then there
is something odd about the programmer. For the system's "private" symbols,
however, there are no social conventions, and hence no privileged denotations.

Second, to talk of an entity knowing "the" denotation of a symbol implies
that a symbol *has* an intrinsic denotation, and as you have been so
persistent in pointing out, that just isn't true.

Third, to talk of "knowing" the denotation of a symbol is dubious. If you
insist that it's possible, then I'm afraid there are several million
deconstructionists who'd like to have a word with you in the hall. As far as
I can tell, the only indication that I "know" the denotations of the words I
use is

1) I have some goal in mind (I'd like a bowl of ice cream),
2) I emit a string of symbols ("John, would you get me a bowl of ice cream
   while you're getting yourself one?"), and
3) Things in the environment react in such a way that my goal in emitting the
   string of symbols I did is satisfied (my apartmentmate brings me a bowl of
   ice cream).

Since this happens fairly consistently, I assume that the denotations I have
for the words I use roughly overlap the denotations of those words in the
minds of those I talk to, but that's the most I can conclude. What I say is
not what you read. What you say is not what I read.

Fourth, to talk of knowing the "denotation" of symbols is also questionable.
If I have some robot whose sensors produce a symbolic description of the
world which is transformed by a formal system into a symbolic description of
actions for the robot's effectors (the view Steve Harnad objects to so
vehemently), then

* the syntax of the symbols in the formal system is real, in that
  it describes the causal relationship between successive states of the
  physical machine, and

* the semantics of the symbols are also real in that the input symbols are 
  causally related to the patterns that the world creates in the sensor data,
  and the effects of the system's actions are causally linked to the symbols 
  that result in those actions.

The denotations of the symbols have no corresponding reality. They are a
matter of arbitrary social consensus (in the case of symbols emitted by the
system to communicate with others, including humans), and may not be defined
at all (who's to decide what denotation some symbol "X" has in some
production "yXzz --> yXyzz" buried deep in the symbol system away from
public inspection? There is no privileged, unique denotation). I could
define other arbitrary extrinsic properties of symbols ad nauseam (for
instance, "blurbness", the property of all symbols which remind me of a
hamburger -- I clearly know the blurbness of an arbitrary symbol, and I know
that I have a mind if anyone does; therefore, since neither you nor an
arbitrary formal system can know the blurbness of the symbols you use (the
English words you emit, if it happens that you aren't just a formal system
anyway), only I have a mind. Q. E. D.)
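
To make the point about that buried production concrete, here is a toy
rewrite rule in the same spirit (the function APPLY-RULE and the list
representation of symbol strings are mine, purely for illustration). The rule
fires on the shape of the string alone; rename the internal symbol X to any
fresh gensym and every derivation -- and hence every public output -- comes
out the same. Nothing in the machinery fixes a denotation for X.

;; Toy sketch of a production like "yXzz --> yXyzz", with symbol strings
;; represented as lists.  X is passed in as a parameter to emphasize that
;; the rule doesn't care what the internal symbol is called, let alone
;; what it "denotes".

(defun apply-rule (syms x)
  ;; Rewrite the first occurrence of (y X z z) to (y X y z z).
  (let ((pattern     (list 'y x 'z 'z))
        (replacement (list 'y x 'y 'z 'z)))
    (cond ((null syms) nil)
          ((and (>= (length syms) 4)
                (equal (subseq syms 0 4) pattern))
           (append replacement (nthcdr 4 syms)))
          (t (cons (car syms) (apply-rule (cdr syms) x))))))

;; (apply-rule '(a y x z z b) 'x)  =>  (A Y X Y Z Z B)
;; (apply-rule '(a y q z z b) 'q)  =>  (A Y Q Y Z Z B)  -- X renamed, same behavior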

You want the supporters of AI to accept the bizarre notion that Entity A can
insist on defining the denotations of symbols produced by Entity B,
regardless of the behavior of Entity B. I see no reason to accept that at
all. I certainly don't accept it when the two entities are humans. I see no
reason to accept it when one of the entities is a computer, and you have
provided no reason other than "When formal logicians play with symbol
systems, this is what they do."

By the way, *who* is "making a private change in what is conventionally
denoted by a symbol" when the programmer insists that the robot means green
when it says "red" (and acts in a way that indicates that it does)?
Certainly not the robot.

Karl Kluge (kck@g.cs.cmu.edu)
--