[comp.ai] Chinese room argument

roelw@cs.vu.nl (02/20/89)

As I understand it, the Chinese room argument is based on the simple and
fundamental fact that computers (and, in general, Turing machines) manipulate
symbols on the basis of their form only. What the meaning of these symbols is,
if they have any, is irrelevant for the rules of manipulation.
  The same fundamental idea is the foundation of formal semantics explained
in any textbook on logic: define a language L as a set of meaningless symbols,
and independently of that define an interpretation function I from L to a 
mathematical structure S which assigns meaning to the words (or formulas, or
sentences, or terms, etc.) in this language. Constants are assigned elements 
of S, predicates are assigned sets of tuples, etc. This leads to the 
definition of truth of a word (or formula etc.) in S with respect to I.
  We can now add a derivation relation |- over F(L) x L which formalizes the 
idea that { w1, ..., wn } implies w. Interesting questions to ask about
|- are whether it preserves truth, allows one to derive all true formulas, etc.
  A computer must somehow implement |-. Because |- is defined over formulas 
independently of the meaning of the formulas w.r.t an S or an I, but is defined
on the basis of the syntactic structure of the formulas only, the computer
manipulates the symbols independently of their meaning as well.
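  To make the separation concrete, here is a minimal sketch (in Python; the
toy language and every name in it are invented for this illustration, not
anyone's actual system) of a derivation rule that consults only the shape of
the formulas, next to a separately supplied interpretation function:

    # Toy illustration: syntax-only derivation vs. a separate interpretation I.
    def modus_ponens(premises):
        """Derive w from 'w1' and '(w1 -> w)' by matching string shapes only."""
        derived = set(premises)
        for p in premises:
            for q in premises:
                # purely syntactic test: q must literally look like "(p -> w)"
                if q.startswith("(" + p + " -> ") and q.endswith(")"):
                    derived.add(q[len(p) + 5:-1])
        return derived

    def truth(formula, I):
        """Truth w.r.t. an interpretation I mapping atoms to booleans.
        (Nested antecedents are not handled in this toy.)"""
        if formula in I:
            return I[formula]
        left, right = formula[1:-1].split(" -> ", 1)
        return (not truth(left, I)) or truth(right, I)

    premises = {"p", "(p -> q)"}
    print(modus_ponens(premises))                    # adds 'q'; no I is consulted
    print(truth("(p -> q)", {"p": True, "q": True})) # True under one I
    print(truth("(p -> q)", {"p": True, "q": False}))# False under another I; the
                                                     # derivation above is unchanged

The derivation function never even receives I as an argument; that is the
whole point.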
  With respect to the person in the Chinese room manipulating symbols, the
people outside the room may have no idea what the strings of characters mean
either (in which case there is no I) or they may have different ideas about 
what the correct I is (in which case there are different possible I's) or they
may switch interpretation functions; all of this does not affect the process
in the room one bit. There is no way the process in the room has access to the
meaning of the symbols it manipulates; this is built into the experiment from
the start.
  This implies that Harnad is right in saying that the argument works only for
symbol-manipulating processes.
  So far there should be no disagreement. The problems begin when we start to
figure out what to conclude from this argument. Conclusions which are 
warranted, I think, but are also relatively uninteresting,  are that 
computers manipulate symbols according to their syntactic structure, and that 
we may vary the interpretation of the symbols manipulated. All interesting 
conclusions contain a term not contained in the argument above, such as 
"understanding." To be able to conclude that a symbol-manipulating process cannot implement understanding (or, stronger even, that it cannot *simulate* 
understanding) we need an extra premiss connecting the term *understanding* 
with the concept of symbol-manipulation based on syntactic structure. There 
are two standpoints, which seem to divide people into two groups which do not 
understand each other (this recursive structure of the problem is not totally 
accidental).
  1. Understanding cannot be realized in a symbol-manipulating process;
  2. Understanding can be realized by a symbol-manipulating process.
A major reason for 1. is that in order to understand a word (sentence etc.) 
in a language, we must know its meaning, which is precisely what a Turing 
machine is not able to do. A major reason advanced for 2. is that, as a matter
of fact, our understanding is realized in symbol-manipulation; as far as I 
know there has been no evidence for this empirical claim.
  This puts the evidence in favor of claim 1, although I realize that 1 has not
been *proved*. Outside mathematics very little can be proved, although we may 
show things by repeatable experiments, or argue them from plausible principles,
or, as happens too often, make our point by shouting it, or by deriding 
people with other viewpoints, or by burning books by writers we don't like, etc. It 
seems to me futile to expect it to be *proven*, using a plausible definition 
of "understanding", that understanding a sentence cannot be realized without 
knowing the meaning of the words. Perhaps those who require a proof of this 
think these things must be proved precisely because they think that thinking 
is a symbol-manipulating process.

I would like to ask both people in favor of 1 and in favor of 2 why they 
think it is so important to believe 1 or 2; the debate sometimes resembles a 
religious discussion. It obviously matters a lot to people whether 1 or 2 is 
believed. What difference does it make?

Roel Wieringa
Vrije Universiteit
de Boelelaan 1081
1081 HV Amsterdam.
uucp: roelw@cs.vu.nl

fn@jung.harlqn.uucp (Mr Beeblebrox) (02/28/89)

Having read Searle's Chinese Room argument many years ago I may be a little
rusty and I haven't waded through all of the postings about it either.
However I remember thinking on first reading the article that one possible
argument is  as follows :-

1. ...
An agent uses the grammar and other 'symbol manipulation' rules to answer
questions in an 'unknown' language which are posed in that same 'unknown' language.
This is pure symbol manipulation and could be performed by machine but no
'understanding' of the language could be made by the machine.

2. ...
The agent has to either 'see' or 'feel' or 'hear' the instructions and
in so doing will 'learn' (not necessarily completely correctly) some of
the associations between the symbols coming in and those going out.
Is this an intelligent act ?
If YES then I argue that the machine could do likewise and so be deemed to
be intelligent.
If NO then proceed with the argument in this manner :-

3. ...
The agent has the ability to associate symbols coming in to some correct
combinations going out. Outside of the room the agent 'experiences' the same
or similar INPUT sequences of symbols and 'observes' similar or possibly new
OUTPUT in the form of actions and events. The agent thereby LEARNS the use
of these symbols and can COMMUNICATE desires (ok and a really big jump here)
emotions.
Would this be intelligent behaviour ?
I believe it is a natural progression from the room argument and does show
signs of what a number of people would call intelligence.

I hope that I have been clear enough here and I am sure a number of you will
disagree with my argument. I don't mean it to be watertight as, at the end
of the day, my own feeling is that the question of intelligence and awareness
(or consciousness if you live in the West where such a word exists) is one
of belief. You either believe in the uniqueness of the mind or you don't.

Thanks for your time

roelw@cs.vu.nl (03/04/89)

It is part of the definition of a Turing machine (TM) that it manipulates
symbols on the basis of their form only. The denotation of the symbols, and of
expressions built from these symbols, is not relevant for the outcome of the 
TM computation.

How then could a universal TM (i.e. a computer) fed with a program which can
answer questions in Chinese ever come to "know" the denotation of the symbols
it is manipulating? The outcome of its computation is invariant under changes 
of denotation of the symbols it manipulates; the people programming the UTM may
change the denotation of symbol xyz from chair to table or to anything
else, without it making the slightest difference to the computation.
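To see the invariance in miniature, here is a small sketch (Python; the table
and the two denotation dictionaries are made up for this posting and stand in
for the UTM only in the relevant respect): a table-driven symbol rewriter
produces exactly the same output string no matter which denotation the
programmers have in mind for its symbols.

    # The rewriting rules mention only symbol shapes, never denotations.
    RULES = {("S0", "xyz"): ("S1", "abc"), ("S1", "abc"): ("HALT", "abc")}

    def run(tape, state="S0"):
        out = []
        for sym in tape:
            state, written = RULES.get((state, sym), (state, sym))
            out.append(written)
        return out

    tape = ["xyz", "abc"]
    print(run(tape))                  # ['abc', 'abc'], whatever 'xyz' denotes

    # Two denotation functions the programmers might entertain:
    denotation_1 = {"xyz": "chair", "abc": "table"}
    denotation_2 = {"xyz": "table", "abc": "chair"}
    for D in (denotation_1, denotation_2):
        print([D[s] for s in run(tape)])   # the *reading* changes;
                                           # run(tape) never consults D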

In the heated discussion so far, I have seen no answer to the above simple 
argument.

It seems to me that those who believe a UTM could be programmed into
understanding the meaning of the symbols it manipulates, either 
1. use a nonstandard definition of what a TM is, or 
2. use a nonstandard definition of what the denotation of an expression is, or
3. are in the grip of an ideology which prevents them from seeing a simple 
truth.

Roel Wieringa

mike@arizona.edu (Mike Coffin) (03/04/89)

From article <2121@star.cs.vu.nl>, by roelw@cs.vu.nl:

> How then could a universal TM (i.e. a computer) fed with a program
> which can answer questions in Chinese ever come to "know" the
> denotation of the symbols it is manipulating? The outcome of its
> computation is invariant under changes of denotation of the symbols
> it manipulates; the people programming the UTM may change the
> denotation of symbol xyz from chair to table or to anything else,
> without it making the slightest difference to the computation.

What makes you think that a program sophisticated enough to answer
questions in Chinese is going to represent a chair or a table as a
single symbol?  The concept of a "chair" could be a very complicated,
interconnected mesh of symbols, some of which might contribute to, and
be shared by, many other objects and concepts.  No single symbol
would "means" anything, except in its relationship to 10^9 or so other
symbols.
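
To caricature the point in a few lines (Python; the tiny network, its links
and the function name are all invented for this posting): no single node
"means" chair; whatever stands in for the concept is the pattern of links
among many symbols.

    # A made-up fragment of a semantic network.
    NETWORK = {
        "chair": {"is-a": ["furniture"], "part": ["leg", "seat", "back"],
                  "used-for": ["sitting"], "found-near": ["table", "desk"]},
        "table": {"is-a": ["furniture"], "part": ["leg", "top"],
                  "used-for": ["eating", "writing"], "found-near": ["chair"]},
        "furniture": {"found-in": ["room"]},
    }

    def related(symbol, depth=2):
        """Collect every symbol reachable from `symbol` within `depth` links."""
        seen, frontier = {symbol}, {symbol}
        for _ in range(depth):
            frontier = {n for s in frontier
                          for ns in NETWORK.get(s, {}).values() for n in ns} - seen
            seen |= frontier
        return seen

    print(related("chair"))   # the only "meaning" in play is this web of symbols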

I'm beginning to think that the real trick in Searle's thought
experiment is to get us to provisionally accept the fact that a human
is a fast enough computer to have any hope of simulating another
person.  This immediately leads to a mental image of an algorithm
about as complicated as a recipe from a French cookbook, or maybe a
set of instructions from Heathkit.  Once he has installed this mental
image of a "symbolic program", Searle then appeals to your intuition
--- how could such a thing understand?  When you break this mind-set,
and start thinking about a parallel processor with 10^12 independent
processors, madly calculating and communicating in extremely
complicated patterns, it is not so clear what it could or couldn't do.
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/06/89)

mike@arizona.edu (Mike Coffin) of U of Arizona CS Dept, Tucson writes:

" I'm beginning to think that the real trick in Searle's thought
" experiment is to get us to provisionally accept the fact that a human
" is a fast enough computer to have any hope of simulating another
" person... When you break this mind-set, and start thinking about a
" parallel processor with 10^12 independent processors, madly calculating
" and communicating in extremely complicated patterns, it is not so clear
" what it could or couldn't do.

(1) The "mind-set" is inherited from symbolic AI ("Strong AI"), which
is not committed to parallel processing; that's also what Searle's
Argument is directed at.

(2) Gesturing to parallel processing is not enough; however, if there
is a PRINCIPLED reason why a computation essential for passing the LTT
would be impossible to execute serially, then such a model would indeed
be immune to Searle's Chinese Room Argument, just as the TTT (with its
nonsymbolic transducers and effectors) is, though for less intuitively
compelling reasons.

(3) Forget about speed and complexity: That's just hand-waving.

Refs:
Searle J. (1980) Minds, Brains and Programs. Behavioral and Brain
                 Sciences 3: 417-457.
Harnad S. (1989) Minds, Machines and Searle. Journal of Experimental
                 and Theoretical Artificial Intelligence 1: 5-25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

mike@arizona.edu (Mike Coffin) (03/06/89)

From article <Mar.5.13.14.35.1989.4402@elbereth.rutgers.edu> (Stevan Harnad):
> (3) Forget about speed and complexity: That's just hand-waving.

Oh, I agree it is just handwaving.  But as I now (with your help)
understand Searle's argument, on the very bottom line there is an
appeal to intuition.  That intuition is that "there isn't anyone home"
--- that the only entity in the Chinese room capable of understanding
is Searle himself.  That intuition is powerful, as long as you're
thinking in terms of relatively simple tasks: baking a cake or doing
tensor calculus.  THIS is what putting a human in the box does: it
tends to make you think of a relatively simple task that your
intuition might apply to.  But simulating a Chinese speaker is not a
task for which our intuition provides any guidance.  It is as far
outside everyday experience as quantum mechanics --- another field
where intuition is useless and deceptive.


-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

roelw@cs.vu.nl (03/06/89)

In response to my question how a computer can "know" the denotation (reference)
of a symbol, if it is possible for the programmers, users, etc. to change the
denotation without affecting the output of the computations, 
kck@g.gp.cs.cmu.edu (Karl Kluge) wrote:

> I can change the denotation of the symbol "symbol" in the above passage from
> "symbol" to "soup can". That wouldn't make the slightest difference in the
> process by which you generated the passage

True. My production of a piece of text is not affected by your changing the
denotation of the symbols occurring in it. However, this does *not* imply that I
process the text in the same way as a computer processes it.

> how did *you* ever come to "know" the denotations of words/symbols in your
> mind?

I don't know; we should turn to empirical research on developmental psychology to
get some answers. I get the impression that you make the following assumptions:
1. I have a mind.
2. There are symbols in my mind.
Assumption 1 is metaphysical (there is no conclusive empirical or logical proof
of it, yet we (at least I) strongly believe in this assumption). My question to
you is to state, independently of assumption 2, what you mean by "mind." Is it
a different thing from my body? Can one exist without the other?

Assumption 2 begs the question whether thought works by the manipulation of
symbols. If it does, then apparently a symbol-manipulation device can learn to
speak a natural language such as Chinese (or Dutch in my case). However, the
truth of this hypothesis has not been shown empirically, and Searle's argument
is designed to show its implausibility by a thought experiment. You shouldn't
assume the truth of 2 in criticizing Searle's experiment.

> In general, it only makes sense to talk about the programmer "changing the
> denotation of a symbol in a program" when that change produces corresponding
> changes in the program's output behavior, i.e. "changing the denotation of
> symbol 'xyz' from 'hide behind the nearest rock' to 'cover yourself with
> barbecue sauce and jump up and down and yell'" only makes sense if the
> generation of the symbol "xyz" in the program produces the corresponding
> difference in behavior. 

No. Read any textbook on formal languages or logic and you will find a clean
separation between the syntax and semantics of a language. Whether we can make
sense of the output of a symbol-manipulation process is completely irrelevant
to the rules of symbol-manipulation.

Also, note that I used the word "denotation" (synonymous with "reference") and
not "connotation" (synonymous with "sense"). The formal semantics of
connotation is much more difficult than that of denotation, and I think we can
argue about Searle's experiment using the first concept alone.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) agrees with Karl. But then he writes in
<3388@uhccux.uhcc.hawaii.edu>:

> couldn't we reach the conclusion more immediately by
> considering the outcome to be given in terms of the denotations of any
> symbols it contains.  Then it is not, in general, invariant.

The output of a symbol-manipulation process is a string of symbols. Let's call
this O. The denotation of the string is given by 1. the denotation of the 
symbols in it and 2. rules for constructing the denotation of a well-formed 
string from those of its components. Let's call these rules D. Your proposal
is to call the pair (O, D) the outcome of the process. Obviously, this is not
independent of D. However, O *is* independent of D; it depends only on 1. the
string of input symbols, 2. the rules for manipulating those symbols, and these
in turn are independent of D.

mike@arizona.edu (Mike Coffin) writes in <2121@star.cs.vu.nl>:
> What makes you think that a program sophisticated enough to answer
> questions in Chinese is going to represent a chair or a table as a
> single symbol?  

Well, nothing makes me think that. I wrote:

 "The denotation of the symbols, and of expressions built from these symbols,
  is not relevant for the outcome of the TM computation."

Expressions are built from symbols according to the rules for well-formed
expressions of a language. If you wish, replace "expressions" by "formulas."
These could be even higher-order formulas, but I assume for the moment that
they have a finite size (=number of symbols in them). 

Roel Wieringa
Dept. of Math. & Comp. Science
Vrije Universiteit
de Boelelaan 1081
1081 HV Amsterdam

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/08/89)

From article <2125@star.cs.vu.nl>, by roelw@cs.vu.nl:
" Let's call these rules D. Your [Greg's] proposal
" is to call the pair (O, D) the outcome of the process. Obviously, this is not
" indendent of D. However, O *is* indendent from D; it depends only on 1. the
" string of input symbols 2. the rules for manipulating those symbols, and these
" in turn are independent from D.

Yes, that's what I meant, though it's an odd way to put it (since
the denotational rules haven't been applied).  But the point is
that that's the appropriate view to take of the outcome -- in
terms of its denotation.  Look, instead of thinking about
varying the denotations of given symbols, think about varying
the symbols with given denotations.  That doesn't make any
difference to the computation, either.  We need to judge the
appropriateness of a computation.  If you're a TM, I can't see
inside you to examine your symbols, and I don't think I'd want
to.  In responding to you, I can only be concerned with the
meaning of your behavior (as I take the meaning to be).

" mike@arizona.edu (Mike Coffin) writes in <2121@star.cs.vu.nl>:
" > What makes you think that a program sophisticated enough to answer
" > questions in Chinese is going to represent a chair or a table as a
" > single symbol?  
" 
" Well, nothing makes me think that. I wrote:
" 
"  "The denotation of the symbols, and of expressions built from these symbols,
"   is not relevant for the outcome of the TM computation."

For a given TM, and in a certain sense of 'relevant', I guess
that's so.  But suppose I exchanged the denotations of your
(complex of) symbols for 'red' and 'green'.  You might have
a traffic accident.  Isn't that relevant?  Your computations
wouldn't change, but their appropriateness would.

		Greg, lee@uhccux.uhcc.hawaii.edu

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/08/89)

There has been a certain amount of euphoria about my statement that
no respondent has so far given evidence of having adequately
understood Searle's Argument or my own. Don't get carried away. The
respondents (and they are for the most part the same small number
of people, coming back over and over again) are, after all, the ones
who continue to disagree. And on all the evidence I have seen, it
is because they haven't grasped the points being made (though some
progress has been made, and acknowledged, in a few cases). Now, I
have no way of confirming it, but it is not unlikely that among the much
larger number of people who have been READING these discussions, but
not participating in them, there are a few who have understood. In
any case, I think I may even have found an active participant who has
done so:

roelw@cs.vu.nl (Roel Wieringa) of
Dept. of Math. & Comp. Science, Vrije Universiteit, Amsterdam wrote:

" it is possible for the programmers, users. etc. to change the
" denotation without affecting the output of the computations...
" Read any textbook on formal languages or logic and you will find a
" clean separation between the syntax and semantics of a language.
" Whether we can make sense of the output of a symbol-manipulation
" process is completely irrelevant to the rules of symbol-manipulation.

Agreement is uninteresting and uninformative, however, so let me ask
Roel Wieringa about the following potential objection (which I have made
in a paper in preparation called "The Origin of Words"): Philosophers
have made two kinds of nondemonstrative conjectures about "swapping."
One was about swapping experiences (the "inverted spectrum" conjecture:
could someone among us pass indistinguishably even though whenever
he saw or spoke of what we refer to as "green" he actually saw what we
refer to as "red" and vice versa). The second "swapping" conjecture is
about meaning: Could the meanings of some (or all) of the words in a
natural language be coherently swapped or permuted while leaving ALL
behavior and discourse unchanged? (This is Quine's celebrated thesis of
the "underdetermination of radical translation.")

I feel logical and intuitive pulls in both directions in both cases.
Certainly in a toy domain or a circumscribed artificial language you
could show that there were semantic "duals" under which the entire
domain was syntactically invariant. (How many? A few? A finite number?
An infinite number?) But does this make sense for ALL of perception and
its accompanying behavior and judgments (in the inverted-spectrum case)
and for a full natural language and ALL possible discourse and behavior
(in the swapped-meaning case)? (If so, how many duals might there be?
And is there a way of proving this?)
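
For the circumscribed case, at least, a dual can be exhibited explicitly.
Here is a small sketch (Python; the three identities and all names are chosen
for this posting only) of the familiar Boolean duality: read the connective
symbols with their interpretations swapped, and every identity in the toy
fragment remains valid.

    from itertools import product

    # Formulas are nested tuples of symbols; 'x', 'y', 'z' are variables.
    IDENTITIES = [
        (("&", "x", ("|", "y", "z")), ("|", ("&", "x", "y"), ("&", "x", "z"))),
        (("&", "x", "x"), "x"),
        (("|", "x", ("&", "x", "y")), "x"),
    ]

    def ev(f, I, env):
        """Evaluate a formula under interpretation I and variable assignment env."""
        if isinstance(f, str):
            return env[f]
        op, a, b = f
        return I[op](ev(a, I, env), ev(b, I, env))

    I_standard = {"&": lambda a, b: a and b, "|": lambda a, b: a or b}
    I_dual     = {"&": lambda a, b: a or b,  "|": lambda a, b: a and b}

    def all_valid(I):
        return all(ev(l, I, dict(zip("xyz", v))) == ev(r, I, dict(zip("xyz", v)))
                   for l, r in IDENTITIES
                   for v in product([False, True], repeat=3))

    print(all_valid(I_standard), all_valid(I_dual))   # True True: two
                                                      # interpretations, one
                                                      # syntactically fixed theory

How far such duals extend beyond a circumscribed fragment like this is, of
course, exactly the open question above.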

I don't think it's NECESSARY for the case against symbolic
functionalism that the answer be "Yes, swapping is always possible,"
but obviously it wouldn't hurt. What do you think?
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

bwk@mbunix.mitre.org (Barry W. Kort) (03/08/89)

Karl Kluge, who is evidently a silicon-based symbol cruncher, writes:

 > My production of a piece of text is not affected by your changing the
 > denotation of the symbols occurring in it.  However, this does *not*
 > imply that I process the text in the same way as a computer processes it.
 > How did *you* ever come to "know" the denotations of  words/symbols
 > in your mind?

To which Roel Wieringa responds:

 > I don't know; we should turn to empirical research on developmental
 > psychology to get some answers.

Consider a child who hears an adult utter the phonetic sequence,
/char/, while pointing to this object:

	|
	|
	|___
	|   |
	|   |

Years later, the child hears her teacher utter /char/ while scribbling
this cryptic rune on the blackboard:

	CHAIR

It seems to me that the child "understands" when she makes the
connection between the tangible, visible object upon which she
is sitting, the utterance, /char/, and the scribble "CHAIR".

The problem with the LTT is that only one symbol reposes in the
artificial mind, instead of the three inter-related representations
of the same real-world object.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (03/08/89)

In article <Mar.7.16.52.13.1989.1586@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

 > Could the meanings of some (or all) of the words in a natural language
 > be coherently swapped or permuted while leaving ALL behavior and
 > discourse unchanged? 

During WWI, American codebreakers gradually broke the Japanese code,
which used one symbol per word in the lexicon.  However, there was one
word that eluded definition.  It only appeared in one context:
Country A is "X" but country B is not "X".  The codebreakers decided
that "X" meant "friendly".  After the war, they obtained the
Japanese codebook and found that "X" meant "sincere".

If you read Michener's _The Source_, you will note his use of the
name "El Shaddai" for God.  It is instructive to substitute phrases
such as "common sense" or "the process of enlightenment" for "El
Shaddai".  The passages seem to parse just as well.

--Barry Kort

roelw@cs.vu.nl (03/09/89)

In article <3399@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:

> suppose I exchanged the denotations of your
> (complex of) symbols for 'red' and 'green'.  You might have
> a traffic accident.  Isn't that relevant?  Your computations
> wouldn't change, but their appropriateness would.

Let's not suppose what is to be argued, that my (or anyone else's) thinking
is a symbol-manipulating process. So let's do a few other thought experiments
instead:

1. Do the experiment with a symbol-manipulating robot. To be specific, take a
universal Turing machine (UTM) and put on its tape (a) a description of a
particular TM that behaves as a traffic participant and (b) a description of the
input to this TM. Now change the denotation of the symbols on the tape of this
UTM, e.g. let the symbol "red" denote green and let "green" denote red.
There is of course no difference to the computation of the UTM.

2. Put the UTM in a box equipped with video cameras, wheels and a motor and let
it drive in the street. Again there is no difference to its computation if you
interpret "green" as red etc. (You may put any other symbol-manipulating device
in a box with the same result.)

The general result is that a symbol-manipulating device is not affected by the
denotation you or anyone else give to the symbols it manipulates.

Of course, it is a different matter if the programmer wrote "green" where s/he
should have written "red". The robot will probably cause a traffic accident by
that.

3. Talk with someone, using the word "green" to mean red and "red" to mean
green, without the other person's knowing about this change in denotation. S/he
will quickly find something weird about your conversation; the worst possible
outcome would be that s/he thinks you are nuts; the best outcome would be that
you agree to use words, say "reen" and "gred", to stand for what you both agree
to be red and green.

What this shows, I think, is that the denotation of public symbols is publicly
known and that if you make a private change in what is conventionally denoted 
by a symbol, you will get social problems. Conventions about what the 
denotations of symbols are, are not (merely) in the head of one individual but
are social institutions. 

This leads to Harnad's question:

> Philosophers
> have made two kinds of nondemonstrative conjectures about "swapping."
> One was about swapping experiences (the "inverted spectrum" conjecture:
> could someone among us pass indistinguishably even though whenever
> he saw or spoke of what we refer to as "green" he actually saw what we
> refer to as "red" and vice versa). The second "swapping" conjecture is
> about meaning: Could the meanings of some (or all) of the words in a
> natural language be coherently swapped or permuted while leaving ALL
> behavior and discourse unchanged? (This is Quine's celebrated thesis of
> the "underdetermination of radical translation.")

> I feel logical and intuitive pulls in both directions in both cases.

I think that logically, it is possible that we will never notice the swapping,
as long as we talk about domains where swapping has not occurred. If you swap 
the denotation of "red" and "green", and we proceed talking about politics, I 
may not notice anything. Until of course you mention the Red party in Germany, 
where I would talk about the Green party in Germany. There is no *proof* that
you have not swapped denotations, but I think it is crucial that we could sort
out a difference when we encounter one -- actually, that we would recognize a
difference as a denotation swap to begin with.


Roel Wieringa

roelw@cs.vu.nl (03/09/89)

In <8903061329.aa02322@hansw.cs.vu.nl>, Hans Weigand <hansw@cs.vu.nl> writes

> The intuitive meaning of "meaning" is much more closely related to connotation
> than to denotation (witness our ability to talk meaningfully about
> non-existent things).

I agree, but my argument (thinking cannot be symbol-manipulation because a
symbol-manipulation process is independent of the denotation of the symbols
used) works if being able to know the connotation of a symbol implies being
able to know the denotation of the symbol.

> Can you explain what you mean by saying that "the rules for manipulating 
> those symbols are independent from D"? Evidently, these rules have been
> designed by scientists knowing Chinese. So in an obvious sense, they highly
> depend on D. Since these rules stem from humans, who attribute to them
> a certain meaning, the rules are not meaningless. The meaning is
> carried over to the symbols they manipulate. 

The last sentence seems to be a modern brand of mysticism. The process by which
people found the grammar rules of Chinese is completely different from the
process by which a symbol-manipulating machine manipulates symbols using this
grammar.

Roel Wieringa

lambert@cwi.nl (Lambert Meertens) (03/10/89)

In article <46017@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
) Consider a child who hears an adult utter the phonetic sequence,
) /char/, while pointing to this object:
) 
) 	|
) 	|
) 	|___
) 	|   |
) 	|   |
) 
) Years later, the child hears her teacher utter /char/ while scribbling
) this cryptic rune on the blackboard:
) 
) 	CHAIR
) 
) It seems to me that the child "understands" when she makes the
) connection between the tangible, visible object upon which she
) is sitting, the utterance, /char/, and the scribble "CHAIR".
) 
) The problem with the LTT, is that only one symbol reposes in the
) artificial mind, instead of the three inter-related representations
) of the same real-world object.

We could ask the LTT candidate:

    Which letter looks more like a chair, "h" or "j"?

    What rhymes with "chair", "choir" or "heir"?

and more refined questions, to examine if he/she/it really "understands"
what a chair is.

I see no reason why an artificial construct could not avail itself of several
alternative symbolic representations, some of which are suitable for
answering such questions; in fact, writing a program that could entertain
such dialogues for a limited domain of discourse seems quite feasible to
me.  A far harder task would be to make a program that could sensibly
discuss why some jokes are funnier than others, even though that is much
more "linguistic".

-- 

--Lambert Meertens, CWI, Amsterdam; lambert@cwi.nl

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/10/89)

From article <2139@star.cs.vu.nl>, by roelw@cs.vu.nl:
"...
"2. Put the UTM in a box equiped with video cameras, wheels and a motor
"and let it drive in the street.  Again there is no difference to its
"computation if you interpret "green" as red etc.  (You may put any other
"symbol-manipulating device in a box with the same result.)
"
"The general result is that a symbol-manipulating device is not affected
"by the denotation you or anyone else give to the symbols it manipulates.

Correct that there is no difference to the computation.  Incorrect
that the device is unaffected.  It interacts with the world.  Traffic
light changes to red -- UTM interprets this as 'green', proceeds
across intersection.  Crash.

"Of course, it is a different matter if the programmer wrote "green"
"where s/he should have written "red".  The robot will probably cause a
"traffic accident by that.

Not different -- same.  I begin to wonder if we mean the same thing by the
term 'denotation'.

"3. Talk with someone, using the word "green" to mean red and "red" to
"mean green, without the other person's knowing about this change in
"denotation.  S/he quickly will find something weird about your ...
"
"What this shows, I think, is that the denotation of public symbols is publicly
"known and that if you make a private change in what is conventionally denoted 
"by a symbol, you will get social problems. Conventions about what the 
"denotations of symbols are, are not (merely) in the head of one individual but
"are social institutions. 

Precisely.  So you do understand.  But the robot is somehow different
from persons in this regard ??

		Greg, lee@uhccux.uhcc.hawaii.edu

sarima@gryphon.COM (Stan Friesen) (03/11/89)

In article <2125@star.cs.vu.nl> roelw@cs.vu.nl () writes:
>
>> In general, it only makes sense to talk about the programmer "changing the
>> denotation of a symbol in a program" when that change produces corresponding
>> changes in the program's output behavior, i.e. "changing the denotation of
>> symbol 'xyz' from 'hide behind the nearest rock' to 'cover yourself with
>> barbecue sauce and jump up and down and yell'" only makes sense if the
>> generation of the symbol "xyz" in the program produces the corresponding
>> difference in behavior. 
>
>No. Read any textbook on formal languages or logic and you will find a clean
>separation between the syntax and semantics of a language. Whether we can make
>sense of the output of a symbol-manipulation process is completely irrelevant
>to the rules of symbol-manipulation.
>
	Indeed, and I consider this to be one of the most serious deficiencies
of standard linguistic theory.  I do not believe an adequate, complete theory
of natural linguistic competence is possible without dealing with the
interaction between "semantics" and "syntax".  It is clear, to a historian, that
the separation was originally instituted to break an intractable problem up
into simpler components.  Now, tradition has elevated this pragmatic split
into a "fact" of linguistics.  As far as "formal" languages are concerned,
they have almost no applicability to natural language processing, since
they are based on math rather than natural language.
	Changing the "denotation" of a linguistic symbol changes the conditions
under which it is acceptable in normal discourse.  After all, if I said
"I put on my desk and went to an idea" you would say I was talking nonsense.
Yet, if I have redefined "desk" to mean "coat" and "idea" to mean "concert",
it is perfectly legitimate!  Thus in "simulating" a competent speaker a
Chinese Room *must* deal with denotation in its rules of operation or it
is most certainly *not* going to fool any real Chinese speaker.

>
>The output of a symbol-manipulation process is a string of symbols. Let's call
>this O. The denotation of the string is given by 1. the denotation of the 
>symbols in it and 2. rules for constructing the denotation of a well-formed 
>string from those of its components. Let's call these rules D. Your proposal
>is to call the pair (O, D) the outcome of the process. Obviously, this is not
>indendent of D. However, O *is* indendent from D; it depends only on 1. the
>string of input symbols 2. the rules for manipulating those symbols, and these
>in turn are independent from D.

	True in a *formal* sense, but whether O is considered acceptable as
a natural language string by a native speaker *does* depend on (O, D), and
this is what the Chinese Room is supposed to be capable of.  Thus, the
rules must only produce strings, O, which have acceptable mappings to
denotations, D.  And any change in the set of denotations changes the rules
needed for determining acceptable pairs.
-- 
Sarima Cardolandion			sarima@gryphon.CTS.COM
aka Stanley Friesen			rutgers!marque!gryphon!sarima
					Sherman Oaks, CA

ray@bcsaic.UUCP (Ray Allis) (03/12/89)

 Roel Wieringa points out,

   "It is part of the definition of a Turing machine (TM) that it manipulates
   symbols on the basis of their form only.  The denotation of the symbols,
   and of expressions built from these symbols, is not relevant for the
   outcome of the TM computation."

I, too, have pointed out several times that symbols are deliberately stripped
of their connotations and denotations before they are submitted to "symbol
processing".  Symbol processors *do not* handle meanings, by design.  It is
*specified* that meanings have no place in symbol processing; they get in the
way.  

*After* the processing (predicate calculus, rule-based expert systems,
whatever) *we humans* may attach meaning to whatever symbols emerge.  (I
encounter heavy resistance about here, usually).  There is no meaning carried
through the "symbol grinder"; there is nothing present in the symbol which
determines its meaning.  The meaning of a symbol exists in a mind, in the
association of the symbol with the mind's personal experience.

Drew McDermott says:

   "I hope it's obvious that the rules I and the others are envisioning do a
   good deal more than the Schankian script appliers Searle was describing. 
   I will grant that those rules wouldn't have had experiences."  

It doesn't MATTER how complex the rules are in the Chinese Room, if the room
only receives symbols as input.  It makes no difference how sophisticated the
processing of the symbols inside the Chinese Room, the meanings were all left
outside the door, along with any associations between meaning and symbol. 
None of that enters the process.  When symbols are returned from inside the
Room, some mind associates or assigns meaning to them.  And, fuel for further
discussion, there is *no guarantee* that those associations are "the same" as
any which might have existed before.  

This came from USENET rec.games.chess:

Remarks by M. Valvo re: the US Amateur Team Championship East 2/18-20 1989

    "I was the operator for Deep Thought and I was amazed at its poor play in
	the openings.  At first I thought it was because of its black repertoire,
	but, while that is also true, I realized its understanding of how to play
	openings on its own was nearly non-existent."

[I would remove the word "nearly".] 

    "They play perfect tactical chess for anything within their range.  If
    something exists outside their sight, they are helpless."

Regarding openings: "The Alekhine, which Deep Thought seems to like, is
	not a good choice.  It requires concepts.  If you play over the Alekhines
	played by Deep Thought, it is clear that once it is out of the book, it
	flounders around"

After adjustment for the anthropomorphisms, it is apparent that even if chess
programs are winning games at master level, something is missing.  Chess is a
logical system, a classic symbol system, all syntax and no semantics.  The
system is self-contained: there are markers or tokens, and a complete set of
rules for their manipulation.  There is no "denotation" or "connotation"
(required) from outside the system.  It would be susceptible to play by
strict deductive logic if the necessary calculations could only be performed
in less than the remaining lifetime of the Universe.  

We're quite sure that human chess players do not (can not) calculate game
play to anything like the extent computer programs do.  And yet the human
players are still quite competitive; the programs have not yet overwhelmed
humans.  No, humans use metaphors like "get into fire fights whenever
possible".  Humans "understand" the analogy between the volatile battle
called a "fire fight" and a particular approach to move choice in chess. 
Humans use many other analogies and metaphors to play chess.  One side of
each of these comes from human experience.  The meanings of phrases like
"fire fight" are not part of the definition of chess, but the denotations and
connotations of that phrase and hundreds of others are indispensable in the
guidance of human play.  But denotations and connotations are not allowed
into the symbol-processing computer.  

Considering why chess programs and machine translators and "expert systems"
are not very satisfying led me to see that symbol processing is not enough,
by itself, to produce human thinking.  We think with the denotations and
connotations, which is to say our experience.  Symbol manipulation (i.e. 
logic) is a tool which helps us think; it doesn't think by itself.

I think Gary Schiltz's story of passing a calculus course without
"understanding" calculus and the Feynman anecdote are excellent evidence for
the existence of systems which simulate understanding without possessing it. 
Of course those deceptions were not sustainable.  How about

   "The past decade has seen the creation of a substantial number of AI
   programs that are capable of making discoveries at a non-trivial
   (professional) level.  Such programs include Meta-Dendral, AM and EURISKO,
   BACON and its associates (DALTON, GLAUBER, STAHL), and KEKADA."
   
Does any reasonable person believe these programs "understand" anything at
all?

The present argument is whether any device (in this case the "Chinese Room")
using only symbols, can "understand".  Even with a very relaxed definition of
"understand" the answer is (to me) clearly "No".  This conclusion is in
direct opposition to the PSSH.  It says that, if the goal was really human-
like intelligence, decades of work on symbol-only systems have been futile. 
You just can't get there that way.  You can't walk to the moon.

What is at issue here is the explanation of the fact that decades of work based
on the Physical Symbol System Hypothesis (PSSH), various Predicate Calculi
and the Clockwork Hypothesis have failed to produce intelligent machines. 
Even if you continue to resist defining terms of discussion such as
"intelligent", "understand" or "common sense", it's hard to deny that fact.

Stevan Harnad talks about symbol grounding, I talk about non-symbolic
analogs, Roel Wieringa points out that symbolic systems are not affected by
denotations or connotations of the symbols.  I can't speak for them, but I'm
questioning some of the fundamental assumptions of AI here, both research and
application.

Now, if you've read this far, I have a question.  But first, some heresi..
er, um, hypotheses:

    Human "natural" languages are not symbol systems; nothing useful can be
	done with only the symbols.  It's the meanings that are important. 
	Directly translating from the symbols of one language (e.g. English) to
	the symbols of another (e.g. Chinese) without recourse to denotations and
	connotations is nonsensical.  (This really isn't arguable, is it?)

    Thinking and understanding have to do with (non-symbolic) physical and
	chemical events in our central nervous system (brain).  Neural nets
	(biologically inspired) are interesting precisely for these non-symbolic
	attributes.  Rules are an after-the-fact description of those events. 
	Rules cause nothing. 

I have tried to present these ideas in other places than comp.ai, and in
general, they are even less accepted than here.  To all confirmed, hard-over
PSSH believers, is there anything that would convince you that there's more
to humans than symbols?  (Something short of an actual, working
implementation please, as that will take some time.)  Is there some evidence
that would cause you to re-inspect your conviction that the "Systems Reply"
is sufficient?   What part of the process by which I came to see the
inadequacy of the symbol processing approach can I explain more clearly?

ray@atc.boeing.com - 

Disclaimer: redundant; my employer has already disclaimed me.

mike@arizona.edu (Mike Coffin) (03/14/89)

From article <10704@bcsaic.UUCP>, by ray@bcsaic.UUCP (Ray Allis):
> Is there some evidence
> that would cause you to re-inspect your conviction that the "Systems Reply"
> is sufficient?   What part of the process by which I came to see the
> inadequacy of the symbol processing approach can I explain more clearly?

Sure.  Convince me that no symbol-pushing engine can simulate pieces
of my brain, if you make the pieces small enough.  My belief in the
systems reply is based exactly on this:

1) I have in my possession a system that seems to understand and think:
   my brain.  (My wife might argue about that...)
2) The brain (and the rest of the body) is made up of physical parts:
   electrons, atoms, molecules, cells, organs, etc.
3) I see no reason, in principle, that such parts can't be simulated
   to any desired precision, given powerful enough computers.  Not
   necessarily Turing machines; we may need random bits.
4) Given such simulators, I see no reason, in principle, that I can't
   begin replacing my biological parts with simulated parts.
   Obviously I will need some chemical peripherals to interface the
   two systems.
5) Given that the simulations are accurate enough, I see no reason that
   at some point in the process of replacement I will cease to
   understand: e.g., that with 23.999% of my brain simulated, I understand,
   but with 24.000% I cease understanding.
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

rar@ZOOKS.ADS.COM (Bob Riemenschneider) (03/14/89)

Mike Coffin presents what seems to me a pretty poor argument for the
possibility of "understanding through symbol pushing".

=>   1) I have in my possession a system that seems to understand and think:
=>      my brain.  (My wife might argue about that...)
=>   2) The brain (and the rest of the body) is made up of physical parts:
=>      electrons, atoms, molecules, cells, organs, etc.
=>   3) I see no reason, in principle, that such parts can't be simulated
=>      to any desired precision, given powerful enough computers.  Not
=>      necessarily Turing machines; we may need random bits.

Mike: 

So far, so good.  You haven't shown that *only* the physical properties of
the brain are relevant to understanding, that (say) divine intervention
isn't required, but I'm inclined to believe that replacing a brain by a
physically identical object wouldn't affect understanding.

=>   4) Given such simulators, I see no reason, in principle, that I can't
=>      begin replacing my biological parts with simulated parts.
=>      Obviously I will need some chemical peripherals to interface the
=>      two systems.

I can see lots of reasons why it might be impossible to perform such
"replacement".  For instance, suppose that relatively fine physical structure
turns out to be relevant to intelligence--i.e., that simulation at the 
electron level turns out to be required.  So you pick some electron to
be replaced, and simulate it on your computer.  You then remove the electron,
only to realize that hooking the simulation in doesn't make sense.  A
simulation of an electron doesn't have the right physical properties to
influence other electrons in the same way that an electron does.  E.g.,
the remaining electrons have to "feel" the same electrical repulsion 
that they would if the electron hadn't been removed.  Electrons repel
electrons, simulated electrons simulated-repel simulated electrons, but
simulated electrons do not repel actual electrons any more than simulations
of hurricanes blow down actual trees.

This sort of bit-by-bit replacement argument only works if the replacement
pieces have the right causal powers.  But we don't know what causal powers
are relevant to understanding.  That's part of the problem.

=>   5) Given that the simulations are accurate enough, I see no reason that
=>      at some point in the process of replacement I will cease to
=>      understand: e.g., that with 23.999% of my brain simulated, I understand,
=>      but with 24.000% I cease understanding.

Even if everything followed to this point, this at most shows that either
understanding isn't lost, or it's implausible to regard understanding as an 
all-or-nothing thing.  ("I see no reason to believe that at some point in
the process of pulling out hairs I will become bald.")  But, there's good
reason to say understanding isn't all-or-nothing anyway.

By the way, if I recall correctly, Searle believes that simulation of
intelligence is possible, and so is creation of an artifact that understands.
He's trying to show that true understanding will not arise through an extension
of certain programming techniques, so your whole argument doesn't have
much relevance to the conclusions that were supposed to be drawn from
the Chinese Room.

							-- rar

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/14/89)

From article <10704@bcsaic.UUCP>, by ray@bcsaic.UUCP (Ray Allis):
# ...
# Human "natural" languages are not symbol systems; nothing useful can be
# done with only the symbols.  It's the meanings that are important.
# Directly translating from the symbols of one language (e.g. English) to
# the symbols of another (e.g. Chinese) without recourse to denotations
# and connotations is nonsensical.  (This really isn't arguable, is it?)

It's not nonsensical at all.  Lots of people have had the experience
of translating an article in a language they don't know with a
dictionary.  Not fully, but for some types of articles you can
get most of the gist.  For that matter, when you use a dictionary
for your own language it's mostly just symbol-symbol correspondences
you're finding out about.  Though dictionaries do commonly have
some encyclopedic information, too.

I am not saying that full or even good translation can be done this way
-- just that sometimes, some sort of translation can.  And I'm not
saying this shows that natural languages are merely symbol systems.
Rather, I'm saying you can't show that they're not by appealing to the
supposed impossibility of symbolic translation.

# Thinking and understanding have to do with (non-symbolic) physical and
# chemical events in our central nervous system (brain).  ...

That can hardly be so, since such events can be taken as symbols
for the states of the world that evoke them.

		Greg, lee@uhccux.uhcc.hawaii.edu

roelw@cs.vu.nl (03/21/89)

In <3432@uhccux.uhcc.hawaii.edu>, lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes
in response to my earlier posting:

>"The general result is that a symbol-manipulating device is not affected
>"by the denotation you or anyone else give to the symbols it manipulates.
>
>Correct that there is no difference to the computation.  Incorrect
>that the device is unaffected.  It interacts with the world.  Traffic
>light changes to red -- UTM interprets this as 'green', proceeds
>across intersection.  Crash.

That's true; I should have written that the *computation* is not affected,
instead of the *device* not being affected. But does this imply that the
symbol-manipulation process has access to the denotation of the symbols?
I said earlier that meanings are not merely in the head but are also social
institutions. You replied:

>Precisely.  So you do understand.  But the robot is somehow different
>from persons in this regard ??

That's precisely what is at issue, and what the Chinese room argument tries to
show, viz. that people *are* different in this regard. I think this is the
reason why the dispute runs around in circles most of the time: The statement
that a symbol-manipulation process can realize thought is trivially true once
we assume that human beings think by manipulating symbols. Under that
assumption, there is no difference between a symbol-manipulating robot and a
human being. But the assumption has a peculiar status:

1. If it is true, it is not analytically true, i.e. it does not follow from the
meaning of "thought" and "symbol manipulation." So the assumption that people
think by manipulating symbols is a synthetic proposition.
2. Those who claim it to be true, claim it to be a priori true. If it is
claimed to be true, rather than a working hypothesis which directs research,
then its truth is claimed not on the basis of empirically validated facts
(though it is hoped that these facts will be found in the future).

So the assumption that people think by manipulating symbols is a synthetic a
priori, and this causes people who believe it to interpret every argument or
fact in a way diametrically opposed to the way disbelievers interpret them.

Roel Wieringa

roelw@cs.vu.nl (03/21/89)

In <4494@pt.cs.cmu.edu>, kck@g.gp.cs.cmu.edu (Karl Kluge) answers a previous
posting of mine, 

>> The general result is that a symbol-manipulating device is not affected by
>> the denotation you or anyone else give to the symbols it manipulates.
>
>If you are talking about the process by which strings of output symbols get
>produced, this is true but irrelevant. The process by which *you* generate
>the strings of symbols that form your posts (regardless of whether your mind
>is describable by a formal system) is not affected by the denotation I or
>anyone else give to the symbols you produce. That is not a demonstration
>that you do not "know" the denotations of those words, yet you want us to
>believe that it is a demonstration that a UTM does not (and cannot) "know"
>the denotations of its symbols.

I think that the fact that symbol-manipulation processes manipulate symbols on
the basis of their syntactic structure only is extremely relevant, once you
stop assuming what is to be argued, viz. that people think by manipulating
symbols. The fact that my thought process is not affected by the denotation
which other people assign to the symbols in which I express my thought is not
(by me at least) intended to show that I know the denotation of these symbols.
I am not trying to show, or prove, that people can think.
The issue is not whether people think, but whether symbol-manipulating
processes can think. The Chinese room argument tries to make plausible that
they cannot.

>> What this shows, I think, is that the denotation of public symbols is
>> publicly known and that if you make a private change in what is
>> conventionally denoted by a symbol, you will get social problems.
>> Conventions about what the denotations of symbols are, are not (merely) in
>> the head of one individual but are social institutions. 
>
>First, let's acknowledge the distinction (popping up explicitly for the
>first time) between the "public" symbols produced by a system (the words
>sent along the teletype to the subject in the Turing Test, for instance),
>and it's "private" symbols (gensyms produced by LISP, for instance). 

The distinction I make is between the public and private denotation of a public
symbol, i.e. the symbol (e.g. word) which I use to express my thought. I am 
not assuming that I have private symbols "in" which I think. However, for
a symbol-manipulating system like a computer, I agree with the following:

>There are no social conventions, and
>hence no privileged denotations for the system's "private" symbols.

Here there is a confusion again:

> As far as
>I can tell, the only indication that I "know" the denotations of the words I
>use is
>
>1) I have some goal in mind (I'd like a bowl of ice cream)
>2) I emit a string of symbols ("John, would you get me a bowl of ice cream
>   while you're getting yourself one?"), and
>3) Things in the environment react in such a way that my goal in emitting the
>   string of symbols I did is satisfied (my apartmentmate brings me a bowl of
>   ice cream).
>
>Since this happens fairly consistently, I assume that the denotations I have
>for the words I use roughly overlaps the denotations of those words in the
>minds of those I talk to, but that's the most I can conclude. 

What you confuse is knowing a denotation of a symbol and agreeing with other
people about what the denotation of the symbol is (during the discourse).

After showing that symbols can be realized physically, and that their
operational semantics can be realized physically in a causal process, you
remark that 

>The denotations of the symbols have no corresponding reality.

Correct, but I don't see what that has to do with the problem of whether a
symbol-manipulation process can think. Unless, of course, you assume that our
brains realize thinking by realizing a symbol-manipulating process.

Roel Wieringa

hansw@cs.vu.nl (Hans Weigand) (03/21/89)

In article <2185@star.cs.vu.nl> roelw@cs.vu.nl () writes:

>What you (KK) confuse is knowing a denotation of a symbol and agreeing with other
>people about what the denotation of the symbol is (during the discourse).

I agree (know?) that knowing (the denotation of) a symbol is not the 
same as agreeing with other people about it. But there is an important 
overlap: you cannot know (the denotation of) a symbol without
agreeing with others (witness Wittgenstein). The reverse is not true: 
two communicating processes can agree on a certain message format without 
knowing the denotation subjectively.

SCIRE TUUM NIHIL EST NISI TE SCIRE HOC SCIAT ALTER (Persius)
(your knowing is void if the other does not know that you know it)

-
Hans Weigand
Dept of Mathematics and Computer Science
Free University, Amsterdam

roelw@cs.vu.nl (03/27/89)

In <4532@pt.cs.cmu.edu>, kck@g.gp.cs.cmu.edu (Karl Kluge) writes:

>What do you mean by "knowing the denotations of the symbols it
>manipulates",

By "symbol" I mean any abstract or concrete entity which 1. is
distinguishable from other entities and 2. can be mechanically
recognized as being the same symbol every time it occurs. This
last requirement assumes that there is a distinction between a
symbol and its occurrences, and that no human faculty is needed
to classify occurrences as occurrences of the same symbol. By the
"denotation" of a symbol I mean an abstract or concrete entity
which is assigned to the symbol. This is in itself a meaningless
exercise, but acquires meaning from a context such as proving
program correctness or formally proving a theorem. The assignment
of a denotation to a symbol can be formalized as a mathematical
function; given a set SYM of symbols and a set DEN of possible
denotations we can specify any function in [SYM -> DEN] as
denotation function.
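
In code the definition comes to very little (Python; an invented miniature):
a denotation function is simply some element of [SYM -> DEN], chosen by us
and supplied from outside the symbol manipulation.

    # SYM and DEN are just sets; a denotation function is any mapping from
    # SYM to DEN, selected by the people using the symbols, not by the
    # machinery that shuffles them.
    SYM = {"a", "b", "c"}
    DEN = {"chair", "table", 3, 7}

    denotation_1 = {"a": "chair", "b": "table", "c": 3}  # one element of [SYM -> DEN]
    denotation_2 = {"a": 7, "b": 3, "c": "chair"}        # another, equally admissible

    def manipulate(string):
        """A symbol manipulation: it sees only the marks, never DEN."""
        return [s for s in string if s != "c"]

    print(manipulate(["a", "c", "b"]))   # ['a', 'b'] under either denotation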

"Knowledge" is not formally defined in my statement; but we have
some knowledge of what it means, from first-hand experience,
psychological theory, etc. What I mean by my statement is that
with respect to the symbols it manipulates, a computer is in the
position of a human being who manipulates symbols on a piece of
paper without knowing "the" denotation of these symbols. More
precisely: 
1. I assume "knowing" means something for human beings, although
we cannot say precisely what it means; 
2. I don't assume that the computer "knows" something in the same
sense that the human being knows something (versus assuming that
the computer cannot know anything, for that is what I want to
show, without assuming it); 
3. a computation is a symbol manipulation for which a denotation
function has been selected; if the computation is sound, the
output has a denotation if the input has; 
4. the human being need not be aware of the denotation function
selected in order to carry out the symbol manipulation;
5. so it may be possible to interpret a single symbol-
manipulating process as a computation of an answer to a Chinese
question and, using a different denotation function, as the
computation of the solution to a differential equation, and,
using a different function again, as the computation of the
square root of a number. This is only impossible for those symbol-
manipulations for which it can be proven that there is only one
denotation function for which the computation is sound. (Cf.
Reiter's closed-world hypothesis for databases, which restricts
the set of possible models of the theory to a singleton set of
one model.) A small illustration follows below.
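
A miniature of point 5 (Python; the table and both readings are
invented for this posting, and deliberately humdrum rather than
Chinese): one and the same table of symbol manipulations is a
sound computation under two quite different denotation functions.

    # One symbol-manipulation table over the bare marks 'o' and 'i'.
    TABLE = {("o", "o"): "o", ("o", "i"): "i", ("i", "o"): "i", ("i", "i"): "o"}

    def manipulate(x, y):
        return TABLE[(x, y)]          # consults shapes only

    # Denotation function 1: the marks denote numbers, the table
    # denotes addition modulo 2.
    D1 = {"o": 0, "i": 1}
    # Denotation function 2: the marks denote truth values, the
    # table denotes "are the two inputs different?"
    D2 = {"o": False, "i": True}

    for x, y in TABLE:
        assert D1[manipulate(x, y)] == (D1[x] + D1[y]) % 2   # sound under D1
        assert D2[manipulate(x, y)] == (D2[x] != D2[y])      # sound under D2
    print("the same manipulation is sound under both denotation functions")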

Given this independence of syntax and semantics, there is no
sense in which a computer can "know" the denotations of the
symbols it manipulates, simply because in general there is not a
single privileged denotation. (Reiter's closed-world DBs are hardly a
candidate for thinking computers).

The hard part is how a denotation function of a set of symbols
links with our ability to think and know things. I have nothing
to say about that, except that the onus of proof (or plausible
argument) is on those who say that we think by manipulating
symbols.

Note that if the set DEN is again a set of symbols, we can define
the denotation function formally and even store (a finite part
of) it in a computer and manipulate it. This is what a computer
actually does. This shifts the problem to what the denotation of
the symbols in DEN is. I am not convinced that symbol grounding
is at all the answer for a machine to transcend symbol-
manipulation. The letters I now type are symbols which are
stored in a computer and manipulated on the basis of their
syntactic (and physically recognizable) form. They are grounded
in my hitting certain physical keys, but this does not make my
computer intelligent.

I don't think proof is attainable here. The error may be that the
desire to prove things may cloud the view of the possibility that
rational argument is possible outside the realm of provable
statements, and this clouding may open the way for irrationality.

> If we can agree that people can "think" without "knowing" the
> denotations of the symbols they use (regardless of whether people
> are "symbols all the way down"), then why is the ability of a
> TM to "know" the denotations of its symbols relevent to the
> question of whether it can think?

The question is whether people think by manipulating symbols. If
so, then we may ask whether they can do so without knowing the
denotation of these symbols. (Presumably yes, for we are not
aware of manipulating symbols when thinking). Not assuming that
people are symbols all the way down, let's take an
uncontroversial example: a mathematician rehearsing a proof s/he
is going to explain for a class. Can s/he do this without
knowing what the symbols stand for? I would say not. But I agree
that this is not a proof that s/he does not actually manipulate
symbols "all the way down" without knowing their denotation.
Neither do I think such a proof is possible, or required.

> I don't understand your concern about the system's "knowing"
> this arbitrary property, [the denotation of a symbol] when it
> can (by hypothesis) do all the right things in terms of
> interacting with the world without "knowing"/representing this
> property.

I am concerned with the independence of syntax from
(denotational) semantics because I think that it is the crux of
Searle's argument. However, Searle's argument is not a proof and
many people are not convinced (let alone converted) by it. (It
would almost be a contradiction in terms if it were a proof, for
then it would be formalizable, and hence verifiable by a
computer).

I am more concerned however with the tendency to identify thought
with the capability to causally interact with the world in the
proper way. "The proper way" is often taken to mean
"indistinguishable from the way human beings interact." This last
assumption is made in the Turing test (in any of its forms). I
disagree with this on two points:
1. There is more to human thought than causal interaction with
the world. To demand that everything that exists must be
describable as causal interaction is a form of philosophical
idealism, for it is the requirement that what exists must be
knowable in a certain way. There is no reason the universe should
be constituted such that we, insignificant parts of it, can know
it in a certain way. This does not mean that the drive to explain
events causally is bad, or wrong, etc., just that it has limits,
and the explanation of human subjectivity may be one of those
limits. I agree mostly with Thomas Nagel's book "The View From
Nowhere" on this (Oxford UP 1986). See also his "What is it like
to be a bat", reprinted in "Mortal Questions," Cambridge UP,
1979, 165-180.
2. To interpret "the proper way of interacting with the world"
with "acting indistinguishably from human beings" is to confuse
ontology with epistemology. Searle also says as much; the question
is not how we know that a person thinks, nor how we can logically
or empirically justify such knowledge claims, but what it is we
ascribe to a person when we say that s/he thinks.

Roel Wieringa