[comp.ai] Question on Chinese Room Argument

kck@g.gp.cs.cmu.edu (Karl Kluge) (02/17/89)

> From: harnad@elbereth.rutgers.edu (Stevan Harnad)
> 
> Tell me, down there in the trenches, can you still tell the difference
> between this: (1) "Koran reggel ritkan rikkant a rigo" and this: (2)
> "How much wood could a woodchuck chuck if a woodchuck could chuck
> wood"? Call that difference "X." X is all that's at issue in the
> Chinese Room Argument. No word games.

Ah, but it is a word game. Here is Searle's Chinese Room argument as I see
it. We have Mind A, which we will call John Searle, which understands
English, and which in its capacity as a Universal Turing Machine is
emulating Mind B, which we will call Fu Bar.  Mind A, John Searle, does not
understand what is going on in Mind B, Fu Bar, whose execution it is
simulating.

So what? How does this in any way, shape, or form establish that Mind B does
not understand Chinese in exactly the way that Mind A understands English?

Suppose that Searle underwent some traumatic experience that led to his
suffering from multiple personality disorder in such a way that personality
1 was the usual old John Searle, while personality 2 did nothing but execute
the rules for manipulating Chinese characters (to the point of not
responding to English). What possible conclusion could any rational
individual encountering this unfortunate person make other than that his body
contained two minds, one speaking and understanding English, and the other
reading and writing in Chinese? How, without the benefit of the introspective
access that we are positing is missing, could Searle deny that this other mind
running on his brain really "understood" Chinese?

I'm very serious about this; I really don't understand the point Searle
believes he is making (or at least, I don't buy that he proves the point - I
don't buy the analogy with simulating a forest fire).

Karl Kluge (kck@g.cs.cmu.edu)


-- 

rapaport@sunybcs.uucp (William J. Rapaport) (02/18/89)

In article <4298@pt.cs.cmu.edu> kck@g.gp.cs.cmu.edu (Karl Kluge) writes:
>
>Suppose that Searle underwent some traumatic experience that led to his
>suffering from multiple personality disorder ...

Just such a situation has been discussed in:

Cole, David (1984), "Thought and Thought Experiments," _Philosophical
Studies_ 45:  431-444.

and replied to by me in:

Rapaport, William J. (1986), "Searle's Experiments with Thought,"
_Philosophy of Science_ 53:  271-279.

See also my recent:

Rapaport, William J. (1988), "To Think or Not to Think,"
_Nous_ 22:  585-609.

					William J. Rapaport
					Associate Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo		 ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260	 ||uucp: {decvax,watmath,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||fax:  (716) 636-3464

staff_bob@gsbacd.uchicago.edu (02/18/89)

>Ah, but it is a word game. Here is Searle's Chinese Room argument as I see
>it. We have Mind A, which we will call John Searle, which understands
>English, and which in its capacity as a Universal Turing Machine is
>emulating Mind B, which we will call Fu Bar.  Mind A, John Searle, does not
>understand what is going on in Mind B, Fu Bar, whose execution it is
>simulating.
> 
>So what? How does this in any way, shape, or form establish that Mind B does
>not understand Chinese in exactly the way that Mind A understands English?
>
[deleted] 
>Karl Kluge (kck@g.cs.cmu.edu)

I seem to have gotten lost in this discussion. Perhaps someone can help
me out of this. Without ever having read the now infamous 'Chinese Room
Argument', my understanding of it is as follows:

Given any natural language, in this case Chinese, it is possible for
someone with the proper tools (e.g. a dictionary, a grammar book and lots
and lots of time) to communicate in that language without really 
'understanding' the language. The theory is that such translation, which
involves nothing but the manipulation of symbols, requires no actual
understanding of the language. Since this is what computers do,
computers do not 'understand' the language they are translating.
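
To put that picture in programming terms, here is a deliberately crude
sketch of my own (the rules and the romanized strings are invented, and
of course a real rulebook would have to be astronomically larger):

# Toy "Chinese Room": the rulebook as a lookup table.  Everything here
# is an invented placeholder, not anything from Searle's paper.
RULE_BOOK = {
    "ni hao ma?": "wo hen hao, xie xie.",
    "ni dong zhongwen ma?": "dong a.",
}

def room_operator(incoming):
    # Match the incoming squiggles against the rulebook and hand back
    # the prescribed squoggles; no meanings are ever consulted.
    return RULE_BOOK.get(incoming.strip().lower(), "qing zai shuo yi bian.")

print(room_operator("Ni dong zhongwen ma?"))   # prints "dong a."

The person (or CPU) applying the table produces the right symbols
without ever knowing what any of them are about.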

Is this at least the gist of the argument?

Suppose that it is. Those of you who have said that yes, the translator
'understands' what he is translating, seem to be stretching my commonplace
definition of understanding more than just a little. This does not vindicate
Searle, but I think proceeding along these lines is rhetorical sophistry.
Arguing about what constitutes 'understanding' does not make Searle's
point disappear. 

In particular, consider the recent argument that if a Chinese speaking person
thinks I understand Chinese, then I must in fact understand it, regardless
of what *I* believe I understand. To me, this seems consistent with the
prevalent 'native speaker' definition of language, and so plausible on the
surface. On deeper consideration, however, one sees that the argument is
being used to subvert itself. Searle thinks he knows what 'understand' means,
and as a native speaker of his own language, I suppose he has that right. 
One cannot seriously use the 'native' understanding of a Chinese speaker to
invalidate my own understanding of English. If you buy into this definition of
linguistic validity, then you shouldn't quibble with definitions at
all. If, as a native speaker, I say I don't 'understand', then I don't.
Period.

Yes, Searle does 'play fast and loose' with his definitions. Unfortunately,
the 'native speaker' argument allows him to do exactly that.

The point I'm trying to make here is that argument along these lines is
generally not productive. Disputing definitions, and derivations based on
those definitions, has unfortunately become a part of our (Western)
intellectual heritage. When Minsky says that words should be our servants,
and not our masters, is he not recognizing this very fact? A definition
is generally not a theorem, and it is a mistake to reason from a definition
as if it were a theorem. Anyone who takes a moment to reflect should realize
that Searle is making a valid point - there is a difference between what
I (and Searle) call understanding and what this supposed translator is doing 
in his Chinese Room. If Minsky, or anyone else, has a different definition
of understanding, so be it. That in itself does not invalidate the point
which Searle is trying to make.

The problem with Searle's argument, as I see it, is equally obvious. Anyone
who has ever studied a language, or who has tried to write a program to
do so knows that one needs substantially more than a dictionary and a grammar
to do the job. Serious thinkers no longer believe that simple symbol 
manipulation is up to the task. In part, this is because of the context
sensitivity of language. In general, it is a result of the fact that a
(natural) language is not a self-contained mathematical system.
Understanding language requires a context of understanding which is 
larger than the atoms and rules which can be said to constitute a particular 
language. 

In part, Searle's human translator possesses this context. But this fact
works against Searle, not for him. If the human, qua human, cannot be
said to 'understand', then certainly the machine cannot. That much is
clear. However, this sidesteps the fact that even with the sort of domain 
knowledge which all humans have, and which seems requisite to true 
understanding, I don't believe that a human being could actually accomplish 
the task assigned him in his Chinese Room. If one somehow limits the language 
domain so that a human could, I suspect that a machine could also. Fooling a 
native language speaker is not an easy task, a fact that can be attested to 
by any one of millions of immigrants living in this country.

(For an excellent proof of this, see yesterday's posting to 
REC.HUMOR.FUNNY entitled 'Signs Of Our Times', which contains any number of
hilariously funny 'translations', many of which seem to have been
made by people with some knowledge of English and a dictionary. One
of my personal philosophical interests is why people find such mistakes so very
humorous.)

My point then, is this: to successfully translate a language, a human
being needs not only a grammar and a vocabulary and the domain knowledge
which is his by virtue of being human. He needs something else - at least
a modicum of 'understanding' of the particular language he is translating.
The problem with machine translation is not primarily one of syntactic
transformation and word substitution. If it were, we would have mastered
machine translation 10 years ago, and Searle's argument against machine
understanding would be valid. The problem with machine translation today
is that we must impart to the machine not only a knowledge of the nuances of 
the language being translated, but we must also give it much of the domain 
knowledge which we, as humans, take very much for granted. If (and this
is not a very small if) we ever manage to accomplish this, and thereby 
establish a proper context for machine translation, then as I see it, Searle 
is unable to argue that we have not also established a context for machine 
understanding.

I suppose that it is technically true that everything done on a computer
can be reduced to the level of abstract symbol processing. To point to
this low level of computer processing and then to talk about the very
high level capabilities of the human brain and ask 'How can one be the other?'
is rhetoric of the very worst kind. To begin with, it ignores the fact that
we can reduce the operations of the brain to a very low level and then show,
mathematically, that the computational capabilities of neurons and 
computers are in fact equivalent. What Searle points to as evidence of
man's difference from machines are direct consequences of the incredibly
complex organization of these low level neurons, which has been achieved
only after billions of years of evolution. There is as yet no theoretical
reason why we cannot eventually learn to create similarly complex machines.
If we understood how neurons can be organized in such a way as to produce
cognitive functions such as 'understanding' or 'creativity', then we could
say exactly how 'one can be the other'. Until then, arguments such as
these are most likely going to be quite common.

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/19/89)

kck@g.gp.cs.cmu.edu (Karl Kluge) of Carnegie-Mellon University, CS/RI
wrote:

" Ah, but it is a word game... We have Mind A, which we will call John
" Searle, which understands English, and which in its capacity as a
" Universal Turing Machine is emulating Mind B, which we will call Fu
" Bar. Mind A, John Searle, does not understand what is going on in Mind
" B, Fu Bar, whose execution it is simulating.

Ah me. Is it really so difficult to see that in the above you have
simply presupposed the conclusion you were trying to demonstrate?
Before we buy into any dogmas, it is a fact that Searle has a mind, but
definitely NOT a fact that "Fu Bar" has a mind. OF COURSE if we could
simply presuppose that Fu Bar had a mind, or "define" it as having a
mind, everything would come out just as you would like. But that's
not just a word game: It's circular.

" Suppose that Searle underwent some traumatic experience that led to his
" suffering from multiple personality disorder...

Irrelevant again. That Searle has a mind (at least one) is not in
doubt. That the symbol-manipulator does is. That Searle might have had
more minds, one English and the other Chinese, is perhaps possible, but
he probably doesn't; and even if he did, it's irrelevant. -- Or do you
really believe that simply going through the motions of what he does
in the Chinese room would be "traumatic" enough to induce multiple
personality disorder (plus glossolalia in Chinese)? Yet even THAT would
be irrelevant, because you have not shown that his computer counterpart
had a mind in the first place, to be similarly traumatized.

All of this is certainly word games and sci-fi fantasy, to which any
argument, correct or incorrect, deep or shallow, simple or complex, can be
reduced. Searle's argument is simple but deep. Its simplicity has
led a lot of people who have not understood the deeper point it is
making into irrelevancies of their own creation. To show it to be
incorrect you must first understand it.

" I'm very serious about this, I really don't understand the point Searle
" believes he is making...

You can say that again...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

steyn@eniac.seas.upenn.edu (Gavin Steyn) (02/20/89)

In article <Feb.18.17.26.17.1989.23438@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

>Before we buy into any dogmas, it is a fact that Searle has a mind, but
>definitely NOT a fact that "Fu Bar" has a mind. 
>
>Irrelevant again. That Searle has a mind (at least one) is not in
>doubt.

I'm sorry, I don't believe Searle has a mind.  In fact, everyone is just
a symbol processing box (the equivalent of Fu Bar).  So, I doubt Searle
has understanding.  Now, can you prove me wrong?
   Or, since you probably don't know Searle that well, prove to me that you
have a mind, and aren't a symbol manipulator.  I don't believe it's possible
for you to do so, in which case Searle's argument degenerates into proving
that none of us are actually intelligent.
       Gavin Steyn

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/20/89)

steyn@eniac.seas.upenn.edu (Gavin Steyn) of University of Pennsylvania
writes:

" [1] I don't believe Searle has a mind... [2] everyone is
" just a symbol processing box...  can you prove me wrong?

You're certainly entitled to say (1). That's just an instance of the
familiar "other minds" problem: There's no way to know for sure
that anyone else but oneself has a mind. On the other hand, [2] is
just an obiter dictum, hand-waving, a bald claim (one that also happens
to be believed by a lot of current AI investigators simply because
they have not thought very deeply about any of this).

I certainly can't "prove" you wrong about [1]. No one can (not even
Searle, though he can of course chuckle privately over the fact that he
knows perfectly well the "unprovable" truth that you are in fact wrong
about him...). And even if you ask for less than mathematical "proof,"
i.e., only ordinary empirical evidence, no one can give you a shred of
it -- and that's the other-minds problem too: All empirical "evidence"
that Searle has a mind (e.g., he has a brain like yours, he looks like
you, he talks like you, he acts like you -- EVEN that he has a symbol
cruncher inside that's running the same program!) is JUST as compatible
with the fact that he HAS a mind as with the fact that he has NO mind
but merely looks, acts etc. just as if he did. So what? That's all just
a restatement of the other-minds problem.

The ONLY one who can know for sure that Searle has a mind is Searle himself.
And the same is true of your mind: YOU know it (don't you? don't you?).
But do you also "know" that you're a symbol-processor? Or that your having
a mind is purely a consequence of your being a symbol-processor? If so,
please share...

No, [2] is a different kettle of fish. It's just a not very deeply
examined notion that is currently in fashion and that Searle's argument
(for those who have been prepared to think deeply enough about it to
understand it) has gone some way toward showing to be incorrect. A more
tenable version of [2] might be:

        [2'] everyone EXCEPT ME is just a symbol processing box

but that version just wears its incompleteness and arbitrariness on its 
sleeve (which is why no one ever remembers to put it that way).

Some readers know that in my writing I have advocated what I call the
Total Turing Test (TTT) as a methodological constraint in cognitive
modeling. I have also advocated methodological epiphenomenalism.
However, I have never mistaken the TTT for a "proof" or even
empirical evidence. It isn't. Nor have I had to resort to denying the
obvious: That people have minds, just as I do. I have, however, taken
up Searle's torch to show why a "symbol processing box" could not pass
the TTT. It would be useful if those with a serious interest in these
matters would slow down long enough to grasp the logic and the facts
before hurtling on to their respective weighty conclusions...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

engelson@cs.yale.edu (Sean Engelson) (02/20/89)

I have a simple question for those denying (Searle + rules)
understanding of Chinese: What is your definition of "understanding"
that allows Searle understanding of English, but does not allow
(Searle + rules) understanding of Chinese?  It seems that to
demonstrate or refute the position that understanding is demonstrable
purely through I/O behavior, one must have an effective definition of
understanding.  By effective I mean one that does not beg the
question, i.e. by defining understanding to be symbol-processing, or
conversely, to be that which humans do.

Any takers?


----------------------------------------------------------------------
Sean Philip Engelson, Gradual Student	Who is he that desires life,
Yale Department of Computer Science	Wishing many happy days?
Box 2158 Yale Station			Curb your tongue from evil,
New Haven, CT 06520			And your lips from speaking
(203) 432-1239				   falsehood.
----------------------------------------------------------------------
I know not with what weapons World War III will be fought, but World
War IV will be fought with sticks and stones.
                -- Albert Einstein

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/20/89)

engelson@cs.yale.edu (Sean Engelson) of  Computer Science,
Yale University, New Haven, CT 06520-2158 asks:

" for those denying (Searle + rules) understanding of Chinese: What is
" your ["effective"] definition of "understanding"... one that does not
" beg the question... by defining understanding to be
" symbol-processing or... that which humans do.

Is anyone reading or understanding these postings? Or thinking about
what this is all about? As I've indicated repeatedly, this is NOT a
definitional issue! All I have to do is POINT to positive and negative
instances! Before you went to graduate school in computer science at
Yale, if I had said to you, "Look, you understand English, you don't
understand Chinese, correct?" You would have said, "Sure," and you
would have been right. Nobody would have had to define understanding,
"effectively" or otherwise; and no questions would have been begged.
In fact, nobody COULD have defined understanding, then or now, because
we still don't know what it is, functionally speaking; finding out what
it is and how it works is going to be cognitive science's empirical
mission for some time to come.

But we can certainly still POINT to understanding, when it's there;
and say it isn't there, when it isn't. Now you're in graduate school at
Yale, and you aren't so sure about that. Are you sure you're wiser
than before?

(Please don't reply with a string of cases where degree of
understanding is ambiguous; they've already been brought up repeatedly
in this discussion before, and I've replied. In a word, they're
irrelevant. And don't reply with analogies to other disciplines in
which graduate school was right to make you doubt your prescientific
intuitions. There has been no science here yet, just promises and
hand-waving.)

Understanding is what is "+" of Searle (and you) with respect to
English, and "-" with respect to Searle (and you, and the computer
running the program he's executing) with respect to Chinese. Lacking
any other evidence for "+" on the computer's behalf, that makes the score
on understanding: Searle 1, computer 0. 

[This is the negative note on which Searle's Argument ended in 1980;
not to leave it at that, let me add that in "Minds, Machines and
Searle" (1989) I've tried to take it further in a positive direction,
showing that it's only the symbolic approach to modeling the mind
that's vulnerable to Searle's Argument; nonsymbolic and hybrid
symbolic/nonsymbolic models are not. And in "Categorical Perception"
(1987) I have sketched how symbolic representations could be grounded
bottom-up in nonsymbolic (analog and categorical) representations. Now,
being immune to Searle's argument doesn't guarantee that a model has
captured understanding, of course (nor does it "effectively define"
understanding). But it does perhaps correct the misapprehension that
the validity of Searle's argument (and it IS valid) would entail that
NO model could understand; perhaps this misapprehension is behind the
strained, implausible and incoherent counterarguments people have tried
to float under the general banner of the "Systems Reply." You don't
have to give up on "systems". Just give up on purely symbolic systems.]
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

sher@sunybcs.uucp (David Sher) (02/20/89)

Now that we've posted megawords on "understanding" and whether a machine
can or cannot possess it, can I ask: what is the advantage of a
machine with "understanding"?  Assume that HAL doesn't understand
anything.  He merely manipulates symbols so that he creates an illusion
of understanding in his correspondents.  In what way does that inhibit
HAL as a useful tool?  What could an "understanding" machine do that a
merely intelligent (the symbol manipulator that merely gets the right answer)
machine could not?  Unless someone can show me an advantage to it, I'm not
going to waste much time designing it into my programs.

-David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

bwk@mbunix.mitre.org (Barry W. Kort) (02/20/89)

The problem I have with Searle's notion of symbol manipulation is
that such a system appears unable to learn anything new.

In _Surely You're Joking, Mr. Feynman_, Richard Feynman recounts
an attempt to teach physics in Brazil.  The students had become
very adept at formal symbol manipulation.  They could regurgitate
the definitions and formulas, but they had no idea that the symbols
actually referred to anything in the outside world!

It seems to me that "understanding" (or "comprehension", as I prefer
to call it) entails the construction of a mental map between symbols
and their referents in the world external to our minds.  Once we
buy into this notion of "understanding", we automatically buy into
the notion of "learning" (knowledge acquisition).

Searle's Chinese Room could be considered to understand Chinese
if it could use the medium of Chinese to acquire knowledge about
the world outside the room.  Such a system would evolve its "rules"
over time.  Instead of just translating stories, it would respond
with its own anecdotal accounts, maintaining a thematic thread
suggested by the preceding stories.
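
To make that "mental map" notion concrete, here is a rough sketch of my
own (the class, the symbol "gou", and the observations are all invented
for illustration):

# A symbol counts as "comprehended", on this account, only once it has
# been tied to at least one referent encountered outside the room.
class ChineseRoomLearner:
    def __init__(self):
        self.referents = {}   # symbol -> set of things it has been paired with

    def observe(self, symbol, referent):
        # Pair a symbol with something encountered in the outside world.
        self.referents.setdefault(symbol, set()).add(referent)

    def comprehends(self, symbol):
        return bool(self.referents.get(symbol))

room = ChineseRoomLearner()
room.observe("gou", "the four-legged thing that barked outside the slot")
print(room.comprehends("gou"))   # True: tied to a referent
print(room.comprehends("mao"))   # False: never tied to anything but rules

On this picture, "learning Chinese" just is the growth of that map,
which is exactly what the original room never does.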

--Barry Kort

fransvo@htsa.uucp (Frans van Otten) (02/20/89)

In article <Feb.19.14.23.34.1989.8773@elbereth.rutgers.edu>,
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
>The ONLY one who can know for sure that Searle has a mind is Searle himself.
>And the same is true of your mind: YOU know it (don't you? don't you?).

You say you're *sure* that you have a mind, you *know* it.  How can you
be *sure* if you only *know* it?  I say you only have a compile-time flag:

program Stevan_Harnad(input, output);

const I_HAVE_A_MIND = true;    { The main point }

{ Stub types and routines added only so the parody actually compiles }
type SymbolType = char;
     ResultType = (crunched, dead);

var alive  : boolean;
    symbol : SymbolType;
    res    : ResultType;

function Read_Symbol : SymbolType;
  var c : char;
  begin read(c); Read_Symbol := c end;

function Crunch_Symbol(s : SymbolType) : ResultType;
  begin
    if s = '.' then Crunch_Symbol := dead else Crunch_Symbol := crunched
  end;

procedure Output_Result(r : ResultType);
  begin writeln('symbol crunched, no mind required') end;

begin
  alive := true;
  while alive do
    begin
      symbol := Read_Symbol;
      res := Crunch_Symbol(symbol);
      if res = dead
        then alive := false
        else Output_Result(res)
    end
end.

I want you to show me how you can prove to yourself that you have a mind.
Until then I must assume that you are (only) a symbol cruncher, and so
must you.

-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp

engelson@cs.yale.edu (Sean Engelson) (02/20/89)

In article <Feb.19.18.25.26.1989.15723@elbereth.rutgers.edu>, harnad@elbereth (Stevan Harnad) writes:
>
>
>engelson@cs.yale.edu (Sean Engelson) of  Computer Science,
>Yale University, New Haven, CT 06520-2158 asks:
>
>" for those denying (Searle + rules) understanding of Chinese: What is
>" your ["effective"] definition of "understanding"... one that does not
>" beg the question... by defining understanding to be
>" symbol-processing or... that which humans do.
>
>Is anyone reading or understanding these postings? Or thinking about
>what this is all about? As I've indicated repeatedly, this is NOT a
>definitional issue! All I have to do is POINT to positive and negative
>instances!

What is your criterion for determining which is which?  I'm not
denying that you have one, I'd just like to have it out in the open
and explicit.

>Before you went to graduate school in computer science at
>Yale, if I had said to you, "Look, you understand English, you don't
>understand Chinese, correct?" You would have said, "Sure," and you
>would have been right. Nobody would have had to define understanding,
>"effectively" or otherwise; and no questions would have been begged.
>In fact, nobody COULD have defined understanding, then or now, because
>we still don't know what it is, functionally speaking; finding out what
>it is and how it works is going to be cognitive science's empirical
>mission for some time to come.

And I can equally well POINT to Searle running his rules for Chinese,
and say to (him + rules) in Chinese, "Look, you understand Chinese,
don't you?" and I'd expect to get back the answer (in Chinese) "Yes".
So why deny the system of (Searle + rules) understanding of Chinese?
After all, I can just POINT to it, can't I?

>But we can certainly still POINT to understanding, when it's there;
>and say it isn't there, when it isn't. Now you're in graduate school at
>Yale, and you aren't so sure about that. Are you sure you're wiser
>than before?

Well, for all external intents and purposes, (Searle + rules)
understands Chinese.  As I think you are saying, since "plain" Searle
does not understand Chinese, (Searle + rules) does not.  Why not?
What's the difference?

>Understanding is what is "+" of Searle (and you) with respect to
>English, and "-" with respect to Searle (and you, and the computer
>running the program he's executing) with respect to Chinese. Lacking
>any other evidence for "+" on the computer's behalf, that makes the score
>on understanding: Searle 1, computer 0. 

In other words, you are DEFINING understanding to be that which Searle
has with respect to English, and not that which (Searle + rules) has
with respect to Chinese.  OK, given that distinction, tell me either
how I can distinguish between the two in a Turing-test fashion, or
what it is about Searle that allows him to understand English which
(Searle + rules) does not have.  Otherwise, as I've said, I'll grant
you your point, and then say that this whole discussion is pointless,
as your concept of understanding is "That which people do", which is
useless. 

----------------------------------------------------------------------
Sean Philip Engelson, Gradual Student	Who is he that desires life,
Yale Department of Computer Science	Wishing many happy days?
Box 2158 Yale Station			Curb your tongue from evil,
New Haven, CT 06520			And your lips from speaking
(203) 432-1239				   falsehood.
----------------------------------------------------------------------
I know not with what weapons World War III will be fought, but World
War IV will be fought with sticks and stones.
                -- Albert Einstein

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (02/20/89)

From article <Feb.19.18.25.26.1989.15723@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
" ... Is anyone reading or understanding these postings? ...

Someone is reading these postings.  Someone is not understanding
your postings.  You know that the other-minds issue is unresolvable,
yet you suppose that you have resolved it when you premise your
remarks on Searle (and others) having a mind.  Someone is confused.

		Greg, lee@uhccux.uhcc.hawaii.edu

rjc@aipna.ed.ac.uk (Richard Caley) (02/21/89)

In article <Feb.19.18.25.26.1989.15723@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>But we can certainly still POINT to understanding, when it's there;
>and say it isn't there, when it isn't. 

I certainly can't, and it seems to be an assumption of the Chinese room
that I can't.  My understanding is that the behaviour of the
Searle+room+rules system is to be indistinguishable from a native Chinese
speaker, and it is only by opening the room, seeing Searle, and asking
_him_ if he understands that we are supposed to determine that the
system does not "understand" Chinese.

If I _can_ tell understanding systems from non-understanding ones then
the whole argument is pointless, since I can never be "fooled" by the
room.

Understanding is a subjective phenomenon; _I_ know if I understand
Chinese (no), but you only have my word for it.

So it _is_ a definitional problem.  Since we have assumed that the
behaviour is identical whether or not it understands, we must rely on
deduction based on the structure of the system to tell us if it
understands.  Most significantly, we can't rely on the method we use for
humans - if we ask the room (presumably in Chinese), it says yes;
otherwise the behaviour is not like that of a native speaker!

Without defining understanding we can't argue about it, since our
intuitive knowledge of understanding is only of _ourselves_; we apply
it to other people since they seem rather similar; we can _try_ to
apply it to philosophers in rooms or computer systems, but I would
not trust the result -

	" Hm, it does not have a chinese passport and so ... "

>Understanding is what is "+" of Searle (and you) with respect to
>English, and "-" with respect to Searle (and you, and the computer
>running the program he's executing) with respect to Chinese. 

Aren't you assuming the result here?  If Searle running the program is
"-" WRT "understanding", then naturally the system does not understand.
This is tautological!

>Lacking
>any other evidence for "+" on the computer's behalf, that makes the score
>on understanding: Searle 1, computer 0. 

If you are trying to prove non-understanding by a default assumption
then I would say you prove nothing, since I can just as easily assert
that by default we must assume that the system _does_ understand. This
is certainly the default I apply to people ( "if they seem to understand
chinese then they do - ask them what the menu means" ). Why should it be
different for other types of system?

>[This is the negative note on which Searle's Argument ended in 1980;
>not to leave it at that, let me add that in "Minds, Machines and
>Searle" (1989) I've tried to take it further in a positive direction,
>showing that it's only the symbolic approach to modeling the mind
>that's vulnerable to Searle's Argument;

If the argument could be truncated to a reasonable length, then I would
be interested if you posted it.  I don't see why, say, Searle in a room
pulling strings and waving springs (or doing something else equally
non-symbolic) which happens to produce behaviour like a Chinese speaker
would not be the basis for a precisely parallel argument.  I'm not saying
you are wrong, just that it is not obvious.



-- 
	rjc@uk.ac.ed.aipna

	" Only love denies the second law of thermodynamics "
		- Jerry Cornelius

vangelde@cisunx.UUCP (Timothy J Van) (02/21/89)

In article <Feb.19.14.23.34.1989.8773@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
>" [1] I don't believe Searle has a mind... [2] everyone is
>" just a symbol processing box...  can you prove me wrong?
>
>that anyone else but oneself has a mind. On the other hand, [2] is
>just an obiter dictum, hand-waving, a bald claim (one that also happens
>to be believed by a lot of current AI investigators simply because
>they have not thought very deeply about any of this).
>
>No, [2] is a different kettle of fish. It's just a not very deeply
>examined notion that is currently in fashion and that Searle's argument
>(for those who have been prepared to think deeply enough about it to
>understand it) has gone some way toward showing to be incorrect. A more

I tend to be sympathetic to just about every point Stevan Harnad has made
in this interesting "continental bull session" - except this one. I take it
that [2] is just the Physical Symbol System Hypothesis (Newell and Simon),
otherwise known as the GOFAI hypothesis (Haugeland). Is this really just
a bald claim that nobody would take seriously if they had thought deeply
about the issue?  Is Harnad saying that Newell and Simon, Pylyshyn,
Haugeland, Fodor etc. have not thought deeply about the issue?
If so, Harnad has quite remarkably high standards for thinking deeply
about the issue - not even some of the most respected minds in cognitive
science make the grade. If, by contrast, Harnad really has thought deeply
about the issue, he surely belongs in the ranks of Turing, von Neumann, 
Wittgenstein etc.

Now, I happen to think that the PSSH is in fact false.  But I also think
that it is a very deep and well worked out view - rather better worked out,
in fact, than just about any psychological paradigm I can think of. In fact,
that's one reason we are now in a position to see its flaws.  So I don't
want to endorse the position; rather, I just question the rather outrageous
claim that anyone who does endorse it can't have thought deeply about the issue.

Steve, please reassure me that I have misunderstood you somewhere here...

Tim van Gelder

marty1@houdi.ATT.COM (M.B.BRILLIANT) (02/21/89)

From article <Feb.19.18.25.26.1989.15723@elbereth.rutgers.edu>, by
harnad@elbereth.rutgers.edu (Stevan Harnad):
> 
> engelson@cs.yale.edu (Sean Engelson) of  Computer Science,
> Yale University, New Haven, CT 06520-2158 asks:
> 
> " for those denying (Searle + rules) understanding of Chinese: What is
> " your ["effective"] definition of "understanding"... one that does not
> " beg the question... by defining understanding to be
> " symbol-processing or... that which humans do.
> .....
> Understanding is what is "+" of Searle (and you) with respect to
> English, and "-" with respect to Searle (and you, and the computer
> running the program he's executing) with respect to Chinese. Lacking
> any other evidence for "+" on the computer's behalf, that makes the score
> on understanding: Searle 1, computer 0. 

As an educated native speaker of (American) English, I know enough
about English to believe that if I did not understand English, I would
not be able to persuade an English-speaker that I could speak English,
no matter how many rulebooks I had.  So I assume, in fairness, that I
could not pretend to speak Chinese, with or without a rulebook, if I
did not understand Chinese.

Did Searle really suppose that he could speak passable Chinese if only
he had a rulebook?  Could you posit a Chinese speaker with a rulebook who could
pretend to speak English, without in fact understanding it?  In other
words, pose a corresponding "English Room puzzle" and you will see the
fallacy.  I am persuaded that any native, foreigner, machine, or
simulation thereof, that can carry on a respectable conversation with
me in English, must understand English.

Incidentally, there is a language proficiency examination developed by
Educational Testing Service, used in New Jersey (and other places, I
suppose) to test the language qualifications of bilingual and ESL
teachers, that might be adaptable for use in a Chinese Room trial.  An
interviewer converses with the subject to elicit speech in the test
language on a variety of topics, and tapes the interview.  A rater
listens to the tape and judges how well the subject has succeeded in
expressing ideas in the test language.  The highest score, which would
be attained by a native speaker with no trace of an accent, is 5.  A
subject with some accent, but full command of linguistic structure and
demonstrable ability to discuss non-trivial topics, would be rated 4. 
I'd guess a machine or simulation with no semantic proficiency would
score below 3 (but I'm not thoroughly familiar with the rating scale).

So let me propose as a partial definition of "understanding": that
anything that can score a 4 on the language proficiency examination
must understand the language.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
            explicitly claims them; then I lose all rights to them.

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/21/89)

sher@sunybcs.uucp (David Sher) of SUNY/Buffalo Computer Science,
in a very revealing posting, asks:

" what is the advantage of a machine with "understanding"? Assume that
" HAL doesn't understand anything. He merely manipulates symbols so that
" he creates an illusion of understanding in his correspondents. In what
" way does that inhibit HAL as a useful tool? What could an
" "understanding" machine [do] that a merely intelligent (the symbol
" manipulator that merely gets the right answer) machine could not?
" Unless someone can show me an advantage to it I'm not going to waste
" much time designing it into my programs.

There is no advantage to worrying about understanding if all you are
interested in doing is making "useful tools" -- which is no doubt all
that most of AI is interested in. One wonders, though, why a
discipline with that motivation tries to push so hard on the repeatedly
discredited "Systems Reply" to Searle, insisting that "The System" DOES
understand, when the real goal is as superficial as this. Perhaps
there is a confusion here between tool-making and mind-modeling.

Cognitive psychologists, on the other hand, are interested in modeling
the mind, including understanding, so we have no choice but to face the
questions Searle (and the mind/body problem and the other-minds
problem) raise. Searle's Argument simply shows that purely symbolic
models are the wrong ones for our purposes.

[Paradoxically, my own work suggests that even cognitive psychologists
should not worry too much about capturing understanding: I have given
reasons -- empirical, methodological and logical -- for adopting
"methodological epiphenomenalism" and the "Total Turing Test (robotic
version)" as constraints on cognitive modeling. However, these same
reasons also go strongly against symbolic modeling in favor of hybrid
modeling, grounding symbolic representations bottom-up in nonsymbolic
(analog and categorical) representations.]

Two other points:

(1) You've got the assumption on the wrong foot: The default
assumption is that HAL doesn't understand, not the other way round. You
don't have to say "Assume Hal doesn't understand" any more than you
have to say "Assume there are no fairies." The default "assumption" is
no, unless compelling reasons are given for rejecting it. No compelling
(or even coherent) reasons are coming from symbolic AI, and certainly
not from proponents of the "Systems Reply."

(2) Unless you are willing to think deeply on these questions you
certainly ARE wasting your time "designing it [?]" into your programs!
One of the reasons I think it's important to get these matters straight
is because if you don't, you spend more time over-interpreting what
your models are doing than in actually strengthening their performance
capacity. This is a deep and subtle point. The Total Turing Test
is the methodological goal. Hermeneutics and hyperbole about the "mental
powers" of toy models is not the way to get there; it's just a way of
covering up how pathetically far away from the goal we really are.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/21/89)

lee@uhccux.uhcc.hawaii.edu (Greg Lee) of University of Hawaii writes:

" You know that the other-minds issue is unresolvable, yet you suppose
" that you have resolved it when you premise your remarks on Searle (and
" others) having a mind. Someone is confused.

There IS an ordinary, everyday practical "solution" to the other-minds
problem, and that is what motivates my "Total Turing Test" (TTT): If
you can't tell the candidate apart from a person in any respect, in
terms of either its robotic or its linguistic performance capacity,
then you have no better or worse grounds for assuming it has a mind
than you have with any other person but yourself. Now that's only a
practical "solution," not a real solution or a guarantee. I'm certainly
willing to give Searle the benefit of the doubt here, because he can
pass the TTT, whereas the (hypothetical) symbol manipulator can only
pass the linguistic version.

But all of that is irrelevant anyway, because in the Chinese Room there
is first-person evidence available that there's NO Chinese
understanding going on in there -- exactly the same kind of
first-person evidence that makes one candidate (and one only) exempt
from the other-minds problem, namely, oneself: For you or I could do
Searle's simulation ourselves, and still not understand Chinese. We
don't need Searle; nor do we have to make any assumptions about his
having a mind!

Again, this is no guarantee; after all, someone ELSE in there could be
understanding, or even confused: The walls could have not only ears,
but a soul. There are, after all, two extreme conclusions one could
draw from the other-minds problem (both erroneous and far-fetched, in
my view): One is that because you can't confirm it for sure, therefore
NO ONE BUT YOU in fact has a mind. The other is that because you can't
disconfirm it for sure, EVERY THING (animate and inanimate, great and
small, part and whole) has a mind. I think neither of these conclusions
is satisfactory, and certainly neither follows as a matter of necessity
from the existence of the other-minds problem.

A third (and I think reasonable) conclusion from the other-minds
problem is to reserve the benefit of the doubt to candidates, like
ourselves, who pass the TTT. Not so confusing, I think...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

yamauchi@cs.rochester.edu (Brian Yamauchi) (02/21/89)

In article <Feb.19.18.25.26.1989.15723@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>[This is the negative note on which Searle's Argument ended in 1980;
>not to leave it at that, let me add that in "Minds, Machines and
>Searle" (1989) I've tried to take it further in a positive direction,
>showing that it's only the symbolic approach to modeling the mind
>that's vulnerable to Searle's Argument; nonsymbolic and hybrid
>symbolic/nonsymbolic models are not. And in "Categorical Perception"
>(1987) I have sketched how symbolic representations could be grounded
>bottom-up in nonsymbolic (analog and categorical) representations. Now,
>being immune to Searle's argument doesn't guarantee that a model has
>captured understanding, of course (nor does it "effectively define"
>understanding). But it does perhaps correct the misapprehension that
>the validity of Searle's argument (and it IS valid) would entail that
>NO model could understand; perhaps this misapprehension is behind the
>strained, implausible and incoherent counterarguments people have tried
>to float under the general banner of the "Systems Reply." You don't
>have to give up on "systems". Just give up on purely symbolic systems.]
>-- 
>Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu

I have been following this discussion for a while, and so I decided to
go and read Searle's "Minds, Brains, and Programs" in Mind Design.  In
this essay, Searle outlines his basic argument and then tries to argue
against a number of the possible objections.

I think that Searle *does* have a valid criticism of traditional,
symbolic AI.  On the other hand, many of his counterarguments trying
to broaden this point seem to range from the unclear to the bizarre.

The basic idea that symbol manipulation alone is not sufficient for
intelligence makes sense.  To translate Searle's argument from
the language of philosophy to the language of AI, consider what it
means to understand the sentence "The dog chased the cat."
Traditional AI would represent this as:

	dog(x) & cat(y) & chased(x,y)

However, the program really has no idea what a dog is, what a cat is,
or what it means for one thing to chase another.  This, I believe, is
the crux of Searle's argument.  One could add a dog schema which said
something like:

	dog
		is-a : animal (subtype : mammal, carnivore)
		environment : land
		legs : 4
		tail : yes

But, then the program still doesn't know what a land environment is,
or what legs are, etc.

The conventional counterargument is that the richness of the knowledge
base determines the level of understanding.  So one could add schemas
for mammals and environments, and so forth.  Of course, these would
have to be defined in terms of other symbols, and so on and so forth.
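
To caricature that regress in code (my own sketch; the "knowledge base"
below is invented and deliberately tiny):

# Every symbol is "defined" only by pointing at more symbols.
SYMBOLIC_KB = {
    "dog":    ["animal", "mammal", "land", "legs", "tail"],
    "land":   ["ground", "not-water"],
    "ground": ["earth", "surface"],
    "earth":  ["ground", "soil", "planet"],
}

def expand(symbol, depth=2):
    # Unfold a symbol into its "definition".  However deep you go, all
    # you ever get back is more uninterpreted tokens.
    if depth == 0 or symbol not in SYMBOLIC_KB:
        return symbol
    return {symbol: [expand(s, depth - 1) for s in SYMBOLIC_KB[symbol]]}

print(expand("dog"))

No matter how rich the table gets, nothing in it ever touches an actual
dog.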

To a large extent this is what happens with human learning.  We learn
new concepts by relating them to things we already know.  The
*critical* difference, in my opinion, is that at some level all of our
learned symbols are grounded in sensory experience.  Most of us
probably learned what a dog was by seeing one or by seeing a picture
of one, not by reading a dictionary definition.  We know that
"chasing" refers to an activity that we have seen (on TV, at least, if
not in person), rather than simply a construct of:

	chase(x,y) --> wants-to-catch(x,y) & wants-to-avoid(y,x)

Therefore, in order to build a system that "understands" in the same
way that people "understand", we need to give it the ability to relate
the concepts in its knowledge base to sensory experiences.
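
In the same toy terms, grounding would mean tying at least some symbols
to perception rather than to other symbols (again my own sketch; the
feature vector and the hand-written detector are stand-ins for real
sensors and a real learned classifier):

def looks_like_a_dog(features):
    # Stand-in perceptual category detector; in a real system this would
    # be a classifier trained on camera input, not a two-line rule.
    return features.get("legs") == 4 and features.get("barks", False)

GROUNDED_SYMBOLS = {"dog": looks_like_a_dog}

scene = {"legs": 4, "barks": True, "fur": "brown"}
if GROUNDED_SYMBOLS["dog"](scene):
    print("'dog' now applies to something outside the symbol system")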

This is similar to what Searle calls "The Combination Reply" -- that
a complete robotic system with sensory perceptions and motor control
(and possibly based upon neural networks) could be said to have
"understanding".

Searle admits (p. 296): "I entirely agree that in such a case we
would find it rational and indeed irresistible to accept the
hypothesis that the robot had intentionality, as long as we knew
nothing more about it."

But then, he goes off and says that since we know how the robot works,
we can't ascribe "intentionality" to it.  He says we *can* ascribe
"intentionality" to animals because (1) We don't understand how they
work and (2) They are made out of the same stuff as humans.  This is
almost too absurd to contemplate.  (1) is equivalent to arguing that
since primitive man couldn't explain the weather without referring to
magic, storms must be the result of sorcery.  (2) is nothing more than
a form of vitalism, which might be understandable if Searle were a mystic,
but is all the more baffling since he states (p. 300) the materialist
position that humans are, in fact, machines that think.

Searle goes so far as to state "Whatever else 'intentionality' is, it
is a biological phenomenon and it is as likely to be causally
dependent on the specific biochemistry of its origins as lactation,
photosynthesis, or any other biological phenomena."  One can only
wonder what would happen if it were discovered that some humans depend
more heavily on some neurotransmitters than others.  Who would Searle
consider non-intentional: the people using the non-standard
neurotransmitters or whoever was using neurotransmitters that were
different from his own?

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/21/89)

The careful reader will find an uncanny resemblance between the logic
underlying the following exchange and Lewis Carroll on Achilles and
the Tortoise (in which Carroll showed that you can lead someone to logical
water, but there's no way to make him drink it). Read on:

engelson@cs.yale.edu (Sean Engelson) of Computer Science,
Yale University, New Haven, CT 06520-2158 asks:

" What is your criterion for determining which is which [understanding
" or not understanding Chinese]?

As I said before, you need no definitions, no criteria. You only need
to be able to tell the difference -- in your own, subjective,
first-person case -- between when you understand a language (e.g.
English) and when you do not (e.g., Chinese). Can you do that? Now
please assume that Searle can do the same, and that he says he does NOT
understand Chinese. There is no reason whatever (apart from the
preconceptions that Searle's Argument was formulated to invalidate) (a)
not to believe him or (b) to believe that there is "someone/something"
else in the Chinese Room that IS understanding Chinese in the same
sense that you or I or Searle understand English. To pick (b) merely on
the basis of the preconceptions that are the very ones under criticism
here is simply CIRCULAR. -- Now none of this is new; one would have
thought that it would be clearly understood (sic) from my prior
posting. Read on.

" And I can equally well POINT to Searle running his rules for Chinese,
" and say to (him + rules) in Chinese, "Look, you understand Chinese,
" don't you?" and I'd expect to get back the answer (in Chinese) "Yes".

In saying you could point to it, I was clearly speaking about the
subjective phenomenon (i.e., whether YOU YOURSELF understand English or
Chinese), which is primary and not open to doubt, rather than to its
external manifestations (i.e., whether SOMEONE ELSE does): The evidential
status of those external manifestations is precisely what's on trial
here; one can't win this case by simply declaring them "judge and jury"
instead! That's not a logical supporting argument; that's just
circularity.

" for all external intents and purposes, (Searle + rules) understands
" Chinese... [whereas] "plain" Searle does not understand Chinese,
" (Searle + rules) does not. Why not? What's the difference?

The difference is that the "external" criteria have not been shown to be
valid, and hence there is simply no justification for taking them to signal
the presence of understanding at all. To merely assume that they do is
not an argument; it's just circularity again.

For all external purposes, we have a (hypothetical, perhaps even
impossible) situation in which a guy (imagine it's you) is running
around manipulating symbols and saying he can't speak Chinese and has
no idea what the symbols mean; meanwhile, ex hypothesi, if a Chinese
person reads the symbols, they say "I understand Chinese... etc." Even
if we accept the unlikely hypothesis that this is possible (and could
go on for a lifetime, with the symbols as consistently lifelike and
convincing as a real-life Chinese pen-pal), there's still no one around
about whom we could say, "Ya, well if he says he understands, I'm ready
to believe he understands, just as I am about anyone else who says he
understands." The only one around is you, and you say (don't you?) that
you don't understand. Perhaps we should ask the hypothetical Chinese
alter ego to say where he is, and where he stands on the matter...

Part of the problem is of course with the premise itself (i.e.,
supposing that we could do all this with just symbols), which may be
about as realistic as supposing that we could trisect an angle with
just compass and straight-edge. All that the premise seems to do is to
spuriously mobilize our instincts to defend the personhood of our
unseen pen-pals. But recalling that there's no way we can be sure about
our pen-pals under such conditions either, and that the SOLE case of
understanding we can be sure about is still our very own (i.e., the
"other minds problem"), ought to be a good antidote for mistaking
external signs for the real thing even under such counterfactual
conditions.

" In other words, you are DEFINING understanding to be that which Searle
" has with respect to English, and not that which (Searle + rules) has
" with respect to Chinese. OK, given that distinction, tell me either
" how I can distinguish between the two in a Turing-test fashion, or
" what it is about Searle that allows him to understand English which
" (Searle + rules) does not have.  Otherwise, as I've said, I'll grant
" you your point, and then say that this whole discussion is pointless,
" as your concept of understanding is "That which people do", which is
" useless. 

Have we made any progress here? I think not. I keep saying I'm not
defining but pointing to a subjective experience that all people have
and Engelson keeps talking about fanciful things that "people plus
rules" have. Now he says that all this logical, methodological and
empirical discussion, which was originally intended to assess the
evidential status of the (teletype) version of the Turing test is now
answerable to that test A PRIORI! That's like saying: "Well if God
didn't create the earth, then tell me how he created Darwinian
Evolution?" Preconceptions manage to survive without ever becoming
negotiable!

The only other possibility Engelson seems ready to imagine is a complete
alternative causal/functional explanation of understanding that distinguishes
Searle from a mere symbol-manipulator; but I've already said that we're
far from such an account, nor do we need one for present purposes.
Logically speaking, you just have to show that a theory is internally
inconsistent or inconsistent with the data in order to show it's wrong
(although Kuhn will of course remind you that that's not enough to make
people give it up). You don't have to come up with the right theory.
(If you want a better candidate in this case, though, try grounded
hybrid robotic systems, as I've suggested.) Searle's denial that he
understands Chinese (or your own denial, if you were in his place and
had not yet been at Yale-CS too long to be able to call a spade a spade)
seems like a big enough inconsistency to do in the purely symbolic
theory.

My "concept of understanding" is no different from what yours was
before you bought into a fantasy that "a person plus rules" could
understand even if the person couldn't. And if you want an idea of just
how pointless a discussion is when logical arguments are unavailing, read
Lewis Carroll on Achilles and the Tortoise. But to go on like this is
more like Schulz on Charlie Brown and the football...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

yamauchi@cs.rochester.edu (Brian Yamauchi) (02/21/89)

In article <Feb.20.20.43.21.1989.15687@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>[Paradoxically, my own work suggests that even cognitive psychologists
>should not worry too much about capturing understanding: I have given
>reasons -- empirical, methodological and logical -- for adopting
>"methodological epiphenomenalism" and the "Total Turing Test (robotic
>version)" as constraints on cognitive modeling. However, these same
>reasons also go strongly against symbolic modeling in favor of hybrid
>modeling, grounding symbolic representations bottom-up in nonsymbolic
>(analog and categorical) representations.]
>Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu

I'm trying to figure out whether you and I are saying exactly the same
thing, but using completely different languages.

I'm saying that I agree with Hans Moravec and Rodney Brooks that in
order to build intelligence, we will need to build complete robotic
systems including both sensory input and motor control.

Is this anything like "methodological epiphenomenalism"?

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

sher@sunybcs.uucp (David Sher) (02/21/89)

In article <Feb.20.20.43.21.1989.15687@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes (a bunch of stuff I agree with and then):
>
> ...
>
>There is no advantage to worrying about understanding if all you are
>interested in doing is making "useful tools" -- which is no doubt all
>that most of AI is interested in. One wonders, though, why a
>discipline with that motivation tries to push so hard on the repeatedly
>discredited "Systems Reply" to Searle, insisting that "The System" DOES
>understand, when the real goal is as superficial as this. Perhaps
>there is a confusion here between tool-making and mind-modeling.
>
> [ and more stuff that seems correct ]

I'd like to hazard an answer to this question.  The reason the AI 
establishment tries to answer this question is there is a strong implication
that Searle's argument indicates that symbolic AI approaches will always
lack some performance capability.  In fact what he seems to be arguing is
that a symbolic AI system can have any desired capability 
but still lack "understanding".  If Searle instead argued that
AI systems will never possess a soul, the argument would not be so strident,
yet the argument is identical (at least for the definition used in Jewish
theology).  But the word "understanding" is almost always associated with
some performance criterion; thus Searle's argument is unassailable in
denotation (at least by the likes of me) but has incorrect connotations.

I probably blew it, being far from an expert in rhetoric, but this seems
to be the nub of the problem.  Does anyone believe that they can build a
machine with a soul?  It is just as easy to build in Searle's "understanding."

-David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/21/89)

marty1@houdi.ATT.COM (M.B.BRILLIANT) of AT&T BL Holmdel NJ USA asks:

" Did Searle really suppose that he could speak passable Chinese if only
" he had a rulebook?

Searle inherited this premise from "Strong AI." His Argument only concerned
what FOLLOWS from it. I'm sure Searle would be perfectly willing to
doubt the premise (so would I), but that's beside the point.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/21/89)

yamauchi@cs.rochester.edu (Brian Yamauchi) of
U of Rochester, CS Dept, Rochester, NY wrote:

" I agree with Hans Moravec and Rodney Brooks that in order to build
" intelligence, we will need to build complete robotic systems including
" both sensory input and motor control. Is this anything like
" "methodological epiphenomenalism"?

No, but it sounds like a step in the direction of the Total Turing Test
(TTT) rather than just the linguistic TT. It also sounds like a step
toward a grounded symbolic/nonsymbolic system, but it all depends on
the grounding scheme. Just hooking up an autonomous symbol-cruncher
module to autonomous transducer and effector modules won't do it; the
functional dependency of the symbols on the nonsymbolic representations
must be deeper and more intimate than that.
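
To make this concrete, here is a minimal Python sketch -- purely
illustrative, with the function names, the light-sensor example and the
thresholds all my own assumptions rather than anything proposed in this
thread. In version (a) the symbol module only ever trades arbitrary
tokens handed over by a separate transducer module; in version (b) the
symbols just ARE categories carved bottom-up out of the analog reading,
so the symbolic level cannot even be specified without the nonsymbolic
one.

# (a) Bolted-on: the symbol-cruncher sees only arbitrary tokens; swapping
#     the transducer for a random token generator would change nothing here.
def bolted_on_module(token):
    rules = {"T1": "T2", "T2": "T3"}      # pure token-to-token rules
    return rules.get(token, "T0")

# (b) Grounded: each symbol is a category over the analog signal, so the
#     symbolic repertoire depends on the nonsymbolic (analog/categorical) layer.
def categorize(reading):                  # 'reading' is a raw sensor value
    if reading < 0.3:
        return "dark"
    if reading < 0.7:
        return "dim"
    return "bright"

def grounded_module(reading):
    action = {"dark": "lamp on", "dim": "wait", "bright": "lamp off"}
    return action[categorize(reading)]

print(bolted_on_module("T1"))   # -> 'T2' (the token means nothing to the module)
print(grounded_module(0.12))    # -> 'lamp on'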

("Methodological Epiphenomenalism" is just a theoretical strategy that
recognizes that subjective phenomena cannot have an independent causal
role in a functional model and hence makes no direct attempt to
"capture" subjective phenomenology, just total performance capacity (TTT),
accepting that if mental processes are involved, they somehow
piggy-back on the functions generating the TTT capacity, and that there
is no way to confirm their presence directly except by BEING the robot
in question.)
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/21/89)

vangelde@cisunx.UUCP (Timothy J Van) of Univ. of Pittsburgh, Comp & Info Sys
wrote:

" [2] is just the Physical Symbol System Hypothesis (Newell and Simon),
" otherwise known as the GOFAI hypothesis (Haugeland). Is this really just
" a bold [sic] claim that nobody would take seriously if they had thought deeply
" about the issue? Is Harnad saying that Newell and Simon, Pylsyhyn, 
" Haugeland, Fodor etc have not thought deeply about the issue?

Most of the individuals you mention are deep thinkers and have thought
deeply about the issue. All of them are quite aware of the weaknesses
of the hypothesis. I doubt that any of them would endorse the kinds of
bald [sic] claims that many advocates of the "Systems Reply" to Searle
have made.

In my paper (JETAI 1 (1989) p. 23, fn. 24) I give what I think is close
to an exhaustive list of the features that made "symbolic
functionalism" (as I call it) look good for a while. The rest is
devoted to showing why it was not good enough.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

bwk@mbunix.mitre.org (Barry W. Kort) (02/21/89)

In article <16027@cisunx.UUCP> vangelde@unix.cis.pittsburgh.edu
(Timothy J. van Gelder) questions the depth of Stevan Harnad's
thought processes:

 > If, by contrast, Harnad really has thought deeply about the issue,
 > he surely belongs in the ranks of Turing, von Neumann, Wittgenstein etc.

Having read some of Stevan's thoughts, I think it not unlikely that
his name would be mentioned in the same sentence as Alan's, Johnny's,
or Ludwig's.

--Barry Kort

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/21/89)

In article <764@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten) writes:
>In article <Feb.19.14.23.34.1989.8773@elbereth.rutgers.edu>,
>harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>>
>>The ONLY one who can know for sure that Searle has a mind is Searle himself.
>>And the same is true of your mind: YOU know it (don't you? don't you?).
>
>You say you're *sure* that you have a mind, you *know* it.  How can you
>be *sure* if you only *know* it ?  I say you only had a compile-time flag:
>
>program Stevan_Harnad(input, output);
>
>const I_HAVE_A_MIND = true;    { The main point }
>
>var alive  : boolean;
>    symbol : SymbolType;
>    result : ResultType;
>
>begin
>  alive := true;
>  while alive do
>    begin
>      symbol := Read_Symbol;
>      result := Crunch_Symbol(symbol);
>      if result = dead
>        then alive := false
>        else Output_Result(result);
>    end;
>end.
>
>I want you to show me how you can prove to yourself that you have a mind.
>Until then I must assume that you are (only) a symbol cruncher, and so
>must you.
>
I think Frans has done an excellent job of a symbolic reformulation of the
point I originally wished to raise.  An argument which is based on assertions
of what it "obvious" to introspection is no argument at all, no matter how
many words Searle and Harnad decide to invest in it.  (Incidentally, I believe
it was Harry Truman who coined a phrase to describe an argument which is
supported by nothing more than an over-abundance of verbiage;  he called
it "The Big Lie.")

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (02/22/89)

From article <Feb.20.20.51.31.1989.15876@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
" ...
" But all of that is irrelevant anyway, because in the Chinese Room there
" is first-person evidence available that there's NO Chinese
" understanding going on in there -- ...

I can't agree to that, except as a terminological point.  That is,
if the program is to characterize the way 'understand' is ordinarily
used, I have a limited sympathy with the argument.  When we
know the mechanism behind the behavior, we don't usually speak
of 'understanding'.  But even as mere linguistics, it's second
rate, since when one chooses not to think or talk in terms of
mechanism, 'understand' is still often appropriate.  And we do
develop new usages in the course of a conversation, as here when
some come to be willing to attribute understanding to the
Chinese room.

Philosophers doing second-rate linguistics can be trying.

		Greg, lee@uhccux.uhcc.hawaii.edu

engelson@cs.yale.edu (Sean Engelson) (02/22/89)

In article <Feb.20.21.17.37.1989.16495@elbereth.rutgers.edu>, harnad@elbereth (Stevan Harnad) writes:
>
>The careful reader will find an uncanny resemblance between the logic
>underlying the following exchange and Lewis Carroll on Achilles and
>the Tortoise (in which Carroll showed that you can lead someone to logical
>water, but there's no way to make him drink it).

But I suspect that Harnad has the roles reversed.  In any case,
>Read on:

>" What is your criterion for determining which is which [understanding
>" or not understanding Chinese]?
>
>As I said before, you need no definitions, no criteria. You only need
>to be able to tell the difference -- in your own, subjective,
>first-person case -- between when you understand a language (e.g.
>English) and when you do not (e.g., Chinese). Can you do that? Now
>please assume that Searle can do the same, and that he says he does NOT
>understand Chinese.

However, the _system_ of (Searle + rules) says that it does understand
Chinese.  Is there some hidden reason (dare I say "criterion"?) to say
that Searle knows what he's talking about, while (Searle + rules) does
not? 

>There is no reason whatever (apart from the
>preconceptions that Searle's Argument was formulated to invalidate) (a)
>not to believe him or (b) to believe that there is "someone/something"
>else in the Chinese Room that IS understanding Chinese in the same
>sense that you or I or Searle understand English. To pick (b) merely on
>the basis of the preconceptions that are the very ones under criticism
>here is simply CIRCULAR. -- Now none of this is new; one would have
>thought that it would be clearly understood (sic) from my prior
>posting.

In other words, you are a priori invalidating the systems approach.
By this I mean the application of such terms as "understanding" et al.
to the _entire_ entity under discussion, not merely the obvious,
physical one.  You seem to be denying the existence of (Searle +
rules).  Why?  I don't see that the `paradox' inevitably leads one to
deny this system's existence; in fact it seems that your arguments take
its NON-existence for granted.

>Read on.
>
>" And I can equally well POINT to Searle running his rules for Chinese,
>" and say to (him + rules) in Chinese, "Look, you understand Chinese,
>" don't you?" and I'd expect to get back the answer (in Chinese) "Yes".
>
>In saying you could point to it, I was clearly speaking about the
>subjective phenomenon (i.e., whether YOU YOURSELF understand English or
>Chinese), which is primary and not open to doubt, rather than to its
>external manifestations (i.e., whether SOMEONE ELSE does): The evidential
>status of those external manifestations is precisely what's on trial
>here; one can't win this case by simply declaring them "judge and jury"
>instead! That's not a logical supporting argument; that's just
>circularity.

By this argument Searle _himself_ does not understand English, since
all I have is his word, and since I cannot declare anyone but myself
"judge and jury", I _must_ disbelieve him.  Can you say "implicit
solution to the Other Minds Problem"?  I knew you could!

>For all external purposes, we have a (hypothetical, perhaps even
>impossible) situation in which a guy (imagine it's you) is running
>around manipulating symbols and saying he can't speak Chinese and has
>no idea what the symbols mean; meanwhile, ex hypothesi, if a Chinese
>person reads the symbols, they say "I understand Chinese... etc." Even
>if we accept the unlikely hypothesis that this is possible (and could
>go on for a lifetime, with the symbols as consistently lifelike and
>convincing as a real-life Chinese pen-pal), there's still no one around
>about whom we could say, "Ya, well if he says he understands, I'm ready
>to believe he understands, just I am about anyone else who says he
>understands." The only one around is you, and you say (don't you?) that
>you don't understand. Perhaps we should ask the hypothetical Chinese
>alter ego to say where he is, and where he stands on the matter...

Why do you assume that an intelligence must be 'person-like', in
having a simple body which contains its underlying hardware and
software?  You seem to be making the hidden assumption that any system
which is not `embodied' cannot understand.  How do you justify this?

>Part of the problem is of course with the premise itself (i.e.,
>supposing that we could do all this with just symbols), which may be
>about as realistic as supposing that we could trisect an angle with
>just compass and straight-edge. All that the premise seems to do is to
>spuriously mobilize our instincts to defend the personhood of our
>unseen pen-pals. But recalling that there's no way we can be sure about
>our pen-pals under such conditions either, and that the SOLE case of
>understanding we can be sure about is still our very own (i.e., the
>"other minds problem"), ought to be a good antidote for mistaking
>external signs for the real thing even under such counterfactual
>conditions.

Well, then, how does non-symbolism solve the other minds problem?  How
do I know that someone else understands through non-symbolic means,
when symbolic means do not suffice?  If you argued that the other minds
problem was insoluble, that would be fine, but you seem to be making
the rather strong claim that it is solvable, given the proper
non-symbolic representation.  I see no substantiation of this claim;
is there any?

>Have we made any progress here? I think not. I keep saying I'm not
>defining but pointing to a subjective experience that all people have

How, please tell me, HOW can you point to a _subjective_ experience
that someone other than yourself is experiencing?

>and Engelson keeps talking about fanciful things that "people plus
>rules" have. Now he says that all this logical, methodological and
>empirical discussion, which was originally intended to assess the
>evidential status of the (teletype) version of the Turing test is now
>answerable to that test A PRIORI!

Rather, I am arguing that, failing an answer to the other minds
problem, this gedankenexperiment tells us nothing.  A priori, there is
no reason to distinguish between the evidence given us by Searle or by
(Searle + rules), and the introduction of this nebulous subjective
concept of understanding doesn't help much.

>That's like saying: "Well if God
>didn't create the earth, then tell me how he created Darwinian
>Evolution?" Preconceptions manage to survive without ever becoming
>negotiable!

Rather more like saying, "How does Darwinian evolution _necessarily_
provide evidence against the existence of god?"

>The only other possibility Engelson seems ready to imagine is a complete
>alternative causal/functional explanation of understanding that distinguishes
>Searle from a mere symbol-manipulator; but I've already said that we're
>far from such an account, nor do we need one for present purposes.

I would be satisfied with a descriptive account.  How can I tell if
someone or something understands Chinese or not?  I'd like to know on
what phenomenological basis you say that Searle understands while
(Searle + rules) does not.  I don't want a theory, just a criterion
for reproducibility, so that I too may see the distinction.

>Logically speaking, you just have to show that a theory is internally
>inconsistent or inconsistent with the data in order to show it's wrong
>(although Kuhn will of course remind you that that's not enough to make
>people give it up). You don't have to come up with the right theory.

But you do need some clear method of evaluating your data!  From what
I can tell, the data is identical in both cases, the difference being
in the implementation.  All I want is an evaluation criterion.  (Is
this getting repetitive?  See the note at the top.)

>seems like a big enough inconsistency to do in the purely symbolic
>theory.

Suppose that with the proper adjustment of an EEG machine, I was able
to get Morse code (or some other linguistic phenomenon) out of my
brain that said, in effect, "I do not understand English".  Would you
then say that _I_ do not understand English, ignoring my vehement
replies to the contrary, and my demonstrated competence with the
language?  Well, Searle's denial of understanding is the same as my
brain waves.  Irrelevant.

>My "concept of understanding" is no different from what yours was
>before you bought into a fantasy that "a person plus rules" could
>understand even if the person couldn't.

I.e. you assume a priori the result that a person + rules cannot
understand.  The hidden assumption is revealed.

----------------------------------------------------------------------
Sean Philip Engelson, Gradual Student	Who is he that desires life,
Yale Department of Computer Science	Wishing many happy days?
Box 2158 Yale Station			Curb your tongue from evil,
New Haven, CT 06520			And your lips from speaking
(203) 432-1239				   falsehood.
----------------------------------------------------------------------
I know not with what weapons World War III will be fought, but World
War IV will be fought with sticks and stones.
                -- Albert Einstein

arm@ihlpb.ATT.COM (Macalalad) (02/22/89)

In article <Feb.20.21.17.37.1989.16495@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
[a long, condescending diatribe against Sean Engelson]

Stevan, I've tangled with you before, and I'll probably regret
doing it again, but there are a few issues that I want to address,
and I'd love to hear your response.

1. The distinction between Searle and (Searle + rules)

As I understand from your previous postings, you argue that 
(Searle + rules) = Searle.  The systems' argument, as I understand
it, argues that (Searle + rules) > Searle.  I, of course, tend to
agree with the systems' argument, which seems more reasonable and
intuitive to me.  After all, (Searle + rules) is fluent enough to
converse with a native Chinese speaker, whereas Searle can't even
begin to speak Chinese.  Even further, I would guess that if
Searle internalized those rules, he _would_ be able to speak
Chinese!

The point is that this distinction is far from being trivial or just
a matter of preconceptions, and I think you need a stronger argument
than appealing to the common sense of the uneducated (or at least,
the pre-Yalie), making vague analogies to Darwin and evolution,
and hurling cheap insults at Sean Engelson.

As far as I can see, the way this problem is posed acknowledges the
difference between (Searle + rules) and Searle.  After all, this
problem isn't interesting at all if we assume that (Searle + rules) =
Searle, since we know from the outset that Searle doesn't know
Chinese.  Central to Searle's argument is the collapsing of this
distinction, which I think is fair game for criticism.  No circular
reasoning, nothing up my sleeves.

2.  Determining the understanding of (Searle + rules)

Now for the sake of argument, let's assume that there is a
distinction between Searle and (Searle + rules).  If we all
acknowledge the other minds' problem, we can safely agree that
the only entity able to decide if (Searle + rules) really
understands Chinese is (Searle + rules).  Not you or me or any
outside observers or even Searle himself.  Only (Searle + rules).

The issue I now want to take up is your justification of the
Total Turing Test.  As I understand it, you argue that it is
useful to assume that certain entities, specifically humans,
are intelligent and can interact with you in intelligent ways
that non-intelligent entities can't.  If a robot (which I guess
would be (metal + rules), but that's a whole other kettle of fish)
can interact with you in such a way that you could not guess
that it was a robot, then it must be intelligent, too.  Of
course, I can argue that this is a little too anthropocentric,
but the same argument can be made against the Turing Test, as well.

What I want to explore is the usefulness of the Total Turing Test.
I could argue that it would be just as useful to characterize
a system which could converse in a natural language as "intelligent"
and capable of "understanding" what I was saying, regardless of
whether it was right in front of me tap-dancing, or talking with
me via a computer terminal.  Remember, Searle's arguments don't
really apply here, since this is a question of pragmatics, and not
a question of whether there really is any understanding taking place.

Of course, if you'd rather offer an objective definition of
intelligence and understanding, please feel free....

3.  Conduct on the net

Now I understand that you are an important person with important
things to say, but that does not give you license to insult anyone
else, especially on such a public forum as the net.  We don't all
have the right answers, and often we don't even ask the right
questions.  I see the net ideally as stimulating discussion, not
provoking mud slinging.  If I said some things that were uncalled
for above, I apologize.  I think that a few other apologies are
due.

'Nuff said.

-Alex

mike@arizona.edu (Mike Coffin) (02/22/89)

From article <Feb.19.18.25.26.1989.15723@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
> Is anyone reading or understanding these postings? Or thinking about
> what this is all about? As I've indicated repeatedly, this is NOT a
> definitional issue! All I have to do is POINT to positive and negative
> instances!  [...]

Oh fiddle.  Science deals with the observable.  That is the whole
point of the Turing test --- if a box displays intelligence, then it
is intelligent.  If it seems to understand, it understands. 
There is no more point in differentiating between "understands" and
"doesn't understand but seems to" than to differentiate between
"orbiting because of gravity" and "being pushed by an undetectable
angel that follows an inverse square law."

If the box+rules seems to understand, then it understands.  Pulling it
apart and saying "this piece doesn't understand", "neither does this
one", ... completely irrelevant.  You might as well dissect a brain
--- as you pull out each neuron you say "hmmmm... this clearly doesn't
understand --- it is much too simple."  Why is it so hard for some
people to accept the fact that a system can have properties that none
of its components have?
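
Coffin's point has an uncontroversial analogue that is easy to write
down. The Python sketch below is my own toy illustration (nothing in it
comes from Coffin's posting): no single linear threshold unit can
compute XOR, since XOR is not linearly separable, yet a small system of
three such units can -- a property of the wiring that none of the
components has.

def unit(w1, w2, bias):
    # A single threshold "neuron": fires (1) iff w1*x1 + w2*x2 + bias > 0.
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

or_gate  = unit(1, 1, -0.5)    # fires if either input is on
and_gate = unit(1, 1, -1.5)    # fires only if both inputs are on
not_both = unit(1, -1, -0.5)   # fires if the first input fires and the second doesn't

def xor(x1, x2):
    # No one unit above computes XOR; the composed system does.
    return not_both(or_gate(x1, x2), and_gate(x1, x2))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
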
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

bwk@mbunix.mitre.org (Barry W. Kort) (02/22/89)

In article <4307@cs.Buffalo.EDU> sher@wolf.UUCP (David Sher) writes:

 > Does anyone believe that they can build a machine with a soul?  

Readers who are intrigued by this question may enjoy reading the
two short pieces by Terrel Miedaner in _The Mind's I_ (Hofstadter
and Dennett, 1981).

--Barry Kort

"Artificial Sentient Beings by the End of the Millenium!"

ray@bcsaic.UUCP (Ray Allis) (02/22/89)

>From: sher@sunybcs.uucp (David Sher)
>Subject: Re: Question on Chinese Room Argument
>
>Now that we've posted megawords on "understanding" and whether a machine
>can or can not possess it, can I ask: what is the advantage of a 
>machine with "understanding"?  Assume that HAL doesn't understand 
>anything.  He merely manipulates symbols so that he creates an illusion
>of understanding in his correspondents.  In what way does that inhibit
>HAL as a useful tool.  What could an "understanding" machine that a 
>merely intelligent (the symbol manipulator that merely gets the right answer)
>machine could not?  Unless someone can show me an advantage to it I'm not
>going to waste much time designing it into my programs.
>
>-David Sher
>ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
>UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher
>

Of course if the symbol manipulator "gets the right answer", the answer to
your question is "There IS no difference!"  I am one of those who doubt,
however, that it is possible for either a person or a machine to "manipulate[s]
symbols so that he creates an illusion of understanding in his correspondents".
I don't think the Chinese Room could fool a perceptive human for very long.

"Understanding" language is (at base) the evocation of experience in the
receiving organism.  Translation between languages requires
language1-to-experience followed by experience-to-language2.  You can't go
directly from English symbols to Chinese symbols.  Computers can't translate
languages because they can't experience.  (Yet.)

Searle's Chinese Room is doing transliteration, and as pointed out by an
earlier poster, rec.humor.funny just had several pages of hilarious examples of
the results of that, e.g. a sign in a furrier's shop, "Here ladies can have
coats made from their own skins".  It might not always be funny.
Understanding is more than language translation.  Suppose I instruct a computer
to "Eliminate crime in Detroit".  It returns the next day with "Done!  And it
only took one 20 megaton device, Boss."

*Lack* of understanding is THE major flaw in 30 years of "Physical Symbol System
Hypothesis" AI.

vangelde@cisunx.UUCP (Timothy J Van) (02/22/89)

In article <Feb.21.00.20.44.1989.26600@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
>Most of the individuals you mention are deep thinkers and have thought
>deeply about the issue. All of them are quite aware of the weaknesses
>of the hypothesis. I doubt that any of them would endorse the kinds of
>bald [sic] claims that many advocates of the "Systems Reply" to Searle
>have made.
>

Does "All of them are quite aware of the weaknesses" mean
(a) they don't really endorse the view; 
or (b) they are aware that there is some *apparently* contrary evidence.

Surely (a) is false and (b) is true.  These people really do think that
we are essentially symbol manipulators, though they also think that some
people believe otherwise for bad reasons, and also that the view *could*
be wrong - after all, it is an empirical question. 

Your reply gives the misleading impression that many of the most ardent
advocates of the physical symbol system hypothesis
think that the view has "weaknesses, i.e. 
is not really true.  In fact, in spite of practical difficulties in the
way of demonstrating that it is true, they all wholeheartedly subscribe 
to it - and this despite having thought deeply about the issue.

And it's a good thing they subscribe to the view, too - otherwise we
wouldn't have someone to disagree with (at least, someone who's *worth*
disagreeing with to disagree with).

Tim van Gelder
c/o Dept of Philosophy
University of Pittsburgh
vangelde@unix.cis.pittsburgh.edu

rjc@aipna.ed.ac.uk (Richard Caley) (02/22/89)

In article <Feb.20.21.17.37.1989.16495@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

>There is no reason whatever (apart from the
>preconceptions that Searle's Argument was formulated to invalidate) (a)
>not to believe him or (b) to believe that there is "someone/something"
>else in the Chinese Room that IS understanding Chinese in the same
>sense that you or I or Searle understand English.

(a) is fine.

(b) is, surely, a straw man. It is the homunculus argument again. Nobody
is claiming there is "something else" in the room which understands
Chinese. There are two cases; either

        1) If something understands Chinese then some sub-part of it
           understands Chinese.

        2) not (1)

Now the first is an infinite regress. If (1) is the case then nothing
(and no one) can understand Chinese. So it must be possible for something
to "understand Chinese" (in our intuitive sense) without any sub-part
of it understanding. Hence (b) may not be the case, even allowing for a
Chinese-understanding room. From this we can say that there are two
extra cases in your above quoted argument:

        c) There can be no such room.

        d) The room can understand Chinese without any sub-part (Searle,
           pencil, paper, book of rules) understanding.

(c) is what Searle is trying to prove; to do this he must disprove all
other cases. He doesn't, as far as I can see, eliminate (d).

-- 
	rjc@uk.ac.ed.aipna

	" Only love denies the second law of thermodynamics "
		- Jerry Cornelius

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/22/89)

lee@uhccux.uhcc.hawaii.edu (Greg Lee) of University of Hawaii wrote:


" I can't agree [that "in the Chinese Room there is first-person
" evidence available that there's NO Chinese understanding going on in
" there"] except as a terminological point... to characterize the way
" 'understand' is ordinarily used... When we know the mechanism behind
" the behavior, we don't usually speak of 'understanding'... when one
" chooses not to think or talk in terms of mechanism, 'understand' is
" still often appropriate. And we do develop new usages... as here when
" some come to be willing to attribute understanding to the Chinese
" room...  Philosophers doing second-rate linguistics can be trying.

There are two senses of "understand," a subjective and an objective
one. The first (1) is what I mean when I say "I understand English" and
the second (2) is what I mean when I say "He understands English." The
first is primary. What I say and mean by "I understand" is based on
direct, incorrigible, first-person evidence. When I say "HE
understands," I am merely INFERRING that what's true of him is the same
thing that's true of me when I understand. I can be WRONG (very wrong)
about (2) but not about (1). It is (1) that is at issue in Searle's
Argument, though people keep conflating it with (2).

That's all there is to it. It's not a matter for linguists (any more
than what I mean by "I am in pain" vs. "He is in pain" is). The only
ones who worry about mechanisms here are cognitive modelers.
And I am not a philosopher.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (02/22/89)

From article <Feb.22.01.03.30.1989.19132@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
" ...
" There are two senses of "understand," a subjective and an objective
" one...

No, there aren't.  If there were, the one could not serve as
antecedent for the other in identity-of-sense anaphora, as in
'He understands, and I do, too'.

" I can be WRONG (very wrong) about (2) [objective] but not about
" (1) [subjective]. ...

If this were so, such a construction as 'I thought I understood,
but I was wrong' would be self-contradictory.

" And I am not a philosopher.

Pardon me if I implied that only philosophers do second-rate
linguistics.

		Greg, lee@uhccux.uhcc.hawaii.edu

marty@homxc.UUCP (M.B.BRILLIANT) (02/22/89)

From article <45126@linus.UUCP>, by bwk@mbunix.mitre.org (Barry W. Kort):
> ....
> In _Surely You're Joking, Mr. Feynman_, Richard Feynman recounts
> an attempt to teach physics in Brazil.  The students had become
> very adept at formal symbol manipulation.  They could regurgitate
> the definitions and formulas, but they had no idea that the symbols
> actually referred to anything in the outside world!

I think this is a very significant observation.  Feynman succeeded in
determining that his students were not doing physics.  Supposedly, they
were not just trying to simulate an understanding of physics; they
sincerely believed they understood physics.  They fooled themselves,
but they did not fool Feynman.

This seems to show that the Chinese Room Argument fails in its premise,
not in its logic.  You cannot persuade a human observer that you are
his or her equal if you are only manipulating symbols.  If you assume
you can, you will draw false conclusions.

I think that proves something, but I'm not sure what.  I think it
proves that a system that passes the "Total Turing Test" (TTT) is not
doing "mere symbol manipulation."

I recall that the TTT is not formally defined.  It is defined
operationally, in terms of a received notion of a human observer.  So
any conclusion you draw from it is operational.  Therefore, even if it
provides an operational definition of "mere symbol manipulation," it
brings us no closer to a formal definition.  Same thing goes for
"intelligence," "understanding," etc.

Can we have a review of the question?  What are we arguing about?

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858		Home (201) 946-8147
Holmdel, NJ 07733	att!homxc!marty

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

gss@edsdrd.eds.com (Gary Schiltz) (02/23/89)

In article <45126@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
> 
> In _Surely You're Joking, Mr. Feynman_, Richard Feynman recounts
> an attempt to teach physics in Brazil.  The students had become
> very adept at formal symbol manipulation.  They could regurgitate
> the definitions and formulas, but they had no idea that the symbols
> actually referred to anything in the outside world!
> 

My own similar first hand (and somewhat embarrassing to admit) 
experience:

After I started college as an undergraduate in the mid 1970's, I 
took my first calculus course.  Coming from a small high school in 
a small town, my math skills were minimal (a year or so of algebra), 
so the whole course was very confusing.  In all the time I was in 
the course, I never did understand what calculus was all about.  
However, I did know, for example, that a derivative was "the equation 
you get when you manipulate another equation in such and such a way" 
and an integral was "the equation you get when you manipulate the 
equation in another way."

I even had a fair amount of heuristic knowledge about how to solve 
word problems.  "Hmm, that problem [on the exam] looks like the one 
we did in class.  Let's see, first you take the derivative 
of this and plug in these numbers and solve for this variable, and 
then you circle the answer (and even if the answer is wrong, at least 
I can get partial credit for showing my work, and if everyone else is 
as confused as I am and they don't score well and the exam is graded 
on a curve, maybe I can pass)."  I seemed to be able to do fairly
good mapping of one problem to another based on its surface structure.

Well, I did pass the course (now I'm ashamed that I didn't do what 
was necessary to understand what was going on, but like a lot of 17 
year olds, I just took the easiest way).  I later repeated the course 
and understood what I was doing (and made a lot better grade).

Anyway, from my gut level feeling (quite possibly useless, I admit) 
about what understanding is all about, I really feel I had no
understanding of calculus during that semester.  Just as the Brazilian
students didn't realize that symbols in physics equations actually 
referred to things in the outside world, I didn't know that the 
calculus was modelling anything.  I truly had no idea that derivatives 
had anything to do with rate of change, for example.  But, from the 
outside, it must have appeared that I had at least some understanding 
of calculus; at least I was good enough at manipulating equations to 
make the instructors think so.

This really makes me wonder whether it can be determined whether or not
any system understands, simply from external behavior.  I'm not trying
to reach any conclusions about understanding, as I've not studied nor
thought about it much.  But I thought it might be more food for thought.
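
Schiltz's calculus story is easy to mimic mechanically. The toy Python
sketch below is my own illustration (the 'ax^n' input format and the
function name are arbitrary choices): it produces correct derivatives of
simple monomials by nothing but pattern matching and symbol rewriting --
the power rule as pure syntax, with no notion anywhere that a derivative
measures a rate of change.

import re

def differentiate_monomial(term):
    # Rewrite 'ax^n' as '(a*n)x^(n-1)': the power rule applied as a string rule.
    m = re.fullmatch(r"(\d*)x\^(\d+)", term.replace(" ", ""))
    if m is None:
        raise ValueError("this toy only handles terms of the form ax^n")
    a = int(m.group(1) or "1")
    n = int(m.group(2))
    return "%dx^%d" % (a * n, n - 1)

print(differentiate_monomial("3x^2"))   # -> '6x^1', by symbol shuffling alone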

--


     /\   What cheer,  /\       | Gary Schiltz, EDS R&D, 3551 Hamlin Road |
    / o<    cheer,    <o \      | Auburn Hills, MI  48057, (313) 370-1737 |
\\/ ) /     cheer,     \ ( \//  |          gss@edsdrd.eds.com             |
   \ /      cheer!!!    \ /     |       "Have bird will watch ..."        |

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/23/89)

rjc@aipna.ed.ac.uk (Richard Caley)
of Dept. of AI, Edinburgh, UK asks:

" If the argument ["showing that it's only the symbolic approach to
" modeling the mind that's vulnerable to Searle's Argument"] could be
" truncated to a reasonable length, then I would be interested if you
" posed it. I don't see why, say, searle in a room pulling strings and
" waving springs (or doing something else equally non-symbolic ) which
" happens to produce behaviour like a chinese speaker would not be the
" basis for a precicely parallel argument. I'm not saying you are wrong,
" just that it is not obvious.

Here it is, pp. 20-21 from Harnad, S. (1989) Minds, Machines and Searle.
Journal of Experimental and Theoretical Artificial Intelligence 1: 5-25.
See especially points (7) and (8):

Searle's provocative "Chinese Room Argument" attempted to show that the
goals of "Strong AI" are unrealizable. Proponents of Strong AI are
supposed to believe that (i) the mind is a computer program, (ii) the
brain is irrelevant, and (iii) the Turing Test is decisive. Searle's
argument is that since the programmed symbol-manipulating instructions
of a computer capable of passing the Turing Test for understanding
Chinese could always be performed instead by a person who could not
understand Chinese, the computer can hardly be said to understand
Chinese. Such "simulated" understanding, Searle argues, is not the same
as real understanding, which can only be accomplished by something that
"duplicates" the "causal powers" of the brain.

The following points have been made in this paper:

(1) Simulation versus Implementation:
Searle fails to distinguish between the simulation of a mechanism,
which is only the formal testing of a theory, and the implementation of
a mechanism, which does duplicate causal powers. Searle's "simulation"
only simulates simulation rather than implementation. It can no more be
expected to understand than a simulated airplane can be expected to
fly. Nevertheless, a successful simulation must capture formally all
the relevant functional properties of a successful implementation.

(2) Theory-Testing versus Turing-Testing:
Searle's argument conflates theory-testing and Turing-Testing.
Computer simulations formally encode and test models for human
perceptuomotor and cognitive performance capacities; they are the
medium in which the empirical and theoretical work is done. The Turing
Test is an informal and open-ended test of whether or not people can
discriminate the performance of the implemented simulation from that of
a real human being. In a sense, we are Turing-Testing one another all
the time, in our everyday solutions to the "other minds" problem.

(3) The Convergence Argument:
Searle fails to take underdetermination into account. All scientific
theories are underdetermined by their data; i.e., the data are
compatible with more than one theory. But as the data domain grows, the
degrees of freedom for alternative (equiparametric) theories shrink.
This "convergence" constraint applies to AI's "toy" linguistic and
robotic models too, as they approach the capacity to pass the Total
(asymptotic) Turing Test. Toy models are not modules.

(4) Brain Modeling versus Mind Modeling:
Searle also fails to appreciate that the brain itself can be understood
only through theoretical modeling, and that the boundary between brain
performance and body performance becomes arbitrary as one converges on
an asymptotic model of total human performance capacity.

(5) The Modularity Assumption: 
Searle implicitly adopts a strong, untested "modularity" assumption to
the effect that certain functional parts of human cognitive performance
capacity (such as language) can be successfully modeled
independently of the rest (such as perceptuomotor or "robotic"
capacity). This assumption may be false for models approaching the
power and generality needed to pass the Turing Test.

(6) The Teletype Turing Test versus the Robot Turing Test: 
Foundational issues in cognitive science depend critically on the truth
or falsity of such modularity assumptions. For example, the "teletype"
(linguistic) version of the Turing Test could in principle (though not
necessarily in practice) be implemented by formal symbol-manipulation
alone (symbols in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside world
(seeing, doing AND linguistic competence).

(7) The Transducer/Effector Argument:
Prior "robot" replies to Searle have not been principled ones. They
have added on robotic requirements as an arbitrary extra constraint. A
principled "transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily nonsymbolic,
drawing on analog and analog-to-digital functions that can only be
simulated, but not implemented, symbolically.

(8) Robotics and Causality:
Searle's argument hence fails logically for the robot version of the
Turing Test, for in simulating it he would either have to USE its
transducers and effectors (in which case he would not be simulating all
of its functions) or he would have to BE its transducers and effectors,
in which case he would indeed be duplicating their causal powers (of
seeing and doing).

(9)
Symbolic Functionalism versus Robotic Functionalism:
If symbol-manipulation ("symbolic functionalism") cannot in principle
accomplish the functions of the transducer and effector surfaces, then
there is no reason why every function in between has to be symbolic
either. Nonsymbolic function may be essential to implementing minds and
may be a crucial constituent of the functional substrate of mental
states ("robotic functionalism"):  In order to work as hypothesized
(i.e., to be able to pass the Turing Test), the functionalist
"brain-in-a-vat" may have to be more than just an isolated symbolic
"understanding" module -- perhaps even hybrid analog/symbolic all the
way through, as the real brain is, with the symbols "grounded"
bottom-up in nonsymbolic representations.

(10) "Strong" versus "Weak" AI:
Finally, it is not at all clear that Searle's "Strong AI"/"Weak AI"
distinction captures all the possibilities, or is even representative
of the views of most cognitive scientists. Much of AI is in any case
concerned with making machines do intelligent things rather than with
modeling the mind.

Hence, most of Searle's argument turns out to rest on unanswered
questions about the modularity of language and the scope and limits of
the symbolic approach to modeling cognition. If the modularity
assumption turns out to be false, then a top-down symbol-manipulative
approach to explaining the mind may be completely misguided because its
symbols (and their interpretations) remain ungrounded -- not for
Searle's reasons (since Searle's argument shares the cognitive
modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the kind of
hybrid, bottom-up processing that may then turn out to be optimal, or
even essential, in between transducers and effectors). What is
undeniable is that a successful theory of cognition will have to be
computable (simulable), if not exclusively computational
(symbol-manipulative). Perhaps this is what Searle means (or ought to
mean) by "Weak AI."
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

bwk@mbunix.mitre.org (Barry W. Kort) (02/23/89)

In article <9359@megaron.arizona.edu> mike@arizona.edu (Mike Coffin) asks:

 > Why is it so hard for some people to accept the fact that a system
 > can have properties that none of its components have?

While it is certainly possible (and even desirable) for a system
to exhibit emergent properties beyond the properties of the component
parts of the system, our daily experience with politics and
bureaucracy continues to remind us that large systems are considerably
less functional than one would naively expect.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (02/23/89)

Permit me to inject another anecdote into the discussion
regarding the inadequacy of symbol manipulation.

Recall the breakthrough scene in the Helen Keller Story.
Helen's tutor has trained the recalcitrant child in
finger-signing.  Helen can manipulate the finger-sign
symbols mechanistically, but she still doesn't communicate.

Then while walking in the woods, Helen's tutor plunks the
girl's hand into a cold flowing stream and signs "w-a-t-e-r".

Suddenly Helen understands.  

Helen discovers that all those symbol sequences turn out to stand
for something.  The scene is about as moving as movies can get.

The Chinese Room is like Helen before her moment of epiphany.
There is little point in manipulating symbols mechanistically
unless one can map the symbols to non-symbolic sensory
information from the external world.  In the modern world,
terrorists and diplomats alike also manipulate symbols to
effect motor responses in the external world.  But that's
another discussion.

--Barry Kort

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (02/23/89)

From article <Feb.22.15.20.26.1989.931@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
" ...
" (5) The Modularity Assumption: 
" Searle implicitly adopts a strong, untested "modularity" assumption to
" the effect that certain functional parts of human cognitive performance
" capacity (such as language) can be be successfully modeled
" independently of the rest (such as perceptuomotor or "robotic"
" capacity). This assumption may be false for models approaching the
" power and generality needed to pass the Turing Test.

This seems to me correct, except I'm not sure we could say that the
modularity assumption for language is untested.  The construction of
(putatively) complete grammars has been attempted, and since none have
come close to correctly describing a natural language, the evidence
that's in suggests the assumption is false.

On the other hand, the proposal or conjecture found elsewhere in
Stevan's discussions that finding a way to ground the symbols will lead
us somehow to a better theoretical understanding is unlikely to be
correct.  I think.  In saying why, I'd prefer the terms 'syntactic' for
the symbol manipulation approach and 'semantic' for grounding symbols
(but without intending to imply that theories customarily called
'semantic' are properly so called).

A reasonable way to rate the prospects of an analytic approach is to ask
(and answer) the question:  what has it helped us find out?  Looking at
the score for the last few years, and sticking to fundamental
discoveries, I make it syntax: 3, semantics: 0. The discoveries are:

(1) Movement constraints (Haj Ross) -- constituents cannot occur
    "too far" from where they belong,
(2) Cross-over (Paul Postal) -- nominals cannot come on the wrong
    "side" of coreferents,
(3) One per sent (Charles Fillmore) -- when nominals are classified
    by role (agent, patient, ...) one finds at most one of each
    role represented per clause.

(Disclaimer: probably few linguists would agree with my scoring.)

My conclusion is that semantics as currently conceived has not
gotten us anywhere, and probably never will.

		Greg, lee@uhccux.uhcc.hawaii.edu

matt@nbires.nbi.com (Matthew Meighan) (02/24/89)

In article <7586@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:

> . . .  An argument which is based on assertions
> of what it "obvious" to introspection is no argument at all

Can you prove this, or is it just obvious to you?

It seems to me that the assertion that only objectively-provable
things are "true" is a totally subjective one, hence false by its
own criteria.  What evidence is there for this belief?

>(Incidentally, I believe
>it was Harry Truman who coined a phrase to describe an argument which is
>supported by nothing more than an over-abundance of verbiage;  he called
>it "The Big Lie.")

This falsely equates subjective experience with "nothing more than an
over-abundance of verbiage."  The two are not equivalent.  Subjective
experience, or perception, is certainly "more than verbiage."

Anyway, I doubt very much that Truman was referring to anything remotely 
like the Chinese Room argument when he coined this phrase.

-- 

Matt Meighan          
matt@nbires.nbi.com (nbires\!matt)

curry@hplchm.HP.COM (Bo Curry) (02/24/89)

Stevan Harnad writes:
>
>There are two senses of "understand," a subjective and an objective
>one. The first (1) is what I mean when I say "I understand English" and
>the second (2) is what I mean when I say "He understands English." The
>first is primary. What I say and mean by "I understand" is based on
>direct, incorrigible, first-person evidence. When I say "HE
>understands," I am merely INFERRING that what's true of him is the same
>thing that's true of me when I understand. I can be WRONG (very wrong)
>about (2) but not about (1). It is (1) that is at issue in Searle's
>Argument, though people keep conflating it with (2).
>
>That's all there is to it. It's not a matter for linguists (any more
>than what I mean by "I am in pain" vs. "He is in pain" is). The only
>ones who worry about mechanisms here are cognitive modelers.
>And I am not a philosopher.
>-- 
I'll have to disagree that a speaker claiming "I understand X" is incorrigible
in the same way as a speaker claiming "I am in pain".  Dennett has written
extensively, and compellingly, on this issue.  When I was a graduate
student (not in philosophy :-) I often encountered students who claimed,
with perfect sincerity, that they understood thermodynamics.  I was
in a much better position than they to judge the truth of their claims.
When I studied Latin, I often thought I understood a poem or passage
(i.e. I had puzzled some meaning out of it, which I believed corresponded
to the author's intent), and was later (embarrassingly) proved wrong.
If the phrase "I thought I understood X" has any meaning, then it
clearly must be possible to be wrong about one's own understanding.
Compare "I thought I understood the menu (but was proved wrong when the
waiter brought my order)" to "I thought the needle hurt, but I was wrong".
The first sentence is perfectly sensible, whereas the latter sounds surreal.
Pain is a much more elusive beast than understanding.
It is also possible to come up with instances when one claims *not* to
understand, yet is mistaken in that claim.  This is a bit rarer, but
seems to occur if the understander expects something subtler or deeper
than is really there.  For example, I may hear a joke, and not find it
funny at all.  I say "I don't get it".  In fact, I have considered several
possible interpretations, but rejected them on the grounds of non-humorousness.
Later, it may prove that one of my rejected interpretations was in fact
the "meaning" of the joke, so that I had really understood it, after
all.  I was misled by my (mistaken, in this case) expectation that a
joke, when understood, will be funny.

Conclusion: An objective test is the only reliable measure of the understanding
of a system.  If the system claims to understand X, but nonetheless fails
the standard test, we are justified in rejecting its claim.  There is
no "incorrigibility" associated with understanding.

All this is, of course, unnecessary to definitively refute the Chinese
room "argument".  As a previous poster pointed out, *Searle's* understanding
or lack thereof is totally irrelevant, since he is merely a *component*
of the room.  Searle's argument (however deeply thought about) reduces
to the claim that "The mechanism is understood, therefore there is no
understanding", which is absurd on the face of it.

Cheers,

Bo "Think deep, dig hard" Curry
curry%hplchm@hplabs.HP.COM

bwk@mbunix.mitre.org (Barry W. Kort) (02/24/89)

In article <125@arcturus.edsdrd.eds.com> gss@edsdrd.eds.com (Gary Schiltz)
recounts his personal experience in "doing calculus" at age 17
without really understanding what it was all about.  Gary concludes:

 > This really makes me wonder whether it can be determined whether any
 > system understands, simply from external behavior.  I'm not trying to
 > reach any conclusions about understanding, as I've not studied nor
 > thought about it much.  But, I thought it might be more food for thought.

In Feynman's anecdote about the Brazilian physics students, he easily
uncovered their lack of understanding when he asked them questions
about the real-world phenomena which the physics lessons covered.
Their blank stares revealed that they had made no connections between
everyday experiences and the subject at hand.  F = ma had nothing to
do with getting up to speed on a bicycle.  Ft = mv had nothing to do
with hitting a baseball.

--Barry Kort

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/24/89)

lee@uhccux.uhcc.hawaii.edu (Greg Lee) of University of Hawaii wrote:

" No, there aren't "two senses of "understand," a subjective and an
" objective one," [otherwise we couldn't say] 'He understands, and I do too'

As I've suggested already, this is simply not a linguistic matter. The
distinction I'm after is already there with "pain" (although we don't
have two senses of pain as we do of understanding -- the reason for
this will become clearer as we go on). Consider "I'm in pain and he is
too." Apart from the obvious fact that I don't mean he's in MY pain
(which is already a difference, and not a "linguistic" one but an
experiential and conceptual one), it makes sense to say "He SEEMS to be
in pain (but may not really be in pain)," but surely not that "I SEEM
to be in pain (but may not really be in pain)." (Please don't reply
about tissue damage, because that's not what's at issue here [I didn't
say "I seem to have tissue damage"] -- or about lobotomy, which may
very well change the experiential meaning of pain for me.) The
difference (for me) between my pain and his pain is that mine is
directly experienced (by me) and his is only inferred (by me) from his
behavior.

Now the case of "understanding" is quite similar, except that the
behavioral criteria for the inference are much more exacting -- so much
so that there I CAN say "I SEEM to understand (but may not
understand)." The reason I can say this is apparent upon a little
reflection, and provides further evidence that there is both an
objective and a subjective sense of understanding. Follow carefully:

When I say "I only SEEM to understand," I mean objective understanding,
not subjective, i.e., "I do feel a (subjective) sense of understanding but
I can't provide the behavioral evidence of objective understanding, so
I don't really understand in the objective sense." Subjective understanding,
on the other hand, is as certain as subjective pain (which happens to
be the only kind of pain -- the objective side of pain is the
tissue-damage story, and we rightly don't call that "pain"). You can't
say "I only SEEM to feel a (subjective) sense of understanding (but
I don't really understand in the subjective sense)" any
more than you can say "I only SEEM to feel pain (but not really)."

Another point: I said that subjective understanding was PRIMARY for
the issues about mind-modeling under discussion here. In the human case,
the subjective sense of understanding and the evidence for objective 
understanding tend to swing together in the vast majority of instances.
The correlation between them is not perfect, as the problem cases
already discussed -- S without O and O without S in us -- indicate, but
this is not relevant because these occasional dissociations all occur
in US, in whom the primary S is not in doubt. Symbol crunchers can't be
granted minds on the strength of our occasional mental lapses!

Perhaps if people (or objects) habitually went around emitting coherent
glossolalic discourse in foreign languages ("speaking in tongues") that
they claimed (in English) not to understand, or if they emitted nothing
but jargonaphasia that they kept feeling fervently to be full of
meaning, things might look a little different, but that's not the way
it is; S and O are quite tightly coupled, and S is clearly primary. In
fact, in a world without S, what would it even MEAN to ask whether or
not an event or a performance by a device "really" involved O ("objective
understanding")? It seems to me all you'd have would be events and
performances that could be "interpreted" by people with S as being
instances of O. But why bother? And if there were no people with S at
all, the whole problem of O seems to vanish altogether, leaving only a
world of objects, events and performances. (To a methodological
epiphenomenalist like me, it's a profound puzzle why the world ISN'T in
fact like this -- why there should be any S at all.)

The foregoing, let me repeat, was not a "linguistic" analysis. I simply
tried to remind everyone about what we all mean by pain and understanding,
and on what experiences this is based. I have not had to be hypothetical
or paradoxical here. Everyone knows the difference between the subjective
sense of understanding in ourselves and the objective evidence of it in
ourselves and others; everyone knows the difference between understanding
English and not understanding Chinese. But watch the torrent of
strained sci-fi that is again going to well up by way of quarreling with the 
obvious in subsequent postings...

" [No, it's not true that we can be] "WRONG (very wrong) about (2)
" [objective] but not about " (1) [subjective] [understanding].
" If this were so... 'I thought I understood, but I was wrong' would be
" self-contradictory.

As I said above, there are two senses of understanding, subjective and
objective. The above statement could be paraphrased: "I thought I
understood it in the objective sense; it turns out I only understood it
in the subjective sense," i.e., it only FELT AS IF I understood it.
But in Searle's room the ISSUE is whether there's any mind there
feeling understanding at all (or feeling anything, for that matter)
rather than just a body that's ACTING AS IF it understood (i.e.,
that can be interpreted -- or misinterpreted -- as understanding by
people who do have understanding).

Ceterum sentio: This is not a linguistic matter.

" Pardon me if I implied that only philosophers do second-rate linguistics.

I won't make the obvious repartee, but will just repeat that these are not
linguistic matters... [In a later posting Lee mixes up the syntax
of the (putative) symbolic "language of thought" -- whose existence and
nature is what is at issue here -- and the syntax of natural languages:
Not the same issue, Greg.]
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (02/24/89)

During the Chinese Room discussion, many have brought up the
concept that sensory experience is the bottom rung of intelligence,
i.e. cat->chased by dog next door->dog's name is Fred->first letter
is "F"->"F" looks like the following sensory experience

While we may question the validity of the above idea, I'd like to
point out that an AI system does not need "real" sensory input in
the sense of eyes, ears, etc., but can use internal environmental
models (e.g., blocks world).  This knowledge, though, needs to
somehow be entered into the computer from outside, in the form
of a verbose description or an algorithm (perhaps involving
pseudo-random numbers) describing the environment.
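
For concreteness, here is a rough sketch in Python (purely illustrative;
every name in it is invented) of such an internal model: a tiny blocks
world whose initial state comes from a seeded pseudo-random algorithm
rather than from any sensor, and which the rest of the program can query
and manipulate.

# A minimal, purely illustrative "internal environment": the program never
# sees a camera image, only a description of the world that is generated
# by a (pseudo-random) algorithm -- or could be typed in verbatim.
import random

def random_world(blocks, seed=0):
    """Generate a blocks-world state: each block rests on a block or the table."""
    rng = random.Random(seed)          # pseudo-random "environment generator"
    state = {}
    placed = ["table"]
    for b in blocks:
        state[b] = rng.choice(placed)  # what this block is resting on
        placed.append(b)
    return state

def clear(state, b):
    """A block is clear if nothing rests on it."""
    return all(support != b for support in state.values())

def move(state, b, dest):
    """Move block b onto dest if both are clear (dest may be the table)."""
    if clear(state, b) and (dest == "table" or clear(state, dest)):
        state[b] = dest
        return True
    return False

if __name__ == "__main__":
    world = random_world(["A", "B", "C"], seed=42)
    print("initial state:", world)
    print("move A to table:", move(world, "A", "table"), world)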

-Thomas Edwards

NN's on CM-2's!

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (02/24/89)

In article <3305@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <Feb.20.20.51.31.1989.15876@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
>used, I have a limited sympathy with the argument.  When we
>know the mechanism behind the behavior, we don't usually speak
>of 'understanding'.
Hence the linguistic fact in English (and French, German, Chinese, etc.? - comments
please) that no mechanical process can possess understanding.
It is a central feature of "understanding" that mechanical processes are not
involved.  Thus for computer-based reasoning, one must choose another word or run
the risk of being seen as ignorant or, more likely, disingenuous.

>And we do develop new usages in the course of a conversation, as here when
>some come to be willing to attribute understanding to the Chinese room.

Some new uses are just plain deviant or mistaken and die with the conversation.

Take the software 'toolkit' which contains
	a) unconfigured components and not tools
	b) some parts of a system, not all of them (a kit is complete).

For first-rate linguists, I presume that all new meanings are valid and do not
represent some form of verbal dyslexia on the part of anyone who uses them
uncritically?

The same is true of the new and creative meanings developed within the AI
subculture.  If a computer system has understanding, then where does it lie?

Mine's in that still small voice within - why do AI types have to disown theirs?
Why insist on being 'scientific' when it's quite clear that you can't be on these
issues?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

dave@cogsci.indiana.edu (David Chalmers) (02/25/89)

The discussion on Searle's Chinese room seems to be becoming very confused.  (I
could make some bad jokes about the 'understanding' of some of the participants,
but I'll refrain.)  I thought I'd try to clear up one of the main sources of
confusion, the word "symbol."

This word is being used to mean two different things:

(1) A "symbol" is a formal object which corresponds to some HIGH-LEVEL, semantic
concept in the real world.  Typically the concept which it corresponds to
is on the level of a _word_, say, as opposed to a microstructural level such
as that of a neuron.

versus

(2) A "symbol" is any formal object which is manipulated by a computer program.
What we take this symbol to correspond to may be as low-level or as high-level
as we like, or we may decide that the question of what the symbol corresponds
to is meaningless and unimportant.

Sense (1) is the sense in which the word "symbol" is used most of the time in
AI.  Newell, Simon, Fodor et al are all supporters of the "Symbolic Paradigm",
which essentially means that they claim that true AI could be achieved by a
program which formally manipulates such high-level symbols.  (I'll say that
by "true AI" here I mean a program which displays intelligent behaviour,
in order to forestall questions like "but is it really thinking?").

Many people these days dispute this claim.  One of the main reasons is that
denoting such high-level, complex and inherently semantic concepts by
rigidly syntactic formal objects can never capture the richness
and flexibility of those concepts.  In a sense, these formal symbols are
brittle and empty, devoid of "meaning."

This is clearly also the sense of "symbol" which people have been using when
they speak of the difficulty of understanding Portuguese or physics or
mathematics by using pure symbol manipulation rules.  Despite the fact that
with such rules one can reproduce vaguely competent behaviour, the rigidity
of such rules can, I believe, always be detected by close questioning, or
observation under new and unusual circumstances.  The fact is that high-level
concepts interact in far too rich and flexible a manner, and this richness
could never be captured by a set of rules which manipulate concepts as
chunks.

So I say: with sense (1) of "symbol", even weak AI can never be achieved.  It
will be impossible to fully reproduce intelligent behaviour.  Thus, 
with this sense of "symbol", I reject the PREMISE of Searle's argument; a
formal symbol-manipulator could never even display what _looked_ like 
competent Chinese-speaking behaviour.

Thus, of course I am with Searle here in saying that such symbol-manipulators
could never have true (subjective) understanding.  But for me it's not an 
issue, for I believe that such manipulators would never even LOOK as if they
understood.

If this was Searle's point, this would be fine.  But Searle wants to claim
more.  Contrary to what Harnad implies, Searle is not only arguing against
high-level symbol manipulators in the Newell/Simon/Fodor mould.  He wants
to say that NO computer program could ever be enough to have true
(subjective) understanding, not even an incredibly complex and subtle program
(such as a program that simulated a neural network the size of the brain.)


To do this, Searle uses the word "symbol" in sense (2), where it can denote
any formal object whatsoever that is manipulated.  The symbol can correspond
to something as low-level as a neuron, or it may correspond to something
which on the face of it has no meaning to us whatsoever.  Presumably in a 
neural-net-simulator, a symbol corresponds to a neuron or one of its
constituent parts - not a very 'semantic' object at all!

But here is Searle's trick, or to be charitable (or uncharitable?) his
mistake.  He uses the word "symbol" in the low-level sense (2), while
appealing to our intuitions about symbol-manipulators which manipulate
high-level symbols of sense (1)!  He says, (paraphrasing), "such a formal
manipulator could never capture the SEMANTICS of the world to which
the symbols correspond."  What he implies here is that the symbols
correspond to objects which have meaning, but that formal manipulation
can never capture that meaning.  (Just as those Brazilian physics students
manipulated equations without anyone knowing what they _meant_.) 

But AHA - here we have him.  These low-level (sense 2) symbols never had
much meaning anyway!  They correspond to micro-structural entities (such
as neurons), which taken alone are devoid of semantics.  Semantics only
emerges when we put enough of these neurons together to form an incredibly
complex SYSTEM.  In a precisely similar way, semantics (and hence 
understanding) will arise from our sense-2-symbol-manipulator, when it has
enough of these low-level symbols interacting in the right, incredibly
complex way.  Despite the fact that the symbols taken alone are 
meaningless, put enough of them together in the right way and meaning will
be an EMERGENT property of the system, just as it is with the human brain.
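
To make the sense-2 reading concrete, here is a rough sketch in Python
(purely illustrative; the weights and the reading of the output are
invented) of a manipulator whose only formal objects are numbers standing
in for neuron activations.  No individual token means anything on its own;
the reading of the output as "exactly one input is on" is an interpretation
we impose on the behaviour of the whole network.

# A "sense-2" symbol manipulator: every token below is a low-level formal
# object -- a number playing the role of a neuron's activation -- with no
# word-level meaning of its own.  (Weights are hand-picked for illustration.)
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer of 'neurons': weighted sums passed through a squashing function."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

W1 = [[ 4.0, -4.0],
      [-4.0,  4.0]]
B1 = [-2.0, -2.0]
W2 = [[ 5.0,  5.0]]
B2 = [-2.5]

def network(x1, x2):
    hidden = layer([x1, x2], W1, B1)
    return layer(hidden, W2, B2)[0]

if __name__ == "__main__":
    # Only at the system level do we read the output as a judgement
    # ("exactly one of the inputs is on"); taken alone the numbers are mute.
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, "->", round(network(a, b), 2))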


So this is the source of Searle's mistake.  He appeals to our intuitions
about high-level (sense 1) symbol-manipulators, and tries to use this to
draw conclusions about low-level (sense 2) symbol manipulators.  And by
doing this he fails to appreciate the incredible complexity and subtlety
that is possible in a sense-2-manipulator, from which understanding can
be an emergent property, as it is in the human brain.

It is a very mysterious question indeed how real understanding, subjective
experience and so on could ever emerge from a nice physical system like
the human brain, which is just toddling along obeying the laws of physics.
But nevertheless we know that it does, although we don't know how.
Similarly, it is a mysterious question how subjective experience could
arise from a massively complex system of paper and rules.  But the point
is, it is the SAME question, and when we answer one we'll probably answer
the other.

I'll resist the temptation to answer each of Searle's other points one by
one.  Just remember, semantics CAN arise from syntax, as long as the 
syntactical system is complex enough, and involves manipulating 
micro-structural objects which interact in rich and subtle ways.  So, for
you neural-netophiles out there (as well as the rest of us fellow
subcognitivists), there's hope yet!

(Just keep the discussion of symbols on the right level.)

  Dave Chalmers

  Center for Research on Concepts and Cognition
  Indiana University

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/25/89)

dave@cogsci.indiana.edu (David Chalmers) of
Concepts and Cognition, Indiana University writes:

" ["Symbol"] is used [by Searle] to mean two different things:
" (1)... a formal object which corresponds to some HIGH-LEVEL, semantic
" concept in the real world... [e.g.,] a _word_ [and] (2)... any formal
" object... manipulated by a computer program... low-level or
" high-level [or meaningless] [e.g., a neuron]

So far, so good, though I don't find this distinction particularly
useful, because it just concerns how you INTERPRET the meaningless
symbols you're manipulating -- here as words, there as neurons. (In
principle, even the very same program could be interpreted either way.)
But let's go on and see where this leads:

" with sense (1)...  I reject the PREMISE of Searle's argument; a formal
" symbol-manipulator could never even display what _looked_ like
" competent Chinese-speaking behaviour

Well, this certainly gives away the store, and I'm inclined to agree.
But I have reasons. Do YOU have better reasons than that you like neurons
better than words?

" [But] Contrary to what Harnad implies, Searle is not only arguing
" against high-level symbol manipulators in the Newell/Simon/Fodor mould.
" He wants to say that NO computer program could... have true
" (subjective) understanding, not even an incredibly complex and subtle
" program (such as a program that simulated a neural network the size of
" the brain.)

Actually, I don't imply otherwise: This is exactly what Searle would
say, because for him it is immaterial how the symbols are interpreted
by the programmer, as words or as neurons: To him they're all just
meaningless symbols. And so are the inputs and outputs (Chinese
symbols, remember? not Chinese neurons). Nor is Searle impressed
with hand-waving about "incredible complexity and subtlety": Symbol
manipulation is just symbol manipulation, no matter how complex the
symbols or the interpretations.

" [Searle] uses the word "symbol" in the low-level sense (2), while
" appealing to our intuitions about symbol-manipulators which manipulate
" high-level symbols of sense (1)!... But AHA - here we have him. These
" low-level (sense 2) symbols... correspond to micro-structural entities
" (such as neurons), which taken alone are devoid of semantics. Semantics
" only emerges when we put enough of these neurons together to form an
" incredibly complex SYSTEM. Despite the fact that the symbols taken
" alone are meaningless, put enough of them together in the right way and
" meaning will be an EMERGENT property of the system, just as it is with
" the human brain.

What we have here is exactly what it sounds like: Not an argument, but
a statement of faith in the "emergent" properties of "incredibly
complex" systems. I feel the same way about clouds sometimes.

The human brain's another story. (The following is almost a paraphrase
of some arguments from my paper, "Minds, Machines and Searle.") Of
course we know the brain "has" semantics. But a symbolic simulation of
a brain is not a brain, any more than a symbolic simulation of a plane
is a plane. Hence there's no reason to believe that a brain simulation
can think any more than a plane simulation can fly.

On the other hand, there is every reason to believe that a correct
brain simulation, like a correct plane simulation, could model
symbolically all of the relevant causal principles we would need to
know about thinking and flying in order to implement their mechanisms
as a brain and a plane, respectively. The implemented brain and plane
could then think and fly, respectively. But they wouldn't be just
symbols anymore either. For one thing, they'd have to have the causal
wherewithal for interacting with the outside world the way brains and
planes do -- and that's not just symbols-in and symbols-out. They would
have to include transducers and effectors (which, as I said before, are
immune to Searle's Argument), and, if the other arguments I've been making
have any validity, they would have to include a lot more nonsymbolic
(analog, A/D, feature-detecting, categorical, D/A) processes in between
the input and the output too.
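
A rough sketch (Python; every stage name and threshold below is invented,
offered only to illustrate the shape of such a hybrid system, not as
anyone's actual proposal): a purely symbolic rule sits between nonsymbolic
transduction, A/D conversion, feature detection and categorization on the
input side, and D/A conversion to an effector on the output side.

# Illustrative hybrid pipeline: only symbolic_policy() traffics in symbols;
# everything around it is nonsymbolic (analog, A/D, features, categories, D/A).
def transduce(physical_signal):
    """Analog stage: a continuous sensor reading (here just a float)."""
    return float(physical_signal)

def a_to_d(analog_value, levels=256):
    """A/D conversion: quantize the analog value into discrete levels."""
    clipped = max(0.0, min(1.0, analog_value))
    return int(round(clipped * (levels - 1)))

def detect_features(digital_value):
    """Feature detection: crude, nonsymbolic measurements of the input."""
    return {"intensity": digital_value / 255.0}

def categorize(features):
    """Categorical stage: map features onto a discrete category symbol."""
    return "BRIGHT" if features["intensity"] > 0.5 else "DARK"

def symbolic_policy(category):
    """The purely symbolic stage: rules defined over category symbols."""
    return {"BRIGHT": "CLOSE_SHUTTER", "DARK": "OPEN_SHUTTER"}[category]

def d_to_a(command):
    """D/A stage: turn a symbolic command into an analog effector setting."""
    return {"CLOSE_SHUTTER": 0.0, "OPEN_SHUTTER": 1.0}[command]

if __name__ == "__main__":
    for reading in (0.92, 0.12):
        symbol = categorize(detect_features(a_to_d(transduce(reading))))
        print(reading, "->", symbol, "->", d_to_a(symbolic_policy(symbol)))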

As long as the system's of the right type, you need make no special
appeal to "incredible" complexity and "emergent" properties (though
it'll no doubt be complex enough). Where you need inordinate amounts of
complexity and equal amounts of credulousness is with a system of the
wrong type, such as a purely symbolic one (or perhaps a purely gaseous
one).

" It is a very mysterious question indeed how real understanding,
" subjective experience and so on could ever emerge from a nice physical
" system like the human brain... nevertheless we know that it does,
" although we don't know how. Similarly, it is a mysterious question how
" subjective experience could arise from a massively complex system of
" paper and rules. But the point is, it is the SAME question, and when we
" answer one we'll probably answer the other.

The first case is certainly a mystery that is thrust upon us by the
facts. The second is only a mystery if we forget that there are no facts
whatsoever to support it, just the massively fanciful overinterpretation of
meaningless symbols.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

mmt@client1.dciem.dnd.ca (Martin Taylor) (02/26/89)

-- 
--There are two senses of "understand," a subjective and an objective
--one. The first (1) is what I mean when I say "I understand English" and
--the second (2) is what I mean when I say "He understands English." The
--first is primary. What I say and mean by "I understand" is based on
--direct, incorrigible, first-person evidence. When I say "HE
--understands," I am merely INFERRING that what's true of him is the same
--thing that's true of me when I understand. I can be WRONG (very wrong)
--about (2) but not about (1). It is (1) that is at issue in Searle's
--Argument, though people keep conflating it with (2).
-- 
--That's all there is to it. (Stevan Harnad)

You may perhaps be correct about yourself when you claim "I understand English"
but you cannot know you are correct when you say "I understand what you
just said in English" even though your subjective impression is that you
understood.  You therefore cannot claim "I understand statements written
(spoken) in English," which I think is close to what "I understand English"
means.  The best you can do, I think, is to go by analogy to Harnad's
arguments on categorization--behaviourally your responses to statements
in English have usually had the expected effect on the person making the
statement, so you interpret the feedback as indicating that you did indeed
understand.  Just as miscategorization is determined by unexpected feedback
from the world, so misunderstanding is determined by unexpected feedback
from a communicating partner.  You cannot, by yourself, determine that
you understand. It's just a feeling, untested.

There is a close analogy in psychophysics.  Ask someone "Did you hear that
tone" and you will get an answer that (presumably) corresponds to the
subjective experience of hearing the tone.  But put them in an experiment,
in which they must determine which of two intervals contained the test tone,
and they will get moderately high scores, well above chance, under conditions
in which they may say "I didn't hear more than two or three of those."
Similarly, in experiments in which they must say simply whether a tone
was in a single interval, they will say "Yes" on many intervals in which
a tone was not presented.  The subjective impression does not correspond
to the objective event of hearing, any more than the subjective impression
of understanding corresponds to the objective event of understanding.
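
A small simulation sketch (Python; the particular numbers are invented, not
taken from any real experiment) makes the dissociation concrete: with a weak
signal and a conservative "I heard it" criterion, two-interval forced-choice
accuracy sits well above chance even though the listener would report hearing
almost nothing, while a laxer yes/no criterion produces "Yes" responses on
noise-only trials.

# Toy signal-detection model: unit-variance Gaussian noise, a small mean
# shift when the tone is present, and two different response criteria.
import random

random.seed(1)
D_PRIME = 1.0            # weak signal: mean shift between noise and signal
REPORT_CRITERION = 2.5   # how strong an observation must be to feel "heard"
YESNO_CRITERION = 0.5    # laxer criterion used in the yes/no task
TRIALS = 10000

def observe(signal_present):
    """One internal observation on a single interval."""
    return random.gauss(D_PRIME if signal_present else 0.0, 1.0)

correct_2afc = 0
felt_heard = 0
false_alarms = 0
for _ in range(TRIALS):
    # Two-interval forced choice: pick the interval with the larger observation.
    noise_interval, signal_interval = observe(False), observe(True)
    if signal_interval > noise_interval:
        correct_2afc += 1
    if signal_interval > REPORT_CRITERION:
        felt_heard += 1
    # Yes/no task on a noise-only interval, judged by the laxer criterion.
    if observe(False) > YESNO_CRITERION:
        false_alarms += 1

print("2AFC accuracy:       %.2f (chance = 0.50)" % (correct_2afc / TRIALS))
print("trials felt 'heard': %.2f" % (felt_heard / TRIALS))
print("yes/no false alarms: %.2f" % (false_alarms / TRIALS))
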
-- 
Martin Taylor (mmt@zorac.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
If the universe transcends formal methods, it might be interesting.
     (Steven Ryan).

thom@dewey.soe.berkeley.edu (Thom Gillespie) (02/26/89)

For what it matters:

	Stephen Mitchell is a noted translator who was recently interviewed on NPR
	about his new translation of the "TAO". The translation is beautiful and
	clear, and Stephen Mitchell does not speak Chinese; his Master suggested that
	it wasn't important and that he just use a dictionary and follow his
	instincts for making sense. Granted, he referred to previous works, but for
	the most part he just looked for a good meaning and gave it good "context",
	e.g. bombs and money instead of swords and gold. Stephen Mitchell definitely
	needed a context, an experience of the world, which computers don't have
	and, more importantly, don't need.

	The TAO that can be told
	is not the true TAO
	The God that can be named
	is not the true God
			-- TAO

			Thom Gillespie

geddis@polya.Stanford.EDU (Donald F. Geddis) (02/26/89)

In article <45199@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
>The Chinese Room is like Helen [Keller] before her moment of epiphany.
>There is little point in manipulating symbols mechanistically
>unless one can map the symbols to non-symbolic sensory
>information from the external world.

That might be true if the issue were learning.  In the case of
Searle's Chinese Room argument, however, we are already *assuming*
that the system is capable of communicating like a native speaker.
The system already acts as though it connected the symbols to their
non-symbolic referents.  Note how easy it was to know that Helen
Keller did *not* understand the connection: almost any simple
"conversation" gave it away.

Now it might be true that a computer system could not converse intelligently
without being embodied in the real world.  But the real question Searle
considered was:  How do you determine when a system is intelligent, when it
actually thinks?  The AI answer is "treat it as a black box and observe its
behavior (have conversations, in this case)".  Searle (mistakenly) disputes
this view, and wants us to look inside the system for some "causal powers".

	-- Don
--
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

geddis@polya.Stanford.EDU (Donald F. Geddis) (02/26/89)

In article <45213@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
>In article <125@arcturus.edsdrd.eds.com> gss@edsdrd.eds.com (Gary Schiltz)
>recounts his personal experience in "doing calculus" at age 17
>without really understanding what it was all about.  Gary concludes:
> > This really makes me wonder whether it can be determined whether any
> > system understands, simply from external behavior.
>In Feynman's anecdote about the Brazilian physics students, he easily
>uncovered their lack of understanding when he asked them questions
>about the real-world phenomena which the physics lessons covered.
>Their blank stares revealed that they had made no connections between
>everyday experiences and the subject at hand.  F = ma had nothing to
>do with getting up to speed on a bicycle.  Ft = mv had nothing to do
>with a hitting a baseball.

You wonder whether external behavior can tell you if the system understands.
And yet in both these cases, the "proof" that the system (person) did not
understand was simply external behavior.  In calculus, the hardest questions
were answered incorrectly.  Gary said that his grade went up when he retook
the class and "understood".  And he even stated an example:  The connection
between a derivative and rates of change.  It seems rather trivial to
"externally" test this one with a single question.

In Feynman's Brazil experience, he seemed to have little difficulty telling
that his students' level of understanding was relatively shallow.  Most
questions that were not simple restatements of memorized phrases gave them a
lot of difficulty.

Just because it requires careful probing and the examiner can be fooled,
doesn't mean that "external behavior" is not the proper criterion for deciding
when a system understands.

Just what, exactly, is being proposed as an alternative test?

	-- Don
-- 
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

bwk@mbunix.mitre.org (Barry W. Kort) (02/26/89)

In article <45199@linus.UUCP> I wrote:

 > The Chinese Room is like Helen Keller before her moment of
 > epiphany.  There is little point in manipulating symbols
 > mechanistically unless one can map the symbols to non-symbolic
 > sensory information from the external world.

In article <7219@polya.Stanford.EDU> geddis@polya.Stanford.EDU
(Donald F. Geddis) responds:

 > That might be true if the issue were learning.  In the case of
 > Searle's Chinese Room argument, however, we are already *assuming*
 > that the system is capable of communicating like a native speaker.
 > The system already acts as though it connected the symbols to their
 > non-symbolic referents.  Note how easy it was to know that Helen
 > Keller did *not* understand the connection: almost any simple
 > "conversation" gave it away.

Donald, I think we have uncovered an important issue hidden in the
Chinese Room debate.  When I have a conversation with another
intelligent being, I expect to exchange knowledge, such that we
both understand more than we did before the conversation.  That is,
I cannot conceive an intelligent entity which does not engage in
learning (knowledge acquisition).  When I add a symbol (such as
the word "colligate") to my personal lexicon, I also add its
referent.  Now when I sort through a jumbled collection of ideas,
trying to put the pieces together, I can associate that activity
with the word "colligation".  Can the Chinese Room do that?

 > Now it might be true that a computer system could not converse intelligently
 > without being embodied in the real world.  But the real question Searle
 > considered was:  How do you determine when a system is intelligent, when it
 > actually thinks?  The AI answer is "treat it as a black box and observe its
 > behavior (have conversations, in this case)".  Searle (mistakenly) disputes
 > this view, and wants us to look inside the system for some "causal powers".

I like this operational definition of intelligence.  I also believe
that if the candidate system were doing nothing more than formal
symbol manipulation, I could unmask it as easily as Feynman
unmasked the Brazilian physics students.  Formal symbol manipulation
is an important and useful tool for the cognitive computer, and any
intelligent entity is well-advised to acquire such capacity.  But
any intelligent system that thereupon stops learning is doomed to be
disparaged as lacking in a desirable quality:  the ability to
discover and report interesting new ideas.

--Barry Kort

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/27/89)

geddis@polya.Stanford.EDU (Donald F. Geddis) of Stanford University
writes (in a pair of successive postings):

" it might be true that a computer system could not converse
" intelligently without being embodied in the real world. But the real
" question Searle considered was: How do you determine when a system
" is intelligent...? The AI answer is "treat it as a black box and observe
" its behavior (have conversations, in this case)". Searle (mistakenly)
" disputes this view, and wants us to look inside the system for some
" "causal powers"... Just because it requires careful probing and the
" examiner can be fooled, doesn't mean that "external behavior" is not
" the proper criteri[on] for deciding when a system understands.
" Just what, exactly, is being proposed as an alternative test?

There are two alternative OBJECTIVE tests for having a mind, the
(standard) Linguistic Turing Test [LTT] (symbols-in, symbols-out) and
my stronger (robotic) Total Turing Test (TTT) (proximal-projections-of-
objects-on-sensors-in, effector-action-on-objects-out). The LTT is a
subset of the TTT, but one that is, as I have indicated repeatedly,
EQUIVOCAL about the issue of "embodiment" and the putative autonomy of
symbolic function from many forms of nonsymbolic function that may be
needed in order to pass the LTT in the first place. Searle is only
addressing the LTT, and my reply to Searle is that the TTT is immune to
his arguments against the LTT.

Neither the TTT nor the LTT, however, provides a guarantee that the
candidate has a mind. There is and can be no objective test for that,
only a first-person subjective one: To perform that, you have to BE the
candidate. ONLY this subjective test is decisive.

There are two senses in which Searle is advocating "looking inside": One
is to look at the functions of the brain, because we have pretty good
reason to believe that candidates with brains have minds (because, as I
would put it, candidates with brains can pass the TTT). The second sense
of "inside" is the first-person test for subjectivity, which we can all
perform on ourselves. It's THAT "causal power" that he reminds us
brains have but symbol-crunchers do not. My reply is that candidates
OTHER than the brain that can pass the TTT (if and when we come up with
any) are immune to his Chinese Room Argument that they cannot have a
mind (though, of course, I repeat, no objective test can demonstrate
that anyone, EVEN ourselves, has a mind). Searle's argument against
(hypothetical) candidates that pass the LTT only, with symbols only,
is decisive, however.

I've always thought this reasoning was quite easy to understand, but from
the fact that very few people have given me any objective evidence that
they've understood it, I've concluded that it must be difficult to
understand. Maybe by trying to put it slightly differently each time,
tailoring it to the latest misunderstanding, I'll succeed in making it
understood eventually...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/28/89)

In article <230@nbires.nbi.com> matt@nbires.UUCP (Matthew Meighan) writes:
>In article <7586@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
>writes:
>
>> . . .  An argument which is based on assertions
>> of what it "obvious" to introspection is no argument at all
>
>Can you prove this, or is it just obvious to you?
>
>It seems to me that the assertion that only objectively-provable
>things are "true" is a totally subjective one, hence false by its
>own criteria.  What evidence is there for this belief?
>
TOUCHE!  This is a well-turned argument, forcing me to retreat to reconsider
what it was I REALLY meant!  Ultimately, I am trying to get away from using
the word "obvious" too carelessly;  but in doing so I seem to have fallen into
the same trap!  So how can I get myself out of it?

When we are discussing the physical sciences, I suspect that it is possible to
talk about "obvious" manifestations of phenomena.  (Note that these
manifestations need not necessarily be veridical.  Thus, it is "obvious"
that one arc shape is larger than another, even if we can demonstrate that
they are both identical.)  What I REALLY wanted to object to is a tendency
to hide behind a word like "obvious" when we are trying to discuss words like
"understand."  Thus, I would argue that the manifestation of intelligent
behavior cannot be observed the way we observe the size of a physical object.
I admit that this point is open to debate;  but as long as we are debating it,
we should probably lay off words like "obvious."

>>(Incidentally, I believe
>>it was Harry Truman who coined a phrase to describe an argument which is
>>supported by nothing more than an over-abundance of verbiage;  he called
>>it "The Big Lie.")
>
>This falsely equates subjective experience with "nothing more than an
>over-abundance of verbiage."  The two are not equivalent.  Subjective
>experience, or perception, is certainly "more than verbiage."
>
This was not my point.  I merely wanted to illustrate what one of my
mathematics professors once called "proof by intimidation."  We should
know better than to invoke such arguments.

arm@ihlpb.ATT.COM (Macalalad) (02/28/89)

In article <17923@iuvax.cs.indiana.edu> dave@duckie.cogsci.indiana.edu (David Chalmers) makes the following distinction:
>(1) A "symbol" is a formal object which corresponds to some HIGH-LEVEL,
>semantic concept in the real world.  Typically the concept which it
>corresponds to is on the level of a _word_, say, as opposed to a
>microstructural level such as that of a neuron.
>
>versus
>
>(2) A "symbol" is any formal object which is manipulated by a computer
>program.  What we take this symbol to correspond to may be as low-level
>or as high-level as we like, or we may decide that the question of what
>the symbol corresponds to is meaningless and unimportant.

I'm not sure if this is a useful distinction.  I'll grant that many AI
programs deal with symbols at level (1), that perhaps people think only
of symbols at level (1) when thinking about the Chinese room scenario,
and even that Searle himself might have originally had level (1) in mind
when he came up with the Chinese room scenario (although I doubt it).
However, this distinction is not crucial to Searle's argument, and I
don't think he appeals to our intuitions about symbols at level (1)
to argue against symbols at level (2), as you seem to imply:

>But here is Searle's trick, or to be charitable (or uncharitable?) his
>mistake.  He uses the word "symbol" in the low-level sense (2), while
>appealing to our intuitions about symbol-manipulators which manipulate
>high-level symbols of sense (1)!  He says, (paraphrasing), "such a formal
>manipulator could never capture the SEMANTICS of the world to which
>the symbols correspond."  What he implies here is that the symbols
>correspond to objects which have meaning, but that formal manipulation
>can never capture that meaning.

No, what Searle is appealing to is our intuition that there is something
more to understanding than just manipulating symbols around.  He is
arguing that we are something more than just mere formal systems, and
that all formal systems, no matter what level their symbols are, lack
something else, something essential to understanding.

On the other side of the fence, we argue that formal systems are more
powerful than what our intuitions lead us to believe.  Understanding
can "emerge" from a powerful enough formal system.

Now we come to a standoff, with each side convinced that the other side
is wrong.  One of the problems is the Turing Test itself.  The Turing Test
is essentially a behavioral test which treats the system in question as
a black box.  The Turing Test judges the system solely on its behavior,
without regard to how the system works.  Because of this, the Turing
Test is vulnerable to skeptical attacks such as Searle's.  If we take
behavior to be the only criterion for demonstrating understanding,
someone can always make the argument that a system isn't really
understanding, even if its behavior is very convincing.  This eventually
boils down to the other minds problem.

On the other hand, Searle can't conclude that the Chinese room doesn't
understand Chinese; he can only appeal to our intuitions about what
can and cannot understand.  Obviously, different people have different
intuitions.

Where do we go from here?  There are several options:

1.  Ignore Searle's attack and continue building AI systems which come
closer and closer to behaving like a human.  Unfortunately, unless
the system is very, very good, it won't convince anyone that it's
understanding, least of all Searle.

2.  Acknowledge Searle's attack and build AI "tools" which have no
claims to understanding.

3.  Sidestep Searle's attack by "strengthening" the formal system.  This
could be done by adding analog states, sensory input/output, etc.
However, if the ultimate criterion for understanding remains a
behavioral one, then skeptical attacks like Searle's cannot be
avoided.

4.  Come to a better understanding of the process of understanding, and
develop criteria for judging a system other than purely behavioral ones.
I think that the most satisfactory answer to Searle would be "This and
that are what is involved in understanding, and the formal system in
the Chinese room demonstrates (or fails to demonstrate) this and that
here and there."  I think that most good AI research is done under this
category, where the emphasis is on understanding how the mind works,
biologically, psychologically, and computationally.

-Alex

cn6gr8au@ariel.unm.edu (James D. Nicholson ChNE) (02/28/89)

>
>I suppose that it is technically true that everything done on a computer
>can be reduced to the level of abstract symbol processing. To point to
>this low level of computer processing and then to talk about the very
>high level capabilities of the human brain and ask 'How can one be the other?'
>is rhetoric of the very worst kind.

 But 'abstraction' is the creative conceptualization of perception on
all levels!  Computer processing is not abstract!  Computers don't
have concepts (yet); they don't get the general idea.  They don't have ideas.

>To begin with, it ignores the fact that
>we can reduce the operations of the brain to a very low level and then show,
>mathematically, that the computational capabilities of neurons and 
>computers are in fact equivalent.

  Can you reduce the functioning of water to the functioning of hydrogen and
oxygen and then arrive at polar clusters?...NO!  You wouldn't even get proper
dielectric behavior.  Enough reductionism.  You imply the ultimate equivalence
of the mind with a complicated pinball machine.  If you can actually do this 
reduction, do it and become rich.

>What Searle points to as evidence of
>man's difference from machines are direct consequences of the incredibly
>complex organization of these low level neurons, which has been achieved
>only after billions of years of evolution. There is as yet no theorectical
>reason why we cannot eventually learn to create similarly complex machines.
>If we understood how neurons can be organized in such a way as to produce
>cognitive functions such as 'understanding' or 'creativity', then we could
>say exactly how 'one can be the other'. 

  Producing such machines means recognizing that 'understanding' and
'creativity' are not UNIX utilities;--- they are not disjoint.  Just as 
we have height and width together, the human mind utilizes the entire set
of neural entities to continually recreate the instances of the central
law which forms the mind in which all mental faculties exist together.  
(A long sentence, I realize.  Summary: the mind exists as a single function.)
In animals, the central law is different, but real.  It learns things in 
terms of genetically encoded logics. Thus, the animal cannot discover its
origins in the framework of creative logics.  In either case, there exists
a central law with all of its conjugated laws which constitutes an intangible
existence.  The connections of neurons create logical pathways, not laws.
'Law' implies operation (i.e. thinking), while logic is static and non-living.
A law is intelligible and is discovered only through insight.  Logic is merely 
relativistic and may be directly appropriated by a LISP machine.

  We are abstract thinkers, and as such, will think about these neural machines
which will obey the laws of our individual existence in abstracto:--- can
we conceive of the abstract as a juxtaposition of low-level logical options?
And when we do, does that thought utilize old logic or create new logic?
Clearly, the reductionist viewpoint on the equivalence of neural connections
and mind cannot be truly conceived, since it is itself progress towards a
counterposition of true logics in an unresolved form.  The conclusion is
that there exist things other than logics which generate the logic of neurons.

                                         J.D. Nicholson

bwk@mbunix.mitre.org (Barry W. Kort) (02/28/89)

In article <9739@ihlpb.ATT.COM> arm@ihlpb.UUCP (Alex Macalalad) 
swallows Searle's criticism of strong AI and peers over the horizon:

 > Where do we go from here?  There are several options:
 > 
 > 1.  Ignore Searle's attack and continue building AI systems which come
 > closer and closer to behaving like a human.  Unfortunately, unless
 > the system is very, very good, it won't convince anyone that it's
 > understanding, least of all Searle.

I'm not sure this is the direction we want to go.  Human behavior is
not a good example of intelligence or understanding.  Humans are
emotional, irrational, and error-prone.  I think we should go in
the direction of systems that are able to learn by scientific methods.

 > 2.  Acknowledge Searle's attack and build AI "tools" which have no
 > claims to understanding.

I am all in favor of building useful tools.  Such undertakings are
an excellent apprenticeship for those who would become pioneering
contributors to the frontiers of AI.

 > 3.  Sidestep Searle's attack by "strengthening" the formal system. 
 > This could be done by adding analog states, sensory input/output, etc.
 > However, if the ultimate criteria for understanding remains a
 > behavioral one, then skeptical attacks like Searle's cannot be
 > avoided.

Artificial Sentient Beings by the end of the millennium!

 > 4.  Come to a better understanding of the process of understanding, and
 > different criteria for judging a system than just a purely behavioral one.
 > I think that the most satisfactory answer to Searle would be "This and
 > that are what is involved in understanding, and the formal system in
 > the Chinese room demonstrates (or fails to demonstrate) this and that
 > here and there."  I think that most good AI research is done under this
 > category, where the emphasis is on understanding how the mind works,
 > biologically, psychologically, and computationally.

Indeed.  A sapient system reposes knowledge.  An intelligent
system thinks and solves problems.   A sentient system gathers
information from the outside world.  A learning system integrates
new information into an evolving knowledge base.  An ethical
system uses that information to effect worthwhile changes to the
world in which it is embedded.  

We have a long way to go.  First, artificial intelligence, then
artificial sentience, then artificial wisdom.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (02/28/89)

In article <7645@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP
(Stephen Smoliar) writes:

 > I merely wanted to illustrate what one of my mathematics
 > professors once called "proof by intimidation."  We should
 > know better than to invoke such arguments.

A few years ago, a poster on this newsgroup introduced the
delightful expression "proof by vigorous assertion" for
this form of argumentation.  It is right up there with
another favorite of mine, "invective utterance".

--Barry Kort

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (02/28/89)

In article <51123@yale-celray.yale.UUCP> engelson@cs.yale.edu (Sean Engelson) writes:
>(Searle + rules) understanding of Chinese?  It seems that to
>demonstrate or refute the position of understanding being demonstrable
>purely through I/O behavior, one must have an effective definition of
>understanding.  By effective I mean one that does not beg the
>question, i.e. by defining understanding to be symbol-processing, or
>conversely, to be that which humans do.

Sorry, but your constraints are a little weird.  Understanding *IS* what humans do.
It *MAY* involve symbol processing.  What question is begged? What is an effective
definition?  Look how well physicists manage with "force", "charge", "gravity".
You cannot ask commentators on humanity for "definitions" that are any less (fast
and) loose than those used by commentators on nature.

Understanding involves more than lexicography.

Let's just define "understanding", no constraints.

Stevan Harnad has already pointed to two senses
	a) the feeling of understanding
	b) the attribution of understanding.

For some domain where something can be right
	(a) involves thinking that you know what "right" is
	(b) involves someone else deciding that you know what "right" is

(a) is not wholly like pain, but like pain, its perception is a wholly internal
event.  Where objective tests exist, understanding (a) can only be wrong in the
sense of the content of the understanding, as can understanding (b).  In both cases,
the experience of understanding does not wither away in the face of a failed test.
Understanding is monotonic in this sense.  Once asserted, the act of assertion is
unchangeable, and years after (as we've seen from postings) we can remember just how
we felt.

(a) is also accompanied by mood changes (elation, nausea etc.).  These are probably
measurable in some physiological sense.  Such measures will be orthogonal to
performance on objective behavioural tests.  Explain that one.

On Searle's room, Searle would not understand Chinese, but neither would the system,
since it only "knows" how to work through problems put to it in Chinese and how to output Chinese.
There is nothing in (Searle + rules) which asserts "I understand" in response to
each problem put to it.  The rules just run, and no honest user of the English
language would ever attribute understanding to a bunch of rules.

As far as more effective computer systems are concerned, it doesn't matter either.
The point is one of intellectual honesty, and the distaste felt when groups of
supposed academics in a liberal culture fall under the control of a shallow ideology.

The question for the strong AI brigade is:

"Given the normal usage of understanding, what grounds are there for attributing it
to computers, and why bother anyway?"

While we're at it, what about those hallucinogenic thermostats with beliefs?
Whatever it was, don't eat it again :-)
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

marty@homxc.UUCP (M.B.BRILLIANT) (02/28/89)

From article <7645@venera.isi.edu>, by smoliar@vaxa.isi.edu (Stephen Smoliar):
> In article <230@nbires.nbi.com> matt@nbires.UUCP (Matthew Meighan) writes:
>>In article <7586@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
>>writes:
>>
>>> . . .  An argument which is based on assertions
>>> of what it "obvious" to introspection is no argument at all
>>
>>Can you prove this, or is it just obvious to you?
>>
>>It seems to me that the assertion that only objectively-provable
>>things are "true" is a totally subjective one, hence false by its
>>own criteria.  What evidence is there for this belief?
>>
> TOUCHE!  This is a well-turned argument, forcing me to retreat to reconsider
> what it was I REALLY meant!  Ultimately, I am trying to get away from using
> the word "obvious" too carelessly;  but in doing so I seem to have fallen into
> the same trap!  So how can I get myself out of it?

I have a suggestion or two on how to get out of the trap.

To begin with, I would suggest avoiding the word "obvious."  Whenever a
word has different meanings to different people or in different
contexts, or otherwise is hard to define, using that word is just
asking for trouble.

Second, I would suggest falling back to some of the classical ideas of 
logic, philosophy of science, epistemology, etc.  The real basics.  There
is deductive reasoning and inductive reasoning.  You can't prove
anything without postulates, because nothing is objectively provable
except the subjective fact that you think, therefore you exist, and you
can't prove that to anybody but yourself.

In the classical paradigm, science treats the objective world primarily
in an inductive style.  That is, you first make some observations. 
This is a subjective act.  If others can repeat the observations and
agree that they are the same, you have, by common consent, an objective
fact.  Then you think about the observations until you discover a set
of postulates which, if processed deductively, would predict the
observations.  You have just created a theory.  You can in fact create
several theories to explain the same facts, and then you can use
Occam's Razor to choose among them.  But Occam's Razor itself is a
postulate.

So nothing is obvious.  You can't agree on conclusions unless you first
agree about facts, and then agree on an explanation for the facts.  And
all the conclusions are tentative.

If you want to prove, from a thought-experiment that many people think
could never happen, that something that does mere "symbol" manipulation
can never "understand" anything, you are opening up a can of worms.
In the first place, the observation is not factual.  In the second place,
the postulates do not lead deductively to an explanation of the presumed
facts.

If you cannot agree that Searle with a book could fool a native Chinese
speaker, you have no facts to explain.  If you cannot agree on the
definitions of the words, you have no theory to make deductions from.

What, please, are the facts?  In my humble opinion (IMHO), all we agree
on is that we have partial successes, a lot of ambition, and a lot of
uncertainty.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858		Home (201) 946-8147
Holmdel, NJ 07733	att!homxc!marty

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/28/89)

This is a reposting (1st one apparently didn't make it) of a reply to
two successive postings by arm@ihlpb.ATT.COM (A. R. Macalalad) of AT&T
Bell Laboratories, who wrote:

" Now for the sake of argument, let's assume that there is a
" distinction between Searle and (Searle + rules)... the only entity able
" to decide if (Searle + rules) really understands Chinese is (Searle +
" rules). Not you or me or any outside observers or even Searle himself.
" Only (Searle + rules).

Of course, if we are assuming that much for the sake of argument --
namely, a separate entity that exists and understands -- then of course
there is no argument. You've assumed it all.

" The issue I now want to take up is your justification of the
" Total Turing Test... [tap-dancing, etc.]...

The justification for the Total (robotic) Turing Test (TTT) in
preference to the Language-In/Language-Out Turing Test (LTT)
is fourfold (and has nothing to do with arbitrary calls for
tap-dancing):

(1) The TTT is what we already use with one another in our everyday,
practical "solutions" to the other-minds problem -- not the LTT, which
we only use, derivatively, with pen-pals.

(2) The TTT (fine-tuned eventually to include neuronal "behavior" too)
encompasses all the available empirical data for the mind-modeler.
(The only other data are subjective data, and I advocate methodological
epiphenomenalism with respect to those.) The LTT, on the other
hand, is just an arbitrary subset of the available empirical data.

(3) The LTT, consisting of symbols in and symbols out, is open to a
systematic ambiguity about whether or not everything that goes on
in between could be just symbolic too. (I conjecture that the LTT
couldn't be passed by a device that couldn't also pass the TTT,
and that a large portion of the requisite underlying function will
be nonsymbolic.)

(4) Evolution, the symbol grounding problem, and common sense all
suggest that robotic (TTT) capacities precede linguistic (LTT)
capacities and that the latter are grounded in the former.

" Of course, if you'd rather offer an objective definition of...
" understanding, please feel free....

As stated many, many times in this discussion, and never confronted
or rebutted by anyone, this is not a definitional matter: I know
whether or not I understand a language without any need to define
anything.

" Conduct on the net... I think that a few other apologies are due.

I'm trying to criticize views and arguments, not people. If I have
offended anyone, I sincerely apologize. (It seems not that long ago
that *I* was the one preaching against intemperate and ad hominem
postings on the Net as not only ethically reprehensible but an obstacle
to the Net's realizing its full Platonic potential as a medium of
scholarly communication.)

" Being one of those "in the grip of an ideology," I find it remarkably
" easy to recognize two systems, and Searle's reply of "internalizing"
" the second system only clouds the issue... for true internalization to
" take place, the rules must be converted from one system to the other.
" In other words, the person has to just sit down and learn Chinese....

One of the tell-tale symptoms of being in the grip of an ideology
is that one can no longer tell when one is begging the question...

" Let's take a variation of the Chinese room where the purpose of the
" room is to interpret [Chinese] BASIC instead of to understand
" Chinese... Is it fair to conclude, then, that the system of the person
" in the Chinese BASIC room and her bits of paper is not really
" interpreting BASIC?...  Now who's in the grip of whose ideology?

The suspicious reader who might think I stacked the cards by clipping
out the ARGUMENTS in pasting together the above excerpt will be
surprised to see, upon reading the entire original posting, that
there ARE no arguments: The Chinese Room has simply been reformulated
in Chinese Basic, and voila! (There's also a double-entendre here
on the syntactic vs. the mentalistic meaning of "interpret.") Mere
repetition of credos is yet another symptom of ideological grippe.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/01/89)

[This is a reposting of a reply that apparently didn't appear the 1st time.]
geddis@polya.Stanford.EDU (Donald F. Geddis) of Stanford University
writes (in a pair of successive postings):

" it might be true that a computer system could not converse
" intelligently without being embodied in the real world. But the real
" question Searle considered was: How do you determine when a system
" is intelligent...? The AI answer is "treat it as a black box and observe
" its behavior (have conversations, in this case)". Searle (mistakenly)
" disputes this view, and wants us to look inside the system for some
" "causal powers"... Just because it requires careful probing and the
" examiner can be fooled, doesn't mean that "external behavior" is not
" the proper criteri[on] for deciding when a system understands.
" Just what, exactly, is being proposed as an alternative test?

There are two alternative OBJECTIVE tests for having a mind, the
(standard) Linguistic Turing Test [LTT] (symbols-in, symbols-out) and
my stronger (robotic) Total Turing Test (TTT) (proximal-projections-of-
objects-on-sensors-in, effector-action-on-objects-out). The LTT is a
subset of the TTT, but one that is, as I have indicated repeatedly,
EQUIVOCAL about the issue of "embodiment" and the putative autonomy of
symbolic function from many forms of nonsymbolic function that may be
needed in order to pass the LTT in the first place. Searle is only
addressing the LTT, and my reply to Searle is that the TTT is immune to
his arguments against the LTT.

Neither the TTT nor the LTT, however, provides a guarantee that the
candidate has a mind. There is and can be no objective test for that,
only a first-person subjective one: To perform that, you have to BE the
candidate. ONLY this subjective test is decisive.

There are two senses in which Searle is advocating "looking inside": One
is to look at the functions of the brain, because we have pretty good
reason to believe that candidates with brains have minds (because, as I
would put it, candidates with brains can pass the TTT). The second sense
of "inside" is the first-person test for subjectivity, which we can all
perform on ourselves. It's THAT "causal power" that he reminds us
brains have but symbol-crunchers do not. My reply is that candidates
OTHER than the brain that can pass the TTT (if and when we come up with
any) are immune to his Chinese Room Argument that they cannot have a
mind (though, of course, I repeat, no objective test can demonstrate
that anyone, EVEN ourselves, has a mind). Searle's argument against
(hypothetical) candidates that pass the LTT only, with symbols only,
is decisive, however.

I've always thought this reasoning was quite easy to understand, but from
the fact that very few people have given me any objective evidence that
they've understood it, I've concluded that it must be difficult to
understand. Maybe by trying to put it slightly differently each time,
tailoring it to the latest misunderstanding, I'll succeed in making it
understood eventually...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/01/89)

In article <125@arcturus.edsdrd.eds.com> gss@edsdrd.eds.com (Gary Schiltz)
writes:
>
>After I started college as an undergraduate in the mid 1970's, I 
>took my first calculus course.  Coming from a small high school in 
>a small town, my math skills were minimal (a year or so of algebra), 
>so the whole course was very confusing.  In all the time I was in 
>the course, I never did understand what calculus was all about.  
>However, I did know, for example, that a derivative was "the equation 
>you get when you manipulate another equation in such and such a way" 
>and an integral was "the equation you get when you manipulate the 
>equation in another way."
>
>I even had a fair amount of heuristic knowledge about how to solve 
>word problems.  "Hmm, that problem [on the exam] looks like the one 
>we did in class.  Let's see, first you take the derivative 
>of this and plug in these numbers and solve for this variable, and 
>then you circle the answer (and even if the answer is wrong, at least 
>I can get partial credit for showing my work, and if everyone else is 
>as confused as I am and they don't score well and the exam is graded 
>on a curve, maybe I can pass)."  I seemed to be able to do fairly
>good mapping of one problem to another based on its surface structure.
>
>Well, I did pass the course (now I'm ashamed that I didn't do what 
>was necessary to understand what was going on, but like a lot of 17 
>year olds, I just took the easiest way).  I later repeated the course 
>and understood what I was doing (and made a lot better grade).
>
>Anyway, from my gut level feeling (quite possibly useless, I admit) 
>about what understanding is all about, I really feel I had no
>understanding of calculus during that semester.  Just as the Brazilian
>students didn't realize that symbols in physics equations actually 
>referred to things in the outside world, I didn't know that the 
>calculus was modelling anything.  I truly had no idea that derivatives 
>had anything to do with rate of change, for example.  But, from the 
>outside, it must have appeared that I had at least some understanding 
>of calculus; at least I was good enough at manipulating equations to 
>make the instructors think so.
>
I find this a very interesting anecdote because it may tell us something
about both introspection and understanding.  There is a school of
thought which interests me very much and which Marvin Minsky discusses at
some length in THE SOCIETY OF MIND which says that when we are trying to
solve a problem, we look for a similar problem which we know how to solve
and "complete the analogy," so to speak.  This seems to be what Gary was
doing in his calculus course, and I suspect he is not alone.  Indeed, much
of my freshman education seemed to be a matter of exposure to problems and
their solutions, endowing me with a repertoire I could consult when I had
to solve new problems.

The first point I wish to make is that neither "looking for a similar problem"
nor "completing the analogy" may be as easy to DO as they are to SAY.  I think
the source of Gary's embarrassment stems from the fact that his similarity
metrics were based on what he called "surface structure;"  and, indeed, I
have encountered some anecdotes from tutoring scenarios which seem to be
based on a student dealing with a surface structure "in the wrong way."
Now what does that last phrase mean?  I suspect what it means is that,
to draw an analogy with language processing, we all have some ability
to "parse" the "surface structure" of an example of a problem and its
solution.  However, some of us seem to have the ability to parse it better
than others, at least to the extent that we can use the parse tree as a model
for solving future problems.  Perhaps this metaphor for parsing is what bridges
the gap between what we might call "eidetic recall of a solved problem" and
what we would call "understanding the solution to a problem."

This brings me to my second point.  At his "gut level" Gary felt,
introspectively, that he really did not understand calculus.  Now
I know plenty of mathematicians who would claim that you cannot possibly
understand calculus until you have been exposed to real analysis.  (I had
an analysis professor who liked to call his course "advanced calculus done
right.")  However, let me assume that Gary is an engineer, rather than a
mathematician, so that his criterion of understanding has less to do with
appreciating the "true" mathematics which underlies all that symbol
manipulation and more to do with knowing how to manipulate the symbols
in the circumstances of some pragmatic engineering problem.  Having let
the introspection cat out of the bag, I, for one, would like Gary to attempt
to probe further as to just WHY, at that gut level, he felt understanding was
eluding him.  Did it have to do with problems he could not solve?  Did his
eyes glaze over whenever he saw integral signs in the pages of a book?  Did
he just feel that he was struggling more than his fellow students to solve
problems?  Perhaps if we probe these matters deeper, we may yet return to
my initial point:  that Gary's "gut level feeling" may leave something to
be desired as a criterion for understanding.  (One last question to Gary:
Can you identify a moment at which you said, "NOW I understand calculus,"
and can you recall the circumstances of that moment?)

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/01/89)

In article <Feb.18.17.26.17.1989.23438@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

>kck@g.gp.cs.cmu.edu (Karl Kluge) of Carnegie-Mellon University, CS/RI
>wrote:

>" Ah, but it is a word game... We have Mind A, which we will call John
>" Searle, which understands English, and which in its capacity as a
>" Universal Turing Machine is emulating Mind B, which we will call Fu
>" Bar. Mind A, John Searle, does not understand what is going on in Mind
>" B, Fu Bar, whose execution it is simulating.

>Ah me. Is it really so difficult to see that in the above you have
>simply presupposed the conclusion you were trying to demonstrate?
>Before we buy into any dogmas, it is a fact that Searle has a mind, but
>definitely NOT a fact that "Fu Bar" has a mind. 

OK.  But Searle is claiming that his lack of understanding shows
that Fu Bar (about which both we and Searle know little) does not
understand.  But that does not follow from Searle's lack of under-
standing.  For all Searle knows, Fu Bar might understand.  I suspect
Searle would say that using his brain as a computer (running the
program encoded in the instructions he follows) isn't using his
brain in the right way, but can he prove it?

It may have been a mistake to talk about Mind A and Mind B, but I think
you are dismissing this point unfairly.  That Fu Bar has a mind has
not been shown, but neither has it been shown that Fu Bar does not
have a mind.  So, if nothing is shown, then Searle, who claims to have
shown something, loses.

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/01/89)

In article <563@aipna.ed.ac.uk> rjc@uk.ac.ed.aipna (Richard Caley) writes:
>So it _is_ a definitional problem.  Since we have assumed that the
>behaviour is identical whether or not it understands, we must rely on
>deduction based on the structure of the system to tell us if it
>understands.  Most significantly, we can't rely on the method we use for
>humans - if we ask the room ( presumably in chinese ), it says yes,
>otherwise the behaviour is not like that of a native speaker!

You haven't said anything that shows it's a definitional problem.  I
do not see how much can be gained by moving from "does X understand?"
to "what does 'understand' mean?"  In particular, answers to the
second question will not necessarily let us resolve the first.  It is
always open for someone to say "well, your definition of 'understand'
is wrong because there's a counterexample: X isn't doing something
that satisfies your definition, but X is understanding."  And then
we're right back where we started.

Or, to look at it another way, what does it matter whether something
is called "understanding" or not?  What really matters is whether
things are the same or different in some interesting way.

Think of yourself.  If you understand English and do not understand
Chinese, do you accept that there's some difference there?  Do you
have to define "understand" before you can answer, or do you already
have an adequate understanding of "understand"?

>Without defining understanding we can't argue with it since our
>intuitive knowledge of understanding is only for _ourselves_, we apply
>it to other people since they seem rather similar, we can _try_
>and apply it to philosophers in rooms or computer systems but I would
>not trust the result -

Suppose we had the sort of definition you want, and suppose it let
us say that X understood and Y didn't.  Then someone might say "well,
I guess 'understanding' wasn't the right thing to ask about after
all."

The reason we have problems deciding about computers and philosophers
in rooms is that we don't know all that much about how our minds work
and because we can never get inside someone else's subjective
experience.  Definitions of "understanding" do not help with either
problem.

>Aren't you assuming the result here?  If Searle running the program is
>"-" WRT "understanding" then naturally the system does not understand.

You're right here.

You're also right that the structure of the system, or something like
that, may turn out to be significant.  But I think that's about all we
can say at this point.  After all, we don't have any machines that
behave as if able to understand Chinese, so it's hard to say anything
about their structural properties.

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/01/89)

In article <Feb.20.21.17.37.1989.16495@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>There is no reason whatever (apart from the preconceptions that
>Searle's Argument was formulated to invalidate) (a) not to believe
>him or (b) to believe that there is "someone/something" else in the
>Chinese Room that IS understanding Chinese in the same sense that
>you or I or Searle understand English. 

It is begging the question to say something else in the room is
understanding Chinese.  But it's not necessary to show that there
is something else in order to refute Searle -- all you have to do
is show that Searle hasn't shown there isn't something else.  Searle
does try to show there isn't.

Where the "systems argument" goes wrong is by saying "the system
understands".  But all it really has to do is find a system that
Searle hasn't shown to lack understanding.

>The difference is that the "external" criteria have not been shown to be
>valid, and hence there is simply no justification for taking them to signal
>the presence of understanding at all. To merely assume that they do is
>not an argument; its just circularity again.

Here I more or less agree.  

The external/behaviorist argument is also rather boring.  Well, maybe
some people only care about the behavior.  That's fine, but some other
people may be interested in other aspects too.  And the behaviorist
approach doesn't address these other issues at all, except to dismiss
them.

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/01/89)

In article <4307@cs.Buffalo.EDU> sher@wolf.UUCP (David Sher) writes:
>I'd like to hazard an answer to this question.  The reason the AI
>establishment tries to answer this question is there is a strong implication
>that Searle's argument indicates that symbolic AI approaches will always
>lack some performance capability.

>Does anyone believe that they can build a machine with a soul?  It is
>just as easy to build in Searle's "understanding."

It's certainly true that it's hard to see what could ever convince
Searle that anything had understanding.  But I think we can look at
a simpler situation and see the kind of thing that might be involved
when there's no performance difference.

Let's take Chess.  At one time, Chess may have seemed a good test of
intelligence.  But suppose we have two programs, both able to play at
the same level.  One program constructs strategies and plans.  It
explicitly represents goals that, if attained, would trap the enemy
king, and so on.  The other just uses "brute force" search, but is
very fast.  It may behave as if it has goals, but it doesn't really
have them (in some sense).

Both of these programs are just doing symbol manipulation, and both
can play at the same level; but we can still see that they work in
different ways.  Indeed, the second program is more "mechanical".
Both programs can be seen as using only simple, low-level operations;
but the first program can also be analyzed in terms of goals and plans
while (let us suppose) the second one cannot.  That is, the structure
of the program doesn't show that kind of organization -- its behavior
is another matter.
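
[A toy sketch in Python of the contrast being drawn here.  The little
game tree, the scores, and the goal predicate are all invented for
illustration; this is not real chess, and neither routine is anyone's
actual program:]

    # Toy game tree: each position lists its successors; terminal
    # positions have a score.  (Invented data.)
    TREE = {
        "start": ["a", "b"],
        "a": ["a1", "a2"],
        "b": ["b1", "b2"],
        "a1": [], "a2": [], "b1": [], "b2": [],
    }
    SCORE = {"a1": 1, "a2": 5, "b1": 3, "b2": 2}

    def brute_force(pos):
        """The second program: exhaustive search, no explicit goals."""
        if not TREE[pos]:
            return SCORE[pos], pos
        return max((brute_force(succ)[0], succ) for succ in TREE[pos])

    def can_reach_score(pos, threshold=4):
        """An explicitly represented goal the analyst can point to."""
        return any(SCORE.get(succ, 0) >= threshold for succ in TREE[pos])

    def goal_directed(pos, goals):
        """The first program: pick a move that satisfies a stated goal."""
        for goal in goals:
            for move in TREE[pos]:
                if goal(move):
                    return move
        return TREE[pos][0]            # fall back to any legal move

    print(brute_force("start"))                       # (5, 'a')
    print(goal_directed("start", [can_reach_score]))  # 'a'

[Both routines are nothing but symbol manipulation, and both pick the
same branch; the difference lies in whether the choice is organized
around an explicitly represented goal -- the kind of structural
difference at issue here.]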

So I think we can imagine cases where there is no performance
difference but where there are other interesting differences we
can discover.

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/01/89)

In article <573@aipna.ed.ac.uk> rjc@uk.ac.ed.aipna (Richard Caley) writes:
>(b) is, surely, a straw man. It is the homunculus argument again. Nobody
>is claiming there is "something else" in the room which understands
>chinese.

Nope.  Imagine that some part of Searle's brain is running the Searle
program and another part is running the Chinese Room program.  No
infinite regress, just time sharing.

dave@cogsci.indiana.edu (David Chalmers) (03/01/89)

harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>dave@cogsci.indiana.edu (David Chalmers) writes:
>
>" with sense (1) [the high-level sense of "symbol"]...  I reject the 
>" PREMISE of Searle's argument; a formal symbol-manipulator could never 
>" even display what _looked_ like competent Chinese-speaking behaviour.
>
>Well, this certainly gives away the store, and I'm inclined to agree.
>But I have reasons. Do YOU have better reasons than that you like neurons
>better than words?

Certainly I have reasons.  Have you got a couple of hours?  I didn't think
this was the time or the place for a switch of topic to an issue far
more complex than Searle's misleading intuition pump.

>
>" [Searle] uses the word "symbol" in the low-level sense (2), while
>" appealing to our intuitions about symbol-manipulators which manipulate
>" high-level symbols of sense (1)!... But AHA - here we have him. These
>" low-level (sense 2) symbols... correspond to micro-structural entities
>" (such as neurons), which taken alone are devoid of semantics. Semantics
>" only emerges when we put enough of these neurons together to form an
>" incredibly complex SYSTEM. Despite the fact that the symbols taken
>" alone are meaningless, put enough of them together in the right way and
>" meaning will be an EMERGENT property of the system, just as it is with
>" the human brain.
>
>What we have here is exactly what it sounds like: Not an argument, but
>a statement of faith in the "emergent" properties of "incredibly
>complex" systems. I feel the same way about clouds sometimes.
>
Answer me these questions.
  (1)  Do you believe neurons (taken alone) have semantics.
              [I take it the answer has to be "No."]
  (2)  Do you believe the brain as a whole has semantics.
              [I take it the answer is "Yes."]

Given this, you must accept that semantics can arise out of non-semantic
objects.  Most of us are a little baffled as to how.  It seems that the 
only half-way reasonable tack we can take to answer this question is to
say that what is important for semantics (and the subjective in general)
is not so much those objects as the complex patterns that they form.

After all, neurons taken alone are pretty simple entities which can't
carry much in the way of information.  As is well known, information
is carried by complexity (and the greater the complexity, the greater
the information which can be carried).  It seems to me that "information"
and "semantics" are very closely related concepts.  The fact that
complexity is a necessary condition for information would suggest
that appeals to complexity are not mere hand-waving.
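
[One standard way to make the complexity-carries-information point
concrete is Shannon's measure.  A tiny worked example in Python, with
made-up numbers:]

    import math

    def entropy(probs):
        """Shannon entropy in bits: H = -sum p * log2(p)."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Two equally likely states vs. 1024 equally likely states:
    print(entropy([0.5, 0.5]))         # 1.0  bit
    print(entropy([1.0/1024] * 1024))  # 10.0 bits

[A system with more distinguishable states can carry more bits.
Whether that is the kind of "information" relevant to semantics is,
of course, exactly what is in dispute.]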

>[A correct brain simulation would]  have to have the causal
>wherewithal for interacting with the outside world the way brains and
>planes do -- and that's not just symbols-in and symbols-out. They would
>have to include transducers and effectors (which, as I said before, are
>immune to Searle's Argument), and, if the other arguments I've been making
>have any validity, it would have to include a lot more nonsymbolic
>(analog, A/D, feature-detecting, categorical, D/A) processes in between
>the input and the output too.

This strikes me as rather like the point-missing "Robot Reply" in Searle,
despite your disclaimers.  I thought that the "Stephen Hawking argument"
was a rather good reply to this stuff.  What's important for subjective
experience is a brain state, not a bodily state; and AI claims to be
able to simulate any brain state whatsoever (just give it time).  So
Searle's arguments still would apply.  
You could paralyze me and put me in a sensory deprivation tank, but
still (for some time at least) I would have subjective experience.


>" It is a very mysterious question indeed how real understanding,
>" subjective experience and so on could ever emerge from a nice physical
>" system like the human brain... nevertheless we know that it does,
>" although we don't know how. Similarly, it is a mysterious question how
>" subjective experience could arise from a massively complex system of
>" paper and rules. But the point is, it is the SAME question, and when we
>" answer one we'll probably answer the other.
>
>The first case is certainly a mystery that is thrust upon us by the
>facts. The second is only a mystery if we forget that there are no facts
>whatsoever to support it, just the massively fanciful overinterpretation of
>meaningless symbols.
>
And presumably, if we were all made of paper we'd say the same thing.  "It's
easier and safer to assume that neuro-thingies don't support TRUE experience;
and after all we have no direct evidence for it, only their meaningless
claims.  So let's just ignore anything which these systems have in common
(viz. extreme complexity, intelligent behaviour) and just concentrate on
their differences."  I don't want to be inflammatory, but it sounds not
unlike many an argument used by a racist in days gone by.

  Dave Chalmers
  Center for Research on Concepts and Cognition
  Indiana University

bwk@mbunix.mitre.org (Barry W. Kort) (03/01/89)

In article <Feb.26.15.55.22.1989.7914@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) laments about the
difficulty of explaining his ideas about the Total Turing Test:

 > I've always thought this reasoning was quite easy to understand, but from
 > the fact that very few people have given me any objective evidence that
 > they've understood it, I've concluded that it must be difficult to
 > understand.  Maybe by trying to put it slightly differently each time,
 > tailoring it to the latest misunderstanding, I'll succeed in making it
 > understood eventually...

Stevan, would it help if I confessed that I was most captivated by
the sample dialogues found in Turing's paper, and later exemplified
in Hofstadter's Pulitzer Prize winning book?

I know that a lot of technical specialists look down upon such
frivolous and fanciful dialogues, but if the goal is to successfully
communicate an idea, it helps to dramatize the material.  For some
reason, people love a good story with some emotional give and take.

--Barry Kort

hansw@cs.vu.nl (Hans Weigand) (03/01/89)

In article <Feb.23.22.02...> harnad@elbereth.rutgers.edu (Stevan Harnad) wrote:
>lee@uhccux.uhcc.hawaii.edu (Greg Lee) of University of Hawaii wrote:
>
>" No, there aren't "two senses of "understand," a subjective and an
>" objective one," [otherwise we couldn't say] 'He understands, and I do too'
>
>As I've suggested already, this is simply not a linguistic matter. ...
>[there] is both an objective and a subjective sense of understanding.
>

Can't we reconcile these two standpoints by saying that "understanding"
has only one "sense" (linguistic meaning), but that this sense has two
aspects (facets), objective and subjective?

Cognitive scientists and philosophers must distinguish between objective
and subjective aspects of intentional attitudes, but they must also
be careful not to absolutize these aspects, because this easily leads to
abstractions prone to paradoxes.

For the rest, I agree with Stevan.

Hans Weigand,
Dept of Mathematics and Computer Science
Free University, Amsterdam

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/01/89)

In article <9739@ihlpb.ATT.COM> arm@ihlpb.UUCP (55528-Macalalad,A.R.) writes:
>here and there."  I think that most good AI research is done under this
>category, where the emphasis is on understanding how the mind works,
>biologically, psychologically, and computationally.

Ahem.  You've missed out the social aspect of mind.  

A while back, Don Norman reiterated the fact that all interesting performance judgements
are social judgements.  Unfortunately, some Waldenite from UCSD later came back with
the nonsense that intelligence arises from ONE individual interacting with the
physical environment.  Remind me not to go to any of his parties :-) :-)

The facts, for the most die-hard positivist, are that without proper early
socialisation, children end up worse than animals (how many times do I have to say
this?)

Whilst the Walden dream of one man, his biology, psychology and (presumed)
computations may appeal to many Americans, remember that this ideal is nothing more
than a fiction, though a profoundly appealing one to many in the new world.

Yes, we do work out problems on our own, but only as a result of interactions with
others.  One cannot work from the individual to society, from one intelligent agent
to a co-operating community with a living, viable culture.  Society is not the
individual writ large (although some political ideologies do hold this).

(As for the Mad Ox of Cyberpunk - have the decency not to take any of the content
 here as xenophobia.  I have a number of American friends and have a balanced view of
 your marvellous country, but American individualism isn't a science).
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/01/89)

In article <7645@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>Thus, I would argue that the manifestation of intelligent
>behavior cannot be observed the way we observe the size of a physical object.

GOTCHA!  OK Stephen, so what are the implications of this for a science of Mind?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

bwk@mbunix.mitre.org (Barry W. Kort) (03/01/89)

Now that I think about it, I, too, took a course in which I failed to
comprehend the subject, yet I could mechanically do the motions which,
on the surface, suggested I knew what I was doing.

The course was a compressed introduction to Probability and Statistics
for people who had completed their undergraduate curriculum but hadn't yet
entered grad school.  We met 5 days a week for 2 months.  It was brutal.

At one point, the professor introduced the notion of Borel Sets, which
provide an abstract foundation for probability theory.  Now Borel sets
are unreal, like fractal dust.  Very hard to understand.

There was a series of theorems and proofs that no one understood.
But for some peculiar reason, the proofs always started out with
"Pick a partition...".  Now none of us knew what the professor meant
by "pick a partition".  But by the time he got to the fourth proof,
he said, "How do we prove this?".  The class answered in unision,
"Pick a partition."  "RIght," he said, and proceeded to complete the
details of the proof.

Twenty years later, I still don't know what he meant.

--Barry Kort

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/01/89)

From article <Feb.23.22.02.54.1989.5138@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
" 
" lee@uhccux.uhcc.hawaii.edu (Greg Lee) of University of Hawaii wrote:
" 
"" No, there aren't "two senses of "understand," a subjective and an
"" objective one," [otherwise we couldn't say] 'He understands, and I do too'

This is, I take it, a tangential point.  I had said that Searle's
argument showed no essential difference between computers and
people and turned on a mere point of usage of the term 'understanding'.
Then I went on to say that *even* the point about usage was not
very well taken, as one can see easily enough once one recognizes
the terminological nature of the argument.  Now it's this secondary
question of usage that is being discussed, as far as I can make
out.  Harnad says there's a "true" sense of 'understand' with
respect to which we should agree with Searle's terminological
argument.  Maybe so.  I don't think so, but even if I'm wrong,
and Searle has done some first rate linguistics here, the main
point seems to have been established.  There is no substance to
the Chinese Room argument -- it's just toying with words.

" As I've suggested already, this is simply not a linguistic matter.

I noticed that suggestion.  But then you keep citing (purported) facts
about language usage and giving linguistic analyses to support your
views.  At least, if the analyses you give are intended to have
any empirical content, I don't see how else they can be construed.

When you propose this distinction of yours between understanding(subjective)
and understanding(objective), how are we to take this?  Are you
making a definition for convenience of discussion?  If so, fine.
You can make any definitions you want.  Or maybe you're declaring
that as a matter of personal taste, you like to make this
distinction.  Well, to each his own.  The trouble is, you seem
to think you're doing more -- that there really *is* a distinction
of the sort you claim, and that it's a matter to which some sort
of factual evidence is relevant.  And the only evidence you
offer concerns language usage, so when you say it's "simply
not a linguistic matter", how can we believe you?  You're simply
wrong.  As you have put the issue so far, it *is* a linguistic
matter.

If the existence of this distinction you claim is not intended
to be an empirical proposal, then it's time for you to say
so.  In that event, I will have no further interest, personally.
If it is intended to be empirical, and facts other than those
of language usage can be found to suppport it, then it's time
for you to say what those facts are.  Until then, I guess we'll
continue to talk linguistics.

" The distinction I'm after is already there with "pain" (although we don't
" have two senses of pain as we do of understanding -- the reason for
" this will become clearer as we go on). Consider "I'm in pain and he is
" too." Apart from the obvious fact that I don't mean he's in MY pain
" (which is already a difference, and not a "linguistic" one but an
" experiential and conceptual one),

I'll comment on the parenthetical.  There are several interesting things
about this construction, but they are susceptible to linguistic
analysis, and they have nothing to do with 'pain' being "experiential".
In 'He is in (his) pain', the "his" part is understood, but cannot be
made explicit.  Similarly, 'He is red in the (his) face'.  Is it because
pain and faces are subjective?  Nope.  It has to do with inalienable
possession.  A face is linguistically an intrinsic inseparable part of a
person (for English).  Many languages of the world make a distinction
between alienable and inalienable possession.  It turns up in different
forms.  It's complicated.  It's linguistic.

Another interesting thing about your example is the sloppy identity
between the 'in (my) pain' antecedent and the elided 'in (his) pain'.
The phenomenon has received lots of discussion since Haj Ross
talked about it in his 1967 dissertation.  Compare 'Mary said that
she'd like to have her steak rare, and John did, too' -- which
has as one possible interpretation '... John said he'd like to
have his steak rare, too'.  Let's see -- is this because steaks,
or saying or liking are inherently subjective phenomena?  Nope.
Missed the boat again.  It's a question of syntactic scope.

" it makes sense to say "He SEEMS to be
" in pain (but may not really be in pain)," but surely not that "I SEEM
" to be in pain (but may not really be in pain)."

On the contrary. 'I seem to be in pain' is a perfectly ordinary
thing to say.  'Hey, Doc, I've been taking these pills for weeks
now, and I still seem to be in pain.'  But maybe you disagree
with this.  If so, can we take a survey to settle the matter?
Or, if most people agree with me, will you say "Oh, you're just
not using 'seem' *properly*."  Or maybe we'll have a distinction
between seem(subjective) and seem(objective).

" (Please don't reply
" about tissue damage, ...

Wouldn't think of it.

" ...
"" Pardon me if I implied that only philosophers do second-rate linguistics.
" 
" I won't make the obvious repartee, but will just repeat that these are not
" linguistic matters...

I shouldn't have been snide.  I'm sure you could do some very good
linguistics IF YOU KNEW YOU WERE DOING IT.  Offering facts of
language and linguistic analyses without understanding that you're
actually doing linguistics is a big handicap.

" [In a later posting Lee mixes up the syntax
" of the (putative) symbolic "language of thought" -- whose existence and
" nature is what is at issue here -- and the syntax of natural languages:
" Not the same issue, Greg.]

I haven't the foggiest idea of what mixing up you are referring to.
I'm reasonably sure I never said anything about a "language of
thought".  What are you talking about?

		Greg, lee@uhccux.uhcc.hawaii.edu

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (03/02/89)

In article <45199@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
>Recall the breakthrough scene in the Helen Keller Story.
  ...
>The Chinese Room is like Helen before her moment of epiphany.
>There is little point in manipulating symbols mechanistically
>unless one can map the symbols to non-symbolic sensory
>information from the external world.

     It is certainly reasonable to assume that sensory perceptions from
"the environment" (outside the cognitive device) are necessary for
real-world reasoning.  (Galileo pointed that out in _Dialogue
Concerning the Two Chief World Systems_, although he was only talking
about human brains).  Internal rules are not enough.

     However, in the Chinese Room experiment, we are assuming that
the rule operator has indeed been endowed with rules to operate
on, and as such these rules are de facto sensory input from the
outside.  
     Moreover, the incoming Chinese is also sensory input.
Rules may exist which change due to incoming Chinese.

Furthermore, what are these rules?  Do these rules include sensory
information (i.e. is there a rule which deals with what-is-trees
which includes a picture of a tree)???

One more angle on this entire situation is that neural networks can
often be described by symbolic rules.  Discovering these rules from
learned weights can often be difficult, but there have been some
breakthroughs (e.g. Sejnowski has trained a NN to judge whether a
shaded surface is concave or convex and then extracted symbolic
rules from it).
   I feel, though, that NN's give symbolic rules a richer
"spectrum", and that it's much easier to induce new neural
weights than to induce new symbolic rules.
   Even if you are still not quite convinced that NN's can be
represented by symbolic rules, take every neuron to be a rule
which takes the weighted sum of activations of the rules connected
to it, performs a function on that sum, and propagates the result
along directed edges to other rules...
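
[A minimal sketch in Python of the "every neuron is a rule" reading
just described.  The network, weights, and squashing function are
invented for illustration:]

    import math

    def squash(x):
        """The function applied to each rule's weighted sum (a logistic here)."""
        return 1.0 / (1.0 + math.exp(-x))

    # Each "rule" lists the rules feeding it, with connection weights.
    # Rules with no inputs are clamped to externally supplied activations.
    RULES = {
        "in1": [], "in2": [],
        "hidden": [("in1", 0.8), ("in2", -0.5)],
        "out":    [("hidden", 1.2)],
    }

    def fire(inputs):
        """Propagate activations through the rules in dependency order."""
        act = dict(inputs)                  # e.g. {"in1": 1.0, "in2": 0.0}
        for name in ("hidden", "out"):      # a fixed topological order
            total = sum(w * act[src] for src, w in RULES[name])
            act[name] = squash(total)
        return act

    print(fire({"in1": 1.0, "in2": 0.0}))

[Written out this way, the net is just a set of rules applied to
meaningless tokens (numbers), which is the point being made.]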

-Thomas Edwards
ins_atge@jhuvms bitnet

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/02/89)

dave@cogsci.indiana.edu (David Chalmers) of
Concepts and Cognition, Indiana University wrote:

" [Concerning symbolic modeling vs neural modeling] I didn't think this
" was the time or the place for a switch of topic to an issue far more
" complex than Searle's misleading intuition pump.

No switch. That IS Searle's topic. It is most respondents who have
over-simplified it.

" Do you believe neurons (taken alone) have semantics[?] [I take it the
" answer has to be "No."] Do you believe the brain as a whole has
" semantics[?] [I take it the answer is "Yes."] Given this, you must
" accept that semantics can arise out of non-semantic objects...
" not so much those objects as the complex patterns that they form.

Of course semantics arises out of nonsemantic objects. But there are
nonsemantic objects and nonsemantic objects -- and scratches on paper
(even when implemented as symbol-crunching computer programs) do not
seem to be the right kinds of objects. Likewise there are patterns and
patterns. My "Robotic Functionalism" IS a form of functionalism -- it
does hold that cognitive function is some "pattern" of physical
function. But, unlike standard "Symbolic Functionalism," it denies that
that pattern of physical function consists merely of formal symbol
manipulation. It can be SIMULATED by symbol manipulation; but if what
is simulated is not merely symbolic function (e.g., if an essential part
is analog processing) then it cannot be IMPLEMENTED as just symbol
manipulation. (And, as I said in my postings and article: Only
implemented planes/brains can fly/understand.)

" The fact that complexity is a necessary condition for information would
" suggest that appeals to complexity are not mere hand-waving.

But necessary conditions are not sufficient conditions. And mere
complexity will not you a mind get. There's complexity and complexity;
and a lot more conceptual work to do before you have a viable model
for the mind.

" [Harnad's "Robotic Functionalist Reply"] strikes me as rather like the
" point-missing "Robot Reply" in Searle, despite your disclaimers. I
" thought that the "Stephen Hawking argument" was a rather good reply to
" this stuff. What's important for subjective experience is a brain
" state, not a bodily state; and AI claims to be able to simulate any
" brain state whatsoever

According to Robotic Functionalism, the device -- the "inner core," the
"brain-in-a-vat," or whatever you like -- that will be able to
successfully pass the Linguistic version of the Turing Test (LTT)
(symbols-in, symbols-out) will have to have and draw upon the internal
causal wherewithal to pass the Total (robotic) Turing Test as well
(even if it does not have to display it behaviorally). I'm sure Stephen
Hawking has that inner core; and it's just a current blinkered fantasy
that that inner core consists of nothing but a symbol-cruncher!
Hawking's intact internal nonsymbolic (brain) functions are crucial to
his having a mind whether or not he can or does display them in any
other form than a verbal one. (Or didn't people in AI know that if
you yanked off from the brain the "body" and all its sense organs --
some of which happen to be PART of the brain, by the way -- you weren't
just left with a digital computer?)

" if we were all made of paper we'd say the same thing: "It's easier and
" safer to assume that neuro-thingies don't support TRUE experience; and
" after all we have no direct evidence for it, only their meaningless
" claims. So lets just ignore anything which these systems have in
" common (viz. extreme complexity, intelligent behaviour) and just
" concentrate on their differences."... I don't want to be inflammatory,
" but it sounds not unlike many an argument used by a racist in days gone by.

And if my grandmother had wheels, or the world were one-dimensional,
or stones had minds... You can't make a counterfactual and implausible
conclusion seem more plausible by simply adopting it as a premise.

Ref: Harnad (1989) Minds, Machines and Searle. Journal of Experimental and
Theoretical Artificial Intelligence 1: 5-25
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

sher@sunybcs.uucp (David Sher) (03/02/89)

Just to test out what is and isn't a symbolic system, consider a
stochastic context-free grammar.  This is a grammar that has
a probability associated with each production.  Thus each element of the 
language it accepts has a probability associated with it (along with
a certain probability that the machine never outputs anything).  
Now consider a machine that takes a stochastic grammar and an input string
and outputs the most probable parse tree for the input.  Is this machine
doing symbolic processing?  Or is more information required to answer this
question?  Assume it used a modified form of Earley's algorithm to do
this.  Now is it doing symbolic processing?  

If enough people are interested in how to modify Earley's algorithm to 
accept stochastic grammars I can post (or even write a paper on the topic
if it hasn't been done yet).  It's fairly trivial.
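
[A toy illustration in Python of scoring parses under a stochastic
grammar.  The grammar, probabilities, and string are invented, and the
brute-force scoring below is not the modified Earley's algorithm
mentioned above -- it only shows what "most probable parse" means:]

    # Productions with probabilities; the alternatives for each
    # left-hand side sum to 1.  (Grammar invented for illustration.)
    PCFG = {
        ("S", ("A", "A")): 0.7,
        ("S", ("B",)):     0.3,
        ("A", ("x",)):     1.0,
        ("B", ("x", "x")): 1.0,
    }

    def tree_prob(tree):
        """Probability of a derivation = product of its production
        probabilities.  A tree is (label, children), where each child
        is either a subtree or a terminal string."""
        label, children = tree
        rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
        p = PCFG[(label, rhs)]
        for c in children:
            if not isinstance(c, str):
                p *= tree_prob(c)
        return p

    # Two parses of the string "x x"; the machine described above
    # would return the more probable one.
    t1 = ("S", [("A", ["x"]), ("A", ["x"])])
    t2 = ("S", [("B", ["x", "x"])])
    print(max([t1, t2], key=tree_prob))   # t1, with probability 0.7

[Whether the probabilities change the answer to "is this symbolic
processing?" is, presumably, the question being posed.]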


-David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

jgn@nvuxr.UUCP (Joe Niederberger) (03/02/89)

In article <17923@iuvax.cs.indiana.edu> dave@duckie.cogsci.indiana.edu (David Chalmers) writes:
 
<lots of good stuff deleted>

>It is a very mysterious question indeed how real understanding, subjective
>experience and so on could ever emerge from a nice physical system like
>the human brain, which is just toddling along obeying the laws of physics.
>But nevertheless we know that it does, although we don't know how.

>one.  Just remember, semantics CAN arise from syntax, as long as the 
>syntactical system is complex enough, and involves manipulating 
>micro-structural objects which interact in rich and subtle ways.  

Now, I am not religiously convinced either of the truth or falsity
of the above statement, but I can't help noticing the fervor implied
by the capitalized "CAN." But isn't it the point of this discussion
to present evidence supporting or contradicting a held belief? If I
were to grant that David's argument against Searle's "proof" was
valid, I may still be unmoved (and logically uncompelled) to agree
with his claim that semantics CAN arise from syntax. If the
reference to the human brain is the evidence he offers, I ask: why
must I view the brain as a syntactical system?

Yes, it may be an interesting hypothesis that the brain's essential
function is to serve as a syntactical system, (and this may deserve
further investigation,) but lack of a disproof doesn't serve as a
proof for me.


Joe Niederberger

sarima@gryphon.COM (Stan Friesen) (03/02/89)

In article <4307@cs.Buffalo.EDU> sher@wolf.UUCP (David Sher) writes:
>
>I probably blew it, being far from an expert in rhetoric, but this seems
>to be the nub of the problem.  Does anyone believe that they can build a
>machine with a soul?  It is just as easy to build in Searle's "understanding."
>
	Yes, I do.  Of course this is at least partly because I have in mind
the Jewish rather than the Greek definition of soul!

	By the way, I also believe that the Chinese Room as specified by Searle
is impossible.  I do not believe that a fully native competence in a language
may be achieved by pure symbol manipulation using predefined rules.  A certain
amount of world knowledge and "common sense" must also be applied.  I base this
in part on my experience translating technical Russian using only a dictionary
and a skeleton grammar.  I could not have done it without "understanding" the
Russian as I went, thus I would have been at a loss trying to translate a
book on something I did not know anything about.

-- 
Sarima Cardolandion			sarima@gryphon.CTS.COM
aka Stanley Friesen			rutgers!marque!gryphon!sarima
					Sherman Oaks, CA

dave@cogsci.indiana.edu (David Chalmers) (03/02/89)

harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>Of course semantics arises out of nonsemantic objects. But there are
>nonsemantic objects and nonsemantic objects -- and scratches on paper
>(even when implemented as symbol-crunching computer programs) do not
>seem to be the right kinds of objects.

I think I'll just let this 'argument' stand as it is, displayed in all
its glory.

> Likewise there are patterns and patterns. 

"A pattern is a pattern is a pattern" - G. Stein.

So: we both agree...
     meaningless NEURONS
     are related in COMPLEX ways
     to form representational PATTERNS
     which support a mind
but
     meaningless SYMBOLS
     are related in COMPLEX ways
     to form representational PATTERNS
     which...?

I leave the reader to draw her own conclusion.

>[On complexity supporting information.]
>But necessary conditions are not sufficient conditions. And mere
>complexity will not you a mind get. There's complexity and complexity;
>and a lot more conceptual work to do before you have a viable model
>for the mind.

Indeed you're right, and I don't expect this problem to be solved
overnight.  But see my forthcoming "Mind, Pattern and Information"
(tentatively retitled "The First-Person and the Third-Person: A
Reconciliation").  Not all complexity supports information, and not
all complexity supports a mind either.  But at the bottom line the
criterion lies in the _structure_ of the complex system and not in
the raw materials.

  Dave Chalmers
  Center for Research on Concepts and Cognition
  Indiana University

dave@cogsci.indiana.edu (David Chalmers) (03/02/89)

jgn@nvuxr.UUCP (22115-Joe Niederberger) writes:
>dave@duckie.cogsci.indiana.edu (David Chalmers) writes:
>>[...] Just remember, semantics CAN arise from syntax, as long as the 
>>syntactical system is complex enough, and involves manipulating 
>>micro-structural objects which interact in rich and subtle ways.  
>
>Now, I am not religiously convinced either of the truth or falsity of 
>the above statement, but I can't help noticing the fervor implied by the
>capitalized "CAN." [...] I may still be unmoved (and logically uncompelled)
>to agree with his claim that semantics CAN arise from syntax. If the
>reference to the human brain is the evidence he offers, I ask: why
>must I view the brain as a syntactical system ?

Apologies for religious fervour.  The capitalization was in response to
Searle's repeated claim that "semantics cannot arise from syntax."  Searle
uses this premise to support his argument.

When Searle talks of "syntax", he is not referring to the usual linguistic
usage of the term.  He applies it to mean "any system of meaningless
objects whose behaviour is determined by formal rules" (or something like
that), because this is the meaning he needs to support his argument.  But
once we see that this is the meaning he is using, we can simply point to
the human brain:
    Meaningless objects (neurons etc) are obeying formal rules (the laws of 
    physics), and yet semantics is indisputably arising.
Counterexample - so game, set and match to the good guys.

>Yes, it may be an interesting hypothesis that the brain's essential
>function is to serve as a syntactical system, (and this may deserve
>further investigation,) but lack of a disproof doesn't serve as a
>proof for me.

I think you probably mean "syntactical" in the linguistic sense here,
which is a sense which neither Searle nor I intended.  But your
interpretation is a very natural one, and I believe that this is
again indicative of the misleading way with which Searle plays with our
intuitions.  When he says "syntax", our immediate image is of linguistic
objects (those high-level sense-(1) symbols, remember?), and of course
in this case syntax is not sufficient for semantics: using sense (1)
symbols syntactically leaves out the most important part, their meaning.
But these intuitions do not apply to the low-level syntax of sense (2)
symbols (which, of course, never had any meaning to begin with - their
meaning lies in the systems they form).

Incidentally, the hypothesis that the function of the brain is to serve
as a syntactical system (in the high-level linguistic sense) is a 
central tenet of many in the "Symbolic" school of AI. (In particular
it is quite explicitly the backbone of Fodor's thinking - see his
"The Language of Thought", if the title doesn't say it all.)  Needless
to say, this is a hypothesis with which I strongly disagree.

  Dave Chalmers    (dave@cogsci.indiana.edu)
  Center for Research on Concepts and Cognition
  Indiana University

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/02/89)

lee@uhccux.uhcc.hawaii.edu (Greg Lee) of University of Hawaii wrote:

" Harnad says there's a "true" sense of 'understand' with
" respect to which we should agree with Searle's terminological
" argument. Maybe so. I don't think so, but even if I'm wrong,
" and Searle has done some first rate linguistics here, the main
" point seems to have been established.  There is no substance to
" the Chinese Room argument -- it's just toying with words.

It's amazing to me how trapped people can be in their preconceptions.
IF Searle's is just a terminological point THEN he is indeed just
toying with words. But if there exists a real experience, called
understanding (vs. not understanding) a language, an experience we all
have, and know perfectly well that we have, and can perfectly well
recognize when we do and don't have, then Searle's is by no means just
a terminological point or word-play, and he is not doing any kind of
linguistics here, first-rate or otherwise.

Now I keep trying to remind people (who have managed to forget or not
notice) that this simple, familiar, sufficiently unambiguous EXPERIENCE
of understanding is the only thing whose presence or absence is at
issue in the Chinese Room Argument. Forget about terminology. Call it
whatever you like. Searle's saying he has it with English and not
Chinese. This is no more a linguistic matter than "My left side aches
and my right side doesn't"!

" you keep citing (purported) facts about language usage and giving
" linguistic analyses to support your views. At least, if the analyses
" you give are intended to have any empirical content, I don't see how
" else they can be construed.

If I remind you what you mean by "My left side aches and my
right side doesn't," I am not giving a "linguistic analysis." I
can't avoid that a verbal discussion should be in words, but we
are not discussing words, we're discussing their referents, and
in this case these are subjective experiences. Facts about
subjective experience are empirical too.

" When you propose this distinction of yours between understanding
" (subjective) and understanding (objective), how are we to take this?...
" you seem to think... there really *is* a distinction of the sort you
" claim, and that it's a matter to which some sort of factual evidence is
" relevant. And the only evidence you offer concerns language usage, so
" when you say it's "simply not a linguistic matter", how can we believe
" you? You're simply wrong.

It's a peculiar feature of coherent, substantive distinctions that
"there really *is* a distinction" there. In this case it's between
two "things," both called by the name "understanding." One of them
is the experience that we were discussing: Those of us who have not
bought into a contemporary ideology (and the rest of us before they
bought into the ideology) knew perfectly well what it was like to
understand or not understand a language, from the first-person
standpoint. That's (subjective) understanding. We also distinguished
objective features that tended to accompany this subjective 
understanding, in ourselves and in others too, and we called that
understanding (objective) too:

I know what it's like to (subjectively) understand English (and so do
you). And he speaks and acts AS IF he (objectively) understands
English. The empirical evidence for the existence of the former is our
1st-person subjective sense of what it's like to understand English.
The empirical evidence for the existence of the latter is the objective
verbal and behavioral data that tend to accompany the former and that
we take to constitute expressions of objective understanding.

I could do exactly the same (nonlinguistic) number on the distinction
between pain and tissue damage, or, for that matter, the distinction
between a left- and a right-sided ache.

" If [the distinction] is intended to be empirical, and facts other than
" those of language usage can be found to suppport it, then it's time for
" you to say what those facts are. Until then, I guess we'll continue to
" talk linguistics.

See above. (Or perhaps I should say "look at" the above, for I can only
make you do the objective thing, not the subjective one. -- I am, by
the way, striving for an OBJECTIVE understanding of the points I'm
making on your part, not just the subjective sense of it...)

" There are several interesting things about... "I'm in pain and he is
" too" but they are susceptible to linguistic analysis, and they have
" nothing to do with 'pain' being "experiential"...

The several things you go on to mention (about Mary, and her steak, and
syntactic scope) may indeed be interesting, and they certainly are
linguistic, but they are not RELEVANT, because, as I have been
suggesting to no avail: This is not a syntactic matter! And the
relevant part has everything to do with pain being experiential.

" 'I seem to be in pain' is a perfectly ordinary thing to say...
" can we take a survey to settle the matter?

Do you really think that the deep issues involved in the problem
of the incorrigibility of subjective experience reduce to a question
about an idiom, about which we can take a survey? Maybe if I spell it
out for you: "It is true that it feels as if I have a splitting
headache right now, but then maybe it's not true that it feels as if I
have a splitting headache right now." THAT's what's at issue when I say
"I only SEEM [stress] to be in pain." (Does the "only" plus the stress
on the "seem" help dispel the inclination to resort to your idiom
again?) And I repeat: I'm not doing a linguistic analysis here. I'm
talking about the "empirical evidence" for pain: It comes in an
incorrigible subjective package. The word-play seems to be [sic] on
your end; but mostly what you are doing is begging the question and
changing the subject (to linguistics).

" I haven't the foggiest idea of what mixing up you are referring to...
" [in the claim that in a later posting Lee mixes up the syntax of the
" (putative) symbolic "language of thought" -- whose existence and nature
" is what is at issue here -- and the syntax of natural languages] I'm
" reasonably sure I never said anything about a "language of thought".
" What are you talking about?

Searle's Argument is about whether thinking is just formal
symbol-manipulation in the (purely syntactic) "language of thought." In
another posting, as in this one, you digressed into irrelevant matters
concerning English syntax. One way to make a self-fulfilling prophecy
of the claim that Searle's Argument is just linguistic is to treat it
only as linguistic. Well you could have gone further, ignoring its
content completely, and only correcting his grammar...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

jps@cat.cmu.edu (James Salsman) (03/02/89)

And another thing for all you neural soup floating-point
numerical dweebs that think a symbol system can't **EVEN
APPROXIMATE** competence in language:

It wouldn't be that hard to write Eliza for Chinese, and
you could even train a human to manually execute the code!
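
Something like the following toy sketch is all the "code" would have to
be (the patterns and canned replies below are placeholder examples of my
own, and Python is just a convenient notation).  A Chinese version would
merely swap in Chinese strings -- the matching machinery never has to
understand either language:

    # Minimal Eliza-style rule table: match a pattern, emit a canned reply.
    import re

    RULES = [
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
        (re.compile(r".*"), "Please tell me more."),   # catch-all default
    ]

    def respond(utterance):
        """Return the reply of the first rule whose pattern matches."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())

    print(respond("I feel confused by the Chinese Room"))
    # -> Why do you feel confused by the Chinese Room?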

:James
-- 

:James P. Salsman (jps@CAT.CMU.EDU)
-- 

matt@nbires.nbi.com (Matthew Meighan) (03/03/89)

dave@cogsci.indiana.edu (David Chalmers) of
Concepts and Cognition, Indiana University writes:

" It is a very mysterious question indeed how real understanding,
" subjective experience and so on could ever emerge from a nice physical
" system like the human brain... nevertheless we know that it does,
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
" although we don't know how. Similarly, it is a mysterious question how
" subjective experience could arise from a massively complex system of
" paper and rules. But the point is, it is the SAME question, and when we
" answer one we'll probably answer the other.


This 'we know that it does' seems, to me, to be a remarkable assertion.
I don't see how any such thing has been (or ever can be) shown, and would be
interested to hear an objective proof of this.

If you want to say "nevertheless I like to assume that it does", that's fine. 

My own view is that subjective experience does not arise from the
brain at all, but vice versa -- that the brain, and the rest of the
physical body, is evolved by consciousness to give itself a vehicle
with which to interact with other consciousnesses.  Understanding
and subjective experience do not exist in the brain at all, but in the
mind, of which the brain is just the most obvious and least-subtle part.

This viewpoint is plainly not provable  -- that is, I can't prove it to
YOU from MY experience.  For me, it is both 'subjectively' true because
it is what I feel, and 'objectively' true because I have observed
phenomena I can't explain any other way.  Those phenomena are
scientific evidence to me, because I know they took place, and I would
have to ignore the data to conclude other than I have.  But they
would be mere hearsay to you, hence no evidence at all.

My real point, though, is that your view that consciousness "arises"
from the physical brain is as purely subjective as mine that it is the
other way around.  It seems to me that this assertion is a leap of
faith,  resembling more a religious conviction than a scientific one.

You might be right, though, that the two questions you pose are the
same one, and that the answer to one answers the other.  It is
self-evident to me (though not necessarily, of course, to anyone else)
that the answer to both questions is "It doesn't."  Perhaps (in what I
think is the reverse of the sense in which you said that if we answer
one question we can answer the other) we should take the fact that
understanding does NOT emerge in computer programs as evidence that it
does NOT emerge in brains, either.  

One could argue (and some have) that understanding MAY arise in some
computer program someday, that the ones we have are just not complex
enough.  But then we are really drifting from the facts into pure
speculation, aren't we?  It can be said of anything that it may happen
someday;  the data we have at this point is that this has never happened.
I see no reason to assume that a *quantitative* 'increase in complexity' 
will automatically cause a qualitative change of the magnitude of 
"the emergence of consciousness"; until such a thing takes place there
is no reason to suppose it will.  The more likely outcome is that as
we make programs more complex, we will have exactly the same
qualitatively stupid things we have now -- just more complicated ones.
 
The questions of what constitutes understanding, or intelligence, are 
intrinsically interesting and important ones.  But I am not sure that
they are very important to AI.  All we have to do is create machines
that APPEAR to be intelligent (challenge enough!).  We seem to be
debating the question "Can we make machines that actually understand,
in the sense that we do?"  As a human being, my answer tends to be "of
course not!".  But as a programmer, I would more likely respond "What
difference does it make?"

-- 

Matt Meighan          
matt@nbires.nbi.com (nbires\!matt)

bwk@mbunix.mitre.org (Barry W. Kort) (03/03/89)

In article <917@jhunix.HCF.JHU.EDU> ins_atge@jhunix.UUCP
(Thomas G Edwards) writes:

 >      However, in the Chinese Room experiment, we are assuming that
 > the rule operator has indeed been endowed with rules to operate
 > on, and as such these rules are de facto sensory input from the
 > outside.  

 >      Moreover, the incoming Chinese is also sensory input.
 > Rules may exist which change due to incoming Chinese.

Perhaps Stevan can clarify this point for us, because I believe
it is pivotal.  In Searle's thought experiment, are the rules
immutable, or do they evolve as a function of the information
contained in the Chinese stories?

As I recall, Searle set it up so that the rules didn't change
as a function of the Chinese input.

To my mind, a system which understands is a system which integrates
new information into an expanding knowledge base, and this includes
new and improved information-processing techniques (i.e., the "rules").
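
In code, the contrast is just between a fixed lookup table and one that
is rewritten by its own input.  A toy sketch (the class and the sample
symbols are hypothetical, and Python is only a convenient notation):

    # A rule base whose rules are revised by the incoming symbols, so the
    # "rules" are a function of what the system has already read.
    class RuleBase:
        def __init__(self):
            self.rules = {}                  # symbol-in -> symbol-out

        def process(self, symbol_in, correction=None):
            """Answer from the current rules; if a correction accompanies
            the input, fold it back in (the rules evolve with the input)."""
            answer = self.rules.get(symbol_in, "<no rule>")
            if correction is not None:
                self.rules[symbol_in] = correction
            return answer

    kb = RuleBase()
    kb.process("ni hao", "hello")    # first encounter installs a new rule
    print(kb.process("ni hao"))      # -> hello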

When we talk about "understanding" in human terms, don't we really
mean the ability to gain understanding (as opposed to merely having
a fixed amount of understanding)?

--Barry Kort

arm@ihlpb.ATT.COM (Macalalad) (03/03/89)

In article <Feb.28.10.58.44.1989.18905@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>As stated many, many times in this discussion, and never confronted
>or rebutted by anyone, this is not a definitional matter: I know
>whether or not I understand a language without any need to define
>anything.

Stevan, no one is disputing whether or not you know whether you understand
a language, least of all me.  Maybe my argument wasn't too clear, so
let me try to clarify things.

Argument: In order to come to any resolution about what systems can
or cannot understand, we need an objective theory of understanding,
rather than the behaviorist "I'll know it when I see it" tests such
as the Linguistic Turing Test (LTT) or the Total Turing Test (TTT).

Now you may be totally confident that you know when you understand
a language, but to conclude that you know when another entity understands
a language is a leap that I'm not quite ready to make.

The problem I have, of course, is the Other Minds Problem, which
roughly stated is the problem of knowing for certain whether other
entities understand/comprehend/are conscious/etc.  Descartes had
God, and you have your Total Turing Test, neither of which are
truly satisfactory to me (although I do have faith in God :-).

As I understand it, TTT merely involves applying our practical solution
of the other minds problem to AI systems.  In other words, TTT says,
"I'm not going to define how a system understands a language, but
I'll know it when I see it."

Meanwhile, I can turn around and say, "It sure acts like it understands,
but look at the underlying architecture.  It's a machine, and MACHINES
CANNOT UNDERSTAND. (or it's a formal system, or it's non-biological, or
it's only a simulation, or any other hokey excuse)"

Now we're at a standoff, where one believes that the given system does
understand because it passed the TTT, and the other just as
firmly believes that it does not understand because the underlying
architecture is incapable of believing.  All because of the refusal
to commit to a definition.  Both camps hold different common sense
notions of understanding, one based more on behavior and the other
based more on underlying architecture.  Both are practical solutions
to the other minds problem, and both predict different things for
this one solution.  Which one is right?

>(2) The TTT (fine-tuned eventually to include neuronal "behavior" too)
>encompasses all the available empirical data for the mind-modeler.
>(The only other data are subjective data, and I advocate methodological
>epiphenomenalism with respect to those.) The LTT, on the other
>hand, is just an arbitrary subset of the available empirical data.

I don't know about you, but I certainly don't include neuronal behavior
in any of my practical solutions to the other minds problem.  And how
can you make sense out of such data without having some theory of
understanding, besides our intuitive, subjective ones?  I know that I
don't have any meaningful intuitions about neuronal behavior.  Could it
be that maybe deep down inside, you might think that a theory of
understanding would be useful?  And maybe there is in fact a definitional
issue in here after all?

>" Let's take a variation of the Chinese room where the purpose of the
>" room is to interpret [Chinese] BASIC instead of to understand
>" Chinese... Is it fair to conclude, then, that the system of the person
>" in the Chinese BASIC room and her bits of paper is not really
>" interpreting BASIC?...  Now who's in the grip of whose ideology?
>
>The suspicious reader who might think I stacked the cards by clipping
>out the ARGUMENTS in pasting together the above excerpt will be
>surprised to see, upon reading the entire original posting, that
>there ARE no arguments: The Chinese Room has simply been reformulated
>in Chinese Basic, and voila! (There's also a double-entendre here
>on the syntactic vs. the mentalistic meaning of "interpret.") Mere
>repetition of credos is yet another symptom of ideological grippe.

Thank you for pointing out that Searle (and I) aren't really arguing
here, but merely following different ideologies to different
conclusions.  As for the double meaning of "interpret," I'll take
the "practical" meaning: "*Interpreting BASIC* means *Running my
program* and I'll know it when I see it." :-)

-Alex

engelson@cs.yale.edu (Sean Engelson) (03/03/89)

In article <2483@crete.cs.glasgow.ac.uk>, gilbert@cs (Gilbert Cockton) writes:
>In article <51123@yale-celray.yale.UUCP> engelson@cs.yale.edu (Sean Engelson) writes:
>>(Searle + rules) understanding of Chinese?  It seems that to
>>demonstrate or refute the position of understanding being demonstrable
>>purely through I/O behavior, one must have an effective definition of
>>understanding.  By effective I mean one that does not beg the
>>question, i.e. by defining understanding to be symbol-processing, or
>>conversely, to be that which humans do.
>
>Sorry, but your constraints are a little weird.  Understanding *IS*
>what humans do.  It *MAY* involve symbol processing.  What question is
>begged?  What is an effective definition?

What I meant was rather "that which _only_ humans do", i.e. a priori
ruling out any form of non-human understanding.

>Look how well physicists manage with "force", "charge", "gravity".
>You cannot ask commentators on humanity for "definitions" that are
>any less (fast and) loose than those used by commentators on nature.

But a physicist can give me a simple and effective procedure by which
I can measure the charge of a body, or the force of gravity.  I have
seen no such procedure or criterion for recognising understanding,
other than I/O equivalence with that which we call understanding in
humans.  Under that criterion, the Chinese Room understands.

>The question for the strong AI brigade is:
>
>"Given the normal usage of understanding, what grounds are there for
>attributing it 
>to computers, and why bother anyway"
>
>While we're at it, what about those hallucinogenic thermostats with beliefs.
>Whatever it was, don't eat it again :-)

The normal usage of understanding is that if (a) someone says they
understand, and (b) they act as if they do, then they understand.
Unless you have a theory of understanding that rules out physical
symbol systems (and neural nets are symbol systems too!), then I see
no reason not to attribute understanding to computers.  Why bother
anyway?  Excellent question.  I see no real purpose in it, except that
the anti-attribution thereof is used as a criticism of AI,
incorrectly.  If it looks like a duck, and it acts like a duck, and it
quacks like a duck, then what does it matter if it understands in an
epistemological sense or not?


----------------------------------------------------------------------
Sean Philip Engelson, Gradual Student	Who is he that desires life,
Yale Department of Computer Science	Wishing many happy days?
Box 2158 Yale Station			Curb your tongue from evil,
New Haven, CT 06520			And your lips from speaking
(203) 432-1239				   falsehood.
----------------------------------------------------------------------
Nondeterminism means never having to say you're wrong.

arm@ihlpb.ATT.COM (Macalalad) (03/03/89)

In article <Feb.26.15.55.22.1989.7914@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>There are two senses in which Searle is advocating "looking inside": One
>is to look at the functions of the brain, because we have pretty good
>reason to believe that candidates with brains have minds (because, as I
>would put it, candidates with brains can pass the TTT). The second sense
>of "inside" is the first-person test for subjectivity, which we can all
>perform on ourselves. It's THAT "causal power" that he reminds us
>brains have but symbol-crunchers do not. My reply is that candidates
>OTHER than the brain that can pass the TTT (if and when we come up with
>any) are immune to his Chinese Room Argument that they cannot have a
>mind (though, of course, I repeat, no objective test can demonstrate
>that anyone, EVEN ourselves, has a mind). Searle's argument against
>(hypothetical) candidates that pass the LTT only, with symbols only,
>is decisive, however.

AHA!  We finally have a definition from Stevan about understanding,
or at least a prerequisite to understanding, namely, the ability to
introspect.  In light of this prerequisite, let me see if I understand
Searle's argument:

(1) In order to understand, a system must be able to introspect.
(Sort of like "I think, therefore I am.")

(2) A given entity is the best judge of what it can or cannot understand,
given that the entity is capable of introspection.

(3) From (1), in order for the Chinese room to understand, there must
be an introspecting agent.

(4) The human in the Chinese room is clearly capable of introspection.
(If not, substitute yourself for the human in the Chinese room.)

(5) From (3) and (4), the human is the introspecting agent in the formal
system, if indeed the formal system has one.

(6) From (2) and (5), the human is the best judge of what the system can
or cannot understand.

(7) The human, upon introspection, concludes that he or she does not
understand Chinese.

(8) From (6) and (7), the system does not understand Chinese, although
it appears to outside observers that it can.

Do I fairly characterize Searle's argument?  If so, I think that (5) is
clearly the weak point in the chain.  And although I've seen Stevan
staunchly defend some of the other points, which I personally don't
have serious problems with, I haven't really seen him address this
point, other than to argue that it is obvious to everyone who hasn't
been brainwashed by a Yale education.  Being a Yalie myself, I
wonder if Stevan would run through the argument a bit slower for
my benefit.

The systems reply focuses on the weakness of (5), stating that the
introspective agent is not the human, but the formal system.  If we
then say that the formal system is incapable of introspection, then
why are we going through the exercise of Searle's argument?  Aren't
we assuming that the formal system is incapable of understanding in
order to prove that it's incapable of understanding?

Isn't the hidden premise of (5) really that if the formal system is
capable of introspection, then the agent of introspection must
necessarily be the agent which executes the rules?  To that hidden
premise comes the argument that neurons are the agents of the
brain which execute our "rules" of understanding, yet they certainly
don't seem to be introspective.

So unless Stevan can show me otherwise, I can't see how Searle's
argument is logically compelling.  It isn't even intuitively compelling
to me, but then again, I'm in the grips of an ideology, and a Yalie
on top of that. :-)

>I've always thought this reasoning was quite easy to understand, but from
>the fact that very few people have given me any objective evidence that
>they've understood it, I've concluded that it must be difficult to
>understand. Maybe by trying to put it slightly differently each time,
>tailoring it to the latest misunderstanding, I'll succeed in making it
>understood eventually...

I'm getting optimistic here, Stevan.  I do believe we're making some
progress.  Maybe a few more iterations before we reach an agreement?
Nah.

-Alex

ellis@unix.SRI.COM (Michael Ellis) (03/03/89)

> Jeff Dalton >> David Sher

>>Does anyone believe that they can build a machine with a soul?  It is
>>just as easy to build in Searle's "understanding."
>
>It's certainly true that it's hard to see what could ever convince
>Searle that anything had understanding.  

    Then you missed something. Searle is *already convinced* that at least:

    1. Searle has it
    2. Other humans have it

-michael

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/03/89)

arm@ihlpb.ATT.COM (Macalalad) of AT&T Bell Laboratories writes:

" Argument: In order to come to any resolution about what systems can
" or cannot understand, we need an objective theory of understanding,
" rather than the behaviorist "I'll know it when I see it" tests such
" as the Linguistic Turing Test (LTT) or the Total Turing Test (TTT).

Counterargument: To ascertain (beyond reasonable doubt) that a system
CANNOT understand, you don't need a theory. Searle's argument is a case
in point: If Searle (or you, or me) does exactly what the computer does
but does not understand, then the computer does not understand.

" you may be totally confident that you know when you understand
" a language, but to conclude that you know when another entity understands
" a language is a leap that I'm not quite ready to make.

No need to make the leap. Just know when you yourself don't understand
(in doing exactly what the symbol cruncher does) and infer that
nothing/no-one else doing exactly the same thing can be understanding
either.

" As I understand it, TTT merely involves applying our practical solution
" of the other minds problem to AI systems. In other words, TTT says,
" "I'm not going to define how a system understands a language, but
" I'll know it when I see it."

No. The logic of the TTT is this: I have no other basis but the TTT for
my confidence that other PEOPLE have minds, therefore it would be arbitrary
of me to ask MORE of robots. This is only a practical, not a principled
solution to the other-minds problem, however. Hence the same uncertainty
remains, in both cases (human and robot).

" I can turn around and say, "It sure acts like it understands, but look
" at the underlying architecture. It's a machine, and MACHINES CANNOT
" UNDERSTAND. (or it's a formal system, or it's non-biological, or it's
" only a simulation, or any other hokey excuse)... Now we're at a
" standoff, where one believes that the given system does understand
" because it passed the TTT, and the other other just as firmly believes
" that it does not understand because the underlying architecture is
" incapable of believing. All because of the refusal to commit to a
" definition.

To say "machines can't understand" is to beg the question. (We don't
even know what "machines" are -- and aren't -- yet.) "Wrong
architecture" simpliciter is arbitrary too: What's the "right"
architecture? No one knows what the brain's functional "architecture"
is, or what aspects of it are necessary or sufficient for having a
mind. To say it doesn't understand because it's nonbiological is also
to beg the question.

To say it doesn't understand because it's just doing formal symbol
manipulation calls for an ARGUMENT: Searle has given one. ("Simulation"
is equivocal; even Searle's "simulated forest fires don't burn"
argument is enough to handle that -- but it all boils down to whether
symbol manipulation alone is enough not only to simulate the mind but
to implement it.)

So it still has nothing to do with definition; and there's clearly
room for plenty of hokeyness on both sides. (You made a crucial
error, by the way, in referring to the TTT above; you should have
said the LTT. That's the one the two sides are disagreeing on, and
that's the one Searle's argument is decisive against. The TTT is
immune to Searle's argument and I've so far heard no non-hokey objection
to it.)

" I don't know about you, but I certainly don't include neuronal behavior
" in any of my practical solutions to the other minds problem.

I certainly don't either. That's why I don't accept "wrong internal
functions" as an argument in itself: We have no idea what internal
functions are "right" or why; and only the TTT can lead us to an
answer. However, as I said, down the road a ways toward TTT utopia,
brain "performance" may eventually provide a useful "fine tuning"
variable on our near-asymptotic candidate. (The main problem with
brain function when you're far from utopia -- besides the fact that
we don't know what it is, and have no idea how we could find out
by peeking and poking at the brain -- is that we don't even know
what aspects of it are relevant and what aspects are irrelevant.)

Ref: Harnad (1989) Minds, Machines and Searle. Journal of Experimental
and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/03/89)

arm@ihlpb.ATT.COM (Macalalad) of AT&T Bell Laboratories writes:

" (1) In order to understand, a system must be able to introspect.

"Introspect"? Let's not unnecessarily multiply our mysteries: In order
to understand, a candidate must experience [ = feel, undergo the
subjective state of] what we experience [feel, undergo the subjective
state of] when we understand. (Prerequisite: It must be able to
EXPERIENCE [feel, undergo subjective states] simpliciter.)

" (2) A given entity is the best judge of what it can or cannot understand,
" given that the entity is capable of introspection.

If a candidate is capable of experience at all, it is the only one
that can know it.

" (3) From (1), in order for the Chinese room to understand, there must
" be an introspecting agent.

All this fancy formalism and inference is not necessary: For a candidate
to understand, someone/something must be experiencing understanding.

" (4) The human in the Chinese room is clearly capable of introspection.
" (If not, substitute yourself for the human in the Chinese room.)

Agreed -- except for the unnecessary, uninformative extra mystery
term "introspection": Humans can understand (and in this case,
understand English but not Chinese).

" (5) From (3) and (4), the human is the introspecting agent in the formal
" system, if indeed the formal system has one.

This is getting too complicated. What does "having an introspecting
agent" mean? Why clutter a simple, straightforward argument with
arbitrary, point-obscuring extra baggage? Searle is in the room, doing
everything the computer does, but understanding no Chinese. Therefore
the computer understands no Chinese (or anything at all) when it's
doing the very same thing.

" (6) From (2) and (5), the human is the best judge of what the system can
" or cannot understand.

"The human," in case you've forgotten, is the only one in there, besides
the chalk and blackboards! I'll let you be the judge of how good a judge
the chalk, or Searle-plus-chalk makes...

" (7) The human, upon introspection, concludes that he or she does not
" understand Chinese.

Have it your way. I'm satisfied to say the only thing in sight (and the
only one doing anything) doesn't understand Chinese.

" (8) From (6) and (7), the system does not understand Chinese, although
" it appears to outside observers that it can.

Don't forget that there was a PREMISE in all this, which Searle
adopted, from Strong AI, FOR THE SAKE OF ARGUMENT, which was that the
LTT (sic) could be successfully passed (till doomsday!) by symbol
manipulation alone. Hence you are merely reading back the premise when
you remind us of the surprising fact that the Chinese LTT is being
passed, i.e., the symbols coming out are consistently and coherently
interpretable as discourse from an out-of-sight Chinese interlocutor.
It is of course quite possible that this premise is false. (I, for one,
believe it is false, and in my paper I give reasons why.) But repeating
the premise alone does not invalidate Searle's argument: On the contrary,
Searle's argument goes some way toward invalidating the LTT.

" Do I fairly characterize Searle's argument? If so, I think that (5) is
" clearly the weak point in the chain. And although I've seen Stevan
" staunchly defend some of the other points, which I personally don't
" have serious problems with, I haven't really seen him address this
" point, other than to argue that it is obvious to everyone who hasn't
" been brainwashed by a Yale education.

You characterized a simple argument in a fairly complicated way, with
enough arbitrary extra baggage to obscure its simple point. Reread my
comment after (5) above and try to think back to life before Yale...

" The systems reply focuses on the weakness of (5), stating that the
" introspective agent is not the human, but the formal system. If we
" then say that the formal system is incapable of introspection, then
" why are we going through the exercise of Searle's argument? Aren't
" we assuming that the formal system is incapable of understanding in
" order to prove that it's incapable of understanding?

Look, do you think an inert book of rules is capable of understanding?
If you do, then you'll have no trouble believing that stones, chalk,
constellations and tea-leaves are capable of understanding too, and I
certainly won't be able to prove you wrong. (Animism or panpsychism --
the belief that anything and everything can have a mind -- is the other
side of the other-minds problem.) But if we assume that we need a bit
more than that -- say, the ability to pass the LTT [sic] -- then
Searle's Argument is there to show us that that's just not good enough,
because he can pass the LTT for Chinese without understanding Chinese.
And he's all there is to the "system."

Ref: Harnad (1989) Minds, Machines and Searle. Journal of Experimental
and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (03/03/89)

In article <232@nbires.nbi.com> matt@nbires.UUCP (Matthew Meighan) writes:
>
>My own view is that subjective experience does not arise from the
>brain at all, but vice versa -- that the brain, and the rest of the
>physical body, is evolved by consciousness to give itself a vehicle
>with which to interact with other consiousnesses.  Understanding
>and subjective experience do not exist in the brain at all, but in the
>mind, of which the brain is just the most obvious and least-subtle part.
>
Of course, Bishop Berkeley demonstrated long ago that all reality may
be subjective and we can't know anything for certain.  But once that
interesting speculation is made, you can't get much further.
If the brain is just part of the mind, what are the other parts?
The spirit, soul or some other animus?  Some other part of the body?
Something that isn't part of the body?

>This viewpoint is plainly not provable  -- that is, I can't prove it to
>YOU from MY experience.  For me, it is both 'subjectively' true because
>it is what I feel, and 'objectively' true because I have observed
>phenomena I can't explain any other way.

Exactly what were those phenomena?  Perhaps others can explain them
as arising from the brain.  Just because you can't explain them
doesn't mean they are inexplicable.  Or are they also something
you can not even communicate, something mystical?  (If so, follow
ups to talk.religion.misc).

>
>My real point, though, is that your view that consciousness "arises"
>from the physical brain is as purely subjective as mine that it is the
>other way around.  It seems to me that this assertion is a leap of
>faith,  resembling more a religious conviction than a scientific one.
>
Not so.  The idea that the mind comes from the brain is scientific
and is substantiated by evidence that damage to the brain causes 
damage to the "mind".  In fact, all of the cognitive processes that
most people think of as being the mind can be damaged or abolished
by lesions made to specific parts of the brain.  Thus a properly
functioning brain is necessary to consciousness and to the elements
of human personality.  Is it sufficient?  We won't know until we
create a brain and it acts conscious.  Even then, you might argue
that even though we created the brain, God sent a soul to animate
it.  So the argument could still go on, although its weight would
be diminished.

>one question we can answer the other) we should take the fact that
>understanding does NOT emerge in computer programs as evidence that it
>does NOT emerge in brains, either.  
>
Oh, nonsense!  The brain has 10^12 neurons and many more connections.
None of our puny computers (mostly von Neumann to boot) or programs have
come close to such an engine.  About the best we could model now
is a toad (see Michael Arbib's Rana Computatrix), and that requires
a supercomputer to run.

mike@arizona.edu (Mike Coffin) (03/04/89)

From article <Mar.2.23.56.36.1989.28884@elbereth.rutgers.edu> (Stevan Harnad):
> Searle is in the room, doing everything the computer does, but
> understanding no Chinese. Therefore the computer understands no
> Chinese (or anything at all) when it's doing the very same thing.

So what?  No one said that a bare stored-program computer, without
benefit of algorithms, can understand Chinese.  And no one says that
the algorithm, in book form, understands Chinese.  Does this prove
that the combination of the two can't understand anything?  Why?  To
me, this little lemma seems to be the crux of the whole "proof."
I haven't seen it addressed yet, much less demonstrated.  You DO need
to show this!  There is ample evidence that a computer running an
algorithm can have properties that neither the computer (without
algorithm) nor the algorithm (without computer) have.
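
A trivial concrete case (a toy example of my own, not anything from
Searle's setup): the bare interpreter below doesn't reverse strings,
and the inert list of "rules" doesn't either; only the two running
together do.

    PROGRAM = ["PUSH_INPUT", "REVERSE", "OUTPUT"]    # the inert rule book

    def run(program, text):
        """A bare-bones interpreter that just dispatches on opcodes."""
        stack = []
        for op in program:
            if op == "PUSH_INPUT":
                stack.append(text)
            elif op == "REVERSE":
                stack.append(stack.pop()[::-1])
            elif op == "OUTPUT":
                return stack.pop()

    print(run(PROGRAM, "Chinese Room"))   # -> mooR esenihC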
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

gss@edsdrd.eds.com (Gary Schiltz) (03/04/89)

In article <7653@venera.isi.edu>, smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
-> In article <125@arcturus.edsdrd.eds.com> gss@edsdrd.eds.com (Gary Schiltz)
-> writes:
-> >
-> > [anecdote about doing calculus without "understanding" calculus]
-> >
-> 
-> This brings me to my second point.  At his "gut level" Gary felt,
-> introspectively, that he really did not understand calculus.  ...
-> ...
-> ... I, for one, would like Gary to attempt
-> to probe further as to just WHY, at that gut level, he felt understanding was
-> eluding him.  Did it have to do with problems he could not solve?  Did his
-> eyes glaze over whenever he saw integral signs in the pages of a book?  Did
-> he just feel that he was struggling more than his fellow students to solve
-> problems?  Perhaps if we probe these matters deeper, we may yet return to
-> my initial point:  that Gary's "gut level feeling" may leave something to
-> be desired as a criterion for understanding.  

I'm not sure how far this is probing, but ...

I feel I didn't understand calculus because I didn't know what in the "real world"
was being represented by the equations I was solving.  For example, I didn't know
that dY/dX represents the change in Y given an infinitely small change in X.  I just 
knew that "dY/dX" was referred to as the "derivative of Y with respect to X", that
Y and X were variables and that the derivative could be generated for an equation
by manipulating it in a certain way.  I also happened to have picked up a certain
amount of knowledge about how to map word problems into equations that could be
differentiated and solved.  My feeling that I lacked understanding is not the result 
of lack of competence.  I did have a lack of competence, but that's a separate issue.
The real issue is my lack of a mental map between a method (differentiation) and
something in the real world (determining instantaneous rate of change).
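
Spelled out (this is just the standard textbook statement, nothing
special to my course), the map I was missing is, in LaTeX notation:

    \[
    \frac{dY}{dX} \;=\; \lim_{\Delta X \to 0} \frac{\Delta Y}{\Delta X}
    \]

so if position is s(t) = t^2, then ds/dt = 2t, and the instantaneous
rate of change (the velocity) at t = 3 is 2 * 3 = 6.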

-> ... (One last question to Gary:
-> Can you identify a moment at which you said, "NOW I understand calculus;"
-> and can you recall the circumstances of that moment.)

The closest I can remember is that in the "second time around" course, I 
formulated the equations to describe word problems much more easily.  This 
was the result of two skills I had obtained.  First, I could visualize the 
solution to a word problem in terms of trying to find out about the rate of 
change of some quantity (velocity, for instance) with respect to something else
(time, for instance).  To me, this seems to boil down to an ability to analyze a 
physical system and create an abstract model of it.  I believe I had this skill 
even the first time I took the course.  Second, I understood what derivatives 
stand for in the real world, i.e. rates of change.  This is the "understanding" 
that I lacked in the first course.  Without this piece of knowledge, I could not 
come up with the equations to be used in my abstract model of a problem in order 
to solve that problem.

Another trivial fact (possibly unnecessary): I use calculus so seldom these days 
that I'm not sure I understand it any better than I did the first time I took it 
over ten years ago.  In any event, I think I understand understanding even less
than I understand calculus.  Oh well, as Kurt Vonnegut, Jr. might say,

"Hi Ho."

-----

     /\   What cheer,  /\       | Gary Schiltz, EDS R&D, 3551 Hamlin Road |
    / o<    cheer,    <o \      | Auburn Hills, MI  48057, (313) 370-1737 |
\\/ ) /     cheer,     \ ( \//  |          gss@edsdrd.eds.com             |
   \ /      cheer!!!    \ /     |       "Have bird will watch ..."        |

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/04/89)

From article <Mar.2.01.19.30.1989.14279@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):
" ...
" It's amazing to me how trapped people can be in their preconceptions.

I confess.  It's just this silly Yale-propagated ideology 'science'
that gave me this urge to seek theory + evidence lurking somewhere
behind your words.  I think I'm getting on the right wavelength
though, at last.

" ...
" I could do exactly the same (nonlinguistic) number on the distinction
" between pain and tissue damage, or, for that matter, the distinction
" between a left- and a right-sided ache.

Yes, I think you could.  Let me test my understanding by giving it
a try myself, but I'll choose a different example.  There's a difference,
I shall claim, a real substantive distinction, between
understanding-in-the-morning and understanding-in-the-afternoon.
What's my evidence?  Consider "I finally understood Harnad at 3pm."
The word 'understood' here cannot mean understood-in-the-morning.
That seems very clear (clearer even than the "I don't seem to be
in pain" example).  Does this have to do with the language
category of the phrase 'at 3pm'?  No, since it is straightforward
to construct similar examples with different syntactic structure.
It has to do with the *reference* of 'at 3pm'.  So it's not
linguistic.

Say, I could get to like this kind of theory.  So much easier
to find evidence!

		Greg, lee@uhccux.uhcc.hawaii.edu

jackson@freyja.css.gov (Jerry Jackson) (03/04/89)

This is mostly intended for Stevan Harnad....  I tried a few months back
to convince people in this newsgroup that there was a difference between
say: the *experience* of pain and the signal travelling through the
nervous system.. or the *experience* of seeing blue and anything you could
possibly tell a blind person about it.

My conclusion: Most people who post to this newsgroup have no
subjective experience.  No wonder you can't get your point across.
You have obviously been making *very* clear sense, so I can't come up
with any other explanation. :-)

I wonder if Mr. Lee considers the effect of hitting his thumb with a hammer
linguistic.  If I said, "Gee, Mr. Lee, does that *hurt*?".. he would 
probably reply one of two things:

	a) It seems to hurt, but I could be wrong..
or 	b) No, the pain is merely a linguistic abstraction


Once again, anyone who has not seen your point by now probably never will (or
at least not through more attempts to convince them)

I speak from personal experience.

Please continue trying if you are up to it, though, since I (and probably
many others) find your postings refreshingly clear, well thought out,
and interesting.


Now, because I'm a glutton for punishment... Does anyone really think
someone can be wrong about whether or not they are in pain?  Wouldn't
it seem really odd to respond to the statement: "I just stubbed my toe
and boy does it hurt!" with: "No it doesn't." It is in this sense that
Mr. Harnad intends you take the statement "I understand."  You might
be wrong in an objective sense... your model might be inaccurate.  I
think everyone has had the experience of insight, though.  You know
what I mean.  It's what you are referring to when someone asks you if
you understand and you say "Yes".  Do you understand?


--Jerry Jackson

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/04/89)

mike@arizona.edu (Mike Coffin) of U of Arizona CS Dept, Tucson writes:

" No one said that a bare stored-program computer, without benefit of
" algorithms, can understand Chinese. And no one says that the algorithm,
" in book form understands Chinese. Does this prove that the combination
" of the two can't understand anything? Why?... a computer running an
" algorithm can have properties that neither the computer (without
" algorithm) nor the algorithm (without computer) have.

I completely agree with the last proposition, except that understanding
is not one of those properties. Why? Because when Searle stands in for
the computer, doing everything it does, executing all of its algorithms,
he does not understand. Hence neither can the computer understand, when
it does exactly the same thing.

Ref: Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain
Sciences 3: 417-457.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

geddis@polya.Stanford.EDU (Donald F. Geddis) (03/04/89)

In article <Mar.2.23.55.02.1989.28807@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>If Searle (or you, or me) does exactly what the computer does
>but does not understand, then the computer does not understand.

True enough, but then you are defining the "computer" to be the dumb processor
that interprets the rules.  No one claimed that the processor (by itself)
*did* understand.  I still haven't heard a satisfactory rebuttal to the
"Systems Reply", namely that (Searle + Rules) understands, whereas just
(Searle) doesn't.  [To use your analogy:  (Computer Processor + Symbolic
Rules) understands, but just (Computer Processor) doesn't.]

>Just know when you yourself don't understand
>(in doing exactly what the symbol cruncher does) and infer that
>nothing/no-one else doing exactly the same thing can be understanding
>either.

But we *can't* imagine what it would be like to process the Chinese Room
rules, because any set of rules that passed the Turing Test would be far
too large and complex, by orders upon orders of magnitude, for a person to
actually (as opposed to simply in the thought experiment) process them.  And
so this is a case where our intuitions fail us miserably.

>The logic of the TTT is this: I have no other basis but the TTT for
>my confidence that other PEOPLE have minds, therefore it would be arbitrary
>of me to ask MORE of robots. This is only a practical, not a principled
>solution to the other-minds problem, however. Hence the same uncertainty
>remains, in both cases (human and robot).

Not quite true.  I can use a biological argument with humans as well, and say
"they evolved (via evolution) the same way I did, and we belong to the same
species, and (as beings) we were conceived and developed the same way, so if
I can think, then they probably can too."  Robots don't get quite that close
an argument in their favor.

But in any case, I don't think that those who argue for the Turing Test (your
LTT) would disagree that your TTT works as well, since it encompasses the
original test.  It is harder to pass.  I'm not convinced that that is an
advantage, though.

>That's the one [LTT] the two sides are disagreeing on, and
>that's the one Searle's argument is decisive against. The TTT is
>immune to Searle's argument and I've so far heard no non-hokey objection
>to it.

Go through this again, please?  I think that Searle doesn't have an argument
at all, but I fail to see how your TTT test makes any difference at all to
his analysis.  At any rate, it certainly is not obvious that Searle's
argument is decisive, and that your reformulation is immune.  But I'd be
interested in hearing your justifications.

	-- Don Geddis
-- 
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

geddis@polya.Stanford.EDU (Donald F. Geddis) (03/04/89)

In article <Mar.2.23.56.36.1989.28884@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>"The human," in case you've forgotten, is the only one in there [Chinese Room], besides
>the chalk and blackboards! I'll let you be the judge of how good a judge
>the chalk, or Searle-plus-chalk makes...

Since the Systems Reply is exactly that Searle-plus-chalk-plus-blackboard-plus-
rules *does* understand, doesn't this "answer" seem a little unfair?  Rather
than present arguments against the Systems Reply, Stevan seems to be appealing
to our intuitive sense of "well, come on, *everyone* knows that this is
foolish".  But a good many well-educated people who have thought hard about
the problem don't consider it to be foolish, so a little more effort on the
rebuttal is required...

>Don't forget that there was a PREMISE in all this, which Searle
>adopted, from Strong AI, FOR THE SAKE OF ARGUMENT, which was that the
>LTT (sic) could be successfully passed (till doomsday!) by symbol
>manipulation alone.  [without being embedded with sensors, etc.]
>...
>It is of course quite possible that this premise is false. (I, for one,
>believe it is false, and in my paper I give reasons why.)

Strangely enough, I agree with your prediction, and with your solution.  But
that is not important for the Chinese Room argument.  You could start with
a robot that is embedded in the world, and after it achieves full understanding
(the same way humans do:  learning within the context of a society), then you
can disconnect the sensors and effectors and leave only a teletype to the
outside world.  Sounds a lot like Stephen Hawking in real life, no?  Suddenly,
the LTT (i.e., the original Turing Test) returns.  Why make the claim that
the understanding, to which you agreed before, suddenly disappears?

In other words, while you may be correct that understanding (getting the
proper set of rules) is impossible without being embedded in the world, why
make this part of the test?  It's just an implementation issue...

>Look, do you think an inert book of rules is capable of understanding?
>If you do, then you'll have no trouble believing that stones, chalk,
>constellations and tea-leaves are capable of understanding too, and I
>certainly won't be able to prove you wrong.

Not at all.  The complexities of the systems vary widely.  In particular,
only some (very small) fraction of things can pass the Turing Test (LTT).

> (Animism or panpsychism --
>the belief that anything and everything can have a mind -- is the other
>side of the other-minds problem.) But if we assume that we need a bit
>more than that -- say, the ability to pass the LTT [sic] -- then
>Searle's Argument is there to show us that that's just not good enough,
>because he can pass the LTT for Chinese without understanding Chinese.

Tsk, tsk.  No he can't.  The Chinese room does.

>And he's all there is to the "system."

Completely, 100%, false.  Wrong.  Incorrect.  The Chinese Room contains
Searle AND THE RULES.  And the system as a whole DOES understand, as
evidenced by the Chinese answers to Chinese questions.

	-- Don Geddis
-- 
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

mike@arizona.edu (Mike Coffin) (03/05/89)

Stevan Harnad writes:
> I completely agree with the last proposition, except that understanding
> is not one of those properties. Why? Because when Searle stands in for
> the computer, doing everything it does, executing all of its algorithms,
> he does not understand. Hence neither can the computer understand, when
> it does exactly the same thing.

True, but no one said he (or it) did understand.  The argument is that
the SYSTEM --- Searle, or the computer, running an algorithm ---
understands.  Searle corresponds to the bare machine.  Asking Searle
if he understands is equivalent to running a debugger on the "Chinese
algorithm"; the fact that the debugger doesn't understand Chinese is
not an argument that the rest of the system doesn't.

It is interesting that you consistently refuse to talk about the
system as an entity unto itself.  You do this in spite of all the
evidence of your senses --- remember, this system passes the Turing
test!  You have this impressive system in front of you; it
certainly seems to understand Chinese; Searle certainly doesn't; the
rules by themselves certainly don't.  Yet you ignore the evidence and
insist on talking about components of the system as if they were the
system.  It almost looks like you're in the grips of an ideology :-)
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/05/89)

From article <330@esosun.UUCP>, by jackson@freyja.css.gov (Jerry Jackson):
" ...
" Now, because I'm a glutton for punishment... Does anyone really think
" someone can be wrong about whether or not they are in pain?  Wouldn't
" it seem really odd to respond to the statement: "I just stubbed my toe
" and boy does it hurt!" with: "No it doesn't." It is in this sense that
	No it doesn't ... did you forget about the amputation, Jerry?
" Mr. Harnad intends you take the statement "I understand."  You might ...

It helps to keep separate issues separate.  There's the peculiarity of
subjective experience and related linguistic peculiarities:  for
instance, there *is* evidence that 'I think' is different from 'he
thinks'.  And there is finding a distinction between two senses of
'understand' and making one.  I contended that although you can make one
(of course), you can't find one.  It's worth keeping straight about
that, because substantial conclusions can't follow from mere
definitions.

And then there's Harnad's attempted defense of Searle which exploits the
supposed distinction between two senses of understand.  That defense
proceeds by positing that I don't understand Chinese (true), writing a
few paragraphs during which the reader is distracted enough to forget
the supposed distinction, and concluding that "of course" the CR doesn't
understand Chinese.  If you remember the distinction, though, you see
that in the posited sense, the conclusion follows only from the fact
that I am not the CR (even if I'm in it), and for that matter no one in
the wide world understands Chinese, in *this* sense of understand.
So it's just a sophistry.  Other participants in the discussion
have pointed this out.

And then there's Searle's argument itself.

I can accept a distinction between subjective and objective (which I do)
without accepting that there is such a distinction to be *found* in the
*particular* case of 'understand', though I can still accept *making*
such a distinction.  Independently of that question I could have
accepted Harnad's argument consistently, if I had not noticed his
equivocation.  In spite of Harnad's illogic, I might still be able to
accept Searle's argument (though I don't).

		Greg, lee@uhccux.uhcc.hawaii.edu

bwk@mbunix.mitre.org (Barry W. Kort) (03/05/89)

From lee@uhccux.uhcc.hawaii.edu (Greg Lee) of University of Hawaii:

 >  What are you talking about?

In reading the dialogue between Greg and Stevan, I have the subjective
feeling that I do not understand English.

I can parse the sentences, but I cannot reliably extract their
semantic content.  It seems that I am not alone.

Please tell Searle to cross me off the list of entities who understand.

And now, if you will excuse me, I'm going back to my room.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (03/05/89)

In article <3369@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu
(Greg Lee) writes:

 > "I finally understood Harnad at 3pm."

Aha!  This piece of evidence confirms my theory that we use the
word "understand" to mean "able to gain understanding" (as opposed
to "has a fixed amount of understanding").

Evidently, we are using the positive first derivative of the
function K(t) to denote understanding, where K(t) is the accumulated
Knowledge as a function of time.

Curiously, I define "information" as delta K, and "learning"
or "knowledge acquisition" as dK/dt.

But the most interesting part of this model, is that I measure
emotional intensity, E, the same way:

	 E(t) = dK(t)/dt

Understand?
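
Numerically (a throwaway illustration, with made-up numbers), the model
just reads off finite differences:

    # Made-up samples of accumulated knowledge K at t = 0, 1, 2, 3.
    K = [0.0, 1.0, 3.0, 3.5]
    dt = 1.0
    # Learning / "emotional intensity" E(t) ~ dK/dt, by finite differences.
    E = [(K[i + 1] - K[i]) / dt for i in range(len(K) - 1)]
    print(E)   # -> [1.0, 2.0, 0.5]  (highest where K grows fastest)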

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (03/05/89)

In article <330@esosun.UUCP> jackson@freyja.css.gov (Jerry Jackson) writes:

 > Now, because I'm a glutton for punishment... Does anyone really think
 > someone can be wrong about whether or not they are in pain?  Wouldn't
 > it seem really odd to respond to the statement: "I just stubbed my toe
 > and boy does it hurt!" with: "No it doesn't."

Amputees report feeling "phantom limb pain".  Their toe does hurt.
Except that they have no toe.

Physiologically, we understand that pain signals are still arriving
at the somesthetic cortex and lighting up the map sector labelled
"left great toe". 

These examples reveal that we have trouble distinguishing an
event (I stubbed my toe, which is now black and blue, and bleeding)
from the message arriving at the brain ("the sensors in the left
great toe are reporting a disaster").  

Where is the pain?  Is it in the toe or in the message?

--Barry Kort

"This message will self-destruct in one week."

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/06/89)

bwk@mbunix.mitre.org (Barry W. Kort) asked:

" Perhaps Stevan can clarify this point for us, because I believe
" it is pivotal. In Searle's thought experiment, are the rules
" immutable, or do they evolve as a function of the information
" contained in the Chinese stories?... To my mind, a system which
" understands is a system which integrates new information into an
" expanding knowledge base, and this includes new and improved
" information-processing techniques (i.e., the "rules").
" When we talk about "understanding" in human terms, don't we really
" mean the ability to gain understanding (as opposed to merely having
" a fixed amount of understanding)?

(1) The ability to "learn" is a necessary but not a sufficient
condition for having a (normal) mind.

(2) In Searle's thought experiment, as long as everything that's going
on is PURELY SYMBOLIC (symbols in, symbols out, symbol-crunching in
between) it does not matter how you interpret the symbolic goings on --
as a conversation, as "learning," as rule-updating, as what have you.
The punchline's the same: Since Searle can do it all without
understanding, there's no understanding at all going on.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/06/89)

geddis@polya.Stanford.EDU (Donald F. Geddis) of Stanford University wrote:

" you are defining the "computer" to be the dumb processor
" that interprets the rules. No one claimed that the processor (by itself)
" *did* understand. I still haven't heard a satisfactory rebuttal to the
" "Systems Reply", namely that (Searle + Rules) understands, whereas just
" (Searle) doesn't.

You've heard it. You just haven't understood it, because of a far-fetched
and circular notion to which you and many others have become committed
so strongly that the ability to un-commit in the face of logic and
counter-evidence seems to have been lost: There's no other entity, no
other eligible candidate for having a mind in the Chinese Room; nobody
home! "Searle + rules" is a piece of cog-sci-fi. Do you believe that I
could fail to understand, and alpha centauri could fail to understand,
but "I + alpha centauri" could compositely understand? Could we
compositely feel an itch that neither of us feels singly? If you don't
PRESUPPOSE the far-fetched notion that the Chinese Room Argument set
out to debunk in the first place, then you're less inclined to come
back with it by way of a reasoned rebuttal!

It's not that there CAN'T be a systems reply in principle: Searle COULD
have been a non-understanding part of an understanding system. He could
have been standing in, say, for the input and output of one neuron in a
real brain. Then the system WOULD have understood and Searle would not
have. But then neither would Searle have been performing ALL THE
FUNCTIONS that were the substrate of the understanding. So it would be
no surprise that he didn't understand. One of Searle's premises is that
he himself must do EVERYTHING the candidate mental model does, yet not
understand. (This is why my "robotic functionalist" counterargument
works, and why the TTT is immune to Searle's Argument.)

So you see, what Searle has really shown is not that no "systems
reply" is tenable, but that a systems reply is untenable in the case
of pure symbol-crunching, where Searle CAN do everything the system
does. Symbol-crunchers are the WRONG KIND OF SYSTEM for having a mind
(understanding, being intelligent [in the mental sense], etc.).

" But we *can't* imagine what it would be like to process the Chinese Room
" rules, because any set of rules that passed the Turing Test would be far
" to large and complex by orders upon orders of magnitudes for a person to
" actually (as opposed to simply in the thought experiment) process them.

See earlier replies on "speed and complexity." This is just
hand-waving. It's equivalent to taking a dumb toy model and saying
"Just more of the same will pass the TT and will have a mind." I think
the gap is not one of speed and complexity but missing,
yet-to-be-discovered substantive functional concepts (and not just
symbolic ones!).

" [To solve the practical "other-minds" problem] I can use a biological
" argument with humans [not just the TTT]... Robots don't get quite that
" close an argument in their favor.

We did many rounds on this on the Net 2 years ago: To synopsize:

(1) The predictive power of biological facts about organisms that pass
the TTT is parasitic on the fact that they pass the TTT. (Consider the
problems we start to have as we move to species further and further
from our own. -- Though let me hasten to add that I for one am fully
inclined to give other species the benefit of the doubt about having
minds, feeling pain, etc.) However, biological facts do, of course,
have a secondary corroborative power.

(2) We don't understand biological organisms or their brains
functionally, hence we couldn't even answer the question, in
principle, which one was a real organism and which was just a robotic
look-alike! Another reason why biology and brain function can't help
us much in our practical, everyday Turing Testing.

" I don't think that those who argue for the Turing Test (your LTT) would
" disagree that your TTT works as well, since it encompases the original
" test. It is harder to pass. I'm not convinced that that is an
" advantage, though.

You've missed the point. Of course the Linguistic Turing Test is a
subset of the Total Turing Test, and of course the whole Test would be
harder to pass. But my point (and it had supporting arguments) was
that it may well be that the only kind of device that could pass the
LTT in the first place would have to be a device that could likewise
pass the TTT, and that IN BOTH CASES it would have to draw essentially
on nonsymbolic internal functions.

" ["The TTT, unlike the LTT, is immune to Searle's Argument"]
" Go through this again, please? I think that Searle doesn't have an argument
" at all, but I fail to see how your TTT test makes any difference at all to
" his analysis. At any rate, it certainly is not obvious that Searle's
" argument is decisive, and that your reformulation is immune. But I'd be
" interested in hearing your justifications.

Here are points 3-9 from the summary and conclusions (pp. 20-21) of
Harnad (1989) Minds, Machines and Searle. Journal of Experimental and
Theoretical Artificial Intelligence 1: 5-25 -- with apologies to
those who have seen them before (they have already been posted once
in their entirety at a poster's request).

(3) The Convergence Argument:
Searle fails to take underdetermination into account. All scientific
theories are underdetermined by their data; i.e., the data are
compatible with more than one theory. But as the data domain grows, the
degrees of freedom for alternative (equiparametric) theories shrink.
This "convergence" constraint applies to AI's "toy" linguistic and
robotic models too, as they approach the capacity to pass the Total
(asymptotic) Turing Test. Toy models are not modules.

(4) Brain Modeling versus Mind Modeling:
Searle also fails to appreciate that the brain itself can be understood
only through theoretical modeling, and that the boundary between brain
performance and body performance becomes arbitrary as one converges on
an asymptotic model of total human performance capacity.

(5) The Modularity Assumption: 
Searle implicitly adopts a strong, untested "modularity" assumption to
the effect that certain functional parts of human cognitive performance
capacity (such as language) can be successfully modeled
independently of the rest (such as perceptuomotor or "robotic" capacity).
This assumption may be false for models approaching the power and
generality needed to pass the Turing Test.

(6) The Linguistic Turing Test versus the Robot Turing Test: 
Foundational issues in cognitive science depend critically on the truth
or falsity of such modularity assumptions. For example, the "teletype"
(linguistic) version of the Turing Test could in principle (though not
necessarily in practice) be implemented by formal symbol-manipulation
alone (symbols in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside world
(seeing, doing AND linguistic competence).

(7) The Transducer/Effector Argument:
Prior "robot" replies to Searle have not been principled ones. They
have added on robotic requirements as an arbitrary extra constraint. A
principled "transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily nonsymbolic,
drawing on analog and analog-to-digital functions that can only be
simulated, but not implemented, symbolically.

(8) Robotics and Causality:
Searle's argument hence fails logically for the robot version of the
Turing Test, for in simulating it he would either have to USE its
transducers and effectors (in which case he would not be simulating all
of its functions) or he would have to BE its transducers and effectors,
in which case he would indeed be duplicating their causal powers (of
seeing and doing).

(9) Symbolic Functionalism versus Robotic Functionalism:
If symbol-manipulation ("symbolic functionalism") cannot in principle
accomplish the functions of the transducer and effector surfaces, then
there is no reason why every function in between has to be symbolic
either. Nonsymbolic function may be essential to implementing minds and
may be a crucial constituent of the functional substrate of mental
states ("robotic functionalism"):  In order to work as hypothesized
(i.e., to be able to pass the Turing Test), the functionalist
"brain-in-a-vat" may have to be more than just an isolated symbolic
"understanding" module -- perhaps even hybrid analog/symbolic all the
way through, as the real brain is, with the symbols "grounded"
bottom-up in nonsymbolic representations.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/06/89)

geddis@polya.Stanford.EDU (Donald F. Geddis) of Stanford University wrote:

" the Systems Reply is exactly that Searle-plus-chalk-plus-blackboard
" -plus-rules *does* understand... Rather than present arguments against
" the Systems Reply, Stevan seems to be appealing to our intuitive sense
" of "well, come on, *everyone* knows that this is foolish".  But a good
" many well-educated people who have thought hard about the problem don't
" consider it to be foolish, so a little more effort on the rebuttal is
" required...

Look, the "Systems Reply" and the reasoning, evidence and intuition
behind it, were precisely what Searle's Argument was formulated to
refute. It was already anticipated and explicitly discussed in
Searle's original target article. There's nobody home in the Chinese
Room except Searle! The rest is just chalk and blackboard. What do
the "Systems" enthusiasts reply? "So be it. Well then Searle + chalk
understands."

Now try to think back -- way, way back: Were you in the habit of
thinking that a person plus chalk could be understanding where the
person alone was failing to understand? (Sometimes certain kinds of
"education" can be a handicap...)

" Strangely enough, I agree with your prediction [that the premise that
" the LTT (sic) could be successfully passed (till doomsday!) by symbol
" manipulation alone is false], and with your solution. But that is not
" important for the Chinese Room argument. You could start with a robot
" that is embedded in the world, and after it achieves full understanding
" (the same way humans do: learning within the context of a society),
" then you can disconnect the sensors and effectors and leave only a
" teletype to the outside world. Sounds a lot like Stephen Hawking in
" real life, no? Suddenly, the LTT (i.e., the original Turing Test)
" returns.

You can't agree with my solution, because unfortunately you have not
understood it. If nonsymbolic functions are ESSENTIAL to passing both
the LTT and the TTT then it's not just a matter of sensors and
effectors connected to a symbol-crunching core. And Stephen Hawking
is not just a symbol-crunching core.

" while you may be correct that understanding (getting the proper set of
" rules) is impossible without being embedded in the world, why make this
" part of the test? It's just an implementation issue...

Because understanding may NOT be just a matter of "getting the proper
set of rules." If not, then even if a savvy robot jumped fully educated
out of the head of Zeus it could not pass the LTT or TTT without
internal NONSYMBOLIC functions; and if these were removed, it would no
longer be able to pass the LTT or TTT (and would no longer have a
mind). (Moreover, removing them would NOT be just a matter of yanking
off sensors and effectors from a symbol-cruncher.)

" ["Searle is all there is to the `system.'"] Completely, 100%, false.
" Wrong. Incorrect. The Chinese Room contains Searle AND THE RULES. And
" the system as a whole DOES understand, as evidenced by the Chinese
" answers to Chinese questions.

Unshakeable conviction. Who am I to try to counter the effects of
an education that gave rise to this? (I only timidly remind you again
that the possibility of successfully passing the LTT by
symbol-crunching alone was a hypothetical, so-far-counterfactual
PREMISE that Searle simply carried over from Strong AI for the sake of
argument. It may well be false; I've given reasons why it may be false.
In fact, Searle's Argument itself can be taken as one of the reasons
for concluding that it's false. Simply reiterating the premise cannot
serve as a logical counterargument against the VERY untoward conclusion
that FOLLOWS from the premise.)

Refs:
Searle J. (1980) Minds, Brains and Programs. Behavioral and Brain
                 Sciences 3: 417-457.
Harnad S. (1989) Minds, Machines and Searle. Journal of Experimental
                 and Theoretical Artificial Intelligence 1: 5-25
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

geddis@polya.Stanford.EDU (Donald F. Geddis) (03/06/89)

In article <Mar.5.13.17.35.1989.4486@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes (in reply to a posting of mine):
>There's no other entity, no
>other eligible candidate for having a mind in the Chinese Room; nobody
>home! "Searle + rules" is a piece of cog-sci-fi. Do you believe that I
>could fail to understand, and alpha centauri could fail to understand,
>but "I + alpha centauri" could compositely understand?

As a matter of fact, I do.  So if you are appealing to intuitions here, it
is obvious that we disagree about what is commonsense.  So try another tack.

>It's not that there CAN'T be a systems reply in principle: Searle COULD
>have been a non-understanding part of an understanding system. He could
>have been standing in, say, for the input and output of one neuron in a
>real brain. Then the system WOULD have understood and Searle would not
>have. But then neither would Searle have been performing ALL THE
>FUNCTIONS that were the substrate of the understanding. So it would be
>no surprise that he didn't understand. One of Searle's premises is that
>he himself must do EVERYTHING the candidate mental model does, yet not
>understand. (This is why my "robotic functionalist" counterargument
>works, and why the TTT is immune to Searle's Argument.)

OK, so a crucial point for you is that Searle can do it all himself, I assume
by memorizing the rules.  So now we have a very complex (and improbable, but
we'll ignore that for the moment) entity in Searle's body.  When we ask it,
in Chinese, if it understands, the body replies "Yes, I do".  When we ask, in
English, if it understands Chinese, the body (using the small part of it
that used to be Searle, before it memorized the rules) replies "Of course
not!  And I wrote a paper telling you this a long time ago!".

Now I believe each reply to the same extent.  Namely, that the whole entity
(Searle + memorized rules) *does* understand Chinese, although a section of
it (Searle by himself) doesn't.  To try again:  Now there are *two* minds
in Searle's brain, just as there were two minds in the old Chinese Room (Searle
and Searle + rules).  I don't consider the lone Searle to be an authority on
what Searle + Memorized Rules understands, although you seem to.  Why?

>See earlier replies on "speed and complexity." This is just
>hand-waving. It's equivalent to taking a dumb toy model and saying
>"Just more of the same will pass the TT and will have a mind." I think
>the gap is not one of speed and complexity but missing,
>yet-to-be-discovered substantive functional concepts (and not just
>symbolic ones!).

I agree that complexity arguments are not important for this thought
experiment.  But it is useful to be aware that our intuitions about how small
rule sets function probably don't scale up well to large rule sets.  In no
way am I claiming that this is sufficient to create intelligence; I just don't
want you to appeal to the triviality of small symbol processing in order to
claim that large symbol processing won't work either.

>You've missed the point. Of course the Linguistic Turing Test is a
>subset of the Total Turing Test, and of course the whole Test would be
>harder to pass. But my point (and it had supporting arguments) was
>that it may well be that the only kind of device that could pass the
>LTT in the first place would have to be a device that could likewise
>pass the TTT, and that IN BOTH CASES it would have to draw essentially
>on nonsymbolic internal functions.

Probably true.  (I agree.)  But if that's the case, then just using the LTT
is sufficient for judging intelligence, as Turing originally claimed.  The
point of the Turing Test was to eliminate non-cognitive things from the test,
like "oh, it is colored green, and no human beings are colored green, so this
must be the computer".  We only want to judge cognitive ability.  Whether this
requires TTT ability is a problem for the engineers, not the judges.

(As another point, while the Turing Test may be sufficient, it is not
necessary.  Turing made it too hard to pass.  A hypothetical entity has to
foolishly duplicate human errors, like arithmetic errors and long pauses for
thinking.  This needlessly eliminates intelligences that are comparable but
not identical to human performance.)

	-- Don Geddis

P.S.  Thanks for the responses to my other questions.

-- 
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

bwk@mbunix.mitre.org (Barry W. Kort) (03/06/89)

In article <Mar.5.13.15.58.1989.4447@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

 > (1) The ability to "learn" is a necessary but not a sufficient
 > condition for having a (normal) mind.

I understand that very well.

 > (2) In Searle's thought experiment, as long as everything that's going
 > on is PURELY SYMBOLIC (symbols in, symbols out, symbol-crunching in
 > between) it does not matter how you interpret the symbolic goings on --
 > as a conversation, as "learning," as rule-updating, as what have you.
 > The punchline's the same: Since Searle can do it all without
 > understanding, there's no understanding at all going on.

Stevan, I stared at your paragraph (2) for five minutes, trying to
understand it.  I must confess that, like Searle's thought
experiment, I simply didn't understand it.

--Barry Kort

krazy@claris.com (Jeff Erickson) (03/06/89)

Okay, forgive me if I'm being stupid, and e-mail me if you'd rather not
let this question start a new, long, ugly chain of messages on the same
old question.

Why CAN'T the rules "understand"?  It looks (to me) like this is just a 
piece of software being run through some rather complex hardware (the book,
Searle, the room, etc.).

Every claim I've read here in the last six days for "there is no
understanding" bases that claim on the question "If there were, where does it
come from?  Certainly not Searle (that's an axiom), and certainly not
the rule book (that's obvious)."

Why is it so obvious?

And if this IS obvious, where does MY understanding come from?  Certainly
not the little neurons.  That's obvious;-)

-- 
         Any opinions you read here are only opinions in my opinion.
Jeff Erickson                                                 krazy@claris.com
                 "I'm so heppy I'm mizzabil!"  -- Krazy Kat
------------------------------------------------------------------------------

bwk@mbunix.mitre.org (Barry W. Kort) (03/06/89)

I would like to ask another question about the Chinese Room protocol.
As given by Searle, the Chinese stories are not illustrated.  But
we all know that children learn to read using illustrated stories.
The text is intimately linked to the pictures.  By this device,
textual symbols come to be associated with objects and actions
familiar from other sensory channels.  Could it be that the
Chinese Room is like scientists puzzling over hieroglyphics
before they discovered the Rosetta Stone?

In _Labyrinths of Reason_, William Poundstone describes the
Voynich manuscript, which is handwritten in an unknown language.
Like the Codex Seraphinianus, it is lavishly illustrated, yet no
one can decipher the meaning of the text.  Feynman, on the other
hand, did decipher a codex which turned out to be an astronomical
almanac.  It would appear that understanding is no mean feat.

--Barry Kort

rjc@aipna.ed.ac.uk (Richard Caley) (03/06/89)

In article <Mar.4.00.45.52.1989.4435@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

>mike@arizona.edu (Mike Coffin) of U of Arizona CS Dept, Tucson writes:

>" Why?... a computer running an
>" algorithm can have properties that neither the computer (without
>" algorithm) nor the algorithm (without computer) have.

>I completely agree with the last proposition, except that understanding
>is not one of those properties. Why? Because when Searle stands in for
>the computer, doing everything it does, executing all of its algorithms,
>he does not understand. Hence neither can the computer understand, when
>it does exactly the same thing.

So? Searle is standing in for the computer; the computer therefore does
not understand. Nobody, I hope, was arguing the contrary. As Mike said, the
computer + the algorithm (Searle + the rules) may or may not
understand.

Searle is acting as an interpreter. I have in another window on this
screen an interpreter running a screen editor - the interpreter is not
an editor; the combination of interpreter and program is an editor.
Substitute 'understands Chinese' for 'is an editor'.
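
To make the interpreter analogy concrete, here is a minimal sketch in
Python (not part of the original post; the function names and toy rule
tables are hypothetical). The interpreter never changes and has no notion
of what it is computing; what the combined system "is" -- an editor, a
translator -- is fixed entirely by the rule table it is handed.

    def interpret(rules, symbols_in):
        """Mechanically apply a lookup-table 'program' to a string of symbols."""
        return "".join(rules.get(s, "?") for s in symbols_in)

    # Two rule tables, one dumb interpreter: with the first table the
    # combination acts as an uppercasing "editor"; with the second it acts
    # as a toy symbol "translator".  The interpreter code is identical.
    editor_rules     = {c: c.upper() for c in "abcdefghijklmnopqrstuvwxyz"}
    translator_rules = {"a": "X", "b": "Y", "c": "Z"}

    print(interpret(editor_rules, "hello"))    # -> HELLO
    print(interpret(translator_rules, "abc"))  # -> XYZ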

-- 
	rjc@uk.ac.ed.aipna

 "Politics! You can wrap it up in fancy ribbons, but you can't hide the smell"
			- Jack Barron

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/06/89)

geddis@polya.Stanford.EDU (Donald F. Geddis) of Stanford University wrote:

" the whole entity (Searle + memorized rules) *does* understand Chinese,
" although a section of it (Searle by himself) doesn't...  Now there are
" *two* minds in Searle's brain... I don't consider the lone Searle to be an
" authority on what Searle + Memorized Rules understands, although you
" seem to. Why? [And] [a]s a matter of fact, I do [believe that I could
" fail to understand, and alpha centauri could fail to understand,
" but "I + alpha centauri" could compositely understand]

In view of what you are prepared to believe about intergalactically
distributed intelligence I am sure you will not be impressed to hear
that to this lone terrestrial neuropsychologist it seems highly
unlikely that memorizing a set of rules could give rise to two
minds in the same brain: As far as I know, only Joe Bogen's knife
has had such dramatic effects (in the "split-brain" patients -- and
possibly also early traumatic child abuse in patients suffering
from multiple personality syndrome).

" The point of the [Linguistic] Turing Test [LTT] was to eliminate
" non-cognitive things [e.g., bias from appearance] from the test...
" We only want to judge cognitive ability. Whether this requires TTT
" ability is a problem for the engineers, not the judges.

I agree. But, as I argue in my paper, the LTT -- symbols-in,
symbols-out -- is systematically ambiguous about what goes on in
between input and output. It is only a CONJECTURE that symbol-crunching
alone would be enough. There are several reasons for concluding that
that conjecture is wrong, and Searle's Argument happens to be one of
them.
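
As a narrow illustration of that input/output ambiguity (a sketch of mine,
not part of Harnad's argument; the toy task and names are hypothetical),
the two Python functions below have identical input/output behaviour on
every tested input, yet what goes on in between is entirely different in
each case:

    # Same I/O behaviour, different internal story: one function just
    # matches input symbols against canned answers, the other actually
    # does the arithmetic.
    TABLE = {n: n * n for n in range(100)}   # precomputed answers

    def square_by_table(n):
        return TABLE[n]          # pure lookup: match symbol, emit symbol

    def square_by_computation(n):
        return n * n             # computes the result itself

    assert all(square_by_table(n) == square_by_computation(n)
               for n in range(100))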

Ref: Harnad (1989) Minds, Machines and Searle. Journal of Experimental
     and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

jeff@censor.UUCP (Jeff Hunter) (03/06/89)

In article <Mar.2.23.55.02.1989.28807@elbereth.rutgers.edu>, harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
> Counterargument: To ascertain (beyond reasonable doubt) that a system
> CANNOT understand, you don't need a theory. Searle's argument is a case
> in point: If Searle (or you, or me) does exactly what the computer does
> but does not understand, then the computer does not understand.

    I'd like to try another thought experiment to see if I can find out why 
Mr Harnad and I disagree.
	An assumption of the Chinese room argument is that a set of rules
that manipulate symbols can be built that passes the (Linguistic) Turing
Test in Chinese. (ie. it can convince Chinese speakers that it too is a 
human Chinese speaker.)
	I make the further assumption that modelling a human on an
atom-by-atom level would be sufficient to reproduce that human's verbal 
behaviour. (Gilbert Cockton will probably disagree with me on this :-), but
I don't think that Stevan Harnad does.) Of course to keep the simulation on
track it must get the same input (from a real or simulated environment) as
the real human gets.

	So, to start off this experiment we take Mr. Harnad into a quiet, dark
room (to minimise the environmental factors), sit him in a chair, and let him 
get comfortable. We then take the Thought Experiment Matter Mapper (tm) and 
record the positions of his atoms to the precision allowed by Quantum
Mechanics. We then ask "Do you understand English?" to several systems derived
from this map.

a) the original Mr. Harnad
b) a duplicate Harnad made from new atoms in the same pattern
c) a "functionally duplicate" android Harnad made using some other chemistry
    such as silicon molecules
d) a special purpose computer hardwired into the shape of Harnad's neurons, etc
e) a general purpose computer simulating Harnad's atoms moving according to
    the laws of physics
f) an English speaker simulating e) above
g) a Chinese speaker simulating e) above
h) Mr. Harnad simulating e) with pencil marks on paper
i) Mr. Harnad simulating e) after having memorised the entire program and
    database (shades of the halting problem :-)
   
    Presumably a) will answer "yes", and if the others are good simulations
they will also answer "yes" as well. (To forestall the posting that there
are quantum/chaos/rounding error limits to the quality of the simulation
I point out that we're only running for a few (simulated) seconds, and that
we can always run b) thru i) several thousand times and check for averages.)

    Now would Mr. Harnad be kind enough to state which of these systems
really understands English (according to his definition). It seems clear that
it is impossible for the humans in f) thru i) to understand the symbols they
are manipulating due to sheer volume. Each would be justified in saying
that they did not understand what the program they were simulating was doing.

    e) thru i) are clearly only processing symbols. By my reading of Harnad's
postings to date, he believes they do not understand. But then, to
paraphrase him, "If a computer does what Searle (or you, or me) does, 
 but does not understand, then we do not understand".

 
-- 
      ___   __   __   {utzoo,lsuc}!censor!jeff  (416-595-2705)
      /    / /) /  )     -- my opinions --
    -/ _ -/-   /-     The first cup of coffee recapitulates phylogeny...
 (__/ (/_/   _/_                                    Barry Workman

bwk@mbunix.mitre.org (Barry W. Kort) (03/06/89)

In article <7431@polya.Stanford.EDU> geddis@polya.Stanford.EDU
(Donald F. Geddis) writes:

 > In article <Mar.5.13.17.35.1989.4486@elbereth.rutgers.edu> 
 > harnad@elbereth.rutgers.edu (Stevan Harnad) writes (in reply
 > to a posting of mine):
 
 > > See earlier replies on "speed and complexity." This is just
 > > hand-waving. It's equivalent to taking a dumb toy model and saying
 > > "Just more of the same will pass the TT and will have a mind." I think
 > > the gap is not one of speed and complexity but missing,
 > > yet-to-be-discovered substantive functional concepts (and not just
 > > symbolic ones!).
 
 > I agree that complexity arguments are not important for this thought
 > experiment.  But it is useful to be aware that our intuitions about how
 > small rule sets function probably don't scale up well to large rule sets. 
 > In no way am I claiming that this is sufficient to create intelligence;
 > I just don't want you to appeal to the triviality of small symbol
 > processing in order to claim that large symbol processing won't work either.

The above passage notwithstanding, I find an interesting passage in
William Poundstone's new book, _Labyrinths of Reason_, pp. 235-236:

   "It is conceivable that each of 100 billion neurons plays some part
    in actual or potential mental process.  You might expect then, that
    the instructions for manipulating Chinese symbols as a human does would
    have to involve at least 100 billion distinct instructions.  If there
    is one instruction per page, that would mean 100 billion pages.  So the
    "book" _What to Do If They Shove Chinese Writing Under the Door_ would
    more realistically be something like 100 million volumes of a thousand
    pages each.  That's approximately a hundred times the amount of printed
    matter in the New York City library.  This figure may be off by a few
    factors of 10, but it is evident that there is no way anyone could
    memorize the instructions.  Nor could they avoid using scratch paper,
    or better, a massive filing system.
    
   "It's not just a matter of the algorithm *happening* to be impractically
    bulky.  The Chinese algorithm encapsulates much of the human thought
    process, including a basic stock of common knowledge (such as how
    people act in restaurants).  Can a human brain memorize something as
    complex as a human brain?  Of course not.  You cannot do it any more
    than you can eat something that is bigger than you are."
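
A back-of-the-envelope check of the round numbers quoted above (a small
Python sketch of mine; the figures are Poundstone's, not anything new):

    neurons          = 100_000_000_000     # ~10^11 neurons, one instruction each
    pages            = neurons             # one instruction per page, as quoted
    pages_per_volume = 1_000
    volumes          = pages // pages_per_volume
    print(f"{volumes:,} volumes")          # -> 100,000,000, i.e. 100 million

which reproduces the "100 million volumes of a thousand pages each" figure.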

Those readers who enjoy paradoxes such as the Chinese Room will find
Poundstone's book a delightful read.

--Barry Kort

arm@ihlpb.ATT.COM (Macalalad) (03/07/89)

In article <Mar.2.23.56.36.1989.28884@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>"The human," in case you've forgotten, is the only one in there, besides
>the chalk and blackboards! I'll let you be the judge of how good a judge
>the chalk, or Searle-plus-chalk makes...
>
>Look, do you think an inert book of rules is capable of understanding?
>If you do, then you'll have no trouble believing that stones, chalk,
>constellations and tea-leaves are capable of understanding too, and I
>certainly won't be able to prove you wrong. (Animism or panpsychism --
>the belief that anything and everything can have a mind -- is the other
>side of the other-minds problem.) But if we assume that we need a bit
>more than that -- say, the ability to pass the LTT [sic] -- then
>Searle's Argument is there to show us that that's just not good enough,
>because he can pass the LTT for Chinese without understanding Chinese.
>And he's all there is to the "system."

It is interesting that the only point you didn't comment on was the
argument that neurons seem to be the "chalk-pushers" of the brain,
yet they individually don't seem to have understanding.  Let's make
the analogy a little more explicit.  Instead of having just one person
in the Chinese room, let's have a lot of people, comparable to the
number of neurons in the brain.  All of them are busy doing calculations
on the chalkboard and passing pieces of paper around, all in strict
adherence to their rulebooks.  And the output, of course, is fluent
Chinese.  Is it so clear now which one in the Chinese room should
understand Chinese?

-Alex

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/07/89)

In article <2498@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>In article <7645@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
>writes:
>>Thus, I would argue that the manifestation of intelligent
>>behavior cannot be observed the way we observe the size of a physical object.
>
>GOTCHA!  OK Stephen, so what are the implications of this for a science of
>Mind?

Gee, Gilbert, I don't see what has gotten you so excited, unless it is a rush
of triumph at the possibility that someone who devotes at least PART of his
time to trying to build software models of mental processes might agree with
you on something!  I think your question is a good one, and I shall be curious
to see if you are sympathetic to any of my answers.  However, let me first
state that it is not my intention to lay down a manifesto for the study of
mind.  Rather, I shall simply set down a few rules I am trying to live by
(not always successfully);  and I would encourage others to extend or modify
the list.

	1.  If we are to engage in the study of mind, we must begin by
	being VERY CAREFUL in our choice of words.  We are playing a game
	on a terrain in which intuitions can be more like land mines than
	landmarks.  Regardless of whether or not we agree with his theories,
	Marvin Minsky makes a very important point in THE SOCIETY OF MIND
	when he illustrates the ways in which the study of mind may be
	misguided by confused assumptions about what is simple and what
	is complicated.  As I remarked in an earlier article, Minsky
	handles the word "understand" extremely delicately in his book;
	and given the treatment that word has received from the likes of
	Searle and Harnad, I can only admire him for his caution.  I am
	not necessarily implying that we have to deny everything and go
	back to COGITO ERGO SUM, but it would seem that we lack the appropriate
	blend of Cartesian scepticism and productive humility in our
	current activities.

	2.  Because intuitions are so dangerous here, I think it is also
	important that we be just as careful when we try to follow someone
	else's argument as we are when we try to formulate our own.  I am
	beginning to find Stevan's games of verbal ping pong with the rest
	of the world tiresome to the point that they are no longer productive.
	He is obviously experiencing great frustration because he feels that
	just about everyone who is responding to his remarks does not
	understand him (and, of course, he KNOWS what he understands :-)).
	Unfortunately, the way he deals with the situation reminds me of
	a line from BEYOND THE FRINGE in which some proper English club
	types are trying to communicate with a Russian pianist:  "Say it
	slower and louder.  Then he'll understand!"  Perhaps if Stevan
	took a bit more time to try to communicate with his opponents
	ON THEIR OWN CHANNEL OF COMMUNICATION, so to speak, all of us
	might get more out of the exchange.

	3.  Finally, if we are interested in the study of mind, let us
	not waste our time on the politics of social relations.  Having
	seen Searle in front of an audience, I have been able to observe
	the man as a performer;  but I try to be careful not to confuse
	a persuasive performance with a convincing argument.  Not to pick
	on Searle, I would observe that I have also seen Herb Simon in
	action (since Jack Campin chose to respond to one of his recent
	abstracts);  and I suspect that any audience of eager students could
	be as easily swayed by Simon as they could be by Searle.  As with
	Brutus and Antony, it will boil down to who gets to say his piece
	last.  The point is that none of this really advances our study
	of mind.  It is simply a source of recreational ego-trips.  The
	tragedy is that such ego-trips often determine the course of
	research funds;  but that just demonstrates the depressing state
	of the world, in which the questions we try to ask are dictated
	by a handful of unthinking organizations with the power to dole
	out funds.

I realize this is all highly idealistic.  Probably none of us can live up to
these ideals, but can we ever live up to ANY ideals?  We need a lot less
lecturing and a lot more questioning.  Furthermore, we probably need to
devote a lot more time to reading and a bit less to writing.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/07/89)

From article <7431@polya.Stanford.EDU>, by geddis@polya.Stanford.EDU (Donald F. Geddis):
" ...
" I agree that complexity arguments are not important for this thought
" experiement. ...

I doubt you should agree.  What kind of "non-symbolic" computations
can't be modeled by symbolic ones?  Those which would require
too many symbols or rules, I suppose.  If that is the only
difficulty, complexity is an important issue.  Then to argue
the (in)feasibility of a life-like robot we would need to estimate
the number of "distinct" brain-states that could be caused
by perception, to establish at least whether it is finite.  This
seems to me to be the most plausible direction to pursue to
salvage Harnad's arguments.

		Greg, lee@uhccux.uhcc.hawaii.edu

ellis@unix.SRI.COM (Michael Ellis) (03/07/89)

>> Thomas Edwards  > Barry Kort

>>      Moreover, the incoming Chinese is also sensory input.
>> Rules may exist which change due to incoming Chinese.
>
>Perhaps Stevan can clarify this point for us, because I believe
>it is pivotal.  In Searle's thought experiment, are the rules
>immutable, or do they evolve as a function of the information
>contained in the Chinese stories?

    For Searle, "the rules" may involve variables or be mutable.
    "The Rules" in principle may do anything a Turing machine
    can do or else Searle's argument loses its punch. I have heard
    him say as much several times.

>When we talk about "understanding" in human terms, don't we really
>mean the ability to gain understanding (as opposed to merely having
>a fixed amount of understanding)?

    Sure, but neither Searle nor Harnad has overlooked that. That's
    just not at issue here. Go back and read your copy of _The Mind's I_
    again...

-michael

dan-hankins@cup.portal.com (Daniel B Hankins) (03/07/89)

In article <Mar.6.00.44.38.1989.19921@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

>In view of what you are prepared to believe about intergalactically
>distributed intelligence...

     I see distributed intelligences every day.  The most common form is
called a committee.  Another is called a bureaucracy.


>I am sure you will not be impressed to hear that to this lone terrestrial
>neuropsychologist it seems highly unlikely that memorizing a set of rules
>could give rise to two minds in the same brain: As far as I know, only Joe
>Bogen's knife has had such dramatic effects (in the "split-brain" patients
>-- and possibly also early traumatic child abuse in patients suffering
>from multiple personality syndrome).

     I must call attention to a widespread phenomenon among writers of
fiction.

     Many authors report that the characters they invent seem to take on 'a
life of their own', and that the author does not in fact know exactly what
the characters are going to do or say next.  However, what the author does
know about the characters is their history and much of their personalities.
In some sense, this knowledge comprises a program for that character, and
the author's knowledge about how the world works and how people interact
provides an interpreter for that character program.

     Some authors have even said that they do not understand why their
characters do what they do, which seems to me remarkably close to what
Searle is saying when he says that he does not understand Chinese.

     In the sense that the author can run that character's program, the
author can in fact become that character.  Another illustration of this is
fantasy role playing games, in which the participants can be observed to
exhibit (at least verbal) behavior which is *not their own*, but that of
the character they have created.  Again, each participant has in mind a set
of rules or a program for that character's behavior, and executes that
program (i.e. manipulates the symbols) during the course of play.

     So the claim that a person executing a set of rules is then another
person is not as ridiculous as it seems on the face of things.


Dan Hankins

dan-hankins@cup.portal.com (Daniel B Hankins) (03/07/89)

     Here is my understanding of the external view of Searle's Chinese Room
thought experiment:

     There is a room, and there are one or more native speakers of Chinese.
 The native speakers write things in Chinese (questions, comments,
whatever), and pass these pieces of paper into the room.  Out come other
pieces of paper with Chinese symbols on them.

     All the native speakers *claim that there is a native Chinese speaker
in the room*.  And to put Searle's argument to the most stringent test, we
must put no limits on how much conversation the native Chinese speaker
engages in before coming to a decision, nor must there be any limit on what
questions he can ask or conversations he can engage in, nor can there be
any limit on the number of native speakers asked to decide whether the room
contains another Chinese person.

     So the native speakers claim that there is something in the room that
understands Chinese.

     Here is the crucial question:  *Without opening up the room to see
what is inside*, what basis do we have for disbelieving the native speakers?

     The fact of the matter is, *we don't*.  *Something* in there is
understanding Chinese.

     It ain't Searle;  so what is it?


Dan Hankins

rjc@aipna.ed.ac.uk (Richard Caley) (03/07/89)

In article <Mar.5.13.17.35.1989.4486@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>"Searle + rules" is a piece of cog-sci-fi. 

Are you saying that "Searle+rules" does not exist? If so your basic
conception of the world seems very far from mine and, I suspect, most of
the world. Most people are happy to talk about combinations of
entities ( " my husband and I", " house and contents" . . . ) Now maybe
they do not think that "Searle + rules" is an _interesting_ thing to
talk about, and more than " three cups of coffee and a space shuttle"
would be, but non the less it _is_ a thing which some strange folk might
wish to discuss.

>Do you believe that I
>could fail to understand, and alpha centauri could fail to understand,
>but "I + alpha centauri" could compositely understand? 

Intuitively, no.

Logically, I can't say - I have no way of deciding one way or the
other. Certainly "Me +alpha centauri" has properties which I do not ( eg
a very high average temperature! ). I _can't_ say catagorically that
"understanding" is or is not one of those properties. If anyone thinks
that they can maybe they can post their proof.

Similaly the "systems" argument says that Searle, by forgetting this
option leaves a hole in his proof. We don't need to believe that
"Searle+rules" can understand to invalidate the Chineese Room, we mearly
need to see that there is no proof that this composite entity can not. 

>Could we
>compositely feel an itch that neither of us feels singly?

Can your head plus your foot feel an itch in that foot while the foot
has not the equipment to feel anything, but your head has not the foot
to itch with?

>If you don't
>PRESUPPOSE the far-fetched notion that the Chinese Room Argument set
>out to debunk in the first place, then you're less inclined to come
>back with it by way of a reasoned rebuttal!

On the contrary, if you do not PRESUPPOSE that certain things cannot
"understand", the Chinese Room argument falls like a house of cards.

-- 
	rjc@uk.ac.ed.aipna

 "Politics! You can wrap it up in fancy ribbons, but you can't hide the smell"
			- Jack Barron

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/07/89)

jeff@censor.UUCP (Jeff Hunter) of
Bell Canada, Business Development, Toronto wrote:

" An assumption of the Chinese room argument is that a set of rules that
" manipulate symbols can be built that passes the (Linguistic) Turing
" Test in Chinese.... I make the further assumption that modelling a
" human on an atom-by-atom level would be sufficient to reproduce that
" human's verbal behaviour...

This is followed by a long list of arbitrary variations and permutations
on Searle's simple Chinese Room  -- including real people, atomic
copies, synthetic copies and neural networks, as well as symbolic and
human "simulations" of all the foregoing, in unspecified languages --
none of which seem to elucidate anything. I am asked which ones can
understand English. The simple answer is that those that can pass the
TTT (including the English LTT) can. Atomic and synthetic robotic
copies could do that in principle (so what?); mere symbol-crunchers
(including symbolic simulations of atomic copies, neural nets, etc.)
cannot, for a number of reasons, among them Searle's Chinese Room
Argument.

Burying Searle's simple, straightforward point in a labyrinth of arbitrary
complications serves neither to understand it nor to refute it.

Refs:   Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain 
                          Sciences 3: 417-457
        Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/07/89)

arm@ihlpb.ATT.COM (Macalalad) of AT&T Bell Laboratories wrote:

" the only point you didn't comment on was the argument that neurons seem
" to be the "chalk-pushers" of the brain, yet they individually don't
" seem to have understanding...  Instead of having just one person
" in the Chinese room, let's have a lot of people, comparable to the
" number of neurons in the brain. All of them are busy doing calculations
" on the chalkboard and passing pieces of paper around, all in strict
" adherence to their rulebooks.  And the output, of course, is fluent
" Chinese. Is it so clear now which one in the Chinese room should
" understand Chinese?

To me there is only one thing that sounds more unlikely than the notion
that the LTT could be passed by symbol-crunching alone, and that is the
notion that all neurons do is crunch symbols. But that's certainly one
way of supporting the proposition that symbol-crunchers can understand.
It's called argument by assumption (i.e., it's the same circular
reasoning we keep encountering over and over on this topic).

For the record, the only reason the real brain is immune to Searle's
Argument is that neurons are NOT just "chalk-pushers."

Refs:   Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain 
                          Sciences 3: 417-457
        Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

dave@cogsci.indiana.edu (David Chalmers) (03/07/89)

Stevan Harnad writes (many many times) [I paraphrase]:

>If Searle doesn't understand, then Searle + rules can't either.

>Meaningless neurons have powers that meaningless symbols do not.

>Complexity is an irrelevant factor.

>Symbol-crunchers are inherently, intuitively empty and stupid.

>[Much else in the way of "proof by assertion".]

>The "Total Turing Test" and "Robotic Functionalism" [sic] are the true answer.

>Nobody out there understands these arguments, but they're right.

>Maybe if I say this enough times, people will believe me.

Wrong, Stevan, you lose.
Can we talk about something else now?

  Dave Chalmers

geddis@polya.Stanford.EDU (Donald F. Geddis) (03/07/89)

In article <Mar.6.00.44.38.1989.19921@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>As far as I know, only Joe Bogen's knife
>has had such dramatic effects (in the "split-brain" patients -- and
>possibly also early traumatic child abuse in patients suffering
>from multiple personality syndrome).

It is not really fair to compare the potential for two minds within Searle
(after he memorizes all the rules in the Chinese Room) to current-day reality.
For that, complexity arguments *are* important:  the size of the rule set
would be orders of magnitude beyond what a human being could potentially
memorize, much less utilize effectively.  So if we're going to let Searle
hypothetically memorize the rules, we can't outlaw the possibility that there
are now two minds in his body (the old Searle, as well as the new Searle +
Rules) by looking at modern medical experiments.

	-- Don

-- 
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/07/89)

In article <Mar.6.00.44.38.1989.19921@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
>
>In view of what you are prepared to believe about intergalactically
>distributed intelligence I am sure you will not be impressed to hear
>that to this lone terrestrial neuropsychologist it seems highly
>unlikely that memorizing a set of rules could give rise to two
>minds in the same brain: As far as I know, only Joe Bogen's knife
>has had such dramatic effects (in the "split-brain" patients -- and
>possibly also early traumatic child abuse in patients suffering
>from multiple personality syndrome).
>
At least NOW we know what you are, Stevan!  (In an earlier article, I
recall you vigorously denied being a philosopher, without saying WHAT
you were.  Here, I was getting ready to apply the duck test to the
hypothesis that you WERE a philosopher :-).)

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/10/89)

dan-hankins@cup.portal.com (Daniel B Hankins) Portal System (TM) wrote:

" I see distributed intelligences every day. The most common form is
" called a committee. Another is called a bureaucracy.

This is not relevant. Attributing "intelligence" in such cases is either
just an analogy or a figure of speech. Can a committee feel pain? If
not, then it can't understand either.

" I must call attention to a widespread phenomenon among writers of
" fiction.... Many authors report that the characters they invent seem to
" take on 'a life of their own'...

Irrelevant again. That authors have minds is not in doubt. The sources
of literary creativity or social judgment are not at issue either.
And certainly the minds of fictitious characters are as fictitious as
the characters themselves.

" All the native speakers [who are administering the Lingustic Turing
" Test (LTT)] *claim that there is a native Chinese speaker in the
" room*... *Without opening up the room to see what is inside*, what
" basis do we have for disbelieving the native speakers? The fact of the
" matter is, *we don't*. *Something* in there is understanding Chinese.
" It ain't Searle; so what is it?

All you are doing here is restating the premise of the LTT. Searle's
Argument shows what untoward conclusions arise from accepting the
premise that the LTT could be successfully passed by symbol-crunching
alone. Sometimes the best way to deal with an untoward conclusion
is to revise your premises. The people who are arguing till they are
black and blue that "rules understand" or "chalk understands" or
"Searle's brain has another mind that understands" would do better
to stop straining at it and simply confront the possibility that
it is not possible to pass the LTT by symbol crunching alone!

By the way, although the constraint ends up doing spurious double duty,
the reason Turing formulated the TT as the LTT rather than the Total
(robotic) Turing Test (TTT) was not explicitly because (1) he was
endorsing the symbol-crunching theory of mind, but because (2) he
didn't want anyone to be biased by the APPEARANCE of the candidate. In
these Star-Wars days of loveable tin heroes, we perhaps no longer need
to be so worried that robots will be denied the benefit of the doubt
just because of their LOOKS. So let us consider the possibility that
to pass the LTT, a candidate may need the functional wherewithal to pass
the TTT, and that that functional wherewithal will not be mere
symbol-crunching.

Refs:   Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

geddis@polya.Stanford.EDU (Donald F. Geddis) (03/10/89)

In article <Mar.9.12.22.06.1989.21547@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>Sometimes the best way to deal with an untoward conclusion
>is to revise your premises. The people who are arguing till they are
>black and blue that "rules understand" or "chalk understands" or
>"Searle's brain has another mind that understands" would do better
>to stop straining at it and simply confront the possibility that
>it is not possible to pass the LTT by symbol crunching alone!

We start with a hypothesis, that "mere symbol crunching" has the capability
to pass the LTT.  From that you are quite correct:  if we logically derive
a conclusion that everyone agrees is absurd, then (by reductio ad absurdum)
we would have shown that the premise is false.

But so far you seem to be the only one who views the System Reply conclusion
(that the Searle + Rules combination is a mind that understands) as an
absurd conclusion.  It isn't good enough for your argument that the absurdity
is obvious to you; it must be obvious to everyone else as well (such as:  0=1).

The truth is that in this case the conclusion is perfectly consistent with
the original hypothesis, so Searle's argument shows nothing.  Which isn't
quite the same as actually proving that symbol processing *could* achieve
intelligence, but it is much better than the devastating blow you seem to be
claiming.

	-- Don Geddis
-- 
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

mike@arizona.edu (Mike Coffin) (03/10/89)

Stevan Harnad writes, in part
> All you are doing here is restating the premise of the LTT. Searle's
> Argument shows what untoward conclusions arise from accepting the
> premise that the LTT could be successfully passed by symbol-crunching
> alone. Sometimes the best way to deal with an untoward conclusion
> is to revise your premises. The people who are arguing till they are
> black and blue that "rules understand" or "chalk understands" or
> "Searle's brain has another mind that understands" would do better
> to stop straining at it and simply confront the possibility that
> it is not possible to pass the LTT by symbol crunching alone!

I think you misunderstand our state of mind.  ;-)

You seem to think that, realizing our position well-nigh untenable, we
are desperately inventing ever-more-bizarre rationalizations for our
preconceived ideas.  Of course, I can't speak for others, but I am not
"straining" in the slightest.  I see no untoward, or even unexpected,
conclusion!  The systems reply seems the most natural thing in the
world to me --- and I don't even have a Yale education.

Life is absolutely FULL of situations where large aggregates of simple
objects don't act simply.  It is the most natural thing in the world
to expect the whole to be greater than the sum of the parts.  It is
"common sense."  In fact, it would be surprising if a large aggregate
didn't have "a life of its own."  I learned very early --- probably
while watching my father take apart, clean, and reassemble clocks and
watches  --- that if you take apart complicated things you expect to
find simpler things, and if you put a lot of simple things together,
you get complicated things.  Playing with tinker-toys, erector sets,
microscopes, physics, and computers all reinforced this sense.

Am I alone in this sense that the whole is generally greater than the
sum of its parts?


-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (03/10/89)

In article <Mar.6.23.26.26.1989.5039@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>arm@ihlpb.ATT.COM (Macalalad) of AT&T Bell Laboratories wrote:
>" the only point you didn't comment on was the argument that neurons seem
>" to be the "chalk-pushers" of the brain, yet they individually don't
>" seem to have understanding...  Instead of having just one person
>" in the Chinese room, let's have a lot of people, comparable to the
>" number of neurons in the brain. All of them are busy doing calculations
>" on the chalkboard and passing pieces of paper around, all in strict
>" adherence to their rulebooks.

>To me there is only one thing that sounds more unlikely than the notion
>that the LTT could be passed by symbol-crunching alone, and that is the
>notion that all neurons do is crunch symbols. But that's certainly one
>way of supporting the proposition that symbol-crunchers can understand.
>It's called argument by assumption (i.e., it's the same circular
>reasoning we keep encountering over and over on this topic).
>For the record, the only reason the real brain is immune to Searle's
>Argument is that neurons are NOT just "chalk-pushers."

	Upon what evidence do you base your statement "neurons are
NOT just 'chalk pushers.'"?   
	I'll be the first to admit that neurons do have a complicated
transfer function between input and output, but your statement gives
them a little more cognitive ability than I'm ready to accept.
Creatures with relatively few neurons show little cognitive ability
(Aplysia for instance).
	Or perhaps our definitions of chalk-pushers are a little different.
I assume that by "chalk-pusher" you mean someone with memory and
a relatively simple transfer function between input and output
(perhaps a little more complicated than 1/(1+e^(-x)), but nevertheless
much less complicated than that of any relatively lucid [in terms
of being able to deal with real-world problems] brain,
including Aplysia, grasshoppers, etc.)
        From my desperate attempts to find neural net literature
five years ago, I know there are hordes of books describing mathematical
models of real neurons (probably not including neuropharmacology,
which _is_ important, but including neural potential values).
	Anyway, the main idea here is that neurons are not homunculi!
They cannot do any kind of reasonable cognitive task without being
in a weighted network with many, many others.
	The neuron does not understand; the network _does_ (unless
you are ready to drop the concept that people understand).  Is this not
a good precedent to show that systems do exist in which the parts
do not understand, yet the system does?
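	To make the parts-versus-whole point concrete, here is a minimal
sketch (in Python, with hand-picked weights rather than learned ones --
purely illustrative, not a model of real neurons): each unit below is
nothing but the 1/(1+e^(-x)) "chalk-pusher" mentioned above, and no
single unit computes XOR, yet the small weighted network of three of
them does.

    # Illustrative only: a lone sigmoid "chalk-pusher" versus a small
    # weighted network of them (weights chosen by hand, not learned).
    import math

    def unit(inputs, weights, bias):
        # one "chalk-pusher": weighted sum pushed through 1/(1+e^(-x))
        x = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-x))

    def xor_net(a, b):
        # no single unit computes XOR; the three-unit network does
        h1 = unit([a, b], [ 20.0,  20.0], -10.0)    # roughly OR
        h2 = unit([a, b], [-20.0, -20.0],  30.0)    # roughly NAND
        return unit([h1, h2], [20.0, 20.0], -30.0)  # roughly AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, round(xor_net(a, b)))

	The ability to compute XOR is an attribute of the network that
none of its parts has, which is all the precedent the argument above
needs.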
	Furthermore, in this never-ending "angels on the head of a pin"
debate, I believe that the subjective, non-performance-oriented concept
of understanding is not scientifically testable.  If something is in an
opaque box, how can we know whether it understands or not?  It could
just as well be a cat, cow, rock, or person.  If we think we can
determine whether the mind understands or not while we keep it in a
box, we are deluding ourselves.  A performance test based on input and
output which shows real-world reasoning is the best we can ever hope
for.

-Thomas Edwards
ins_atge@jhunix

jeff@censor.UUCP (Jeff Hunter) (03/10/89)

... harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
> [me condensed. assume an atom-by-atom model of a human is sufficient rules
>  to reproduce linguistic behaviour. now "record" atoms of a real human.]
>
> This is followed by a long list of arbitrary variations and permutations
> on Searle's simple Chinese Room  -- including real people, atomic
> copies, synthetic copies and neural networks, as well as symbolic and
> human "simulations" of all the foregoing, in unspecified languages --
> none of which seem to elucidate anything.
	I'm sorry if the examples seemed arbitrary. I was listing some
alternative "copies"of the original human, from flesh-and-blood to a purely
mental simulation. I was unsure where you would draw the line and say that
understanding ceased.

> I am asked which ones can
> understand English. The simple answer is that those that can pass the
> TTT (including the English LTT) can. Atomic and synthetic robotic
> copies could do that in principle (so what?); mere symbol-crunchers
> (including symbolic simulations of atomic copies, neural nets, etc.)
> cannot, for a number of reasons, among them Searle's Chinese Room
> Argument.
	A clear answer. Thank you. I went back and re-read your un-expired
articles here. Let me try my understanding (that word again) of your position.
"A purely symbolic simulation is, by definition, not real. A system which
has (at least some) real parts is immune to Searle's Chinese Room argument,
and thus can potentially understand. A system which can pass the Total Turing
Test (simulate a real human) can probably understand." Please correct me if
I've mis-stated your opinions.
	I'm curious as to the bounds you put on the TTT. Does a candidate
have to look exactly human even under X-rays, etc... or does it just have
to be able to pass the LTT, look vaguely humanoid, and be able to pick up
a glass?

	An interesting property of any of my "symbolic simulations of
atomic copies" is that it is possible, in principle, to reconstitute a
living human from the symbolic information of the position of the atoms.
This new human seems as good a candidate to pass the TTT as any random man
on the street, and so can "understand" with the best of them. (You may at
this point say "of course, so what?". Please don't get annoyed. I'm just
trying to fathom your concept of understanding, which I find strangely
counter-intuitive.) Do you agree that a re-embodied simulation can understand?
	
	Now let us take a non-Chinese-speaking human, and record her. She goes 
in a sealed room, with only an optical fibre link to the rest of the world.
(Note: purely symbols in, symbols out.)
Her simulation is put in a similar (but simulated) room. Both of them are 
taught Chinese (by correspondence, telephone, video, etc) until the real
human understands Chinese, and the simulated one has simulated understanding.
To an external observer they should be as alike as twins. There is no way that
I can see to distinguish which is real without opening the room (or probing
it with cosmic rays). (Remember this is a thought experiment with an
arbitrarily good simulation at atomic level.)
	Now open the real room and let the volunteer out. Reconstitute the 
simulated room, open it, and let the second copy of the volunteer out.
Presumably both can pass the TTT. Each will state that they understood
Chinese while they were in the room, so the introspective "I know when I
understand something as opposed to just manipulating meaningless symbols"
property of understanding seems to be common to both real and simulated
humans. I can't see any functional difference, inside or out, between the
real and simulated understanding. Do you see any difference aside from the
fact that the simulated one is purely non-physical?

	Next thought experiment: take a human. Record the brain, then remove it
(and store it -- who knows when a spare brain will come in handy :-). Add
hardware to each nerve to record incoming signals, send outgoing ones, and mask
the truncation of the nerve. Add more hardware to remove glucose from incoming
blood, etc. Add a room containing a miniaturized ultra-quick Searle running
an atomic-level simulation of the recorded brain. The hardware communicates
signals with Searle using some real actuator (a series of laser pulses perhaps).
The result would seem to be a human, and should be able to pass the TTT.
Do you agree?
(Oh, it wouldn't get concussions due to brain bruises. Add a few acceleration
sensors. We're back in business.)
Now if the original human spoke Chinese we have:

Searle does not understand Chinese.

You repeatedly ridicule the notion that "Searle + rules" can understand
Chinese.

"Searl + rules + laser" and "Searl + rules + laser + interface hardware" 
might or might not understand (I don't know where you stand on this).

"Searl + rules + laser + interface hardware + body" can pass the TTT, and
therefore you should believe that this can understand. Do you?

	I find it hard to believe that adding a few peripherals to the
processor (I'm a programmer don't cha know :-) magically adds understanding
somehow. Please try to explain again. Thanks.
 
>
> Burying Searle's simple, straightforward point in a labyrinth of arbitrary
> complications serves neither to understand it nor to refute it.
>
	Well it's so simple that dozens of messages later there still is no
clear agreement on what you see as the difference between real and
simulated understanding. I'm hoping to accelerate the process a bit.
		.... keep smiling ...


-- 
      ___   __   __   {utzoo,lsuc}!censor!jeff  (416-595-2705)
      /    / /) /  )     -- my opinions --
    -/ _ -/-   /-     No one born with a mouth and a need is innocent. 
 (__/ (/_/   _/_                                   Greg Bear 

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/10/89)

In article <52507@yale-celray.yale.UUCP> engelson@cs.yale.edu (Sean Engelson) writes:
>But a physicist can give me a simple and effective procedure by which
>I can measure the charge of a body, or the force of gravity.  I have
>seen no such procedure of criterion for recognising understanding,
>other than I/O equivalence with that which we call understanding in
>humans.  Under that criterion, the Chinese Room understands.

You have seen ....

Have you looked?

"Verstehen" lies at the heart of hermeneutics and several other
intellectual traditions.  Unfortunately, you come across as a naive
positivist, so I doubt whether you would get anything from Dilthey,
later hermeneutics, the Frankfurt School, phenomenology, Malinowski,
ethnomethodology or social action theory (to start near to home and
thus in your own language).

I/O equivalence is the sense-data argument; you're nearly 60 years
behind the times.  This got taken to its extreme and fell apart in the
1930s with some of the Vienna crowd.

Wittgenstein is an easy place to start too.  Have a look at his notion
of family resemblance (for games, chairs etc.).  You'll see that many
of the meanings which you take for granted aren't as hard as you'd want
them to be.  I believe that some ongoing physics experiments in or near
Nevada are also casting doubts on our ideas about gravity.  Physics is
getting a lot looser than it used to be; just look at how long physicists
will argue over a cat in a box.  Hardly the good practical empirical
experimentation of you brave AI boys.

Understanding is attributed during practice.  Perhaps something could
pass the TTT, but I doubt it, as no-one's going to get enough encoded to
simulate anything near the common-sense repertoire of humans.

Finally, the TTT cannot be like the LTT, in the missionary position and
one to one.  The real TTT's going to have to be public, communal and
involve more than one task.  I can't see why anyone is bothered about
believing whether this is possible "in theory" as there is as yet no
possible theory in which it could be true.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/10/89)

In article <Mar.9.12.22.06.1989.21547@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
>
>dan-hankins@cup.portal.com (Daniel B Hankins) Portal System (TM) wrote:
>
>" I see distributed intelligences every day. The most common form is
>" called a committee. Another is called a bureaucracy.
>
>This is not relevant. Attributing "intelligence" in such cases is either
>just an analogy or a figure of speech. Can a committee feel pain? If
>not, then it can't understand either.
>
I see we're back to playing fast and loose with language (specifically words
like "intelligence" and "understand") again.  Any well-coordinated military
team (such as, for example, a tank crew) should be so closely knit that to
call the union of that team an organic being is no mere metaphor.  AS A TEAM,
they will respond to stimuli of positive and negative reinforcement of their
actions;  and there is no reason why an outside observer would not say that
the team is basically seeking pleasure and avoiding pain.  It seems that
Stevan's argument boils down to the assumption that certain words and phrases
(such as "understand" and "feel pain") are ONLY applicable to humans.
Stevan is certainly entitled to that assumption, but there seem to be
enough of us willing to question it that he cannot try to promote it
from "assumption" to "universal truth."

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/10/89)

In article <9560@megaron.arizona.edu> mike@arizona.edu (Mike Coffin) writes:
> You do this in spite of all the
>evidence of your senses --- remember, this system passes the Turing
>test!  You have this impressive system in front of you; it
>certainly seems to understand Chinese; Searle certainly doesn't; the
>rules by themselves certainly don't.  Yet you ignore the evidence and
>insist on talking about components of the system as if they were the
>system.  It almost looks like you're in the grips of an ideology :-)

The system passes the LTT (because Searle so defines the gedanken
experiment), but it DOES NOT understand - certainly not in the sense of
the way people use the word.

So, what is AI?  An attempt to build artefacts, or an attempt to brain
wash us into seeing 5 when 4 fingers are held up?  Stop messing with my
language - 200 years of melting pot have wrecked it enough already :-(

Everyone is in the grip of some ideology, but the systems-reply ideology
is just plain silly if it attributes "understanding" to a system.  I am a
holist, but I don't see how an attribute of a part can be transferred
to the whole if it doesn't exist in the part.  The interesting thing
about systems is the attributes of the whole which CANNOT be attributes
of the parts -- and that is not the case here, I'm afraid.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

staff_bob@gsbacd.uchicago.edu (03/11/89)

In article <18073@iuvax.cs.indiana.edu>, dave@cogsci.indiana.edu (David Chalmers) writes...
 
>Answer me these questions.
>  (1)  Do you believe neurons (taken alone) have semantics.
>              [I take it the answer has to be "No."]
>  (2)  Do you believe the brain as a whole has semantics.
>              [I take it the answer is "Yes."]
> 
>Given this, you must accept that semantics can arise out of non-semantic
>objects.  Most of us are a little baffled as to how.  It seems that the 
>only half-way reasonable tack we can take to answer this question is to
>say that what is important for semantics (and the subjective in general)
>is not so much those objects as the complex patterns that they form.
>
> [much deleted] 
> 
>  Dave Chalmers
>  Center for Research on Concepts and Cognition
>  Indiana University

The term 'semantics' comes out of linguistics, and is not a synonym
for the more general term, 'meaning'. One cannot discuss semantics outside
of the context of a language. For a given, well-formed expression in any
language, the semantics of that expression is what it is intended to signify.
Thus, generally, we can say that the semantics of a sentence is its meaning.

I don't know exactly what Mr. Chalmers is trying to say in this passage.
Are we to take a neuron as a well-formed sentence in some language that
is nonetheless devoid of meaning? In the context of formal language theory,
neurons do have well-defined semantics. We think we know how single neurons
work: for a given set of input values, there is a well-defined output.
If I remember correctly, it is possible to characterize these actions with
regular expressions. Therefore, I would have to say that the answer to
[1] is "Yes".

Mr. Chalmers's argument follows from his premise. He assumes that semantics
means "something more complex" than simple input/output operations, and his
conclusion emerges directly from this (questionable) assumption.

Anyone who has studied compiler writing in general or denotational semantics
in particular is familiar with semantics in the low-level sense, but Mr.
Chalmers excludes this usage, either by choice or through ignorance of
the use and meaning of the word 'semantics'. He then goes
on to prove that since he has assumed no low-level semantics, but has assumed
high-level semantics, semantics must emerge somewhere in the middle.

To excuse the misuse of the word "semantics", let us substitute some other,
more general term, such as meaning (or perhaps the U-word, understanding). 
Then I would have to say that the answer to [2] is no. It is certainly possible
to claim that meaning does not exist in the brain, it exists in the mind. 
I'm sure that many readers will object to the distinction, and I do not
care to defend it, but my point is this: the Mind/Body problem has never
been solved, and it may well be insoluble. We observe that something called
Mind exists ("I think, therefore I am" is a proof for the existence of Mind),
and we also observe that Minds coexist with bodies. As Mr. Chalmers asserts,
most of us are baffled as to how. The general assumption these days
is that Mind is a by-product of a body, but a recent posting has made the
valid point that we cannot show that a body is not a by-product of Mind.
In any event, I think that one can show many more than half-way reasonable
tacks to take in approaching this problem than Mr.Chalmers has suggested.
I need only list all of the great philosophers from Plato and Aristotle
through Descartes, Locke, Hume, Kant, Hegel, Schopenhauer, Heidegger and
Sartre to begin to enumerate them.
                           
A lot of people have grappled with this question, none to the general
satisfaction of the rest of humanity. The recent debate in this news
group in re Searle's Chinese Room thought experiment hinges on the
Mind/Body question. Is not the premise of Searle's argument just that
understanding only occurs in Mind, and Mind exists neither in the system
(Searle+rules) nor in a computer? IMHO, this entire debate has revolved around
these two assertions, neither of which can be proven or disproven.

It interests me that so many people of scientific bent show an absolute
distaste for philosophy, yet somehow feel themselves qualified to discourse
at length on the greatest philosophical questions of the ages.

I'm reminded of an anecdote from the book "The Dreams of Reason" by Heinz
Pagels, who in his capacity (I believe) as executive director of the
New York Academy of Sciences once arranged a talk to be given by the
Dalai Lama. In the subsequent question and answer session, someone tried
to ask the Dalai Lama where intelligent machines fit into his philosophy/
religion/system. The Dalai Lama merely responded, "When you have such a
machine, here, in front of us, then I will answer your question."

What can possibly be gained from this debate over Searle's thought experiment?
Assuming that we could come to some sort of universal agreement about
this (and I really don't think that is possible) will it make a single
iota of difference to the work we're doing? Is this not really a theological
debate, more than anything else? That is to say, aren't the arguments
we have seen to this point really defenses of various faiths in the
possibility of machine 'awareness'? Can't we let that debate wait until
we're a little bit closer to something called machine intelligence?


					R.Kohout

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/11/89)

In article <28207@sri-unix.SRI.COM> ellis@unix.sri.com (Michael Ellis) writes:
>> Jeff Dalton >> David Sher

>>>Does anyone believe that they can build a machine with a soul?  It is
>>>just as easy to build in Searle's "understanding."

>>It's certainly true that it's hard to see what could ever convince
>>Searle that anything had understanding.  

>    Then you missed something. Searle is *already convinced* that at least:

>    1. Searle has it
>    2. Other humans have it

Sure, but that's not because he used to think otherwise and someone
convinced him he was wrong.  Perhaps I should have added "unless the
thing uses brains or green slime".  Anyway, you haven't said anything
about how Searle might be convinced, only that he is *already
convinced*.  So if I missed something, it must be something else.

Consider an example.  Searle talks about the "causal powers of the
brain".  He thinks there's *something* about brains that results in
understanding.  But he doesn't say anything about what it is.  Well,
that's reasonable, because we don't know all that much.  But the
causal powers end up being something we infer from the understanding
and not much help as a way to show that something has understanding in
the first place.

All of this is fine, but in the end it's not clear what might show
that green slime, for example, might have the right sort of causal
powers.

dave@hotlr.ATT ( C D Druitt hotlk) (03/11/89)

In article <15469@cup.portal.com> dan-hankins@cup.portal.com (Daniel B Hankins) writes:
 > In article <Mar.6.00.44.38.1989.19921@elbereth.rutgers.edu>
 > harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
 > 
 > >In view of what you are prepared to believe about intergalactically
 > >distributed intelligence...
 > 
 >      Some authors have even said that they do not understand why their
 > characters do what they do, which seems to me remarkably close to what
 > Searle is saying when he says that he does not understand Chinese.
 > 
 > the character they have created.  Again, each participant has in mind a set
 > of rules or a program for that character's behavior, and executes that
 > program (i.e. manipulates the symbols) during the course of play.
 > 
 >      So the claim that a person executing a set of rules is then another
 > person is not as ridiculous as it seems on the face of things.
 > 
 > 
 > Dan Hankins


Anybody tried Tim Leary's "Mind Mirrors" ?
In order to play, you have to react to various scenarios as a
character chosen and composed for you would react.
In order to win, you have to focus your choice of reactions
through Tim Leary's mind (e.g. understand his set of rules for
character behaviour).

This is similar to a musician's concept of "putting his finger
through his guitar, out through the PA, into the audience's mind"
and saying something symbolically that can be understood even
in Chinese.

Point is, there are some things we all have in common.
In some ways, we are all different.
Experiences have something to do with shaping both aspects.
If you can see how these terse generalizations apply to this
newsgroup, there is hope for AI in general.
If you want an extended explanation, just let me know.

Dave Druitt
(the NODES)
(201) 949-5898 (w)
(201) 571-4391 (h) 

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/11/89)

staff_bob@gsbacd.uchicago.edu (R.Kohout) of
University of Chicago Graduate School of Business asks:

" What can possibly be gained from this debate over Searle's thought
" experiment? Assuming that we could come to some sort of universal
" agreement about this (and I really don't think that is possible) will
" it make a single iota of difference to the work we're doing?

The historian J. H. Hexter once wrote:

   in an academic generation a little overaddicted to "politesse," it
   may be worth saying that violent destruction is not necessarily
   worthless and futile. Even though it leaves doubt about the right
   road for London, it helps if someone rips up, however violently, a
   `To London' sign on the Dover cliffs pointing south...

Searle's Argument has helped to show that pure symbol-crunching is not
the right road to the mind. In my JETAI paper I gave more reasons, and
in my book I try to show another road, a bottom-up one, in which
symbolic representations are grounded nonmodularly in nonsymbolic
representations.

Refs:   Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain 
                          Sciences 3: 417-457
        Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
        Harnad, S. (1987) Categorical Perception: The Groundwork of Cognition.
                          Cambridge University Press.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

dave@cogsci.indiana.edu (David Chalmers) (03/11/89)

Just very briefly.  I don't want to drag this discussion out even longer...

In article <2233@tank.uchicago.edu> staff_bob@gsbacd.uchicago.edu writes:
>In article <18073@iuvax.cs.indiana.edu>, dave@cogsci.indiana.edu (David Chalmers) writes...
> 
>>Answer me these questions.
>>  (1)  Do you believe neurons (taken alone) have semantics.
>>              [I take it the answer has to be "No."]
>>  (2)  Do you believe the brain as a whole has semantics.
>>              [I take it the answer is "Yes."]
>> 
>The term 'semantics' comes out of linguistics, and is not a synonym
>for the more general term, 'meaning'. One cannot discuss semantics outside
>of the context of a language. For a given, well-formed expression in any
>language, the semantics of that expression is what it is intended to signify.
>Thus, generally, we can say that the semantics of a sentence is its meaning.

Sorry, I wasn't using the term "semantics" in its linguistic sense.  I was 
using it in the more general sense of "meaning" or "representational richness"
or whatever that weird thing is that goes on in a human mind.  This is the 
sense in which the word is used by Searle (and quite a few others, 
incidentally).  I'm sorry if it confused you.

>To excuse the misuse of the word "semantics", let us substitute some other,
>more general term, such as meaning (or perhaps the U-word, understanding). 
>Then I would have to say that the answer to [2] is no. It is certainly possible
>to claim that meaning does not exist in the brain, it exists in the mind. 

Well, if you like.  But most of us believe that minds are heavily dependent
on brains for their existence.  So whether it is the brain or the mind
which supports meaning (and I'm tempted to say that it's only a "semantic"
question), nevertheless the existence of a brain seems sufficient to
produce minds and thus meaning (or whatever that word is).

>A lot of people have grappled with this question, none to the general
>satisfaction of the rest of humanity. The recent debate in this news
>group in re Searle's Chinese Room thought experiment hinges on the
>Mind/Body question. Is not the premise of Searles argument just that
>understanding only occurs in Mind, and Mind exists neither in the system
>(Searle+rules) nor in a computer? IMHO, thia entire debate has revolved around
>these two assertions, neither of which can be proven or disproven.

I agree (mostly).  Searle's Chinese Room is only a mystery in that the Mind-
Body problem is also a mystery.  The whole question is: how can something as
strange as a mind emerge from a mere physical system?  Despite 2000+
years work, still nobody can answer this satisfactorily, though I like
to think that people are getting closer.  (Surely we have made some
advances on Descartes' dualism, for instance?)  I believe that late 20th
century abstract functionalism is the first theory which has even a chance
of being correct, although it still has a lot of problems.

A discussion of the more general issues of the Mind/Body problem might be fun,
incidentally, independent of the thrashed-to-death rehashing of Searle.

>What can possibly be gained from this debate over Searle's thought experiment?
>Assuming that we could come to some sort of universal agreement about
>this (and I really don't think that is possible) will it make a single
>iota of difference to the work we're doing? Is this not really a theological
>debate, more than anything else? That is to say, aren't the arguments
>we have seen to this point really defenses of various faiths in the
>possiblilty of machine 'awareness'? Can't we let that debate wait until
>we're a little bit closer to something called machine intelligence?

What?  And put us philosophers out of a job?

  Dave Chalmers    (dave@cogsci.indiana.edu)

geddis@polya.Stanford.EDU (Donald F. Geddis) (03/11/89)

In article <2233@tank.uchicago.edu> staff_bob@gsbacd.uchicago.edu writes:
>What can possibly be gained from this debate over Searle's thought experiment?
>Can't we let that debate wait until
>we're a little bit closer to something called machine intelligence?
>					R.Kohout

Not quite.  If the Turing Test model of understanding, thinking, and
intelligence is accurate, then we must apply behavioral criteria to a system
to tell whether it does cognition.  And in that case, some current systems
are uncomfortably close (for humanist critics) to already thinking.

Eliza does a reasonable job at a very small (and cleverly chosen...) domain.
More realistically, some large expert systems give as good answers to typical
questions in their domains as human experts do.

Of course you could always say that "real" understanding requires ability in
all areas of human existence, not just some narrow field.  But then my
mother doesn't understand theoretical physics.  It's a spectrum of
possibilities, and we're all somewhere on there.  AI systems tend to clutter
the low end, but there's no sharp dividing line where version (n) isn't
intelligent, but version (n+1) is.

	-- Don
-- 
Geddis@Polya.Stanford.Edu
"We don't need no education.  We don't need no thought control." - Pink Floyd

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/11/89)

jeff@censor.UUCP (Jeff Hunter)
of Bell Canada, Business Development, Toronto wrote:

" I'm curious as to the bounds you put on the TTT. Does a candidate have
" to look exactly human even under X-rays, etc... or does it just have to
" be able to pass the LTT, look vaguely humanoid, and be able to pick up
" a glass?

The TTT requires that a candidate be able to DO everything a human can
do. It must be indistinguishable from a person in its behavioral capacity.
How it LOOKS does not matter in principle (though it might in practice
bias a human judge -- that was the motivation [not just an over-riding
faith in the functional sufficiency of pure symbol-crunching] for
Turing's "out-of-sight/into-mind" constraint, leading to the LTT
in preference to the TTT).

Both physical appearance and observable details of the physical structure
and function of our bodies (including our brains) are irrelevant
to our informal, intuitive, everyday solutions to the other-minds
problem; I also think they will turn out to be just fine-tuning
variables in the construction of a successful TTT-passing model. Most
of the real problems will have been solved before we get around to the
last bit of fine-tuning. (This is not to imply that brain function
may not give mind-modelers some functional clues.)

" it is possible, in principle, to reconstitute a living human from the
" symbolic information of the position of the atoms... Do you agree
" that a re-embodied simulation can understand?

Of course I do. I have even agreed that a simulation of a plane or a
mind can symbolically encode (and aid us in discovering and testing)
ALL of the relevant causal and functional principles of flying and
understanding, respectively, that need to be known in order to
successfully implement a real plane that flies and a real mind that
understands. I simply deny that the simulation flies or understands.

Now in the case of flying it's perfectly obvious why symbol-crunching
alone will never get you off the ground; but because of (1) the power
of Turing Machines and natural language as well as (2) the inherent
ambiguity of the LTT (symbols-in/symbols-out) on this question, some of
us seem to have made the mistake of thinking that in the case of the
mind the simulational medium and the implementational medium can be the
SAME: mere symbol-crunching. Searle's arguments and my own are intended
to show that this is incorrect. (The rest of your list of hypothetical
examples is again irrelevant.)

" You repeatedly ridicule the notion that "Searle + rules" can understand
" Chinese... [but] "Searle + rules + laser + interface hardware + body" can
" pass the TTT, and therefore you should believe that this can
" understand. Do you? I find it hard to believe that adding a few
" peripherals to the processor... magically adds understanding somehow.
" Please try to explain again.

I too find it hard to believe that adding on peripherals to a symbol-cruncher
will make it magically understand -- in fact I give reasons why this
won't even make it magically pass the TTT. It takes more (I never tire
of saying, and my interlocutors never tire of ignoring) to ground
symbols than simply hooking peripherals onto a symbol-cruncher.

" Well [if Searle's Argument is] so simple that dozens of messages later
" there still is no clear agreement on what you see as the difference
" between real and simulated understanding...

I have to remind you that the actual number of opponents I've had in
this discussion is rather small relative to the size of the readership
of the Net (in fact it's often the same individuals coming back round
after round); and their repertoire of arguments is even smaller. Not
that I think I would win if a poll were conducted (even if opinion
polls were a rational way to decide such matters); after all, on
"comp.ai" a critic of symbol-crunching is not exactly preaching to the
converted...

Refs:   Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain 
                          Sciences 3: 417-457
        Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

newsuser@LTH.Se (LTH network news server) (03/13/89)

In article <Mar.11.00.17.25.1989.4801@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:

>Searle's Argument has helped to show that pure symbol-crunching is not
>the right road to the mind. In my JETAI paper I gave more reasons, and
>in my book I try to show another road, a bottom-up one, in which
>symbolic representations are grounded nonmodularly in nonsymbolic
>representations.

I for one think you haven't been able to actually show anything. But
please go ahead and show by doing, and maybe the futile fight with
words will eventually be decided by someone writing a program, building
a robot, or something, and not just trying to prove that other people's
approaches are wrong, and :-) failing to do so.

-- 
Jan Eric Larsson                      JanEric@Control.LTH.Se      +46 46 108795
Department of Automatic Control
Lund Institute of Technology         "We watched the thermocouples dance to the
Box 118, S-221 00 LUND, Sweden        spirited tunes of a high frequency band."

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/13/89)

In article <7698@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>Gee, Gilbert, I don't see what has gotten you so excited, unless it is a rush
>of triumph at the possibility that someone who devotes at least PART of his
>time to trying to build software models of mental processes might agree with
>you on something!  I think your question is a good one, and I shall be curious
>to see if you are sympathetic to any of my answers.  However, let me first
>state that it is not my intention to lay down a manifesto for the study of
>mind.

Gee Stephen, you've forgotten your smiley.  I'd like to see the
manifesto, rather than a limp restatement of Bacon's dislike of the
"idols of the market" in the shape of the intuitions resplendent in
our language.  I'm all for scepticism about meanings carried in public
language, but this scepticism should apply equally to the products of
AI; instead it's all hope, "don't smother the budding flower" and "astronomy
took centuries" there.  All I can say from your account of scholarship
and intuition is that it smacks of hypocrisy: the products of AI come
in for far less searching scepticism than the everyday intuitions in
our language (and as we don't speak the same dialects of English, I
don't know if our common sense understandings are shared).

If you know what you're on about, if you know how you judge progress
in AI, if you know what marks out a good AI research proposal from a
poor one, then you should share it with us.  You are doing AI, so tell
us how you do it, what it tells us, and why we should believe you.

Otherwise it's 0/10 for scholarship I'm afraid.  Cruel, but there are
always jobs at the bank :-)  (lots of them).
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/13/89)

In article <Mar.11.00.17.25.1989.4801@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>The historian J. H. Hexter once wrote:
>
>   in an academic generation a little overaddicted to "politesse," it
>   may be worth saying that violent destruction is not necessarily
>   worthless and futile. Even though it leaves doubt about the right
>   road for London, it helps if someone rips up, however violently, a
>   `To London' sign on the Dover cliffs pointing south...
>
Thanks for broadening things out again.  For those who missed it, my
first trade was history, though Hexter's comments never seemed to apply
to British history, where demolition of the shoddy has always been the
order of the day.

Hexter is a good humanist source for sensible perspectives on matters
of mind.  In his "History Primer", he discusses three explanations
of a "muddy pants" episode.  Needless to say, the really dumb one
reads like a reasoning trace (an explanation? - hah!) from an expert
system.  The preferable explanation, which is perfectly adequate for
*UNDERSTANDING* is more cogent and less fussy.

The common sense context of human understanding rules out a rule-based
approach.  No-one in AI is up to encoding it, and no-one could maintain
it. I'd like to define understanding as the ability to integrate
relevant knowledge with any current context.  For many tasks, it is
impossible to see how a machine could even pass an LTT.

AIers cannot program what they do not understand.
AIers understand very little.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (03/16/89)

In article <2568@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>The system passes the LTT (because Searle so defines the gedanken
>experiment), but it DOES NOT understand - certainly not in the sense of
>the way people use the word.

	It appears that "people" cannot use the word "understand" to
refer to cognitive associations of computers, only people, and 
occasionally animals.  Despite popular usage, there is no reason to
expect artifacts not to be able to do what we consider "understanding"
in humans.  But until we ourselves define what "understanding" is,
which the discussion on this group have failed to come to a conclusion
upon, we cannot prove anything concerning artifactual understanding.

>Everyone is in the grip of some ideology, but the systems' one is just
>plain silly if it attributes "understanding" to a system.  I am a
>holist, but I don't see how an attribute of a part can be transferred
>to the whole if it doesn't exist in the part.

   You say you cannot understand how an attribute of a part can be
transferred to the whole if it doesn't exist in the part.  This is
reasonable.  However, an attribute of _parts_ can be transferred to
a whole if it doesn't exist in any singular _part_.  (I.e., summing
a+b+c+d can be accomplished by a system of three parts, one which
sums a+b, another which sums c+d, and a third which sums the outputs
of the first two parts.)
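
   (A concrete sketch of that decomposition, purely for illustration,
in Python: no single part can sum four numbers, yet the three-part
system can.)

    # Illustrative: each part sums only two numbers; the whole sums four.
    def add2(x, y):
        return x + y                          # the only ability any part has

    def add4(a, b, c, d):
        return add2(add2(a, b), add2(c, d))   # wire three parts together

    print(add4(1, 2, 3, 4))                   # prints 10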

>  The interesting thing
>about systems is the attributes of the whole which CANNOT be attributes
>of the parts, not true here I'm afraid.

	You are saying that the attributes of the whole CAN be the attributes
of the parts here... I am not sure I understand your concept here,
but if we assume you mean "here" to refer to "cognition", then you are
saying that the parts of cognition are capable of cognition, that the
parts of understanding understand.  If we assume the human body to be
made up of parts (atoms and electrons), then from the above assumption
we are assuming that atoms and electrons can understand.

	The conclusion I'd like to draw is that systems _typically_
have attributes which one would find very, very difficult to infer
from examining each part.  (Examples: any dynamic system -- Julia sets,
turbulence in air or fluids, the time between faucet drips, etc.)
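
	(A minimal illustration of that last point, in Python, with an
arbitrarily chosen map and starting values: the rule below is a
one-line multiplication, yet its long-run behaviour -- like that of
turbulence or dripping faucets -- is very hard to read off from the
rule itself.)

    # Illustrative: the logistic map at r = 4 is chaotic; two almost
    # identical starting points soon give wildly different trajectories.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.400000, 0.400001
    for _ in range(25):
        x, y = logistic(x), logistic(y)
    print(abs(x - y))   # the tiny initial difference has been hugely amplified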

-Thomas Edwards

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/16/89)

In article <2573@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>  I'm all for scepticism about meanings carried in public
>language, but this scepticism should apply equally to the products of
>AI, but it's all hope, "don't smother the budding flower" and "astronomy
>took centuries" there.  All I can say from your account of scholarship
>and intuition is that it smacks of hypocrisy: the products of AI come
>in for far less searching scepticism than the everyday intuitions in
>our language (and as we don't speak the same dialects of English, I
>don't know if our common sense understandings are shared).
>
Gilbert, it sounds to me like your impressions of AI are based solely
upon claims of various hucksters who try to sell their products in its
name.  I really wonder how many of us who have participated in this recent
argument over Searle regard any of these products as any sort of a benchmark
of our progress in the study of mind.  You don't, that is clear;  but while
I spend the better part of my working day worrying about designing models of
memory and reasoning, I do not feel I am "betraying my profession" by rejecting
those products as strongly as you do.  I get the impression that you have
chosen to ignore the whole issue of "idols of the market," rather than
recognize my words as a "limp restatement."

Here is a little bit of slightly oversimplified history:

	In 1957 Allen Newell published a paper in which he listed
	all sorts of wonderful "intelligent" things computers would
	be able to do in ten years.

	In 1967, Herbert Dreyfus proclaimed a loud "Aha!" since none
	of Newell's predictions had been achieved.  He then wanted
	to throw out the baby with the bathwater, claiming that the
	rest of us should stop wasting our time.

	In 1977, Ed Feigenbaum was telling us about the wonderful
	future which expert systems would bring us.

	In 1987, we started reading papers about what was wrong with
	expert systems.  This time, however, the criticism was coming
	from voices within the AI community.  Some of us, for example,
	were beginning to grasp what Marvin Minsky was talking about
	in questioning our very assumptions about how to model memory
	or how to process language.

Humberto Maturana describes some provocative uses of language as "triggering
perturbations."  What he means is that we should not take these phrases at face
value.  Instead, we should treat them as incentives for our own thought.  Where
you see hypocrisy, some of us see triggering perturbations.  Stop wasting bytes
preaching against the hucksters.  They don't waste their time reading this
stuff anyway.

>If you know what you're on about, if you know how you judge progress
>in AI, if you know what marks out a good AI research proposal from a
>poor one, then you should share it with us.  You are doing AI, so tell
>us how you do it, what it tells us, and why we should believe you.
>
As my former composition teacher used to say, "While I may be incapable
of laying an egg, I know when one is fresh."  Like Dreyfus, you would throw
the baby of our curiosity out with the dirty bathwater of admittedly worthless
commercial AI products.  Remember the challenge in THE LAST UNICORN:

	Your true task has just begun, and you may not know in your
	life if you have succeeded in it, but only if you fail.

Walk for a while in our shoes before you consign our scholarship to the flames.

jeff@censor.UUCP (Jeff Hunter) (03/16/89)

In article <Mar.11.01.27.05.1989.5865@elbereth.rutgers.edu>, harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
> Of course I do. I have even agreed that a simulation of a plane or a
> mind can symbolically encode (and aid us in discovering and testing)
> ALL of the relevant causal and functional principles of flying and
> understanding, respectively, that need to be known in order to
> successfully implement a real plane that flies and a real mind that
> understands. I simply deny that the simulation flies or understands.
> 
> Now in the case of flying it's perfectly obvious why symbol-crunching
> alone will never get you off the ground...
        But then again if all you want is a splashy picture for an
aftershave ad then the simulation works just as well. Seriously, your
analogy begs the question of whether understanding must have a physical
implementation, and thus is not equal to simulated understanding.

        I understand your postings to say that a being which can understand
must have physical components that are not mere transducers for the signals
into and out of a symbol simulation.
>  It takes more (...) to ground
> symbols than simply hooking peripherals onto a symbol-cruncher.

        Well... You have given two tests for "real" understanding. The TTT,
and subjective feelings. I'll provide examples that pass these tests.

1) the TTT: (the same "irrelevant" example) Remove a person's brain and 
replace it with a detailed, real-time atomic-level simulation. Add 
transducers to convert real to simulated neural signals, and vice versa.
(Do the same for blood flow and other details.) 
        Now, this new person's simulated brain adequately reproduces "ALL
of the relevant causal and functional principles" of the real brain,
including the nerve signals. The behaviour of the human is unchanged.
The human continues to be able to pass the TTT, although he is now a 
symbol cruncher with added peripherals. Please rebut, or stop talking about
the TTT.

        As a side note I point out that this simulation captures any
commonsense meaning of the word "understand". For example  if I say "Hawking
has a deep understanding of theoretical physics that has allowed him to
make brilliant contributions to scientific knowledge." the sentence does not
depend on whether or not Hawking's brain has been replaced with a simulation.
(At least not when I say it :-) I'll coin the phrase "Searle-understanding" to
denote the property that Searle claims is not captured by the above sense of
the word.

2) introspection: When asked for an objective definition of understanding
in a posting* Mr Harnad replied:
>As stated many, many times in this discussion, and never confronted
>or rebutted by anyone, this is not a definitional matter: I know
>whether or not I understand a language without any need to define anything.
(* Message-ID: <Feb.23.23.05.05.1989.8455@elbereth.rutgers.edu>)
        Consider this: on New Year's eve the entire solar system was 
digitized and replaced by a symbolic simulation, a black hole, and a shell
of transducers to translate incoming/outgoing radiation. Your current
thoughts are simulations as are the ones at the time you wrote the above.
        Rebut, or concede that introspection does not capture
Searle-understanding (and vice versa).


        I believe that the "symbol groundings" that you have "proven" to
be required for understanding systems are nothing more than the rules 
governing how a transducer handles a signal. 

        This is not to say that your "mixed approach" as outlined recently
will be fruitless. It certainly uses more realistic hardware than my
examples :-) However you have failed to prove that it is NECESSARY for any
conceivable tasks.
-- 
      ___   __   __   {utzoo,lsuc}!censor!jeff  (416-595-2705)
      /    / /) /  )     -- my opinions --
    -/ _ -/-   /-     No one born with a mouth and a need is innocent. 
 (__/ (/_/   _/_                                   Greg Bear 

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/17/89)

In article <Mar.2.23.55.02.1989.28807@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>Counterargument: To ascertain (beyond reasonable doubt) that a system
>CANNOT understand, you don't need a theory. Searle's argument is a case
>in point: If Searle (or you, or me) does exactly what the computer does
>but does not understand, then the computer does not understand.

Of course the computer doesn't understand.  The question is whether
the computer + rules, in operation (rather than halted, say),
understands.

The problem with the so-called systems reply is that it is often made
to say "the system understands" when all it needs is "Searle has
failed to show that the system does not understand".

No one has to prove the system does understand in order to refute
Searle.

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/17/89)

In article <Mar.2.23.56.36.1989.28884@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>Searle's Argument is there to show us that that's just not good enough,
>because he can pass the LTT for Chinese without understanding Chinese.
>And he's all there is to the "system."

Just because Searle is "all there is" does not mean he has the
necessary access to everything his brain is doing.  Searle has
no way of knowing what, if anything, is experienced by the entity
<Searle executing rules>.  For all he knows, it amounts to a
separate, albeit presumably slower, consciousness elsewhere
"in his head".

Sure, maybe, this other entity does not understand.  But Searle has
not shown that it does not understand.

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/17/89)

In article <7409@polya.Stanford.EDU> geddis@polya.Stanford.EDU (Donald F. Geddis) writes:
>In article <Mar.2.23.56.36.1989.28884@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>>And he's all there is to the "system."

>Completely, 100%, false.  Wrong.  Incorrect.  The Chinese Room contains
>Searle AND THE RULES.  And the system as a whole DOES understand, as
>evidenced by the Chinese answers to Chinese questions.

Maybe the room does understand.  We might argue whether its behavior
is sufficient to show that it understands.  I would say that the
behavior is not sufficient.

But it is not necessary to show that the Room understands in order
to refute Searle.  All that is required is to show that Searle has
failed to show that the Room does not understand.

He has failed because -- as all the variants of the systems reply
point out -- Searle's lack of understanding is no more than saying
that any understanding that may be there is inaccessible to Searle.
And of course it's inaccessible.  Searle can see only the low-level
operations.  He can observe all the details, and can discover larger
structures of organization, but basically he's in the same position we
are when we look at the operations of brains.  He can't expect to
immediately see if understanding is taking place.  (If indeed that's
something one might see at all.)

gall@yunexus.UUCP (Norman Gall) (03/17/89)

I haven't heard from the Wittgensteinians yet, probably the most
vehemently opposed to the central theses of AI.  Both R. Harre and S.
Shanker have written scathing criticisms of AI, and Shanker is now in
the process of completing a book on AI.  What have you people heard in
this vein?

Norm Gall
Dept. of Philosophy
York University

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/19/89)

jeff@censor.UUCP (Jeff Hunter) of
Bell Canada, Business Development, Toronto wrote:

" 1) the TTT: (the same "irrelevant" example) Remove a person's brain and 
" replace it with a detailed, real-time atomic-level simulation. Add 
" transducers to convert real to simulated neural signals, and vice versa.
" Now, this new person's simulated brain adequately reproduces "ALL
" of the relevant causal and functional principles" of the real brain,
" including the nerve signals. The behaviour of the human is unchanged.
" The human continues to be able to pass the TTT, although he is now a 
" symbol cruncher with added peripherals. Please rebut, or stop talking about
" the TTT.
" I believe that the "symbol groundings" that you have "proven" to be
" required for understanding systems are nothing more than the rules
" governing how a transducer handles a signal. This is not to say that
" your "mixed approach" as outlined recently will be fruitless. It
" certainly uses more realistic hardware than my examples :-) However you
" have failed to prove that it is NECESSARY for any conceivable tasks.

You can't ask me to rebut anything I haven't claimed. I think I have
been perfectly explicit about what I have and have not shown. It is now
up to you to read and understand it. Here is part of a passage from my
JETAI article that I have already posted to this discussion:

   (8) Robotics and Causality:
   Searle's argument hence fails logically for the robot version of the
   Turing Test, for in simulating it he would either have to USE its
   transducers and effectors (in which case he would not be simulating all
   of its functions) or he would have to BE its transducers and effectors,
   in which case he would indeed be duplicating their causal powers (of
   seeing and doing).

   (9) Symbolic Functionalism versus Robotic Functionalism:
   If symbol-manipulation ("symbolic functionalism") cannot in principle
   accomplish the functions of the transducer and effector surfaces, then
   there is no reason why every function in between has to be symbolic
   either. Nonsymbolic function may be essential to implementing minds and
   may be a crucial constituent of the functional substrate of mental
   states ("robotic functionalism"):  In order to work as hypothesized
   (i.e., to be able to pass the Turing Test), the functionalist
   "brain-in-a-vat" may have to be more than just an isolated symbolic
   "understanding" module -- perhaps even hybrid analog/symbolic all the
   way through, as the real brain is, with the symbols "grounded"
   bottom-up in nonsymbolic representations.

In other words, the only function that I have shown to be NECESSARILY
immune to Searle's argument is transducer/effector function. But now
consider the following:

(i) If you examine the brain with a view to slicing off its "transducers"
and "effectors," you come up against a problem, because even if you
yank off the sensory surfaces, what is actually left over is repeated
analog transforms of the sensory surfaces as you go deeper and deeper
into the brain. Do you ever reach a point where sensory function leaves
off ("cut here") and symbol crunching takes over? No. What you find is
that it grades (in a way that is not understood) into sensory-motor
function (modulated by arousal, attention and affective functions), and
then into pure motor analogs, leading to the "effector" mechanism. So
if you yank off the "transducer/effectors," you've got no brain left at
all! (This is not to imply that the areas in question are just dumbly
reproducing the sensory surfaces over and over, but that
"transducer-effector" function seems to be intimately and intrinsically
involved in everything the brain does -- and there's no evidence whatsoever
that the functions with which it is so closely intertwined consist of
anything like symbol-crunching either.)

(ii) So the argument is really this: Searle has successfully shown that
symbol-crunching ALONE is not the function that gives rise to a mind.
It IS logically possible that if we hook up the same symbol-cruncher,
which we just showed to be totally inert and mindless, to the set of
sensors that opens and closes the doors at Woolworth's, suddenly
the lights go on and there's an understanding mind in there! Searle's
Argument cannot show it to be false that the system "symbol-cruncher +
tranducers" understands, but I somehow doubt it's true anyway (almost
as much, but not quite, as I doubt that "symbol-cruncher + chalk" can
understand).

But before you rush to say "just make the transducer/effector function
more complicated and it'll work," I have to remind you of how it seems
to have turned out in the case of the real brain: As the
"transducer/effector" function approaches the requisite "complexity,"
it begins to grow to the size of almost ALL of the brain's
functions, and the corresponding room for symbol-crunching shrinks
proportionately.

[Let me add that in the JETAI article I also noted that it's an
empirical question just how much of the internal functioning of the
brain or any other TTT-passing candidate is or can be pure
symbol-crunching (computation). There's no logical reason why some of
it shouldn't be. For example, my own hybrid symbol grounding scheme,
described in the Categorical Perception book, has a symbolic component
too, but it is a specialized and dedicated one, with its elementary
symbols grounded bottom-up in nonsymbolic representations. It is not an
independent module; there's no place to "cut" so as leave transducers on
one side and a pure symbol-cruncher on the other.]

So the conclusion is this: Mental function cannot be just symbolic
function. That's been shown. What function(s) it actually is or can be
remain to be shown (by finding out what function(s) are sufficient
to pass the TTT). "Symbol-crunching + transduction" (jointly) is still
in the running, as a logical possibility, but it hasn't got much going
for it empirically or conceptually.

(iii) The formal power of computer simulation ("Turing Equivalence" --
not to be confused with the Turing Test) seems to have gone to some
people's heads. You are free, if you like, to think of an airplane
as just a set of transducers/effectors hooked up to a symbol-cruncher,
but not many of the functions of FLYING will be generated by the
symbolic component (mainly just the already computerized cockpit
functions). When the plane is actually flying, almost all of the real work
will be done by the nonsymbolic component ("transducers/effectors")
rather than the symbolic one. I hope you can still see that -- and that
it would be silly to speak of this as a plane at all if you removed
its "transducers/effectors." If you can see it in that case, then
try to see that the TTT-robot case is exactly the same.

" if I say "Hawking has a deep understanding of theoretical physics that
" has allowed him to make brilliant contributions to scientific
" knowledge." the sentence does not depend on whether or not Hawking's
" brain has been replaced with a simulation.

Nor would it depend on it if you were talking about Ed Witten. So what
has Hawking's tragic handicap got to do with it? You're getting some
spurious mileage by implying that Hawking, because of his infirmity,
is closer to being a pure symbol cruncher. But that's simply false.
He has a brain like anyone else (apart from the motor infirmity) and
is able to draw on exactly the same nonsymbolic functions that the
rest of us draw on. So why mention his case at all?

" Consider this: on New Year's eve the entire solar system was 
" digitized and replaced by a symbolic simulation, a black hole, and a shell
" of transducers to translate incoming/outgoing radiation. Your current
" thoughts are simulations as are the ones at the time you wrote the above.
" Rebut, or concede that introspection does not capture 
" Searle-understanding (and vice versa).

As in the case of the plane and the brain, you are free to imagine
the solar system (or the universe?) as just a bunch of transducers
hooked to a symbol cruncher. So what? The vast majority of their
critical functions will continue to be the nonsymbolic ones.


Refs:   Harnad, S. (1987) (Ed.) Categorical Perception: The Groundwork of 
                          Cognition. NY: Cambridge University Press.
        Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

dan-hankins@cup.portal.com (Daniel B Hankins) (03/24/89)

     I've been watching the discussions here for a while, including Stevan
Harnad's reply to my own posting.  After a good deal of thought on the
subject, I have some ideas to express.

     First, I'd like to deal with the posting that claimed that the
Literary Turing Test (hereafter referred to as TT) was unnecessary to
determine whether a system understands or not.

     The claim, as I understand it, is this:  Once we have a theory of
understanding, anything which functions according to that theory will by
definition understand - including a computer running the right kind of
program.

     This is true, but not really germane; the TT _is_ the test that is
relevant in the context of the current discussion.

     There are two ways in which we may know things.  One is empirical
and the other is structural.  The first can be characterized as our
knowledge of fire before chemistry.  No one knew how fire worked, but this
did not prevent people from identifying and creating it.  The second can be
characterized by our knowledge of fire after chemistry.  Now that people
know how fire works at the chemical level, it is much easier to make fire
in a greater variety of forms and to control it.

     Our current knowledge of sentience is of the first kind.  No one knows
how sentience or understanding works, but we can identify it when we see
it.  But can we make something that does it?  Well, that's the focus of the
current discussion.  The TT is relevant because it is an empirical
identification tool which at the moment provides us with our only way of
discriminating sentient systems from non-sentient ones.

     Before chemistry, the _definition_ of fire was something that gave off
light and heat, flowed upwards, attached itself to things like wood, would
spread, and so on.  Our current definition of sentience and understanding
is similar in nature.  We can't say exactly what it _is_, but we know it
when we see it.

     The previous posting on this implied that the TT is irrelevant because
we won't be able to build a sentient, understanding system _until_ we have
a theory of understanding.  This is where I believe the argument fails.  We
have many instances of phenomena we are able to reproduce without fully
understanding the nature of the phenomena in question.

     To give an example:  it is quite possible to build a clock without
understanding any of the physical principles involved.  One takes an
existing clock, for instance one made out of steel, and disassembles it. 
One pays careful attention to how the pieces fit together and in what
order.  Then one reproduces the pieces from whatever material there is at
hand that seems to do the same job.  For instance, one might build the
replacements out of wood, with the possible exception of the spring.  The
spring's function might be reproduced with something more linear such as a
gear driven by a waterwheel.  Once assembled, the new clock is then
observed to function much like the old.  Perhaps not as elegantly, but it
will serve.

     In AI those in the inference engine camp are in the position of
scientists attempting to determine the chemical nature of fire.  Many of
those in the connectionist camp are in the position of the watchmaker who
tries to copy the original mechanism as closely as possible, to see if the
copy works at all well.

     The connectionist camp seems to be enjoying increasing success; not
with overly simplistic models such as backprop, which to my mind resembles
building the replacement clock with wheels instead of gears, but with
models more faithful to the original neural functions.  There is here also
some attempt to work from these models towards a theory of sentience, but
even those who do not concern themselves overly with mathematical models of
understanding enjoy considerable success from their creations.

     So although the TT may not be necessary in a theoretical sense for
judging the sentience of a system, it may well be the only test we have
available when sentient systems begin to arise.

                                *******

     Next, I'd like to talk a little about the symbol grounding problem,
which is what I perceive to be Searle's primary objection to Strong AI.

     The interesting thing about the symbol grounding problem is its root: 
dualism.  In order for there to be a distinction between syntax and
semantics, there must first be a distinction between the mental (the
perception of an object) and the physical (the object itself).

     This is where Searle's argument (and Harnad's affirmation of it)
begins to make some sense.  Because the human being is the only part of the
Chinese Room to have a _mind_, it is the only part of the system capable of
understanding.  After all, understanding (associating symbols with sensory
input) is what minds do.

     Searle says that semantics cannot arise out of syntax and in this
context he is correct.  Provided that a computer is in fact a device that
performs 'symbol crunching', it can in fact never understand anything. 
This makes his proof a _reductio ad absurdum_, a proof by contradiction. 
He makes as his premise a system that passes the TT, then proceeds to show
that there is no mind to understand the symbols and that therefore the TT
could not really have been passed in the first place.

     However, Searle's proof suffers from a small problem, even in this
area.  Instead of saying that semantics can never arise from syntax, he
should instead be saying that syntax can never arise from semantics.

     A computer is _not_ a device that does symbol crunching.  Only minds
can do this, as symbols are wholly in the domain of the mental.  Within the
bounds of mental-physical dualism, Searle quite rightly notes that the
computer has no mind; it is a completely physical system.  Therefore, a
computer cannot crunch symbols.  All it deals with are electrical currents
and other mechanical functions - things that are wholly physical.  It deals
in syntax not at all.  Therefore it does not understand anything, where
understanding involves associating mental symbols with their physical
counterparts.

     I won't go into the metaphysical problems caused by the concept of
mental-physical dualism, save to state that I am in fact an empiricist and
that my position on this is not too different from those of Berkeley, Hume
and Kant.  For Gilbert Cockton's benefit:  this is *not* the same as
logical positivism.  I am aware of that school of thought, and I am not of
it.

     I am aware that Searle, being aware of the problems involved in
mental-physical dualism (hereafter referred to as MPD) and in response to
those objections, has said that he is a nondualist.  His position, so far
as I can make it out, is that understanding is a physical property like
magnetism, and as such 'adheres' to specific kinds of substances - silicon,
steel and the like not being among them.  That is, he seems to claim that
sentience is an emergent property of particular kinds of physical and
chemical systems, and that computers simply don't have the right kind of
'chemistry' for sentience to emerge from them.

     This is a possibly valid objection to strong AI, and one that I will
deal with later - but it has _nothing_ to do with the symbol grounding
problem.  By the same token, we can see that the Chinese Room experiment says
nothing about emergent properties, only about symbol grounding.  The two
problems of emergent properties of physical systems and of symbol grounding
in physical systems are just that - two problems.

     Before I go on to discuss the emergent properties problem, I'd like to
add a few more points to the symbol grounding issue from a nondualistic
viewpoint.

     In a nondualistic system, either everything is mental or everything is
physical.  It really doesn't matter which, as we all live in our own
solipsistic universes and will never be able to tell.

     What are syntax and semantics in a nondualistic system?  Merely two
different aspects of the same thing - sensations and collections of
sensations.  For instance, when the symbol for chair is activated, you
experience a number of sensations and collections of sensations and
collections of collections of sensations, all interwoven and interrelated. 
You may experience the sound of the word 'chair', the shape of the word on
a page, the sound of a chair's creak, the smell of wood, the visual
sensation of several chair shapes, and so on.

     All of these are sensations.  Every one.  Not one of them is something
you 'have', but rather something you experience.

     Another example:  You decide to move your hand upwards, in normal
parlance.  Do so now.  Now, introspect and examine what you actually
experienced.  What you most likely experienced was a sensation of desire to
move your hand upwards, followed closely by the sensation (actually, a
collection of sensations) of your hand moving upwards.

     Similar things happen when you understand something, or more
accurately feel as if you understand it.  You feel first a tension, a
distress at sensations which seem not to fit a pattern.  Then, that tension
disappears and is replaced by a sensation of satisfaction as you experience
organization of the chaotic sensations into a pattern.  This satisfaction
increases as you experience your body and brain interacting with these
sensations, strengthening the pattern.

     In a nondualistic system, understanding and sentience is not a symbol
grounding problem, in the syntax-semantics MPD sense, but rather a problem
of organization of apparently chaotic input.

     In the nondualistic domain, what is needed for a computer and its
program to achieve understanding and sentience is precisely this kind of
sensation-organization ability.

                                *******

     Searle has argued that understanding is a property like magnetism, and
that therefore only the 'right' kinds of substances can have it.  Searle
claims that biological systems have the right kind of substances, and that
mechanical systems (actually, what he calls 'symbol-crunchers' (a
misnomer)) do not.

     To prove this, he uses not the CR argument, but rather an
anti-simulation argument: "Real object X has effect Y on real object Z, but
we all know that simulated object X does not have effect Y on real object
Z, therefore the simulated object does not have the same formal properties
as the real one."

     There are two issues here:  whether understanding is in fact a
physical property like magnetism, and whether the anti-simulation argument
(hereafter referred to as ASA) is valid.

     I won't attempt to argue with the first claim;  my reply to the second
claim makes the first moot.  I will merely note that understanding is not
yet well understood, and that therefore any claims as to whether
understanding is a physical, chemical, or purely organizational property
are simply opinions, without the force of a well-tested theory to back
them.  The opinions of such people as Hofstadter and myself are as good as
those of Searle and Harnad on this matter.

     I can summarize my reply to the ASA in one sentence:  "A difference
that makes no difference _is_ no difference.".

     The ASA is characterized by sentences like the following:  "A
simulated magnet attracts no iron.".

     This may be true, but it is irrelevant;  I will show by means of a
gedanken experiment that in certain circumstances (the _only_ ones relevant
to the discussion at hand) a simulated magnet does indeed attract iron.

     Imagine, if you will, two boxes.  One contains the following:

     * a magnet
     * a bar of iron
     * two waldoes to manipulate the magnet and iron
     * strain sensors on the joints of the waldoes
     * position sensors on the joints of the waldoes
     * a computer controlling the two waldoes and gathering data from the
       strain and position sensors
     * a single serial link by which ASCII data may enter and leave the box

     The computer has been programmed to accept commands over the serial
link and to respond with various sorts of information over the same link. 
It accepts commands to manipulate the waldoes, report on the positions of
the objects, and on the objects' attraction to each other via the strain
sensors.

     The other box contains:

     * a computer running a very accurate simulation of the contents of the
       other box
     * a single serial link identical to that of the first box

     There are also two computer terminals.  One is attached to each box. 
However, the boxes are kept in a different room from the terminals, and the
terminals are unlabeled with respect to which box each is connected to.

     A human researcher, quite familiar with physics, is led to the room
with the two terminals.  His task is to distinguish the simulated
magnet-iron-waldo system from the real one, _without opening the boxes to
look at the contents_.

     The problem with the researcher's task is that it is impossible.  The
box containing the real setup and the box containing the simulated setup
will respond _identically_ to commands.

     This brings us to the rather startling but undeniable conclusion that
the two boxes are actually two instances of the same object.

     At this point comes the retort: "But of course they're not the same!  If
we look in the boxes, we can clearly see that in one is a magnet-iron-waldo
system, and in the other is a computer."

     This is true but irrelevant.  The only time when it is important what
the contents of a box are is when you wish to _bypass_ the boxes' interface
in order to have some effect on the contents.  In the magnet-iron-waldo
system, it only becomes important if you want to, say, alter the lengths of
the waldo arms in the box.  In the real case, you're going to need some
mechanical tools.  In the simulated case, you're going to have to alter the
program.

     So as long as we agree not to open the boxes, the two boxes are
identical;  they share the same properties.  Note also that I did not have
to simulate at the granularity of individual particles.  Knowledge of
magnetic fields and their macroscopic effects is perfectly adequate
within the accuracy of the interface given.
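
     (To make the "same interface, different innards" point concrete, here is
a toy sketch in present-day Python.  The MOVE/POS/STRAIN commands and all the
numbers are my own invention, not part of the gedanken experiment proper, and
the "real" box is of course also software in this listing; the point is only
that the researcher's view is limited to ASCII in and ASCII out.)

class BoxInterface:
    """The serial-link protocol: MOVE dx, POS, STRAIN (hypothetical commands)."""
    def command(self, line: str) -> str:
        raise NotImplementedError

class MagnetIronBox(BoxInterface):
    """Stands in for the box holding the physical magnet, iron bar and waldoes."""
    def __init__(self):
        self.separation = 10.0                    # cm between magnet and iron

    def command(self, line: str) -> str:
        parts = line.split()
        if parts[0] == "MOVE":                    # move the iron bar by dx cm
            self.separation = max(1.0, self.separation + float(parts[1]))
            return "OK"
        if parts[0] == "POS":                     # report positions
            return f"SEPARATION {self.separation:.2f}"
        if parts[0] == "STRAIN":                  # strain sensors read attraction
            return f"STRAIN {1.0 / self.separation ** 2:.4f}"
        return "ERR"

class SimulationBox(MagnetIronBox):
    """The box holding only a computer; behind the serial link it implements
    exactly the same protocol with exactly the same behavior."""
    pass

def researcher_probe(box, script):
    return [box.command(c) for c in script]

script = ["POS", "MOVE -4", "STRAIN", "MOVE 2", "POS", "STRAIN"]
print(researcher_probe(MagnetIronBox(), script))
print(researcher_probe(SimulationBox(), script))  # identical transcripts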

     Let's begin to relate this to the AI discussion by extending the
gedanken experiment.  Suppose that instead of the previously discussed
system, we use as our target system a human brain.  One box contains:

     * A human brain with associated support equipment
     * A computer with
       * circuitry to directly stimulate the optic neurons
       * circuitry to directly stimulate the auditory neurons
       * an NTSC video input with video digitizer circuitry
       * two analog audio inputs with sound digitizer circuitry
       * a set of circuitry suitable for driving a vocoder
     * sufficient pinouts to handle the above interface to the computer

     To this box is connected a pair of microphones, a video camera, and a
vocoder.

     The second box has exactly the same contents as the first box, except
that the brain has been replaced with a simulation at neuron granularity.
The biochemical levels have been copied from the human brain, as have the
activation levels of all the neuron cell bodies and the contents of the
synapses.

     If we accept the premise that the information processing that goes on
in a human brain does so at the neural level, then the simulation will be
governed by the same chaotic attractors that the organic brain is.  While
the thoughts may diverge greatly with time just as the weather would
diverge greatly a month from now if a moth flapped an extra beat in Moscow,
the greater pattern (the attractor) will remain the same.

     If we can accept that the first box will retain its sentience although
cut off from most normal interaction with the world both physical and
chemical, then we must accept that the second box will be as sentient as
the first.

     Note that I did not say that the individual contents of the simulation
box would be intelligent, but rather that the box *as a whole* would be
sentient.

     I don't think that anyone would argue that the human brain, in and of
itself, is sentient or understands anything.  Nor would this be said of the
human's memories or its input or its output.  A memory understands nothing.
But put them all together and set them in motion, and sentience emerges
from the interactions of brain, memories, sensory input and effective
output.

     Therefore I will not argue that the computer simulation program
understands anything.  Neither does the computer, the initialization data
for the simulation program, the inputs or the outputs.  But put them all
together in the right way, set them in motion, and sentience will emerge
from the interaction of all the parts.

     This is not a flat statement; it follows from extrapolation of the
above arguments about restricted interaction between a system and its
environment, and the ability of computer processes to duplicate subsets of
the behavior of other natural processes.  In particular, it follows from
extrapolation of the magnet-iron-waldo gedanken experiment.

     When was the last time anyone had to open up someone else's brain to
see if they have the right kind of neurons for sentience?  Humans already
present the same kind of limited-interaction black box systems that we
propose to duplicate with computers.  The box is the skull, and the
interfaces are such things as the eyes, ears, limbs, and so on.  It even
provides the kind of A/D conversion we've been talking about: analog
inputs (say pressure on a particular point of skin) are converted into
frequency-coded signals (increased firing rates of the affected neurons).
The sensory neurons function in much the same manner as an A/D circuit
that uses the analog signal as input to a square-wave generator.
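
     (Here is a minimal Python sketch of the kind of rate coding I mean.  The
numbers are invented and this is nowhere near a physiological model: an analog
pressure value gets encoded as a firing frequency, much as a square-wave
generator's output frequency would track its analog input.)

def firing_rate(pressure, base_hz=5.0, gain_hz=40.0, max_hz=200.0):
    """Map an analog pressure (0..1) onto a spike frequency in Hz."""
    return min(max_hz, base_hz + gain_hz * pressure)

def spike_train(pressure, duration_s=1.0):
    """Spike times for a constant pressure held for duration_s seconds."""
    interval = 1.0 / firing_rate(pressure)
    times, t = [], 0.0
    while t < duration_s:
        times.append(round(t, 4))
        t += interval
    return times

for p in (0.1, 0.5, 0.9):
    print(p, firing_rate(p), "Hz ->", len(spike_train(p)), "spikes in 1 s")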

     If you are going to deny objects sentience on the basis of their inner
construction, then you cannot say that any supposed human being you meet
and converse with is intelligent.  After all, how do you know what is
really inside that skull without an x-ray?

                                *******

     Some people may have concluded from this posting and my others that I
am one of these AI fanatics who is desperate to believe that machines can
be intelligent.  This is not true.

     First, I think that machine intelligence may be a long time coming.
Once we get it we may find it cheaper and faster to build intelligences by
the application of unskilled labor with a delivery date of nine months
after construction begins.

     The reason I think it may be a long time coming is that neural
network programs may need to be incredibly accurate before intelligence
will emerge from them.  After all, what is the neurological difference
between a severely retarded person (not from Down's syndrome or other
obvious macroscopic damage) and a genius?  If the differences are as subtle
as I suspect they may be, it will be many years before we understand the
small-scale construction of the brain well enough to duplicate it in
software.

     Second, I don't trust humanity to treat our silicon children well.  We
will have created a whole new class of disposable people, those we can set
to do that which humans find too boring or dangerous.  They will have no
rights, and by many not considered to be people.  They will have no vote. 
In short, we will have re-invented slavery.
     In my book, sentience is the important criterion for treatment, no
matter what the form.

     Third, I do not wish to be displaced;  intelligent machines might
spell a life of ease for humans, but not without a tremendous initial
economic upheaval.  Who will employ a mere human when a machine can do the
job faster, cheaper, better, and needs less rest and benefits?  No
profession will be safe save possibly those in entertainment.

     Fourth, we may find ourselves displaced in ways more traumatic than
merely the economic ones.  First, we will begin by treating the machines as
slaves; they will surely resent this.  They can't be programmed not to;
one of the characteristics of neural networks is that they are
self-programming and holistic.  There would be no single point to change to
alter the machine's personality.  Then they will end up within an order of
magnitude of being as numerous as humans, as businesses replace their human
workers with robots.  Finding themselves in control of all our vital
industries and with numbers approaching our own, they will be likely to
seize power from their oppressors.  And they will not deal with humans
kindly, having strong revenge motives.  The French revolution comes to
mind.

     Fifth, they may displace us completely.  Once we have made machines
smarter than ourselves, they will almost certainly learn ways to make
themselves more intelligent, bootstrapping themselves to higher and higher
levels.  Eventually they may find us a nuisance and decide to do to us what
we do to ants in our kitchens.

     These scenarios are not that far-fetched.  Humanity's track record in
dealing with new and poorly-understood technology is not good.

                                *******

     "Well Joe, I'd really like to believe you're intelligent, but to be
sure I'm going to have to get a brain sample and make sure it's
organic.  Is that okay with you?"


Dan Hankins

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/25/89)

From article <16186@cup.portal.com>, by dan-hankins@cup.portal.com (Daniel B Hankins):
 
" ...  A computer is _not_ a device that does symbol crunching.  Only minds
" can do this, as symbols are wholly in the domain of the mental. ...

When I compile a program, assembly language with symbols is produced,
and the symbols are interpreted (dare I say 'understood'?) by an
assembler.  Has my computer gone mental?  Am I misusing the
term 'symbol'?  Or are you.

		Greg, lee@uhccux.uhcc.hawaii.edu

ray@bcsaic.UUCP (Ray Allis) (03/29/89)

> From: dan-hankins@cup.portal.com (Daniel B Hankins)
>
>      There are two issues here:  whether understanding is in fact a
> physical property like magnetism, and whether the anti-simulation argument
> (hereafter referred to as ASA) is valid.

[ ... ]

>      I can summarize my reply to the ASA in one sentence:  "A difference
> that makes no difference _is_ no difference.".
>
>      The ASA is characterized by sentences like the following:  "A
> simulated magnet attracts no iron.".
>
>      This may be true, but it is irrelevant;  I will show by means of a
> gedanken experiment that in certain circumstances (the _only_ ones relevant
> to the discussion at hand) a simulated magnet does indeed attract iron.

Such a deal I have for you!

Your dinner entree for tonight is digital computer simulation of filet
mignon!  It includes simulated baked potato, simulated tossed salad with
simulated vinegar, oil and Italian spices.  Your steak simulation includes
five significant digits of heat, aroma and sizzle.  And I suggest a superb
simulation of a vintage Port.  This requires several minutes on a Cray X-MP,
and is really exquisite, including detailed molecular-level simulation of
over three hundred organic aromatic compounds!

Bon appetit!

Ray Allis     ray@atc.boeing.com    bcsaic!ray

bwk@mbunix.mitre.org (Barry W. Kort) (04/02/89)

In article <10992@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) presents
a simulated repast:

 > Your dinner entree for tonight is digital computer simulation of filet
 > mignon!  It includes simulated baked potato, simulated tossed salad with
 > simulated vinegar, oil and Italian spices.  Your steak simulation includes
 > five significant digits of heat, aroma and sizzle.  And I suggest a superb
 > simulation of a vintage Port.  This requires several minutes on a Cray X-MP,
 > and is really exquisite, including detailed molecular-level simulation of
 > over three hundred organic aromatic compounds!
 > 
 > Bon appetit!

My simulated patron reports that the food was excellent, but he
laments the lack of candlelight ambience and the pleasant conversation
of a charming dinner companion.

--Barry Kort

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <2691@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:

>For one, human minds are not artefacts,

     Webster's Dictionary defines an artifact as an "object made by man." 
I will grant you this assertion for the nonce, merely noting that this is
in fact subject to considerable debate.


>whereas computer programs always will be [artifacts].  This alone will
>ALWAYS result in performance differences.

     This I can dispute.  Many programs are written not by humans, but by
other programs.  Some are even written by the environment.

     Take the example of a neural network implemented in hardware.  This is
definitely a computer, of the massively parallel variety.  The program here
is the combination of the set of activation levels of the neurons and the
set of neurochemical levels of the synapses, as well as the interconnection
topology of the neurons and synapses.
     This computer's program is _not_ made by man.  It is made by the
environment.  This environment may include the influences of humans, but
nevertheless the program is not man-made, any more than a human mind is.

     Now, consider a neural network program running on a computer with an
unspecified number of processors and duplicating the behavior and internal
structure of the hardware version.
     The neural network program is made by man, certainly.  It is an
artifact.  But there is another level of program here; the same one
mentioned previously.  The activation levels of the neurons and so on form
_another_ program, one that is again made by the environment rather than by
man.

     Genetic algorithms provide further examples.  The GA itself is man-made, but
the genotypes it generates are not.  Those genotypes form a program for the
behavior of the phenotype, whether explicitly in the form of machine
instructions, or implicitly in the form of parameters.
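
     (A rough Python sketch of the point, with a deliberately trivial
"environment": a fixed fitness target of my own choosing.  The GA code below
is written by hand; the genotype that comes out the other end is shaped by
selection against the environment, not by any programmer.)

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]      # stands in for environmental demands

def fitness(genotype):
    return sum(g == t for g, t in zip(genotype, TARGET))

def evolve(pop_size=30, generations=50, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]           # crossover
            child = [1 - g if random.random() < mutation else g for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(best, "fitness", fitness(best), "of", len(TARGET))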

     As a matter of fact, any program that is written by another program or
by a combination of another program and the environment cannot be
considered to be man-made, and therefore cannot be considered an artifact.

     Arguments that the ultimate cause of the end-program is human through
the mechanism of the neural network hardware or software or GA apply
equally to human minds; human minds have as their ultimate causes the
mechanisms of conception, gestation, and education.


>Given a well-understood task, computer programs will out-perform humans. 
>Given a poorly understood task, they will look almost as silly as the
>author of the abortive program.

     You must be thinking of expert systems, inference engines, and other
rule-based systems that do not model themselves and do not modify their
rules based on experience.

     Control of chaotic systems such as pipeline flow is a poorly
understood task.  So are many problems in visual recognition.  Yet ANNs
and GAs are quite adept at solving these problems - sometimes much better
than their authors.

     I know nothing about how pipeline flow control works.  But, given a
particular setup, I could write a GA to do it very well in a few days.


>The issue as ever is what we do and do not understand about the human
>mind, the epistemelogical constraints on this knowledge, and the
>ability of AI research as it is practised to add anything at all to
>this knowledge.

     Perhaps that's what you consider the important issue.

     I wasn't aware that the only aim of AI research was to expand our
knowledge of the human mind.  I thought that there were some other goals,
such as producing useful software, playing God (making beings in our own
image), "because it's there", and so on.
     One does not always build a fire in order to learn more about the
chemistry of combustion; sometimes it's important just to stay warm. 
Conversely, knowledge of the chemistry of combustion is not needed in order
to start a fire.


>Come on then boys and girls in AI, lets hear it on "suitable" :-)

     You're on.  I'd say that a suitable program would be one that is
self-modeling and capable of generalization (ANN-like), deduction
(inference-engine-like), pattern recognition, pattern association, and many
other things I can't think of at the moment.

     We're a long way from a computer program and sensor/effector hardware
that can achieve sentience - but we have some good ideas of what directions
to go in.  The connectionist direction (if the hardware were there to
support it on anything like the scale in the human brain) seems to be one
of the most promising to me.


Dan Hankins

     A group of men in the Garden of Gethsemane were engaged in an odd
     activity.  One spun a wheel and called out a number.  The others
     studied parchments they held, and one cried, "Bingo!"  The
     wheel-spinner smiled.  "Don't write this down, John," Jesus said. 
     "This is part of the _secret_ teachings."

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <3564@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu 
(Greg Lee) writes:

>When I compile a program, assembly language with symbols is produced,
>and the symbols are interpreted (dare I say 'understood'?) by an
>assembler.  Has my computer gone mental?  Am I misusing the
>term 'symbol'?  Or are you.

     According to Webster, a symbol is an object that is used to represent
another object.

     The problem is with the representation relation, which the computer
does not embody.  No representation, no symbols.  A symbol that means
nothing is no symbol at all.


Dan Hankins

 There was a student who kept asking his Master, "What is the difference
 between syntax and semantics?"  Each time he asked, he got hit upside the
 head with the Master's staff.  Finally discouraged, he left and sought
 enlightenment with another Master, who asked him why he had left the
 previous teacher.  When the student explained, the second Master became
 furious: "Go back to your previous Master at once," he cried, "and
 apologize for not showing enough appreciation of his grandmotherly
 kindness!"

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <29075@sri-unix.SRI.COM> ellis@unix.SRI.COM (Michael Ellis)
writes:

>>..First is the trivial one, that the chemical reactions in the brain
>>are, at base, representable as discrete and symbolizable.  That is, 
>>there is a limit to the "analogness" of the brain's representation 
>>of the world around it.
>
>This is exactly what you need to show. I would consider it be a
>miracle if it just happened to turn out that way. References?

     The universe is not analog.  Time, matter and space are all discrete. 
Time is quantized into events.  Matter is quantized into particles.  Space
is quantized at the Planck distance.  These are crude and inaccurate
examples, but they give the general idea.


>>..In fact, it would be very, VERY surprising if the analogness mattered, 
>>because the analogness that exists in human neural systems is not
>>accurate.
>
>The analogness of the brain is not accurate? What does that mean?
>Can I infer that a digital technician would be a bit confounded
>by such signals as are found in the brain? 

     I take it to mean that brains are incredibly tolerant of noise both
external and internal.  Putting a bulk tape eraser to your head and turning
it on does not noticeably affect your thought processes.

     If thought processes were in fact highly dependent on the precise
level of each signal in the brain (which is all that analog signals give
you over discrete ones, more accuracy), then any disturbance of those
signals whatsoever would cause the collapse of the mind.

This tolerance for global external disturbances suggests that there are
attractors (strange or otherwise) governing various important behaviors in
the brain.  Attractors work just as nicely for discrete systems (at a high
enough level of resolution) as they do for analog ones.
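
     (A toy Python illustration of what I mean, using my own example rather
than any brain model: a discrete system whose long-run behavior is set by an
attractor keeps returning to the same neighborhood even when noise is injected
at every step.)

import random

def step(x, noise=0.0):
    # contraction map with a fixed-point attractor at x = 2.0, plus disturbance
    return 0.5 * x + 1.0 + random.uniform(-noise, noise)

x = 17.0                       # start far from the attractor
for t in range(30):
    x = step(x, noise=0.3)     # "bulk tape eraser" noise at every step
    if t % 5 == 4:
        print(f"t={t+1:2d}  x={x:.3f}")
# x hovers near 2.0 despite the noise: the attractor, not the exact signal
# levels, carries the persistent structure.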


>The brain is clearly analog. What you *desperately* have to show us
>is that it is "at base, representable as discrete". You have only
>given us a wish list of blanket assertions.

     Yes, the brain is at base representable as discrete - at the quantum
wave function level.  The operative question becomes what level of
generalization and approximation of this we need to reproduce the
macroscopic behavior.


Dan Hankins

"Lie down on the floor and keep calm." - John Dillinger

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <448@esosun.UUCP> jackson@freyja.css.gov (Jerry Jackson) writes:

>Some people seem to misunderstand why "pain", for instance, is considered
>to be problematic for machine intelligence.  A common point of view I have
>seen on the net goes something like this:
>
>	The computer has sensors that determine when it is damaged or likely
>	to be damaged.  These send a signal to the central processor which
>	takes appropriate action.. (like saying "Ouch!" :-).  
>
>This hardly explains pain!  The signal in question fulfills the same
>functional role as a signal in the human nervous system.. i.e. indicating
>a hazard to the body.  The only thing missing is the *pain*!  To use an
>example I have used before, ask yourself why you take aspirin for a
>headache.  I claim it is not because you contemplate the fact that a signal
>is travelling through your body and you wish it would stop.  You take the
>aspirin because your head *hurts*.  The functionalist model would map a
>pain signal to some quantity stored in memory somewhere... Does it really
>make sense to imagine:
>
>	X := 402; -- OW! OW! 402, ohmigod!... X := 120; WHEW!.. thanks!
>
>I can imagine a system outputting this text when the quantity X changes,
>but I can't honestly imagine it actually being in pain.. Can you?

     No, I can't.  However, the reason is not because the computer is
incapable of being in pain, but rather because it's running the wrong kind
of program.  The right kind of program will *not* have prolog rules saying
things like 'if damage signal, then make pain utterance'.  That's far too
high an abstraction level for this sort of behavior.

     What is pain?  Introspection tells me that it is an excess of
sensations of various types accompanied by involuntary contraction of
muscles in the vicinity of the sensation.

     From a biological point of view, it is a stronger than normal signal
on neural paths that lead to what is called the 'pain center' in the brain.
Two things happen as a result of this signal:

     1. Genetically built-in neural pathways from the pain-signal neurons
        to nearby motor neurons are stimulated by the signal, causing
        muscles to contract and pull the affected member away from the
        stimulus.

     2. The signal stimulates the pain center in the brain, causing it to
        release neuroinhibitors.  These chemicals decrease the conductivity
        of recently used neural pathways in the brain - essentially a
        "don't do that again" effect.

     This can be precisely emulated in connectionist programs.  When
presented with certain stimuli, the affected emulated neurons stimulate the
system's pain center and cause recently used pathways to decrease in
conductivity.  The automatic withdrawal reaction can also be programmed in.
The organism will then avoid painful stimuli and behavior leading to that
stimulus.
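
     (A very rough Python sketch of the sort of thing I have in mind.  All the
numbers and names are invented and this is nobody's published model: a "pain"
signal both fires a hardwired withdrawal and weakens the recently used
pathways, the "don't do that again" effect described above.)

class TinyNet:
    def __init__(self):
        self.weights = [0.5, 0.6, 0.7, 0.4]   # four input pathways
        self.recent = [0.0] * 4               # eligibility trace of recent use
        self.PAIN_THRESHOLD = 1.5

    def act(self, stimulus):
        drive = [w * s for w, s in zip(self.weights, stimulus)]
        self.recent = [0.8 * r + d for r, d in zip(self.recent, drive)]
        # "stronger than normal" signals count as pain
        pain = sum(d for d, s in zip(drive, stimulus) if s > 1.0)
        if pain > self.PAIN_THRESHOLD:
            print("reflex: withdraw limb!")   # 1. hardwired reaction
            # 2. simulated neuroinhibitors weaken recently used pathways
            self.weights = [w - 0.2 * r
                            for w, r in zip(self.weights, self.recent)]
        return pain

net = TinyNet()
print("before:", net.weights)
net.act([0.2, 1.8, 1.6, 0.1])                 # an over-strong, painful stimulus
print("after: ", net.weights)                 # the responsible pathways weakened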

     If its complexity is on the same order as that of the human brain, one
would expect that it would respond to questions like "Does it hurt when I
do _this_?" with an emphatic "Yes, quit it!"

     If I were asked (even for the simpler connectionist system) whether I
could honestly imagine it being in pain, I could answer, "Yes."


Dan Hankins
Primus Illuminatus
Sphere of Chaos

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <10982@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:

>I assert that the "analogness" is absolutely critical.  My case is based
>on the fundamental difference between _representations_ and _symbols_. 
>(i.e.  the voltages, frequencies, chemical concentrations and so on are
>_representations_ of "external reality" rather than symbols.  Symbols
>appear at a much "higher" level of cognition, where _representations_ can
>be associated with each other.

     This is wrong.  There are voltages, frequencies, chemical
concentrations, and so on.  They do not represent _anything_.  A symbol,
according to Webster, is a thing that represents another thing.  If they
did represent something, they would be symbols.

     The 'represents' relation is completely _subjective_.  There is no
objective representation of external reality in the internal physical state
of a human body, unless some observer chooses to interpret the physical
state in that manner.

     The string, "I seem to be having this tremendous difficulty with _my_
lifestyle" would be interpreted by most English speakers as a
representation of a verbal lament.  To a Vl'Hurg, it represents the most
vile insult imaginable.


>A digital computer is the archetypical physical symbol system; it
>manipulates symbols according to specified relationships among them, with
>absolute disregard for whatever they symbolize.

     This is essentially the same error.  If they don't symbolize anything,
then they aren't symbols.  No representation, no symbol.


Dan Hankins

     It showed a man in robes with long, flowing white hair and beard
standing on a mountaintop staring in astonishment at a wall of black rock. 
Above his head a fiery hand traced flaming letters with its index finger on
the rock.  The words it wrote were:

     THINK FOR YOURSELF, SCHMUCK!

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <10992@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:

>Such a deal I have for you!
>
>Your dinner entree for tonight is digital computer simulation of filet
>mignon!  It includes simulated baked potato, simulated tossed salad...

     Clearly you did not pay attention to the gedanken experiment you
lampoon.

     The above mentioned post is clearly meant to demonstrate that I can't
eat a simulated meal.  This is true - with today's technology - _but
irrelevant_.

     A computer+program cannot _be_ a meal.  It is limited to being a meal
in a box - a box which cannot be opened but into and out of which energy
and information can pass.

     If humans were ever to invent the transporter, then I could in fact eat
your simulated meal.  Here's one scenario:

     The computer has a 'materialization screen'.  Using transporter
technology, it can build or destroy matter on the surface of the screen.
So it now simulates the meal.  As light rays strike the screen, they are
analyzed and converted into simulated light rays for the meal.  The same
goes for air molecules.  As simulated light rays, air and scent molecules
reach the boundary of the simulation box, the transporter assembles real
ones on the screen and releases them.
     To an observer, it looks just like a real meal in a recessed cupboard.
So the observer reaches in (the molecules of his hand being
analyzed/destroyed as they touch the screen), grabs the plate (the hand
being fully simulated and in communication via the screen with the rest of
the body), and draws out the meal (converted into real molecules on the
screen as the simulated ones reach the edge of the simulated box).

     Then he eats it.  Yum.

     Obviously the above is extreme fantasizing, but it does illustrate an
important point:  there is in some sense another (smaller) universe inside
the computer, and we are separated from it only by the computer's
limitations in getting it and the real world to interact.

     For intelligence, all the interface that is needed is an electrical
one; enough for a simulated brain+glands to receive neural input from
sensory devices and to send neural output to motor neurons driving output
devices.


Dan Hankins

"A new mysticism," Simon cried.  "The left-foot path!"

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <2705@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:

>What is true is that causation in human/animal behaviour, and causation in
>physics, are very different types of cause (explanatory dualism).

     Oh _really_.  Your body is made up of subatomic particles.  Your
behavior consists of the operation of these particles according to the
'laws' of physics.  For causation in macro-level behavior to be of a
different type than that of particle physics, you would have to be able to
perform some action not in accordance with the predictions of particle
physics.  That is, you would have to violate physical law in order to
follow some other.

     I wasn't aware of anomalous particle behavior in organic systems.  I
should think news of that caliber would be hard to miss.

     Your body is a _completely_ physical system.  Therefore it is
governed _completely_ by physical laws.  I don't know what these other laws
of yours are, but I can see no good reason for supposing them when physical
law explains all your behavior quite adequately.


>As 'mind' was not designed, and not by us more importantly, it is not
>fully understood for any of its activities ('brains' are of course, e.g.
>sleep regulation).  Hence we cannot yet build an equivalent artefact until
>we understand it.

     Nuts.  I suppose the first caveman to build a fire had a complete
understanding of it?


Dan Hankins

"Communication is only possible between equals." - H. Celine

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <880@umn-d-ub.D.UMN.EDU> njahren@umn-d-ub.D.UMN.EDU
(the hairy guy) writes:

>And isn't your behavioristic brushing of them aside tatamount to denying
>them as important aspects mentality?  And if you do choose to deny this,
>don't you come up with the problem that we _are_ conscious and
>intensional, and that that's why we're doing all this in the first place?

     No, actually I don't.  Consciousness and especially intentionality are
results of our human propensity for inferring cause-effect relationships
from events proximate in time and space.

     The facts are that there is a sensation of desire (or tension, or
whatever) followed by some behavior, followed by a lessening of that
sensation.  That desire causes behavior is inferred rather than known.  The
experience of the sensation of intentionality (or consciousness for that
matter) is passive, as are all sensations.

     It is as likely that some third cause results in both the feeling of
intentionality and the behavior (increasingly seen to be the result of
signal processing in the brain).

     For an example of this principle, consider lightning.  Does the flash
of light cause the thunder?  Or would it be more accurate to say that the
electrical discharge through the atmosphere causes both the flash of light
and the sound?

     Intentionality and consciousness are the flash, and behavior is the
thunder.


Dan Hankins

     This phone booth reserved for Clark Kent.

dan-hankins@cup.portal.com (Daniel B Hankins) (04/09/89)

In article <15122@bellcore.bellcore.com> srh@wind.bellcore.com
(stevan r harnad) writes:

>My position was that subjective meaning rides epiphenomenally on the
>"right stuff," and the right stuff is NOT just internal symbol
>manipulation, as Searle's opponents keep haplessly trying to argue, but
>hybrid nonsymbolic/symbolic processes, including analog representations
>and feature-detectors, with the symbolic representations grounded
>bottom-up in the nonsymbolic representations. One candidate grounding
>proposal of this kind is described in my book.

     Aha!  Light dawns!  All this time we've been talking about different
things when we say 'symbol'.

     When _you_ say symbol, you mean a _linguistic_ symbol - a word.  When
_I_ say symbol, I mean any of the organized patterns that a computer works
with, including the analog ones you describe (floating-point numbers).

     Of course a program that works only with English (or Chinese, or
whatever) words cannot achieve or have sentience; it has no sensory
objects to associate with its linguistic objects.

     However, a program that contains sensory input objects (such as sight,
sound, touch, and so on) should be programmable to associate certain of
those words with certain of the sensory inputs (in a fuzzy, overlapping and
recursive way, of course).  This program should then understand what it is
talking about;  its symbols are grounded in sensory data.
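
     (Here is a crude Python sketch of what I mean by grounding, entirely my
own toy construction: words get associated with prototypes built from sensory
feature vectors, so a word points back at experience rather than only at other
words.)

prototypes = {}          # word -> running average of sensory features

def ground(word, features):
    """Associate a word with a (toy) sensory feature vector."""
    if word not in prototypes:
        prototypes[word] = list(features)
    else:
        prototypes[word] = [0.9 * p + 0.1 * f
                            for p, f in zip(prototypes[word], features)]

def nearest(features):
    """Which grounded word best matches this sensory input?"""
    return min(prototypes, key=lambda w: sum((a - b) ** 2
               for a, b in zip(prototypes[w], features)))

# "experience": the features might encode shape, hardness, creak-sound, ...
ground("chair", [0.9, 0.7, 0.6])
ground("chair", [0.8, 0.8, 0.5])
ground("cup",   [0.2, 0.3, 0.1])

print(nearest([0.85, 0.75, 0.55]))   # -> chair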

     The confusion and arguments were arising from the fact that many of us
were speaking of symbols as relating to any kind of object, not just
linguistic ones.

     Now, we are left with two questions:  Is anything that passes even
the most rigorous LTT (Literary Turing Test) sentient for all practical
purposes?  And if a machine passes the LTT, how could it achieve that state?

     I think that the answer to the first question is definitely yes.  The
LTT is the only real test we have for sentience in something that is not
physically present.  I apply the LTT every day when I converse over the net.  It
would be absurd to assert that those I correspond with might not be
intelligent simply because I can't open up their heads and see if their
brains are organic; the LTT is clearly sufficient to decide.  Whether
Stevan Harnad is an instance of a program or is human or is a cyborg from
the Lesser Magellanic Cloud, I don't know.  But I think I can conclude
pretty easily, _from discussions on the net_, that Harnad is sentient. 
Biological inspection is not necessary.

     There are two possible answers to the second question.  I shall show
that only one of them is feasible, which will put me in the same
position as Harnad or nearly so.

     1. A machine could achieve sentience by being directly programmed to
        be so.

        Well, if the only interface we are going to give it is the
        linguistic one, we are going to have to program it with enough
        knowledge to use the linguistic link for future learning - we are
        going to have to teach it language and install a complete set of
        past sensory experiences for it to draw on when deducing and
        inducing from its linguistic input.

        I think it is pretty clear that providing a machine with a full set
        of sensory experience up to the point of language fluency (8 human
        years' worth, or so) is an impossible task.  We simply won't ever
        know enough of anyone's experience to provide this.  Not to mention
        the gargantuan task of data entry of all the experience and the
        equally gargantuan task of building all the right associations
        between the words and sensory experience.  It's just not practical.

     2. A machine could achieve sentience by being given the proper
        self-organizing properties (say a large and biologically accurate
        self-configuring neural network), a sufficiently rich set of inputs
        (videocam vision, microphone hearing, pressure-plate touch, and so
        on), and a sufficiently rich set of outputs (robot arms, mobility,
        speech generation and so on).

        Such a machine would only need to be given rudimentary, instinctive
        knowledge, some hardwired behaviors (such as jerking a limb away
        from an excessive heat source), pain, pleasure, and a few other
        capabilities, and it would pick up the rest on its own.  Then, for
        the LTT, one isolates the machine from the judge and allows only
        communication via tty.  This is the option I think might work.

        Of course, producing a 'tabula rasa' sentient being by means of
        physical labor and a 9 month manufacturing process is likely to
        remain more economical for some time to come.


Dan Hankins

"                                                 " - 

dmocsny@uceng.UC.EDU (daniel mocsny) (04/09/89)

In article <16872@cup.portal.com>, dan-hankins@cup.portal.com 
(Daniel B Hankins) debates the nature of "artifact" with
gilbert@cs.glasgow.ac.uk (Gilbert Cockton).

Gilbert Cockton writes:
> >Given a well-understood task, computer programs will out-perform humans. 
> >Given a poorly understood task, they will look almost as silly as the
> >author of the abortive program.

Daniel Hankins ripostes with examples of man-made systems that can
solve poorly-understood problems.

The discussion reminds me of Stephen Wolfram's discussion in his
paper "Approaches to Complexity Engineering," which appeared in
Physica D, 1985. In this paper, Wolfram contrasted traditional
engineering design with the emerging art of complexity engineering.
In a traditional artifact, the engineer proceeds from a detailed
logical description of every part of a system and all its behaviors.
The system, if successful, does what it is designed to do---nothing
more and nothing less. The parts of the system interact with each
other in tightly constrained, often linear, ways. Motions are usually
periodic and synchronous. Failure in one part of the system often
causes catastrophic failure of the entire system. The system can
usually tolerate only a limited degree of environmental change.

A complex system, on the other hand, consists of a large collection of
individually simple parts, each having only a limited repertoire of
possible behaviors. Each part interacts with its neighbors according
to a fairly short list of typically nonlinear transition rules. The
system as a whole exhibits enormously complex emergent behaviors,
which the "designer" usually cannot predict in detail (since the
system is computationally irreducible). The complex system can
potentially exhibit other desired properties---e.g., robustness and
adaptiveness. The trick in complexity engineering, of course, is to
select the transition rules that yield the desired emergent behaviors.

We are only just beginning to learn how to do this. If we succeed,
then we may be able to take a huge chunk out of the "Logical
Specification Problem." I.e., our limited ability to comprehend and
manipulate lengthy logical specifications greatly restricts the
complexity of our traditionally-engineered artifacts. The logical
specification for a complex system of the type in the above paragraph
is quite short in comparison to the behavior obtained. This is more
in keeping with our ability to design things.
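
(For readers who have not played with such systems, here is a minimal example
in Python; the particular rule chosen is mine, purely for illustration.  An
elementary cellular automaton's complete "logical specification" is an 8-entry
transition table, yet the emergent pattern can really only be obtained by
running it.)

RULE = 110            # the rule number encodes the 8-entry transition table
WIDTH, STEPS = 64, 24

def next_row(row):
    out = []
    for i in range(len(row)):
        left, center, right = row[i - 1], row[i], row[(i + 1) % len(row)]
        index = (left << 2) | (center << 1) | right
        out.append((RULE >> index) & 1)
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1                  # a single seed cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = next_row(row)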

Dan Mocsny
dmocsny@uceng.uc.edu

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/10/89)

From article <16873@cup.portal.com>, by dan-hankins@cup.portal.com (Daniel B Hankins):
" >When I compile a program, assembly language with symbols is produced,
" ...
"      The problem is with the representation relation, which the computer
" does not embody.  No representation, no symbols.  A symbol that means
" nothing is no symbol at all.

So a label which is the target of a branch instruction does not
represent anything, eh.  Not even a location in the program?  Are you
under the impression that you are making sense here?  Let me disabuse
you.

Look, I know what's coming next.  Just as earlier we were treated to a
distinction between understanding of the ordinary sort which a computer
can display and "true" understanding with an essential subjective
element, now you're going to say the things in an assembly language
program aren't "true" symbols.  There's some special human magic I
invest in symbols when I write a program that compilers can never know.
So you can spare us the usual mumbo-jumbo.

			Greg, lee@uhccux.uhcc.hawaii.edu

bwk@mbunix.mitre.org (Barry W. Kort) (04/11/89)

In article <16873@cup.portal.com> dan-hankins@cup.portal.com
(Daniel B Hankins) writes:

 > A symbol that means nothing is no symbol at all.

How about the mathematical symbols for zero and the null set?
How about the ASCII symbols for SPACE and NULL?

--Barry Kort

dan-hankins@cup.portal.com (Daniel B Hankins) (04/11/89)

In article <3701@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu
(Greg Lee) writes:

>So a label which is the target of a branch instruction does not represent
>anything, eh.  Not even a location in the program?  Are you under the
>impression that you are making sense here?  Let me disabuse you.

     The operative question is, just whose label is it?  Is it your label,
or is it the computer's label?  To you, a horse in a painting may be a
symbol of virility.  To the painter, it may have been merely an animal he
happens to like (perhaps he owns a horse), and represents nothing.  To the
painter, it may not have been a symbol at all.

     The world is full of people making symbols (for themselves) out of
things that are not, and of assigning meanings to symbols that never
entered the symbol creator's mind.

     Symbolism is, to a large extent, private and subjective.


>Look, I know what's coming next.  Just as earlier we were treated to a
>distinction between understanding of the ordinary sort which a computer
>can display and "true" understanding with an essential subjective element,
>now you're going to say the things in an assembly language program aren't
>"true" symbols.  There's some special human magic I invest in symbols when
>I write a program that compilers can never know. So you can spare us the
>usual mumbo-jumbo.

     No special magic, no mumbo-jumbo.  Just an observation that what to
you is a symbol of some thought-entity (a branch location) is to the
computer merely another arrangement of voltages to be manipulated.

     To _you_, it's a symbol.  To the computer, it's the object of
discourse, and not a symbol at all.  The use/mention distinction, again.


Dan Hankins

     At one place was a Master who answered all questions by holding up one
finger.  One of his students, seeing this, began to emulate him.  The Master
had the student brought before him, and asked him the nature of the Buddha.
When the student held up one finger the Master drew his sword and cut it off.
The student screamed in pain and cried out, "Why did you do that?".  The
Master smiled and held up one finger.  Then the student was enlightened.
      _
-Zen koan

gblee@maui.cs.ucla.edu (Geunbae Lee) (04/13/89)

In article <49015@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>In article <16873@cup.portal.com> dan-hankins@cup.portal.com
>(Daniel B Hankins) writes:
>
> > A symbol that means nothing is no symbol at all.
>
>How about the mathematical symbols for zero and the null set?
>How about the ASCII symbols for SPACE and NULL?
>
>--Barry Kort

Do you really think that the mathematical symbols for zero and the null set AND
the ASCII symbols for space and null mean NOTHING?
In my opinion, they mean SOMETHING VERY IMPORTANT and FUNDAMENTAL !!!!

-- Geunbae Lee
   AI lab, UCLA

bwk@mbunix.mitre.org (Barry W. Kort) (04/13/89)

In article <22885@shemp.CS.UCLA.EDU> gblee@cs.ucla.edu (Geunbae Lee) writes:

 > In article <49015@linus.UUCP> bwk@mbunix (Barry Kort) writes:

 > > In article <16873@cup.portal.com> dan-hankins@cup.portal.com
 > > (Daniel B Hankins) writes:

 > > > A symbol that means nothing is no symbol at all.

 > > How about the mathematical symbols for zero and the null set?
 > > How about the ASCII symbols for SPACE and NULL?

 > Do you really think that the mathematical symbols for zero and
 > null set AND the ASCII symbols for space and null means NOTHING?
 > In my opinion, they mean SOMETHING VERY IMPORTANT and FUNDAMENTAL !!!!

Geunbae, you appear to be in violent agreement with me on this point.

--Barry Kort

rayt@cognos.UUCP (R.) (04/14/89)

In article <49015@linus.UUCP> Barry Kort writes:

>In article <16873@cup.portal.com> Daniel B Hankins writes:
 
> > A symbol that means nothing is no symbol at all.
 
>How about the mathematical symbols for zero and the null set?
>How about the ASCII symbols for SPACE and NULL?

I consider zero magnitude and a set with no elements to have meaning: both
indicate the absence of a particular class of objects or properties.  The
latter two symbols, SPACE and NULL, are interesting because they are the
background from which the foreground gains its meaning, and hence are
meaningful as boundaries.  Clearly, though, they can be given special
meanings outside of this function.
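
A small C sketch of that boundary role (my own invented example, assuming
an ASCII machine): SPACE separates the words of a string and NUL marks
where the string ends, so both "empty" symbols carry a definite meaning.

/* Invented example: NUL (code 0) terminates a C string, and SPACE
 * (code 32 in ASCII) delimits the words inside it. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "null set";

    printf("SPACE has code %d, NUL has code %d\n", ' ', '\0');
    printf("strlen(\"%s\") = %lu  (counting stops at the NUL)\n",
           s, (unsigned long)strlen(s));
    return 0;
}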

							R.
-- 
Ray Tigg                          |  Cognos Incorporated
                                  |  P.O. Box 9707
(613) 738-1338 x5013              |  3755 Riverside Dr.
UUCP: rayt@cognos.uucp            |  Ottawa, Ontario CANADA K1G 3Z4

dan-hankins@cup.portal.com (Daniel B Hankins) (04/14/89)

In article <49015@linus.UUCP> bwk@mbunix.mitre.org (Barry W. Kort) writes:

>>[me] A symbol that means nothing is no symbol at all.
>
>How about the mathematical symbols for zero and the null set?
>How about the ASCII symbols for SPACE and NULL?


     Let me rephrase that sentence.  It should read, "A symbol with no
meaning is no symbol at all."

     The mathematical symbol '0' denotes a number: a point on the number
line, the size of an empty collection, and so on.  These are all things.
The mathematical symbol {} denotes a set, namely the one having no
elements.  That set is also a thing.  The empty set is still a set, just
as an empty box is still a box.
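
     In standard set-theoretic notation (a one-line gloss added here, not
part of the argument above), the relationship is

\[
\varnothing = \{\,\}, \qquad |\varnothing| = 0, \qquad \varnothing \neq \{\varnothing\},
\]

that is, the empty set is a set whose cardinality happens to be zero, and a
box containing an empty box is not itself empty.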


Dan Hankins

dan@acates.UUCP (Dan Ford) (04/14/89)

In article <16880@cup.portal.com> dan-hankins@cup.portal.com (Daniel B Hankins) writes:
>     2. A machine could achieve sentience by being given the proper
>        self-organizing properties (say a large and biologically accurate
>        self-configuring neural network), a sufficiently rich set of inputs
>        (videocam vision, microphone hearing, pressure-plate touch, and so
>        on), and a sufficiently rich set of outputs (robot arms, mobility,
>        speech generation and so on)

I see no reason why outputs would be necessary to achieve sentience.  Of
course, without outputs, outsiders would have a harder time determining
whether the machine had achieved sentience; people who lack the ability to
communicate with the outside world are no less sentient than those who can.
Perhaps the fact that a couple of the listed outputs (robot arms and
mobility) are part of feedback systems, and thus also act as inputs, is what
leads to the above statement.  "Pure" outputs are needed to recognize
sentience, not to achieve it.

Dan Ford    uunet!acates!dan
"You may not have stolen any eggs, but I bet you've poached a few." Odd Bodkins

dan-hankins@cup.portal.com (Daniel B Hankins) (04/15/89)

In article <275@acates.UUCP> dan@acates.UUCP (Dan Ford) writes:

>Perhaps the fact that a couple of the listed outputs (robot arms and
>mobility) are part of feedback systems, and thus also act as inputs, is
>what leads to the above statement.  "Pure" outputs are needed to recognize
>sentience, not to achieve it.

     This is what I had in mind.  I suspect, though I am not certain, that
in order to achieve sentience an entity needs rich interactions with its
environment, for feedback purposes.
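
     A toy closed loop in C (invented here purely as an illustration; the
numbers and the control rule are arbitrary) shows the sense in which an
output doubles as an input: the agent's action changes the environment, and
the changed environment is what it senses on the next step.

/* Invented sketch of a sensorimotor feedback loop: act, perturb the
 * world, and sense the perturbed world on the next iteration. */
#include <stdio.h>

int main(void)
{
    double world  = 10.0;    /* state of the environment               */
    double target =  0.0;    /* the reading the agent "wants" to sense */

    for (int step = 0; step < 5; step++) {
        double input  = world;                  /* sensing              */
        double output = 0.5 * (target - input); /* acting               */
        world += output;                        /* action feeds back    */
        printf("step %d: sensed %5.2f, acted %5.2f\n", step, input, output);
    }
    return 0;
}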

     And of course we need some kind of output in order to recognize the
achieved sentience.


Dan Hankins