[comp.ai] Hayes vs. Searle

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/02/90)

In article <16875@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:

>(2)                    SEARLE'S CHINESE ROOM
>
>            Pat Hayes <hayes@parc.xerox.com>
>
>The basic flaw in Searle's argument is a widely accepted misunderstanding
>about the nature of computers and computation: the idea that a computer
>is a mechanical slave that obeys orders. This popular metaphor suggests
>a major division between physical, causal hardware which acts, and
>formal symbolic software, which gets read. This distinction runs
>through much computing terminology, but one of the main conceptual
>insights of computer science is that it is of little real scientific
>importance. Computers running programs just aren't like the Chinese
>room.
>
>Software is a series of patterns which, when placed in the proper
>places inside the machine, cause it to become a causally different
>device. Computer hardware is by itself an incomplete specification of a
>machine, which is completed - i.e. caused to quickly reshape its
>electronic functionality - by having electrical patterns moved within
>it. The hardware and the patterns together become a mechanism which
>behaves in the way specified by the program.
>
>This is not at all like the relationship between a reader obeying some
>instructions or following some rules. Unless, that is, he has somehow
>absorbed these instructions so completely that they have become part of
>him, become one of his skills. The man in Searle's room who has done
>this to his program now understands Chinese.

The AI community must be pretty annoyed with Searle by now.  He
writes papers, gives talks, inspires newspaper articles. In the
UK (at least), he even hosted his own philosophical chat show.
And throughout it all he refuses to accept that his simple little
argument just doesn't show what he thinks it does.  It would be
nice, therefore, to have a straightforward refutation of the
Chinese Room, preferably one with some intuitive appeal, and even
better, I suppose, if it could be shown that Searle was in the
grip of a fundamental misunderstanding of computation.

But how plausible is the argument outlined in this abstract?  I
know it's not fair to assume these three paragraphs are all there
is to it.  However, I think there's enough for us to draw at
least some tentative conclusions.

To begin, I'd prefer to describe the conceptual insight in a
different way.  What happened was the discovery of a certain
class of universal, programmable machines.  Rather than wire
up a number of different hardware devices, it's possible to
make one that, by executing a program, can emulate all the
others.

It's not unreasonable to say the program becomes part of the
machine.  After all, we could always produce another machine
that embodied the program in hardware; and we accept that
such a step is equivalent (modulo execution speed and maybe
a few other things) to loading a program into a general purpose
machine.
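
To make that concrete, here's a minimal sketch (modern Python; the
two-instruction machine and the little "programs" are invented purely
for illustration) of one general purpose device becoming different
special purpose ones depending on which pattern is loaded into it:

    # One piece of "hardware" (the interpreter below); which device it
    # is depends entirely on the pattern (program) loaded into it.

    def run(program, value):
        """Execute a list of (opcode, argument) pairs on one number."""
        for op, arg in program:
            if op == "add":
                value = value + arg
            elif op == "mul":
                value = value * arg
        return value

    doubler  = [("mul", 2)]               # "wired up" to double
    add_five = [("add", 5)]               # "wired up" to add five

    print(run(doubler, 21))    # 42 -- one machine
    print(run(add_five, 37))   # 42 -- a different machine, same hardware

Building a dedicated doubling circuit, or loading the doubler pattern
into the interpreter, gets you the same behaviour; that's the
equivalence I have in mind.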

However, we can follow the hardware / software equivalence both
ways.  We don't have to think of a computer + software only as a
new machine; we can also think of it as a computer + software.
Indeed, there are a number of configurations that are formally
equivalent, including one where the program is stored as text in
a book and read by a camera, with some mechanical device for
turning the pages and making notes on scraps of paper.

Now it doesn't seem so different from the Chinese Room after
all; and, given that the configurations are equivalent, Searle
can pick whatever one is best for making his point, provided, of
course, that he does not rely on arguments that do not apply to
the other configurations as well.

Indeed, there may always be a suspicion that Searle is getting
too much mileage out of the presence of a person in the room.
On the other hand, it's hard to see why replacing the person
with a simpler, more "mechanical", device will suddenly cause
"understanding" to occur if it wasn't there before.

This brings us to the suggestion that if the person in the room
somehow absorbed the instructions so completely that they became
part of him, he would then understand Chinese.  Whether or not
this follows from a correct understanding of computers and
computation, it has to be considered.

One point to bear in mind is that we don't have a very complete
or precise notion of what the instructions being followed by the
person in the Room are like.  If the instructions are largely
unspecified, then the changes involved in absorbing them
completely are largely unspecified too.  There are certainly some
changes that would result in the person in the Room understanding
Chinese, and perhaps they amount to absorbing some program.

However, we're not yet, given our limited knowledge of how
understanding works and our rather vague notion of what the
instructions to be absorbed might be, in a position to go beyond
this "perhaps" to the claim that absorbing a program, much less
the program used in the Chinese Room, would definitely result in
understanding.

Indeed, suppose someone does acquire the skill represented by
the Chinese Room.  That is, when presented with written questions
in Chinese they can produce reasonable written responses, also
in Chinese.

If this behavior counts as understanding in itself, or is
sufficient evidence for understanding, then we didn't need any of
these arguments, because the Chinese Room already had this
behavior and hence already understood Chinese.  That is, we're
back to a version of the "system reply".

Nor is it sufficient to say that this behavior counts as
understanding when it occurs in a person, because that wouldn't
tell us what we need to know about computers.  At best, it might
let us block the step where Searle goes from the person not
understanding to the Room not understanding either.  However, an
argument that makes people such a special case seems more likely
to undermine the case for understanding in computers than to
support it.

Worse, it's far from clear that such a skill would show that
a person understood Chinese (unless we were inclined to count
the behavior as sufficient in any case, in which case we don't
need these arguments).  If we ask the person in English what
is going on in Chinese, he wouldn't know (unless we suppose
more than that he has acquired the skill of writing replies to
written questions).  This is hardly what we'd expect from a
person who understood both English and Chinese.

In the end, we're only slightly closer to defeating Searle, if
that, than we were before.

-- Jeff

ml@unix.cis.pitt.edu (Michael Lewis) (06/02/90)

In my view Searle's argument is correct but attempts to be
philosophically "safe" by leaving the word "understand" undefined
(this has been said here often enough before).  I hold a variant of
Harnad's symbol grounding position and believe that uninterpreted
symbols/non-understanding are not restricted to gedanken experiments
but are quite common in our experience.  In fact I would claim that
it is quite feasible to endow computers with human non-understanding.
The only question is whether we choose to make "understanding" a
prerequisite of intelligence.  I lean in that direction but would not
be bothered by the claim that an idiot savant machine was intelligent
(providing of course the definition of intelligence included idiocy).

     Consider this example:

 Laplace transforms are meaningless to me; although I can use the
 symbol and its tables, it remains magical.  Yet Martin, the EE in
 the next office, assures me that it makes perfect sense.  He claims
 that it is its discrete z version which he "cannot see", that is
 shrouded in mystery.  In either case we can rotely manipulate our
 magical symbols and provide their linguistic descriptions on cue,
 but to us these symbols remain opaque.

This passage is a replay of the Chinese room illustrating how we
habitually distinguish between understanding "things" and
recognizing and manipulating symbols.  The language associating
"making sense" of Laplace transforms with "seeing" them was lifted
directly from our conversation.  In this context "understanding"
refers to possessing an imaginal (more on this later) model of the
transform's behavior, not merely producing its linguistic
description.  We would both describe its effect as "translating
equations from the time domain to the frequency domain", yet Martin
claims he "understands" it and I claim I don't.

     Let's define this usage of the word, understand, as
understanding in the strong sense.  I would be willing to say that I
"understand about" Laplace transforms but not that I "understand" the
transform itself.  This "understanding about" things is the weak
sense of the term.  If we employ this distinction in usage, it is not
difficult to find similar examples.  Consider an electric circuit.
To say that I understand it implies that I possess a model of how it
operates, perhaps similar to the fluid or mechanical models studied
by Gentner and Gentner.  To say I understand about electrical
circuits implies only that I am familiar with Ohm's law and similar
symbolic descriptions of circuit behavior.  I may not have the
foggiest idea of "how/why the circuit behaves as it does" even though
I could find voltages at test points and compute for you its every
capacitance.

     Searle's man and even his room could be said to "understand
about" Chinese symbols, but neither will ever "understand" them.
This is an ecological realist position maintaining that the "meaning"
of symbols arises through their association with experience, not vice
versa.  This observation is hardly profound and could only raise
eyebrows in a discussion such as this or among cognitive
psychologists who have confused their programs with their subjects.

(There, did I get the Searle tone right?)

Actually I enjoy this perpetual discussion.  The argument above in
no way rules out the possibility of AI; it simply suggests that for
machines to manifest intelligence (if "understanding" in Searle's
sense is to be the criterion), their symbols must be grounded in an
environment (a simulation would be fine).  The notion of a
"disembodied" intelligence or an intelligent symbol system is ruled
out not by any lack of cleverness in programming but because the poor
programs never get to interact with anything "understandable".  Yes,
the robot counterexample to the "combination" reply is still
convincing, but no one ever said (except to funding agencies) that
creating artificial intelligence was going to be easy.

forbis@milton.acs.washington.edu (Gary Forbis) (06/03/90)

In article <24653@unix.cis.pitt.edu> ml@unix.cis.pitt.edu (Michael Lewis) writes:
>     Searle's man and even his room could be said to "under-
>stand  about"  Chinese symbols but neither will ever "under-
>stand" them.  This is an ecological realist  position  main-
>taining  that  the "meaning" of symbols arises through their
>association with experience not vice versa.

I consider myself fairly well versed in the tricks people use to fool
programs.  No simple program will always pass the Turing test.  The
Chinese Room argument fools many because it places the emphasis on the
wrong observation point.  I want to consider an "American Room" run by
Chinese who are not allowed to stay in the room for more than an hour.
I will withhold judgement on this room for one month.  At the end of that
month I ask "Do you remember our conversation of a month ago?"  If the room
responds, "no," then I withhold judgment.  If the room responds, "yes," then
I continue with the prior conversation.  The memory of the prior conversation
cannot exist in the individual because it is not the same individual.  But
there is nothing in the room but the individual and the symbols.  Where is
this memory of a prior experience kept?

When I strip away the layers of symbols and patterns of symbols from myself
that others claim are not understanding, I am left with nothing I can find.
Where is this seat of understanding which exists outside symbols and patterns?

--gary forbis@milton.u.washington.edu

eliot@phoenix.Princeton.EDU (Eliot Handelman) (06/04/90)

In article <2629@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
;                                                    It would be
;nice, therefore, to have a straightforward refutation of the
;Chinese Room, preferably one with some intuitive appeal, and even
;better, I suppose, if it could be shown that Searle was in the
;grip of a fundamental misunderstanding of computation.


How's this for intuitive appeal: no such "book" as the one Searle presupposes
can exist. If this were true, then the argument is based on an impossible
premise, hence there's no argument.

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/05/90)

In article <2629@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>The AI community must be pretty annoyed with Searle by now.  He
>writes papers, gives talks, inspires newspaper articles. In the
>UK (at least), he even hosted his own philosophical chat show.
>And throughout it all he refuses to accept that his simple little
>argument just doesn't show what he thinks it does.  It would be
>nice, therefore, to have a straightforward refutation of the
>Chinese Room, preferably one with some intuitive appeal, and even
>better, I suppose, if it could be shown that Searle was in the
>grip of a fundamental misunderstanding of computation.
>
Isn't there a point in A MAN FOR ALL SEASONS where Common Man says, "And I wish
we all had wings and could fly up to Heaven?"  I used to have a .signature
in which Mencken claimed that every complicated problem has a simple
solution . . . which is wrong.  The whole reason Searle's pot keeps
boiling is that everyone thinks that issues like "understanding" can
be resolved by a simple appeal to intuition . . . if only we finally
figure out the right angle from which to view things.  What if there
IS no "right angle?"  What if "understanding" is, by its very nature,
a rather vague and sloppy word which we can use socially because the
dynamics of discourse can keep us from wandering too far off the track
but which we may never be able to pin down in any serious analytic sense?
If "understanding" is, indeed, such a slippery piece of terminology, then
Searle can always be very clever about encouraging it to slither away from
any attempt to refute his argument.  There seems to be only one sensible way
out for those who are serious about DOING artificial intelligence:  DON'T MESS
AROUND WITH WORDS LIKE "UNDERSTANDING!"

There is a new view of computer science which I seem to have discovered
independently of several colleagues who have made similar observations
in different contexts.  The way I like to formulate it is that we study
computer science in order to get a better grasp on what we are talking
about.  When the computer is analytical, it is so in the most rigorous
sense of the word;  and if WE want to be analytical, the ultimate proof
of our pudding lies in our ability to implement our theory on a computer.
John Pollock has observed that the computer is now a sufficiently powerful
tool that one can no longer do epistemology from the comfort of one's armchair.
Any theory of epistemology today must be held up to the test of validation
through a computer model (or so says Pollock).  Gian-Carlo Rota has offered
similar opinions in the arena of phenomenology.

In the presence of such a powerful tool for our own thinking, what are we to
make of Searle?  He writes as if his only contact with a computer is through
its capacity as a word processor.  He seems to believe that if he can
understand THAT aspect of the machine, he knows all he needs to know.
It does not take much computer literacy to see how ludicrous such a position
is, and anyone who appreciates such naivete has every right to be annoyed with
the man.  However, this annoyance will not be resolved by simple answers (which
is why Searle can make a television personality out of himself, since even on
the BBC, television thrives on reducing issues to simple conclusions).  Let
Searle have his way with those who strive for simplicity, and let those who
recognize that those problems are too elusive to take seriously go about their
business of laying out better-formed problems.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"So, philosophers of science have been fascinated with the fact that elephants
and mice would fall at the same rate if dropped from the Tower of Pisa, but not
much interested in how elephants and mice got to be such different sizes in the
first place."
					R. C. Lewontin

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/06/90)

In article <16960@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:
;In article <2629@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
;;                                                    It would be
;;nice, therefore, to have a straightforward refutation of the
;;Chinese Room, preferably one with some intuitive appeal, and even
;;better, I suppose, if it could be shown that Searle was in the
;;grip of a fundamental misunderstanding of computation.

;How's this for intuitive appeal: no such "book" as the one Searle presupposes
;can exist. If this were true, then the argument is based on an impossible
;premise, hence there's no argument.

Not bad, but a program could be printed, and there's the book.
If you think Searle couldn't work fast enough, imagine that he
has 1000 helpers.

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/07/90)

In article <13772@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>In article <2629@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>>
>>The AI community must be pretty annoyed with Searle by now.  He
>>writes papers, gives talks, inspires newspaper articles. In the
>>UK (at least), he even hosted his own philosophical chat show.
>>And throughout it all he refuses to accept that his simple little
>>argument just doesn't show what he thinks it does.

>                                 The whole reason Searle's pot keeps
>boiling is that everyone thinks that issues like "understanding" can
>be resolved by a simple appeal to intuition . . . if only we finally
>figure out the right angle from which to view things.  What if there
>IS no "right angle?"

I think this is an important point, but it cuts both ways.  Those who
think they have refuted Searle shouldn't suppose they have shown that
computers really can understand.  [Nor do they need to show that in
order to refute Searle.]  In a sense, what Searle has done is to
puncture the Turing Test "if it types like it understands then it does
understand" balloon.  His pot keeps boiling because people keep trying
to reinflate the balloon with the system reply, the robot reply,
various combinations of the two, and so on.

Indeed, I don't think we know enough about what, if anything,
understanding in humans amounts to, or about what programs that could
pass the Turing test would look like (if they are possible at all)
to arrive at a definite conclusion about whether or not computers
can understand.  And, as you suggest, one way it might turn out
is that "understanding" wasn't really a fruitful question to ask.

>What if "understanding" is, by its very nature,
>a rather vague and sloppy word which we can use socially because the
>dynamics of discourse can keep us from wandering too far off the track
>but which we may never be able to pin down in any serious analytic sense?

I think we have to be careful about this line of reasoning lest
we start to think the right thing to do would be to find a precise
definition of understanding.  Making definitions before we know
more about it seems to me rather pointless.

>There is a new view of computer science which I seem to have discovered
>independently of several colleagues who have made similar observations
>in different contexts.  The way I like to formulate it is that we study
>computer science in order to get a better grasp on what we are talking
>about.

I agree with this as well.  One of the great advantages of computer
models is that they can force you to say what you mean in sufficient
detail.

>John Pollock has observed that the computer is now a sufficiently powerful
>tool that one can no longer do epistemology from the comfort of one's
>armchair.  Any theory of epistemology today must be held up to the test
>of validation through a computer model (or so says Pollock). 

Has anyone actually made a model of an epistemological theory?
I'd like to know more about this.

-- Jeff

eliot@phoenix.Princeton.EDU (Eliot Handelman) (06/07/90)

In article <2687@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
;In article <16960@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:
;;In article <2629@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
;;;                                                    It would be
;;;nice, therefore, to have a straightforward refutation of the
;;;Chinese Room, preferably one with some intuitive appeal, and even
;;;better, I suppose, if it could be shown that Searle was in the
;;;grip of a fundamental misunderstanding of computation.
;
;;How's this for intuitive appeal: no such "book" as the one Searle presupposes
;;can exist. If this were true, then the argument is based on an impossible
;;premise, hence there's no argument.
;
;Not bad, but a program could be printed, and there's the book.
;If you think Searle couldn't work fast enough, imagine that he
;has 1000 helpers.

He could have 1,000,000 helpers and it wouldn't make any difference.
The chinese room argument uses the word "understand" in two different 
ways: Searle doesn't understand the chinese language, and Searle doesn't 
understand the import of the symbols he's manipulating. If it's possible to 
encode all answers to any possible question via rules without referents, 
as is posited by the book of rules Searle has in hand, then the chinese 
language itself (or any other language) is just as plausibly a bunch of 
rules, nothing more.  Searle manipulating rules doesn't understand; therefore 
Searle speaking English is really just manipulating rules of the english 
language and isn't therefore understanding English, which is absurd.  
Conclusion, rules insufficient for encoding of language as is commonly 
used: therefore Searle Chinese rule book can't exist, end of argument.

--eliot

aipdc@castle.ed.ac.uk (Paul D. Crowley) (06/07/90)

Even if we dismiss words like "understanding", it still seems unwise to
concede that there is _any_ significant quality of human thought which
machines can never share.  As far as I can see, the weasel phrase in the
Chinese Room is not "understanding" but "causative power".  Just what is
the magical difference between neurons and logic gates allowing one but
not the other to participate in "Searle-type-understanding"?

-- 
\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
/\__/ "Trust me, I know what I'm doing" - Sledge Hammer

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin) (06/08/90)

In article <2703@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>  In a sense, what Searle has done is to
>puncture the Turing Test "if it types like it understands then it does
>understand" balloon.  His pot keeps boiling because people keep trying
>to reinflate the balloon with the system reply, the robot reply,
>various combinations of the two, and so on.

I think it's important to keep in mind what Turing really meant by the
Turing Test.  My understanding of what he was saying is not that the
computer IS intelligent, but that we must CONSIDER it intelligent because
we can't tell the difference between the human and computer.
Apparently, Turing's definition of "intelligence" or "understanding"
relates to its actions, not the processes it engages in to produce
what looks like "understanding".

- Jim Ruehlin

dsa@dlogics.COM (David Angulo) (06/08/90)

In article <2687@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
> In article <16960@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:
> 
> ;How's this for intuitive appeal: no such "book" as the one Searle presupposes
> ;can exist. If this were true, then the argument is based on an impossible
> ;premise, hence there's no argument.
> 
> Not bad, but a program could be printed, and there's the book.
> If you think Searle couldn't work fast enough, imagine that he
> has 1000 helpers.


No, a program couldn't be printed (if by program you mean a list of
questions and their answers) because such a book or program is always
incomplete.  To prove this, all you have to do is ask in English all of
the possible addition problems.  This is infinite so the book cannot list
all of the questions nor can it list all of the answers.

-- 
David S. Angulo                  (312) 266-3134
Datalogics                       Internet: dsa@dlogics.com
441 W. Huron                     UUCP: ..!uunet!dlogics!dsa
Chicago, Il. 60610               FAX: (312) 266-4473

wallingf@cps.msu.edu (Eugene Wallingford) (06/08/90)

Paul D. Crowley writes:

> .........................  As far as I can see, the weasel phrase in the
> Chinese Room [argument] is not "understanding" but "causative power".
> Just what is the magical difference between neurons and logic gates
> allowing one but not the other to participate in
> "Searle-type-understanding"?

     Searle basically argues: Well, *I* understand some things, so
     neurons and the rest of our biology must have such causative powers
     [an existence proof of sorts].   But the Chinese Room does not
     understand *anything* and [given the method of his argument] never
     can.  So logic gates and the like must not have the same causative
     powers.

     In this sense, Searle is correct.  But his argument rests on the
     shaky assumption that I can grant understanding to another human
     because s/he is like me, but the machine is so unlike me that I
     will not grant that it understands.

     Pushed to the extreme, I think that Searle's argument forces him
     to admit that he is a solipsist (in a weak sense) -- humans
     understand, but nothing else does.  Searle mentions somewhere [in the
     _Scientific American_ article, I think] the possibility of, say,
     a Martian whose chemistry is radically different than ours.  He
     claims that he might be willing to grant that such a being does
     understand, but he provides no principled reasons for just how he
     would come to such a conclusion.  And, given his line of reasoning,
     I think that he would be unable to...


--
~~~~ Eugene Wallingford            ~~~~    AI/KBS Laboratory         ~~~~
~~~~ wallingf@cpsvax.cps.msu.edu   ~~~~    Michigan State University ~~~~

dg1v+@andrew.cmu.edu (David Greene) (06/08/90)

Excerpts from netnews.comp.ai: 7-Jun-90 Re: Hayes vs. Searle David
Angulo@dlogics.COM (1092)

> No, a program couldn't be printed (if by program you mean a list of
> questions and their answers) because such a book or program is always
> incomplete.  To prove this, all you have to do is ask in English all of
> the possible addition problems.  This is infinite so the book cannot list
> all of the questions nor can it list all of the answers.

This raises a question that has not been clear in the discussion:
it seems to confuse intelligence with omniscience.  It seems
perfectly reasonable to allow that the entity (book, human, room) does
not know a particular line of inquiry.  The distinction (at least for
the turing test) has always been that the pattern of response is
indistinguishable from an "intelligent being" (usually human). 
Constantly saying "I don't know" to all questions won't get you too far,
but it is appropriate at certain times.

So does the intelligent book/program have to have the correct answer to
all questions -- is it an oracle?
If not, what is meant by "correct" answer?  Is it an "intelligent"
answer -- such as, "Gee that's a tough question... I'll have to get back
to you that." ?


-David

--------------------------------------------------------------------
 David Perry Greene        ||    ARPA:          dg1v@andrew.cmu.edu
 GSIA /Robotics            ||                   dpg@isl1.ri.cmu.edu
 Carnegie Mellon Univ.     ||    BITNET:  dg1v%andrew@vb.cc.cmu.edu
 Pittsburgh, PA 15213      ||    UUCP: !harvard!andrew.cmu.edu!dg1v
--------------------------------------------------------------------
"You're welcome to use my opinions, just don't get them all wrinkled."

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin) (06/08/90)

In article <4550@castle.ed.ac.uk> aipdc@castle.ed.ac.uk (Paul D. Crowley) writes:
>Chinese Room is not "understanding" but "causative power".  Just what is
>the magical difference between neurons and logic gates allowing one but
>not the other to participate in "Searle-type-understanding"?

According to Searle in the Chinese Room paper, the difference is that
human brain tissue has some "magical" (my word) quality that provides for
intelligence/understanding/causative powers, while mere silicon doesn't.
He states that just what this quality is and how it works is a matter for
empirical study.  Neat way to sidestep the issue, no?

It seems to me that this is the real point of his paper - brain mass is
different from silicon mass in some fundamental way.  There's some
molecular/atomic/?? quality or structure that makes brain mass causative
and silicon not.  He may not have intended this, but that's what it comes
down to, and it seems patently silly.  There was no evidence for this when
he wrote his paper, and there still isn't.

-Jim Ruehlin
>
>-- 
>\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
>/\__/ "Trust me, I know what I'm doing" - Sledge Hammer

martin@oahu.cs.ucla.edu (david l. martin) (06/08/90)

In article <17046@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:

>He [Searle] could have 1,000,000 helpers and it wouldn't make any difference.
>The chinese room argument uses the word "understand" in two different 
>ways: Searle doesn't understand the chinese language, and Searle doesn't 
>understand the import of the symbols he's manipulating. If it's possible to 
>encode all answers to any possible question via rules without referents, 
>as is posited by the book of rules Searle has in hand, then the chinese 
>language itself (or any other language) is just as plausibly a bunch of 
>rules, nothing more.  Searle manipulating rules doesn't understand; therefore 
>Searle speaking English is really just manipulating rules of the english 
>language and isn't therefore understanding English, which is absurd.  
>Conclusion, rules insufficient for encoding of language as is commonly 
>used: therefore Searle Chinese rule book can't exist, end of argument.

This seems a confused line of argument to me.  You seem to be saying that
_if_ the Chinese language can be captured in a rule book, then the English
language can be captured in a rule book, and moreover, it _must be_ that
Searle is just manipulating rules when he speaks English, which leads to
a contradiction.  In the first place, just because English hypothetically
could be captured in a rule book, how does it automatically follow that
Searle is _just_ manipulating those rules when he speaks English?

Secondly, and more important, the assumption that the Chinese rule book
could exist is just that - an assumption made for the purposes
of a reductio argument.  For a computer program to pass the Turing test in
Chinese, some set of rules about responding to Chinese would have to be laid
down in the program (that's just what the program would be).  
Let's grant that, Searle says, and then ask whether the computer
understands.  It seems to me that if you don't want to grant that,
it's true that Searle's argument doesn't get off the ground, but it's
also true that you've already ruled out the possibility of a computer
passing the Turing test in the first place.

Dave Martin

page@uicadb.csl.uiuc.edu (Ward Page) (06/09/90)

In article <3204@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin) writes:
>According to Searle in the Chinese Room paper, the difference is that
>human brain tissue has some "magical" (my word) quality that provides for
>intelligence/understanding/causative powers, while mere silicon doesn't.
>He states that just what this quality is and how it works is a matter for
>empirical study.  Neat way to sidestep the issue, no?
>
>It seems to me that this is the real point of his paper - brain mass is
>different from silicon mass in some fundamental way.  There's some
>molecular/atomic/?? quality or structure that makes brain mass causative 
>and silicon not.  He may not have intended this, but that's what it comes
>down to, and it seems patently silly.  There was no evidence for this when
>he wrote his paper, and there still isn't.

You're right.  I believe that this is at the heart of his argument.  There
is an interesting thought experiment in Moravec's 'Mind Children' that
talks about this.  The argument goes this way: If an artificial neuron were
developed that exactly mimics (functionally) a brain cell and you replaced
one neuron in the brain with this artificial neuron, would you still be
capable of thought?  If the answer is yes, how many neurons could you replace
before you are incapable of thought?  At the heart of this thought experiment
is the ability to exactly mimic a neuron.  Searle would have to reject this
to refute the argument (assuming the artificial neuron is made of different
stuff than the real neuron).


Ward Page
Visual Perception Lab
University of Illinois

eliot@phoenix.Princeton.EDU (Eliot Handelman) (06/10/90)

In article <36091@shemp.CS.UCLA.EDU> martin@oahu.cs.ucla.edu (david l. martin) writes:
;In article <17046@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:
;
;>He [Searle] could have 1,000,000 helpers and it wouldn't make any difference.
;>The chinese room argument uses the word "understand" in two different 
;>ways: Searle doesn't understand the chinese language, and Searle doesn't 
;>understand the import of the symbols he's manipulating. If it's possible to 
;>encode all answers to any possible question via rules without referents, 
;>as is posited by the book of rules Searle has in hand, then the chinese 
;>language itself (or any other language) is just as plausibly a bunch of 
;>rules, nothing more.  Searle manipulating rules doesn't understand; therefore 
;>Searle speaking English is really just manipulating rules of the english 
;>language and isn't therefore understanding English, which is absurd.  
;>Conclusion, rules insufficient for encoding of language as is commonly 
;>used: therefore Searle Chinese rule book can't exist, end of argument.
;
;This seems a confused line of argument to me.  You seem to be saying that
;_if_ the Chinese language can be captured in a rule book, then the English
;language can be captured in a rule book, and moreover, it _must be_ that
;Searle is just manipulating rules when he speaks English, which leads to
;a contradiction.  

I only said Searle PLAUSIBLY is manipulating rules (in being Searle), 
because if the Book exists then clearly the structure of discourse can 
be described via a set of rules (as many and as complicated as you like). 
It's not out of the question, that's all, in which case it's not clear to me
that the dual distinctions of understanding necessarily pertain. If it were
absolutely clear that understanding could NOT be rule-based, then the book
couldn't exist, in which case the argument is vacuous. If understanding CAN
be rule-based, then Searle's distinction is empty and the systems argument
wins. Searle, after all, is just an IO device.

;Secondly, and more important, the assumption that the Chinese rule book
;could exist is just that - an assumption made for the purposes
;of a reductio argument.  

Yes, but it's an assumption whose implications aren't necessarily
restricted to the book itself -- I'm trying to show that one of these
implications affects Searle.

;For a computer program to pass the Turing test in
;Chinese, some set of rules about responding to Chinese would have to be laid
;down in the program (that's just what the program would be).  
;Let's grant that, Searle says, and then ask whether the computer
;understands.  It seems to me that if you don't want to grant that,
;it's true that Searle's argument doesn't get off the ground, but it's
;also true that you've already ruled out the possibility of a computer
;passing the Turing test in the first place.

Not really! What I doubt is that the Turing test can be used to ascertain
an intentionality in the machine -- conversation is only one aspect
of intelligence, not necessarily its substrate. We'll never know if anything
is conscious (cats or octopi or Searle) until it becomes possible (if ever) to
experience it directly. Until then we're stuck to ATTRIBUTING intentional
states to other organisms or to machines.

eliot@phoenix.Princeton.EDU (Eliot Handelman) (06/10/90)

In article <17102@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (I) wrote:
;Yes, but it's an assumption whose implications aren't necessarily
;restricted to the book itself -- I'm trying to show that one of these
;implications affects Searle.


Here's a related argument. Searle wants to know if he can predict the
future. Commonsensically we say of course, he can't. Then Searle
gets hold of a book, written in chinese, that is a detailed oracle of
the next million years. Searle doesn't understand the book, so he still
can't predict the future. But the point is now that a book exists which
does predict the future, contrary to our commonsensical view! The entire
problem has changed from 1. whether Searle can predict the future to 2.
the demonstration that Searle COULD predict the future if he understood
the book. Similarly, the commonsensical view of understanding says that
whatever it is, it's not pushing around symbols; but Searle is making
an assumption that says it could be exactly just that.

swf@tdatirv.UUCP (swf) (06/10/90)

In article <1990Jun9.154316.29020@ux1.cso.uiuc.edu> (Ward Page) writes:
>
>You're right.  I believe that this is at the heart of his argument.  There
>is an interesting thought experiment in Moravec's 'Mind Children' that
>talks about this.  The argument goes this way: If an artificial neuron were
>developed that exactly mimics (functionally) a brain cell and you replaced
>one neuron in the brain with this artificial neuron, would you still be
>capable of thought?  If the answer is yes, how many neurons could you replace
>before you are incapable of thought?  At the heart of this thought experiment
>is the ability to exactly mimic a neuron.  Searle would have to reject this
>to refute the argument (assuming the artificial neuron is made of different
>stuff than the real neuron).
>
This is very interesting, given that most current research in neural
networks is being done using simulations on serial computers, rather
than actual neural net hardware.  I think I can now give a skeleton
outline of the "rule book" used in the Chinese Room.

There are two subsets to the rule-set.  The first is a functional
description of the neural network complex that is formed by the
linguistic centers of the human brain. (Sufficient to either simulate
or construct a replica of it)  The second rule-set is a description of
the particular connection weights by which the linguistic NN is
specialized for Chinese, rather than some other language.  Now simulate
the result on a sufficiently powerful computer, using a standard NN
simulation (or in a Chinese Room with enough workers).  The result:
	A Chinese Room that processes Chinese *exactly* the same way
	as humans do, but in silicon rather than carbon!!

Now, either the CR "understands" Chinese, or *no* human "understands"
Chinese, since both are using the same mechanism!!
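
In code terms the skeleton might look something like this (a toy
sketch; the architecture and the weights below are invented stand-ins
for the two rule-sets, since nobody actually has the numbers for the
brain's linguistic centers):

    import math

    # Rule-set 1: the functional description (layer sizes, activation,
    # how activity propagates).  Rule-set 2: the particular connection
    # weights.  A serial machine just grinds through the arithmetic.

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(weights, biases, inputs):
        """One pass through a fully connected feedforward net."""
        acts = inputs
        for W, b in zip(weights, biases):
            acts = [sigmoid(sum(w * a for w, a in zip(row, acts)) + bj)
                    for row, bj in zip(W, b)]
        return acts

    # Invented toy numbers -- stand-ins for the "Chinese" weight set.
    weights = [[[0.5, -0.3], [0.8, 0.1]],   # 2 inputs -> 2 hidden units
               [[1.2, -0.7]]]               # 2 hidden -> 1 output
    biases  = [[0.0, 0.1], [-0.2]]

    print(forward(weights, biases, [1.0, 0.0]))

The workers in the room would be doing exactly these multiplications
and additions by hand.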

Now admittedly, this is beyond current technology, but it is certainly
theoretically plausible, given current directions in AI research.  Perhaps
this is the "intuitive" counter-argument that was previously asked for?

----------------------
uunet!tdatirv!swf				(Stanley Friesen)
swf@tdatirv.UUCP

sfleming@cs.strath.ac.uk (Stewart T Fleming IE87) (06/11/90)

In article <3192@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin) writes:
>
>I think it's important to keep in mind what Turing really meant by the
>Turing Test.  My understanding of what he was saying is not that the
>computer IS intelligent, but that we must CONSIDER it intelligent because
>we can't tell the difference between the human and computer.

Absolutely !  The TT is a test of our perceptions of intelligence,
not of intelligence itself (whatever that may be).

>Apparently, Turing's definition of "intelligence" or "understanding"
>relates to its actions, not the processes it engages in to produce
>what looks like "understanding".

Again, agreed.  What happens if you ask the machine a question to which
it cannot reply, simply because it has never encountered (experienced)
the material relating to the question?  Is the machine less intelligent
simply because it does not have a particular piece of knowledge?
The intelligent aspect of the machine is the response it gives
("I don't know"..."Never heard of it"..."Say what?") to the
question.

Stewart
>
>- Jim Ruehlin


-- 
4th Year Information Engineering, University Of Strathclyde, Scotland.
Dick Turpin Memorial Maternity Hospital : "Stand And Deliver".

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (06/12/90)

In article <17102@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:

>Not really! What I doubt is that the Turing test can be used to ascertain
>an intentionality in the machine -- conversation is only one aspect
>of intelligence, not necessarily its substrate. We'll never know if anything
>is conscious (cats or octopi or Searle) until it becomes possible (if ever) to
>experience it directly. Until then we're stuck to ATTRIBUTING intentional
>states to other organisms or to machines.

I think this all comes down to internal representation.
If a device is exhibiting some behavior which requires a
certain internal representation to have been developed, then
we can readily assume that there is an internal "understanding"
provided to the system by the internal representation.

Let's say we have a neural network which has as input an amplitude
vs. time graph of a sound.  If we ask the network to differentiate
between 1000 Hz and 10 kHz sounds, we know the network must have
formed an internal representation of the frequency of the
sounds (assume many slightly different examples at the same frequency
for input to avoid pattern memorization...more on that later).
We can say the network has an understanding of frequency as it
relates to the classification task.
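
For concreteness, here is a toy version of that setup (all numbers
invented; a single sigmoid unit stands in for the network, which is
enough to make the point):

    import numpy as np

    # Many slightly different amplitude-vs-time traces at 1 kHz and at
    # 10 kHz (phase jitter plus noise), so individual patterns can't
    # simply be memorized.

    rng = np.random.default_rng(0)
    fs, n = 40000, 200                    # 40 kHz sampling, 5 ms traces
    t = np.arange(n) / fs

    def trace(freq):
        jitter = rng.uniform(0, 0.2)
        return np.sin(2 * np.pi * freq * t + jitter) + 0.1 * rng.standard_normal(n)

    X = np.array([trace(f) for f in [1000] * 200 + [10000] * 200])
    y = np.array([0.0] * 200 + [1.0] * 200)

    # One sigmoid unit trained by gradient descent; the weight vector
    # it learns is its (crude) internal representation of frequency.
    w, b, lr = np.zeros(n), 0.0, 0.1
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()

    print("training accuracy:", (((X @ w + b) > 0) == (y == 1)).mean())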

If the Turing Test can only be performed by an AI system which must
have internal representations similar to a human's, then it is a valid
test for human-like understanding.  Of course, some people can be fooled
by ELIZA or DOCTOR, which shows that a much simpler internal representation
is sometimes adequate.  Most people, however, would require a program
with a much more human-like internal representation.

Something important to remember is that to perform useful tasks,
human-like understanding and internal representations are not
necessary.  On the contrary, human-like understanding of arithmetic
is poor compared to a hand calculator.

Oh yeah, about memorization.  I have found that neural networks
tend to develop classifications based on the simplest internal
representation possible.  For example, if we teach a network
to differentiate between two 2d patterns, it might just count up the
number of on-pixels in each pattern, and classify on that!
That is why in order to assure learning of higher-order features,
we must present noisy, misaligned, and variously scaled patterns to
them.
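
Something like the following is what I mean by noisy, misaligned, and
variously scaled (a toy sketch; the bar pattern and the particular
distortions are just for illustration):

    import numpy as np

    rng = np.random.default_rng(1)

    # Noise changes the on-pixel count, shifting moves the pattern
    # without changing the count, and the crude 2x zoom changes its
    # size -- so the "count the on-pixels" shortcut stops paying off
    # and higher-order shape features are what remain reliable.

    def add_noise(img, p=0.05):
        flips = rng.random(img.shape) < p        # flip ~5% of the pixels
        return np.where(flips, 1 - img, img)

    def shift(img, dx, dy):
        return np.roll(np.roll(img, dx, axis=1), dy, axis=0)

    def zoom(img):
        big = np.kron(img, np.ones((2, 2), dtype=int))   # 2x magnify
        return big[:img.shape[0], :img.shape[1]]         # crop to size

    bar = np.zeros((8, 8), dtype=int)
    bar[2:6, 3] = 1                              # a small vertical bar
    variants = [bar, add_noise(bar), shift(bar, 2, 1), zoom(bar)]
    print([int(v.sum()) for v in variants])      # original 4, shifted 4, zoomed 8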

-Tom

dsa@dlogics.COM (David Angulo) (06/12/90)

In article <36091@shemp.CS.UCLA.EDU>, martin@oahu.cs.ucla.edu (david l. martin) writes:
> 
> Secondly, and more important, the assumption that the Chinese rule book
> could exist is just that - an assumption made for the purposes
> of a reductio argument.  For a computer program to pass the Turing test in
> Chinese, some set of rules about responding to Chinese would have to be laid
> down in the program (that's just what the program would be).  
> 
> Dave Martin

I think that's the crux of Searle's flaw (or perhaps just one of many).  How
can you assume what the computer program is going to look like when you have
no idea what the problem even is yet?  Granted, given Searle's type of book
of all possible questions and answers, there would probably be no intelligence.
That's probably not what the program will look like, however.  Maybe the
program won't look ANYTHING like language.  Does a database management software
program look like accounting data?  Couldn't we prove that database management
software is impossible using Searle's method?
-- 
David S. Angulo                  (312) 266-3134
Datalogics                       Internet: dsa@dlogics.com
441 W. Huron                     UUCP: ..!uunet!dlogics!dsa
Chicago, Il. 60610               FAX: (312) 266-4473

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/12/90)

In article <2703@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>In article <13772@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar)
>writes:
>
>>John Pollock has observed that the computer is now a sufficiently powerful
>>tool that one can no longer do epistemology from the comfort of one's
>>armchair.  Any theory of epistemology today must be held up to the test
>>of validation through a computer model (or so says Pollock). 
>
>Has anyone actually made a model of an epistemological theory?
>I'd like to know more about this.
>
The jury is still out.  John Pollock just published a book entitled HOW TO
BUILD A PERSON.  In his own words this book is a "prolegomenon" to the
construction of such a model.  He provides what may best be described
as a system architecture and has even implemented the easiest pieces.
The book ends with a "road map" discussing his subsequent steps.  I just
submitted a review of this book to ARTIFICIAL INTELLIGENCE in which I observe
that he has a rough road ahead.  Nevertheless, I have to admire him for trying.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"So, philosophers of science have been fascinated with the fact that elephants
and mice would fall at the same rate if dropped from the Tower of Pisa, but not
much interested in how elephants and mice got to be such different sizes in the
first place."
					R. C. Lewontin

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/12/90)

In article <IaPsbKG00VsUM0s0h_@andrew.cmu.edu> dg1v+@andrew.cmu.edu (David
Greene) writes:
>Excerpts from netnews.comp.ai: 7-Jun-90 Re: Hayes vs. Searle David
>Angulo@dlogics.COM (1092)
>
>> No, a program couldn't be printed (if by program you mean a list of
>> questions and their answers) because such a book or program is always
>> incomplete.  To prove this, all you have to do is ask in English all of
>> the possible addition problems.  This is infinite so the book cannot list
>> all of the questions nor can it list all of the answers.
>
>This raises a question that has not been clear in the discussion:
>it seems to confuse intelligence with omniscience.  It seems
>perfectly reasonable to allow that the entity (book, human, room) does
>not know a particular line of inquiry.  The distinction (at least for
>the turing test) has always been that the pattern of response is
>indistinguishable from an "intelligent being" (usually human). 
>Constantly saying "I don't know" to all questions won't get you too far,
>but it is appropriate at certain times.
>
Turing was well aware of this point.  Perhaps not enough readers have actually
read Turing's paper.  Take a good look at the sample dialog he proposes:

	Q:  Please write me a sonnet on the subject of the Forth Bridge.
	A:  Count me out on this one.  I never could write poetry.
	Q:  Add 34957 to 70764.
	A:  (Pause about 30 seconds and then give as answer) 105621.  (sic)
	Q:  Do you play chess?
	A:  Yes.
	Q:  I have K at my K1, and no other pieces.  You have only K at K6 and
		R at R1.  It is your move.  What do you play?
	A:  (After a pause of 15 seconds) R-R8 mate.

It should be clear from this example that Turing was more interested in the
behavior which went into the conversation than in the content of the
conversation itself.

I found myself thinking about Searle again over the weekend, provoked primarily
by his silly letter to THE NEW YORK REVIEW.  I think John Maynard Smith
presented an excellent reply, but it occurred to me that Searle may be
very seriously confused in how he wants to talk about symbols.  This thought
was further cultivated while I was reading Wittgenstein's "Blue Book."  Let
me try to elaborate my recent thoughts.

Wittgenstein is discussing the concept of solidity.  Here is the relevant
passage:

	We have been told by popular scientists that the floor on which we
	stand is not solid, as it appears to common sense, as it has been
	discovered that the wood consists of particles filling space so
	thinly that it can almost be called empty.  This is liable to perplex
	us, for in a way of course we know that the floor is solid, or that,
	if it isn't solid, this may be due to the wood being rotten but not
	to its being composed of electrons.  To say, on this latter ground,
	that the floor is not solid is to misuse language.  For even if the
	particles were as big as grains of sand, and as close together as
	these are in a sandheap, the floor would not be solid if it were
	composed of them in the sense in which a sandheap is composed of
	grains.  Our perplexity was based on a misunderstanding;  the
	picture of the thinly filled space had been wrongly APPLIED.
	For this picture of the structure of matter was meant to explain
	the very phenomenon of solidity.

Leaving the issue of understanding aside for a moment, I think Searle is having
a similar problem of misunderstanding with regard to computational behavior.
The bottom line of Church's thesis is that symbol manipulation serves to
EXPLAIN computational behavior, just as a theory based on the nature of
atoms and molecules serves to explain solidity.  Thus, just as Wittgenstein
has warned us against letting the specifics of the atomic model interfere with
our understanding of solidity, so we should be careful about letting the
specifics of symbol manipulation be confused with the behavior which they
model.  In a previous article I accused Searle of being rather naive about
what computers actually do in practice;  now I am inclined to believe he is
just as naive about the general theory of computational behavior.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"So, philosophers of science have been fascinated with the fact that elephants
and mice would fall at the same rate if dropped from the Tower of Pisa, but not
much interested in how elephants and mice got to be such different sizes in the
first place."
					R. C. Lewontin

aipdc@castle.ed.ac.uk (Paul D. Crowley) (06/12/90)

Anything can be in the book - Searle does not specify that the book is a
list of questions and answers.  If we say "we've built an AI using
neural nets" then the book is a list of the status of all the neurons,
with instructions at the beginning as to how to alter them.  Whatever
program is written can be implemented as a room.  The guy in the room
follows the instructions in this humungous book and eventually finds out
what characters to draw.  The room is a perfectly reasonable
philosophical idea: the guy takes a pill which means that he lives
forever, and sits in this room in which time runs much faster than it
does outside, and follows the instructions in the book for the rest of
eternity. 

(Searle puts himself in the room - I think this is probably fitting
punishment.)

Not that I'm defending Searle.  Many posters have quite correctly
pointed out that "understanding" is such a useless word that Searle
hasn't said much anyway:  but he has said that there is _some_
fundamental difference between silicon and neuron which means that
neuron can do something that silicon can't, and I don't see why we
should let even that pass. 

-- 
\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
/\__/ "Trust me, I know what I'm doing" - Sledge Hammer

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/12/90)

JD = jeff@aiai.UUCP (Jeff Dalton)
EH = eliot@phoenix.Princeton.EDU (Eliot Handelman)

New text is the stuff that's not indented.

JD: It would be nice, therefore, to have a straightforward refutation of
    the Chinese Room, preferably one with some intuitive appeal, and even
    better, I suppose, if it could be shown that Searle was in the
    grip of a fundamental misunderstanding of computation.

EH: How's this for intuitive appeal: no such "book" as the one Searle
    presupposes can exist. If this were true, then the argument is based
    on an impossible premise, hence there's no argument.

JD: Not bad, but a program could be printed, and there's the book.
    If you think Searle couldn't work fast enough, imagine that he
    has 1000 helpers.

EH: He could have 1,000,000 helpers and it wouldn't make any difference.


OK, so speed's not the issue.

Nonetheless, if we did have a computer that understood merely by
instantiating the right program (Searle's actual claim is at least
close to that), we could print the program, thus producing the
book.  So if you can show that the book can't exist, it seems to
me that you'll also show that the program can't exist, hence making
the point against strong AI another way.  So the people who would
like to refute Searle wouldn't end up better off, although it
might change which person was going around doing chat shows, etc.


EH: The chinese room argument uses the word "understand" in two
    different ways: Searle doesn't understand the chinese language,
    and Searle doesn't understand the import of the symbols he's
    manipulating. If it's possible to encode all answers to any
    possible question via rules without referents, as is posited by
    the book of rules Searle has in hand, then the chinese language
    itself (or any other language) is just as plausibly a bunch of
    rules, nothing more.


There remains the possibility that one could answer questions either
with or without understanding, depending on whether one merely used
rules without referents or did some other thing.  Indeed, Searle's
argument addresses precisely this gap between behavior and
understanding; and you haven't ruled out the possibility of it
existing.


EH: Searle manipulating rules doesn't understand; therefore Searle
    speaking English is really just manipulating rules of the english
    language and isn't therefore understanding English, which is
    absurd.

I don't see how you get from "any language is just as plausibly
a bunch of rules, nothing more" to "therefore Searle speaking
English is really just manipulating rules".  I think you need
more than "just as plausibly".

    Conclusion, rules insufficient for encoding of language
    as is commonly used: therefore Searle Chinese rule book can't
    exist, end of argument.

Again, this would work as an argument against AI as well as an
argument against Searle.  It might even be a more convincing one.

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/12/90)

In article <586@dlogics.COM> dsa@dlogics.COM (David Angulo) writes:
>In article <2687@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>> Not bad, but a program could be printed, and there's the book.

>No, a program couldn't be printed (if by program you mean a list of
>questions and their answers) because such a book or program is always
>incomplete.

By "program" I do not mean a list of questions and their answers
and neither does Searle.

Now, perhaps there are some things that can't be printed that might
count as programs.  But if there are infinite programs, finite
computers couldn't run them.

On the other hand, finite "always incomplete" programs are certainly
possible, because they might always be reprogramming themselves by
generating new data structures.  (Data manipulations can often be seen
as interpreting the data as a program.)
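
Here's a rough sketch of what I mean (illustrative Python only; the rule
table and its entries are invented, not anything Searle or Angulo describe):

   # A finite program that is never "complete": it keeps extending its own
   # rule table, and the interpreter below treats that table -- plain data --
   # as part of the program.

   rules = {"hello": "hello yourself"}

   def interpret(question):
       if question in rules:
           return rules[question]
       # No rule yet: manufacture one and remember it for next time.
       response = "I had no rule for %r until just now." % question
       rules[question] = response
       return response

   print(interpret("hello"))           # answered by an existing rule
   print(interpret("is it raining"))   # creates a new rule on the fly
   print(interpret("is it raining"))   # now answered by the generated rule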

>To prove this, all you have to do is ask in English all of
>the possible addition problems.  This is infinite [...]

That's one reason why programs that do addition aren't written that
way.
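
The contrast is easy to sketch (my toy example, not anything from Searle's
paper):

   # Two ways to "answer addition questions".  The table of questions and
   # answers is necessarily incomplete; the rule is finite yet covers every
   # case, which is why real programs are written the second way.

   answer_table = {("2", "2"): "4", ("3", "5"): "8"}    # always incomplete

   def answer_by_table(a, b):
       return answer_table.get((a, b), "I cannot compute an answer.")

   def answer_by_rule(a, b):
       return str(int(a) + int(b))     # one finite rule, infinitely many cases

   print(answer_by_table("7", "9"))    # not in the book
   print(answer_by_rule("7", "9"))     # "16"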

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/12/90)

In article <3204@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin) writes:
>
>According to Searle in the Chinese Room paper, the difference is that
>human brain tissue has some "magical" (my word) quality that provides for
>intelligence/understanding/causative powers, while mere silicon doesn't.
>He states that just what this quality is and how it works is a matter for
>empirical study.  Neat way to sidestep the issue, no?
>
>It seems to me that this is the real point of his paper - brain mass is
>different from silicon mass in some fundamental way.  There's some
>molecular/atomic/?? quality or structure that makes brain mass causative 
>and silicon not.  He may not have intended this, but thats what it comes
>down to, and it seems patently silly.  There was no evidence for this when
>he wrote his paper, and there still isn't.

In a sense, you have it backwards.  Searle thinks he has shown that
computers do not understand (merely by instantiating a computer
program), and he takes it as given that people do understand.

If both were so, it would follow that there must be some difference
between computers (at least as far as they are merely instantiating
programs) and people.  If we accept his argument, there is "some
evidence", namely that people do understand.  _Something_ has to
account for it.

Note that Searle doesn't say that running the program in a person
would result in understanding.  Indeed, in his answer to the system
reply, he says it wouldn't.  On the other hand, he would allow that
something made of silicon, etc could understand -- but not merely
by running the right program.

So it's something about people beyond merely running a program that
results in understanding.  That is, those who suppose that all the
aspects of people needed for understanding can be captured in a
program that we could then run on any machine with the right
formal properties are wrong.

Searle is, moreover, a materialist.  Understanding is produced by
the physical brain, by its causal powers if you will.  So he figures
that something with equivalent causal powers would also produce
understanding.

However, Searle doesn't know enough to say what the relevant
properties of the brain actually are.  He thinks empirical
investigation is the way to find out.

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/12/90)

In article <1990Jun9.154316.29020@ux1.cso.uiuc.edu> page@ferrari.ece.uiuc.edu.UUCP (Ward Page) writes:
>
>There is an interesting thought experiment in Moravecs 'Mind Children' that
>talks about this.  The argument goes this way: If an artificial neuron were
>developed that exactly mimics (functionally) a brain cell and you replaced
>one neuron in the brain with this artificial neuron, would you still be
>capable of thought?  If the answer is yes, how many neurons could you replace
>before you are incapable of thought?  At the heart of this thought experiment
>is the ability to exactly mimic a neuron.  Searle would have to reject this
>to refute the argument (assuming the artificial neuron is made of different
>stuff than the real neuron).

But Searle doesn't have to refute this argument.  The Chinese Room
argument leads Searle to conclude that there must be some
difference between computers, at least as far as they are merely
executing the right program, and people to account for the presence of
understanding in one but not in the other.  He does not say that this
difference is just the materials they are made of.  (For a longer,
and possibly clearer, version of this, see my previous message.)

All the stuff about the causal powers of the brain making a difference
follows from the CR argument -- the argument in no way depends on it.
Nor does the argument imply that entities that have "artificial"
neurons could not understand, just that the artificial neurons would
have to be equivalent to real neurons in the necessary ways. 

It's important to note that you are not talking about capturing the
relevant aspects of the brain in a program -- which is what Searle is
attacking; you are talking about duplicating the physical functionality.
Since Searle thinks it's the physical properties that matter (since
he's a materialist, the famed "causal powers" are physical ones),
he isn't going to be refuted if duplicating them in different
materials still results in understanding.

If, on the other hand, you could show that all of the properties
necessary to understanding in brains could be duplicated by 
artificial brains *and* that the necessary properties of
artificial brains could be captured by a program, you might
have Searle in trouble.

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (06/12/90)

In article <17102@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:
>I only said Searle PLAUSIBLY is manipulating rules (in being Searle), 
>because if the Book exists then clearly the structure of discourse can 
>be described via a set of rules (as many and as complicated as you like). 

So far I agree.

>It's not out of the question, that's all, in which case it's not clear to me
>that the dual distinctions of understanding necessarily pertain. 

Ok, so maybe it _isn't_ clear.  I might agree with that too.

>If it were absolutely clear that understanding could NOT be rule-based,
>then the book couldn't exist, in which case the argument is vacuous.
>If understanding CAN be rule-based, then Searle's distinction is empty
>and the systems argument wins. Searle, after all, is just an IO device.

But all Searle assumes is that the _behavior_ is rule-based.  And
then he asks whether there is understanding as well and concludes
that, in the case of the Chinese room, there wouldn't be.  To say
that the right behavior is all that's required for understanding
would be to beg the question.

If understanding could not be rule-based, maybe the question-
answering behavior still could be.  If the behavior couldn't be
rule-based either, then the book couldn't exist and Searle's argument
would be unnecessary.

Of course, if understanding can be rule-based, the behavior can be
too.  If all we have is that the behavior can be rule-based, however,
we can still ask whether understanding results (just by following the
rules) or not.

aipdc@castle.ed.ac.uk (Paul D. Crowley) (06/13/90)

It is true that Searle doesn't need to refute any of this:  the
ridiculous consequences of accepting his idea don't constitute a
refutation of it (or at least not an intellectually satisfying one) but
they lead me to search for a good refutation in the certain knowledge
that he is wrong.

Let's see:  Searle would accept that an artificial neuron which exactly
duplicated a real one could form part of a brain that understands.  But
a Turing equivalent machine, according to his demonstration, cannot
understand.  Therefore neurons do something uncomputable.  Penrose
attempts to back this view up with some quantum mechanics.

The idea that "neurons do something uncomputable" is irrefutable, since
you cannot set up a test certain to catch that something.  That we have
pushed opponents of strong AI into an irrefutable position is a good
sign.  What we now have to demonstrate is that the Chinese Room lends no
weight to this idea.

As far as I can see the Room rests on a contradiction:

We would expect anyone who knows the rules by which a Turing-equivalent
machine understands Chinese to understand Chinese himself.

But we would not expect anyone who knows the rules by which a
Turing-equivalent machine understands Chinese to understand Chinese himself.

As they say in London, "Do wot, John?"

-- 
\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
/\__/ "Trust me, I know what I'm doing" - Sledge Hammer

martin@oahu.cs.ucla.edu (david l. martin) (06/13/90)

In article <587@dlogics.COM> dsa@dlogics.COM (David Angulo) writes:
>I think that's the crux of Searle's flaw (or perhaps just one of many).  How
>can you assume what the computer program is going to look like when you have
>no idea what the problem even is yet?  Granted, given Searle's type of book
>of all possible questions and answers, there would probably be no intelligence.
>That's probably not what the program will look like, however.  Maybe the
>program won't look ANYTHING like language.

I think out of fairness to Searle we have to grant that he has been arguing
about computer programs in the conventional sense of the term.  Any such
program can be viewed as a set of instructions in some language.  If there
is a computer that can run the program, then conceivably Searle could carry out
the same set of instructions (albeit at a much slower pace, with the
help of pencil and paper, etc., etc.).

Dave Martin

martin@oahu.cs.ucla.edu (david l. martin) (06/13/90)

In article <13871@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>
>Leaving the issue of understanding aside for a moment, I think Searle is having
>a similar problem of misunderstanding with regard to computational behavior.
>The bottom line of Church's thesis is that symbol manipulation serves to
>EXPLAIN computational behavior, just as a theory based on the nature of
>atoms and molecules serves to explain solidity.  Thus, just as Wittgenstein
>has warned us against letting the specifics of the atomic model interfere with
>our understanding of solidity, so we should be careful about letting the
>specifics of symbol manipulation be confused with the behavior which they
>model.  In a previous article I accused Searle of being rather naive about
>what computers actually do in practice;  now I am inclined to believe he is
>just as naive about the general theory of computational behavior.
>

Although I'm certainly in sympathy with the desire to bring Wittgenstein
into the discussion, I don't find this comparison to be quite "solid";
at any rate, it could use a little fleshing out.

In the case of the atomic model of solidity, we all have some sort of notion
(depending on how much physics we had) of just how it manages to explain
solidity.  I mean, it really does succeed in being an explanation.  In the
case of the use of symbol manipulation, it _doesn't_ explain the things
that we'd like to have explained (things like "intelligence" and 
"understanding"), and isn't that just what Searle's point is?

Dave Martin

forbis@milton.acs.washington.edu (Gary Forbis) (06/13/90)

In article <36194@shemp.CS.UCLA.EDU> martin@oahu.cs.ucla.edu (david l. martin) writes:
>In the
>case of the use of symbol manipulation, it _doesn't_ explain the things
>that we'd like to have explained (things like "intelligence" and 
>"understanding"), and isn't that just what Searle's point is?

I'm not sure what Searle's point is, but I think his argument is an
exercise in bigotry.  It reduces to the other-minds problem and is as
applicable to other humans as it is to computers.  If those who buy into
Searle's argument were to ask themselves how they choose to whom they
grant the attributes "intelligence" and "understanding", they would find
it capricious.

Science works with observables.  It should come as no surprise that
behaviorism is favored by science, even if it were philosophically
disproved.  An explanation for an unmeasurable phenomenon is no
explanation at all.

--gary forbis@milton.u.washington.edu

frank@bruce.cs.monash.OZ.AU (Frank Breen) (06/13/90)

From article <36194@shemp.CS.UCLA.EDU>, by martin@oahu.cs.ucla.edu (david l. martin):
> 
> In the case of the atomic model of solidity, we all have some sort of notion
> (depending on how much physics we had) of just how it manages to explain
> solidity.  I mean, it really does succeed in being an explanation.  In the
> case of the use of symbol manipulation, it _doesn't_ explain the things
> that we'd like to have explained (things like "intelligence" and 
> "understanding"), and isn't that just what Searle's point is?
> 
Perhaps the problem is that no-one can agree exactly what understanding
is.  How can we argue about whether or not something has a quality that
we can't even define?  If Searle is just saying we don't really know
what understanding is, why not just say so?  From the bits of this
discussion that I've read it seems that Searle hasn't really worked
out exactly what understanding is and is searching for an answer.

In the end I think it is irrelevant - we should just define something
that understands as something that appears to understand.  Then we
can say that the man in the Chinese room does not understand Chinese
(if you took him out of the room) but the man together with the books
does understand.  After all, I could understand even a COBOL program,
but only with a decent COBOL manual.  Without such a book I doubt
if I could work it out.  There's nothing mysterious or paradoxical
here.  Eventually I would end up learning COBOL, but the man in the
room would also eventually learn Chinese - it would just take longer.

Frank Breen

news@ism780c.isc.com (News system) (06/14/90)

In article <2750@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>Nonetheless, if we did have a computer that understood merely by
>instantiating the right program (Searle's actual claim is at least
>close to that), we could print the program, thus producing the
>book.  So if you can show that the book can't exist, it seems to
>me that you'll also show that the program can't exist, hence making
>the point against strong AI another way.  So the people who would
>like to refute Searle wouldn't end up better off, although it
>might change which person was going around doing chat shows, etc.

I believe Searle is correct in that a machine merely executing a program
cannot exhibit intelligent behavior.  In order to be intelligent one must react
to the environment in a reasonable way.  After all, "only mad dogs and
Englishmen go out in the noonday sun".

The problem with the CR is that Searle posits that the room could work
without the ability to sense the environment.  This is what I feel is wrong
with the argument.  The ability to sense the environment is essential.
Lacking that ability, a conversation might look like this (written in
English):

  Q: Is it raining?
  A: I cannot compute an answer.
  Q: Is this question submitted on green paper?
  A: I cannot compute an answer.

Given a set of questions and answers like the above, one would conclude the
CR is not intelligent.  If we allow the CR to sense the environment, the
book could then be written to cope with questions like the above.  One
solution would be to have instructions like the following.  (I use <> to
enclose Chinese text.)

   If (the_question == <is it raining>) {
      look outside.
      if (it_is_raining) answer <yes>
      else answer <no>
   }
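
Or, as a runnable sketch (look_outside() is a made-up stand-in for whatever
sensor the extended room is given; this is my rendering, not Searle's):

   # A rough, runnable rendering of the rule above.

   def look_outside():
       return False            # stub: pretend it is not raining

   def answer(the_question):
       if the_question == "<is it raining>":
           return "<yes>" if look_outside() else "<no>"
       return "<I cannot compute an answer>"

   print(answer("<is it raining>"))    # -> <no>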

What is more, the CR operator would in time come to know that <yes> means
"yes" and <no> means "no", and probably could even compute that <is it
raining> means "is it raining"; i.e., the operator would come to understand
Chinese just as other people come to understand Chinese.

I would like to see if Searle believes that the ECR (extended Chinese room)
could not be built with a non-human as the operator.

     Marv Rubinstein

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/14/90)

In article <36194@shemp.CS.UCLA.EDU> martin@oahu.cs.ucla.edu (david l. martin)
writes:
>In article <13871@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar)
>writes:
>>
>>Leaving the issue of understanding aside for a moment, I think Searle is
>>having
>>a similar problem of misunderstanding with regard to computational behavior.
>>The bottom line of Church's thesis is that symbol manipulation serves to
>>EXPLAIN computational behavior, just as a theory based on the nature of
>>atoms and molecules serves to explain solidity.  Thus, just as Wittgenstein
>>has warned us against letting the specifics of the atomic model interfere with
>>our understanding of solidity, so we should be careful about letting the
>>specifics of symbol manipulation be confused with the behavior which they
>>model.  In a previous article I accused Searle of being rather naive about
>>what computers actually do in practice;  now I am inclined to believe he is
>>just as naive about the general theory of computational behavior.
>>
>
>Although I'm certainly in sympathy with the desire to bring Wittgenstein
>into the discussion, I don't find this comparison to be quite "solid";
>at any rate, it could use a little fleshing out.
>
>In the case of the atomic model of solidity, we all have some sort of notion
>(depending on how much physics we had) of just how it manages to explain
>solidity.  I mean, it really does succeed in being an explanation.  In the
>case of the use of symbol manipulation, it _doesn't_ explain the things
>that we'd like to have explained (things like "intelligence" and 
>"understanding"), and isn't that just what Searle's point is?
>
I think this misses the point of my original paragraph.  It seems to me that
the point Searle keeps returning to is that there is some significant
difference between computational behavior and human behavior and that
this difference is captured by the concept of intentionality.  He plays
his games with slippery terms like "intelligence" and "understanding" in
his attempts to discuss the nature of this difference;  but, ultimately,
the question reduces to the nature of these two forms of behavior.  Now,
whether or not I accept the point that we all have some sort of notion of
how atoms and molecules explain solidity (personally, I think that is a rather
generous use of the word "all"), I think that SEARLE'S notion of the
relationship between symbol manipulation models (such as those of Turing
or Post) and computational behavior is confused to the point of distortion.
His confusion resides in his assumption that symbol manipulation is all there
is to computational behavior.  However, some of us (although Ken Presting will
probably disagree with me on this point) view the computational BEHAVIOR as
something which EMERGES from the symbol manipulation which should not be
confused with either the symbols or the formal operations which manipulate
them.  If we put more effort into thinking about appropriate ways to talk
about both computational behavior and human behavior, we may discover more
elements of similarity in our two languages than differences.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"So, philosophers of science have been fascinated with the fact that elephants
and mice would fall at the same rate if dropped from the Tower of Pisa, but not
much interested in how elephants and mice got to be such different sizes in the
first place."
					R. C. Lewontin

churchh@ut-emx.UUCP (Henry Churchyard) (06/14/90)

In article <3204@se-sd.SanDiego.NCR.COM> Jim Ruehlin writes:

>According to Searle in the Chinese Room paper, the difference is that
>human brain tissue has some "magical" (my word) quality that provides for
>intelligence/understanding/causitive powers, while mere silicon doesn't.

>It seems to me that this is the real point of his paper - brain mass is
>different from silicon mass in some fundamental way.  There's some
>molecular/atomic/?? quality or structure that makes brain mass causitive 
>and silicon not.  He may not have intended this, but thats what it comes
>down to, and it seems patently silly.  There was no evidence for this when
>he wrote his paper, and there still isn't.

    What about the Penrose book (_The_Emperor's_New_Mind_, 1989),
where he argues that brain tissue might be different because of
quantum mechanical effects?  I'm not saying that this position is
necessarily correct, but the argument has been seriously made.

                         --Henry Churchyard

loren@tristan.llnl.gov (Loren Petrich) (06/15/90)

In article <31624@ut-emx.UUCP> churchh@ut-emx.UUCP (Henry Churchyard) writes:
>In article <3204@se-sd.SanDiego.NCR.COM> Jim Ruehlin writes:
>
>>According to Searle in the Chinese Room paper, the difference is that
>>human brain tissue has some "magical" (my word) quality that provides for
>>intelligence/understanding/causitive powers, while mere silicon doesn't.
>
>>It seems to me that this is the real point of his paper - brain mass is
>>different from silicon mass in some fundamental way.  There's some
>>molecular/atomic/?? quality or structure that makes brain mass causitive 
>>and silicon not.  He may not have intended this, but thats what it comes
>>down to, and it seems patently silly.  There was no evidence for this when
>>he wrote his paper, and there still isn't.
>
>    What about the Penrose book (_The_Emperor's_New_Mind_, 1989),
>where he argues that brain tissue might be different because of
>quantum mechanical effects.  I'm not saying that this position is
>necessarily correct, but the argument has been seriously made.

	Every time I see people invoke "quantum effects" in a context
like this, I am tempted to puke. Quantum effects are essentially
damped out at the length/time scales at which brain components
operate. And this "standpoint of the observer" has (I'm sure) been
misunderstood. All it means is that quantum systems are inevitably
affected by attempts to observe them, and not by the presence of some
mystical "observer". Quantum-mechanical effects will not make ESP
possible, for example (which is what some people seem to think).

	I have not read Penrose's book, but I am not impressed by what
he seems to be arguing for -- something like Searle's position that we
have some mystical ability to think that can't be duplicated in a
computer.

	There is the curious Searle/Penrose argument to the effect
that the simulation of thought is not thought. But how does one tell
the difference? I think that the essence of Searle's "Chinese Room"
argument is "I don't find any mind inside, so it cannot be thinking."
But how does one tell?

	A challenge for the Searle/Penrose school of thought is:

	How can they tell that other people can think? According to
their argument, you can't. After all, they claim that what seems like
thought may only be a simulation of thought, which is supposedly
different, and perhaps what goes on in other people's minds is just a
simulation.

	And I think that this challenge is what the Turing Test is all
about.

						        ^    
Loren Petrich, the Master Blaster		     \  ^  /
	loren@sunlight.llnl.gov			      \ ^ /
One may need to route through any of:		       \^/
						<<<<<<<<+>>>>>>>>
	lll-lcc.llnl.gov			       /v\
	lll-crg.llnl.gov			      / v \
	star.stanford.edu			     /  v  \
						        v    
For example, use:
loren%sunlight.llnl.gov@star.stanford.edu

My sister is a Communist for Reagan

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin) (06/15/90)

In article <31624@ut-emx.UUCP> churchh@ut-emx.UUCP (Henry Churchyard) writes:
>In article <3204@se-sd.SanDiego.NCR.COM> Jim Ruehlin writes:
>
>>According to Searle in the Chinese Room paper, the difference is that
>>human brain tissue has some "magical" (my word) quality that provides for
>>intelligence/understanding/causitive powers, while mere silicon doesn't.
>
>>It seems to me that this is the real point of his paper - brain mass is [...]
>    What about the Penrose book (_The_Emperor's_New_Mind_, 1989),
>where he argues that brain tissue might be different because of
>quantum mechanical effects.  I'm not saying that this position is
>necessarily correct, but the argument has been seriously made.

Interesting!  I hadn't heard about this.  Can you give more details on
this book, Penrose's arguments, etc.?  I've put it on my reading list,
but it's so long already it may take a while for me to get to it.




- Jim Ruehlin

dsa@dlogics.COM (David Angulo) (06/15/90)

In article <2752@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
> In article <586@dlogics.COM> dsa@dlogics.COM (David Angulo) writes:
> >No, a program couldn't be printed (if by program you mean a list of
> >questions and their answers) because such a book or program is always
> >incomplete.
> 
> By "program" I do not mean a list of questions and their answers
> and neither does Searle.
> 

But that's DEFINITELY the spirit of Searle's argument.  Otherwise, he
needs to have a memory to serve as a database.  With THIS you WOULD say
that the "room understands."  He had no such concept of memory: just
Searle and the book.







-- 
David S. Angulo                  (312) 266-3134
Datalogics                       Internet: dsa@dlogics.com
441 W. Huron                     UUCP: ..!uunet!dlogics!dsa
Chicago, Il. 60610               FAX: (312) 266-4473

dsa@dlogics.COM (David Angulo) (06/15/90)

In article <36193@shemp.CS.UCLA.EDU>, martin@oahu.cs.ucla.edu (david l. martin) writes:
> I think out of fairness to Searle we have to grant that he has been arguing
> about computer programs in the conventional sense of the term.  Any such
> program can be viewed as a set of instructions in some language.  If there
> is a computer that can run the program, then conceivably Searle could carry out
> the same set of instructions (albeit at a much slower pace, with the
> help of pencil and paper, etc., etc.).
> 

I think that this is comparable to data base managers.  Does DBase II
understand accounting?  How about SQL?  Searle certainly made his "book"
seem to have all the answers for any question somewhat like a list.  That's
why it seems so counter-intuitive.
-- 
David S. Angulo                  (312) 266-3134
Datalogics                       Internet: dsa@dlogics.com
441 W. Huron                     UUCP: ..!uunet!dlogics!dsa
Chicago, Il. 60610               FAX: (312) 266-4473

churchh@ut-emx.UUCP (Henry Churchyard) (06/16/90)

In article <3305@se-sd.SanDiego.NCR.COM> Jim Ruehlin writes: 

>> Penrose book (_The_Emperor's_New_Mind_, 1989), where he argues that
>>brain tissue might be different because of quantum mechanical
>>effects.  I'm not saying that this position is necessarily correct,
>
> Can you give more details on this book, Penrose's arguments, etc.?

   I've only read the first few chapters so far, and Penrose covers
wide areas of the theory of physics, so peeking at the end doesn't
really help!  I can say, however, that Penrose's objections come out
of the details of theoretical physics, rather than the more abstract
philosophical considerations raised by Searle.  (Penrose is an actual
theoretical physicist/mathematician who worked closely with Hawking in
the past, so that his use of QM is _not_ a vague pseudo-mystical
tao/zen-ish kind of thing!)  Sorry I can't be of more help yet...

             --Henry Churchyard

frank@bruce.cs.monash.OZ.AU (Frank Breen) (06/26/90)

In <57800@bbn.BBN.COM> dredick@bbn.com (Barry Kort) writes:

>There are evidently some situations in which the brain's decision
>can be determined by a quantum-mechanical roll of the dice.  The
>parable of the hungry/thirsty donkey caught midway between water
>and food illustrates the need for random tie-breaking when the
>brain's decision-making machinery is precariously balanced on the
>razor's edge of two equally desirable choices.
>  When there is no information on which way
>to turn, we choose one fork at random.  And quantum-mechanical
>events are as good a way as any to cast lots and get on with the
>story.

I don't think that this is the case or that it would be good if it
were.  Also this is not a good example.  The donkey might make a
random choice but a second later it would still be stuck midway
between the two, make another choice which is just as likely to
be different and around we go again.  The solution to the dilemma
of the donkey is the ability to concentrate on a task until it is
finished and this has nothing to do with random decisions or the 
lack of them.

One reason that I don't think it is the case is that when there is
a choice between two possibilities which appear as good as each other
it is very difficult to decide.  This is despite the fact that this
is the case where my decision does not matter very much.  If my brain
was equipped with a random decision maker it would be easy to decide,
but it is not.

One reason that I don't think it would be good to have a random decision
maker in my brain is that often when there are two apparently equally
good choices it is often better to find out more about the two choices
than to make a premature decision.  If I were equipped with a random
decider then it would often pre-empt this process since my decision
would already be made.  One should be able to decide to make random decisions
consciously by a non-random decision.

BTW I do think that randomness could well be useful but it could easily
be destructive to good thinking so it can be dangerous too.  Personally
I am of the opinion that our brains can come up with enough chaos
without being randomised and that there are no coin tossers built in.
Still it could well be the case that such a mechanism exists but I doubt
it would be of great significance.

Frank Breen

ylikoski@csc.fi (07/20/90)

I posted this in the summer, but it probably did not make it to the
network.  My apologies if this reaches anyone twice.  It is a comment
on the discussion involving "Hayes vs. Searle".

I think I can show that the Chinese Room argument merely shows that a
very restricted system does not understand, but this does not prove
that no computer system is capable of understanding.  Searle's main
point is that computer programs merely manipulate symbols (they are
syntactic), without reference to meaning (they do not attach semantics
to the symbols), and so are fundamentally incapable of understanding.

The human brain attaches semantics to neuron impulse trains and its
symbols.  I would claim that if we build a computer system that attaches
semantics to its symbols in the same way as the human brain attaches
semantics to its symbols, then we have a computer program that
understands.

I invite the reader to carry out some simple introspection.  Let a
human, say John, read a book involving real analysis and understand
Rolle's theorem.  What gives the semantics to the symbol structures that
exist in John's head concerning Rolle's theorem?

It seems to me that two things give the semantics to John's symbol
structures:

1) They are connected to other symbol structures in his mind.  John
understands how Rolle's theorem is related to other theorems involving
real analysis, he has problem solving schemata involving how to apply
Rolle's theorem, and so forth.  Many AI researchers seem to support
the opinion that the symbol structures in the human mind are semantic
networks, and R. Carnap's meaning postulates are very similar to links
in semantic networks; I wonder if Searle is familiar with Carnap's
work.  (A rough sketch of such a network is given below.)

2) Many agencies in John's Society of Mind possess capabilities
involving Rolle's theorem: for example his Inference agency knows how
to utilize Rolle's theorem while proving simple theorems.
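
As a rough sketch of point 1 (the node and relation names are invented for
illustration, and this is only a cartoon of what a semantic network might
contain, not a claim about how John's brain is actually organized):

   # A tiny semantic network: a symbol gets (part of) its "meaning" from
   # its links to other symbols.

   network = {
       "Rolle's theorem": {
           "is_a": ["theorem of real analysis"],
           "special_case_of": ["mean value theorem"],
           "requires": ["f continuous on [a,b]", "f differentiable on (a,b)",
                        "f(a) = f(b)"],
           "used_in": ["proving the mean value theorem"],
       },
       "mean value theorem": {
           "is_a": ["theorem of real analysis"],
           "generalizes": ["Rolle's theorem"],
       },
   }

   def related(symbol):
       # Everything a symbol is linked to -- a crude stand-in for "what
       # the symbol means to the system".
       links = network.get(symbol, {})
       return [(rel, other) for rel, others in links.items() for other in others]

   print(related("Rolle's theorem"))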

If we build an Artificial Intelligence program that has the same
problem solving capabilities as John --- and I believe this can be
done fairly straightforwardly with the current state of the art of AI
technology --- does our program understand Rolle's theorem?  In a
sense, it does, and it gives semantics to the symbol structures
involving Rolle's theorem.

-------------------------------------------------------------------------------
Antti (Andy) Ylikoski              ! Internet: YLIKOSKI@CSC.FI
Helsinki University of Technology  ! UUCP    : ylikoski@opmvax.kpo.fi
Helsinki, Finland                  !
-------------------------------------------------------------------------------

Artificial Intelligence people do it with resolution.

smoliar@vaxa.isi.edu (Stephen Smoliar) (07/21/90)

In article <129.26a5feab@csc.fi> ylikoski@csc.fi writes:
>
>The human brain attaches semantics to neuron impulse trains and its
>symbols.  I would claim that if we build a computer system that attaches
>semantics to its symbols in the same way as the human brain attaches
>semantics to its symbols, then we have a computer program that
>understands.
>
This argument assumes that the human brain HAS symbols (or, at least, that is
implied through the use of the possessive "its").  There is no evidence that
this is the case.  I think it would be fair to say that the point is still up
for debate, just like the premise that the human brain "has" mental images.

It seems to me that the only reason we are arguing about symbols is because
they are critical in Searle's argument.  This is because Searle is bound and
determined that his precious concept of "understanding" should not be related
to behavior.  By factoring behavior out of the problem, he feels secure in
being left with a problem of interpreting symbol structures.

This still strikes me as specious.  Machines behave;  and they don't "have"
symbol structures.  WE invent symbol structures in order to explain and predict
machine behavior, but I defy anyone to find any symbols in the inner guts of
any machine architecture!  (This is why Newell deals with the "symbol level"
as the highest layer of description of a computer architecture.  It is the
layer through which we, as humans, can understand what is going on in all
the lower layers.)  Likewise, I think it very unlikely that we are going
to find any symbols in the architecture of the brain (and, to make life
even more interesting, we are probably NOT going to find the elegant hierarchy
of layers which Newell invokes in describing computer architectures).

Back when he was working on the ENTSCHEIDUNGSPROBLEM, Turing discovered that
symbols were a great lever for understanding computational behavior.  By the
time Church offered up his famous thesis, many researchers had made the same
discovery.  Church's thesis argues that since it is always the same
computational behavior, all these different symbolic perspectives are,
in some sense, equivalent.  It does NOT argue that any computing agent
must actually possess such symbol structures.  This is a subtle point;
and it seems to have escaped Searle, thus leading him to say all sorts
of silly things about symbols which fly in the face of any objective
observation of either machine or human behavior.
>
>2) Many agencies in John's Society of Mind possess capabilities
>involving Rolle's theorem: for example his Inference agency knows how
>to utilize Rolle's theorem while proving simple theorems.
>
This part of Antti's argument I can accept.  However, close inspection of
Minsky's text will reveal that he, too, does not expect his agencies to embody
symbol structures.  He discusses how various constructs which WE would deal
with as symbol structures may be implemented with his agencies, but his
argument essentially extrapolates on the idea that we can implement an
"interpreter" for "machine code" out of electronic hardware.  Anything
we try to say about "understanding Rolle's theorem" ultimately reduces
to how those agencies behave.  Any symbol structures we invoke are merely
an abstraction to facilitate the description of that behavior.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

blenko-tom@CS.YALE.EDU (Tom Blenko) (07/22/90)

In article <14385@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
|It seems to me that the only reason we are arguing about symbols is because
|they are critical in Searle's argument.  This is because Searle is bound and
|determined that his precious concept of "understanding" should not be related
|to behavior.  By factoring behavior out of the problem, he feels secure in
|being left with a problem of interpreting symbol structures.

Symbols appear in Searle's argument because they provide a reasonable
approach to characterizing the nature of programs.  One can just as
well say that programs "merely" manipulate information rather than
manipulating things in the real world.

If I show someone a sorting algorithm, I don't think we'd have any
trouble agreeing that the algorithm doesn't "know" how to rank Olympic
athletes or determine, based on grades awarded, the best student in a
class.  Of course, if I build a system in which the algorithm is hooked
up to the right inputs and outputs, the system will correctly rank
Olympic athletes or determine which student has the highest grades.
And without much trouble I can probably alter the system so that it is
able to determine the worst student in the class.  Or, I can alter the
system so that it runs the same algorithm but doesn't do anything
sensible at all.

Searle is making the same point about intelligence. The algorithm
doesn't suffice.
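
To make the sorting example concrete (a sketch of my own; the data and the
hookups are invented for illustration):

   # One sorting algorithm, several "hookups".  The algorithm does not know
   # whether it is ranking athletes or students; that knowledge lives
   # entirely in how we wire its inputs and read its outputs.

   def rank(records, reverse=True):
       return sorted(records, key=lambda r: r[1], reverse=reverse)

   athletes = [("Lewis", 9.92), ("Burrell", 9.96)]   # scores? times?  the
   students = [("Smith", 3.2), ("Jones", 3.9)]       # algorithm doesn't care

   print(rank(athletes))                  # "ranking Olympic athletes"
   print(rank(students))                  # "finding the best student"
   print(rank(students, reverse=False))   # same algorithm, now "the worst"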

That's all that the syntax vs. semantics issue consists of.  What is
more difficult is the issue of what it means for a system to "have"
semantics (or, equivalently, to "understand").


|...  Church's thesis argues that since it is always the same
|computational behavior, all these different symbolic perspectives are,
|in some sense, equivalent.  It does NOT argue that any computing agent
|must actually possess such symbol structures.  This is a subtle point;
|and it seems to have escaped Searle, thus leading him to say all sorts
|of silly things about symbols which fly in the face of any objective
|observation of either machine or human behavior.

Searle's claim is precisely that this equivalence relation is not fine
enough -- that if two systems are extensionally (behaviorally)
equivalent, it might still be the case that one was "intelligent" and
one was not.
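
"Extensionally equivalent" here just means: the same outputs for the same
inputs.  A toy illustration (mine, not Searle's):

   # Two extensionally equivalent procedures: for every input they return
   # the same output, but their internal organization differs.  Searle's
   # claim is that sameness at this level need not settle anything further.

   def factorial_recursive(n):
       return 1 if n <= 1 else n * factorial_recursive(n - 1)

   def factorial_iterative(n):
       result = 1
       for k in range(2, n + 1):
           result *= k
       return result

   assert all(factorial_recursive(n) == factorial_iterative(n)
              for n in range(20))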

	Tom

daryl@oravax.UUCP (Steven Daryl McCullough) (07/23/90)

In article <25618@cs.yale.edu>, blenko-tom@CS.YALE.EDU (Tom Blenko) writes:
> If I show someone a sorting algorithm, I don't think we'd have any
> trouble agreeing that the algorithm doesn't "know" how to rank Olympic
> atheletes or determine, based on grades awarded, the best student in a
> class.  Of course, if I build a system in which the algorithm is hooked
> up to the right inputs and outputs, the system will correctly rank
> Olympic atheletes or determine which student has the highest grades.
> And without much trouble I can probably alter the system so that it is
> able to determine the worst student in the class.  Or, I can alter the
> system so that it runs the same algorithm but doesn't do anything
> sensible at all.
> 
> Searle is making the same point about intelligence. The algorithm
> doesn't suffice.

Tom, I tried to make the same point earlier, that a computer program
cannot be said to "understand" something or "know" something
independently of how its inputs are derived, and how the outputs are
to be interpreted. It follows, as you say, that the algorithm alone
isn't sufficient. At the very least, it is also necessary to specify
the "wiring": how inputs and outputs are generated and interpreted.

Before the "wiring" is specified, I wouldn't say that the program has
*no* semantics; I would say rather that it doesn't have a *unique*
semantics. The program can simultaneously be for sorting Olympic
athletes or students.

This dependence on wiring doesn't automatically disprove Strong AI,
however, for the reason that there is no good argument (that I know
of) that human minds have a unique semantics, either. I happen to
believe that it is only the extraordinary complexity of the human mind
that makes it unlikely that anyone could come up with two completely
different, and equally consistent interpretations of human thinking,
as you did for a sort routine.

> Searle's claim is precisely that this equivalence relation is not fine
> enough -- that if two systems are extentionally (behaviorally)
> equivalent, it might still be the case that one was "intelligent" and
> one was not.

I think you are right about what Searle is claiming; that behavior is
not a sufficient test for intelligence. However, my old argument is:
what, if not behavior, allows one to infer that other *people* are
intelligent?

Daryl McCullough

blenko-tom@CS.YALE.EDU (Tom Blenko) (07/23/90)

In article <1607@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
|...
|Before the "wiring" is specified, I wouldn't say that the program has
|*no* semantics; I would say rather that it doesn't have a *unique*
|semantics. The program can simultaneously be for sorting Olympic
|athletes or students.

I don't know of anyone (including Searle) who would disagree with
this.  Searle, however, is using "semantics" in a narrower sense that
applies to the relationship between the states of a system and the
(physical) state of its environment. In particular, he is claiming that
it is necessarily a bi-directional, causal relationship, and that no
program, including any one produced by an AI researcher, has this
property.

|This dependence on wiring doesn't automatically disprove Strong AI,
|however, for the reason that there is no good argument (that I know
|of) that human minds have a unique semantics, either. I happen to
|believe that it is only the extraordinary complexity of the human mind
|that makes it unlikely that anyone could come up with two completely
|different, and equally consistent interpretations of human thinking,
|as you did for a sort routine.

I don't understand this logic.  There is no assumption that the human
mind has a "unique" semantics, only that it has a causual relationship
to its environment.  If you accept that programs have no such
relationship, then their complexity is irrelevant.  If you did have a
candidate program, there are an infinite variety of ways of "hooking it
up" to its environment that would produce insensible behavior.

|I think you are right about what Searle is claiming; that behavior is
|not a sufficient test for intelligence. However, my old argument is:
|what, if not behavior, allows one to infer that other *people* are
|intelligent?

Searle has written a book which is likely to do a better job of arguing
the position than anything that appears here:

%A John R. Searle
%T Intentionality: An Essay in the Philosophy of Mind
%I Cambridge University Press
%C Cambridge
%D 1983

	Tom

daryl@oravax.UUCP (Steven Daryl McCullough) (07/23/90)

In article <25621@cs.yale.edu>, blenko-tom@CS.YALE.EDU (Tom Blenko) writes:
> Searle, however, is using "semantics" in a narrower sense that
> applies to the relationship between the states of a system and the
> (physical) state of its environment. In particular, he is claiming that
> it is necessarily a bi-directional, causal relationship, and that no
> program, including any one produced by an AI researcher, has this
> property.

Why would you say that no program has this property? It is true that a
program can be "hooked up" to the environment in an infinite number of
ways, so the relationship between program states and real-world states
is not unique. But given a particular way of hooking up a program, it
is certainly possible to create a program which has a causal
relationship to the real world. The programs which control aircraft,
for example, certainly do.
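
What I mean by a causal relationship through a particular hookup can be
sketched like this (read_altitude() and set_elevator() are hypothetical
stand-ins, not any real avionics interface):

   # A toy control loop: the program's state is driven by the world through
   # a sensor and drives the world back through an actuator.

   TARGET_ALTITUDE = 10000.0
   GAIN = 0.01

   def read_altitude():
       return 9950.0                                # stub for a real sensor

   def set_elevator(deflection):
       print("elevator deflection:", deflection)    # stub for a real actuator

   def control_step():
       error = TARGET_ALTITUDE - read_altitude()
       set_elevator(GAIN * error)     # causal influence back on the world

   control_step()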

> There is no assumption that the human mind has a "unique" semantics,
> only that it has a causual relationship to its environment.  If you
> accept that programs have no such relationship, then their complexity
> is irrelevant.  If you did have a candidate program, there are an infinite
> variety of ways of "hooking it up" to its environment that would produce
> insensible behavior.

The same could be said for a human mind. If you stuck stimulating
electrodes directly into a human brain, you could produce insensible
behavior in humans, as well. Is your point simply that humans, unlike
programs, have a natural notion of a correct hookup to the real world?

Daryl McCullough

dredick@bbn.com (Barry Kort) (07/24/90)

In article <14385@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:

 > This argument assumes that the human brain HAS symbols (or, at least,
 > that is implied through the use of the possessive "its").  There is
 > no evidence that this is the case.  I think it would be fair to say
 > that the point is still up for debate, just like the premise that the
 > human brain "has" mental images.

I don't understand this, Steve.  Spoken and written language symbolize
the elements of our world.  And these word-symbols are stored and processed
in the brain.  (At least that's where *I* store them!)  So while the
point may be worthy of debate, how can you argue that there is no evidence?



Barry Kort                       bkort@bbn.com
Visiting Scientist
BBN Labs

smoliar@vaxa.isi.edu (Stephen Smoliar) (07/25/90)

In article <58376@bbn.BBN.COM> bkort@BBN.COM (Barry Kort) writes:
>In article <14385@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar)
>writes:
>
> > This argument assumes that the human brain HAS symbols (or, at least,
> > that is implied through the use of the possessive "its").  There is
> > no evidence that this is the case.  I think it would be fair to say
> > that the point is still up for debate, just like the premise that the
> > human brain "has" mental images.
>
>I don't understand this, Steve.  Spoken and written language symbolize
>the elements of our world.  And these word-symbols are stored and processed
>in the brain.  (At least that's where *I* store them!)  So while the
>point may be worthy of debate, how can you argue that there is no evidence?
>
I guess we are in a position of mutual conflict, here, Barry.  How can you
argue that there IS evidence?  Your introspection is your personal abstraction
of what you think is going on.  There is nothing wrong with that, as long as
you don't fall into the trap of confusing the abstraction with the reality.

Look, let's try to establish a level playing field.  In his "Symbol Grounding
Problem" paper, Stevan Harnad defines a symbol system to be "(1) a set of
arbitrary 'PHYSICAL TOKENS' (scratches on paper, holes on a tape, events
in a digital computer, etc.) that are (2) manipulated on the basis of 'EXPLICIT
RULES' that are (3) likewise physical tokens and STRINGS of tokens.  The rule-
governed symbol-token manipulation is based (4) purely on the SHAPE of the
symbol tokens (not their 'meaning'), i.e., it is purely SYNTACTIC, and consists
of (5) 'RULEFULLY COMBINING' and recombining symbol tokens.  There are
(6) primitive ATOMIC symbol tokens and (7) COMPOSITE symbol-token strings.
The entire system and all its parts -- the atomic tokens, the composite tokens,
the syntactic manipulation (both actual and possible) and the rules -- are all
(8) 'SEMANTICALLY INTERPRETABLE:'  The syntax can be SYSTEMATICALLY assigned a
meaning (e.g., as standing for objects, as describing states of affairs)."
Given the constraints of such a definition, I think that the level of debate
may descend to the point of arguing whether or not the sorts of events which
take place at the neuronal level constitute the sorts of physical tokens which
form the core of Harnad's definition.  In other words we can, indeed, argue
over whether or not there is any evidence!
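
For concreteness, a trivial system satisfying roughly clauses (1)-(5) of that
definition can be written down directly (my toy example, not Harnad's):

   # A tiny, purely syntactic symbol system: tokens are rewritten by explicit
   # rules that look only at their shape, never at any "meaning".

   rules = [("SV", "S V"), ("S", "the cat"), ("V", "sleeps")]

   def rewrite(tokens):
       changed = True
       while changed:
           changed = False
           for left, right in rules:
               if left in tokens:
                   tokens = tokens.replace(left, right, 1)
                   changed = True
       return tokens

   # We, standing outside the system, can read the result as being about a
   # cat (clause 8); the rewriting itself never does.
   print(rewrite("SV"))    # -> "the cat sleeps"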

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

dredick@bbn.com (Barry Kort) (07/26/90)

In article <14385@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar)
wrote:

 > > > This argument assumes that the human brain HAS symbols (or, at least,
 > > > that is implied through the use of the possessive "its").  There is
 > > > no evidence that this is the case.  I think it would be fair to say
 > > > that the point is still up for debate, just like the premise that the
 > > > human brain "has" mental images.

In article <58376@bbn.BBN.COM> bkort@BBN.COM I interrupted:

 > > I don't understand this, Steve.  Spoken and written language symbolize
 > > the elements of our world.  And these word-symbols are stored and
 > > processed in the brain.  (At least that's where *I* store them!)  So
 > > while the point may be worthy of debate, how can you argue that there
 > > is no evidence?

In article <14417@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar)
responds:

 > I guess we are in a position of mutual conflict, here, Barry.  How can
 > you argue that there IS evidence?  Your introspection is your personal
 > abstraction of what you think is going on.  There is nothing wrong with
 > that, as long as you don't fall into the trap of confusing the abstraction
 > with the reality. 

OK, but isn't my introspection at least weak evidence that something going
on in my brain looks suspiciously like symbols rattling around?

 > Look, let's try to establish a level playing field.  In his "Symbol
 > Grounding Problem" paper, Stevan Harnad defines a symbol system to
 > be "(1) a set of arbitrary 'PHYSICAL TOKENS' (scratches on paper,
 > holes on a tape, events in a digital computer, etc.) that are (2)
 > manipulated on the basis of 'EXPLICIT RULES' that are (3) likewise
 > physical tokens and STRINGS of tokens.  The rule-governed symbol-token
 > manipulation is based (4) purely on the SHAPE of the symbol tokens
 > (not their 'meaning'), i.e., it is purely SYNTACTIC, and consists of
 > (5) 'RULEFULLY COMBINING' and recombining symbol tokens.  There are
 > (6) primitive ATOMIC symbol tokens and (7) COMPOSITE symbol-token strings.
 > The entire system and all its parts -- then atomic tokens, the composite
 > tokens, then syntactic manipulation (both actual and possible) and the
 > rules -- are all (8) 'SEMANTICALLY INTERPRETABLE:'  The syntax can be
 > SYSTEMATICALLY assigned a meaning (e.g., as standing for objects, as
 > describing states of affairs)."  Given the constraints of such a
 > definition, I think that the level of debate may descend to the point
 > of arguing whether or not the sorts of events which take place at the
 > neuronal level constitute the sorts of physical tokens which form the
 > core of Harnad's definition.  In other words we can, indeed, argue
 > over whether or not there is any evidence! 

Oh.  Gee, Steve, I don't think I store or process symbols like that
at all.  First of all, I don't rely exclusively on rules to drive
my thinking.  I do a lot of model-based reasoning, along with
visual reasoning and lots of generate-and-test.  As far as I can
tell by introspection, I don't use explicit rules for these forms
of information processing.  I tend to use rules when I am doing
formal analysis, like parsing sentences or solving equations.

But if you let me include my keyboard and monitor as extensions of
my symbol-processing system, then it is true that much of my thinking
takes place with formal symbols and strings of symbols.  For some
strange reason, thinking and writing have become so mutually intertwined
for me, that I can't think without writing.  And I can't write very much
without using a full screen word-processor.

Still, I find Stevan Harnad's definition uncomfortably confining, and
based on those ground rules, I agree that the point is quite debatable.

Barry Kort                       bkort@bbn.com
Visiting Scientist
BBN Labs

blenko-tom@cs.yale.edu (Tom Blenko) (07/26/90)

In article <1608@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
|Why would you say that no program has this property? It is true that a
|program can be "hooked up" to the environment in an infinite number of
|ways, so the relationship between program states and real-world states
|is not unique. But given a particular way of hooking up a program, it
|is certainly possible to create a program which has a causal
|relationship to the real world. The programs which control aircraft,
|for example, certainly do.

The latter "program" is something entirely different than the sorting
algorithm that I earlier used as an example of a program.  It is
presumably a description of (a subset of) the physical state of the
device in question and nothing more.

The sorting algorithm describes the information-transforming properties
of (some part of) a device that implements it.  But it is not a
sufficient description of the device, any more than the color of the
device, its properties as a heat-producer, and so forth, are.

If you think this issue (syntactic vs. semantic) is of less than
earthshaking importance, I think Searle agrees with you. In fact, he
expresses surprise in the Scientific American article that it has
generated such confusion.  I think it could be answered by saying,
"Well, we understood certain assumptions about the realization of the
algorithm to be implicit," or, "Yes, this may become an issue at
some point, but for the purposes of our current research it is
sufficient to assume some simplified connection, and that has been
implicit in our discussion." I think this was just his opening skirmish
-- Searle's main issue concerns intentional properties of intelligence
and the problem of capturing these as part of some artifact.

|> There is no assumption that the human mind has a "unique" semantics,
|> only that it has a causual relationship to its environment.  If you
|> accept that programs have no such relationship, then their complexity
|> is irrelevant.  If you did have a candidate program, there are an infinite
|> variety of ways of "hooking it up" to its environment that would produce
|> insensible behavior.
|
|The same could be said for a human mind. If you stuck stimulating
|electrodes directly into a human brain, you could produce insensible
|behavior in humans, as well. Is your point simply that humans, unlike
|programs, have a natural notion of a correct hookup to the real world?

My intent is to try to convey Searle's point of view, as I understand
it. I think the point here is that specifying the algorithm is not
sufficient to describe any device, including one that is claimed to be
"intelligent". The reason is that the "hookup" plays an essential role,
as can be demonstrated by positing different ways of realizing that
"hookup".

	Tom

kenp@ntpdvp1.UUCP (Ken Presting) (08/01/90)

In article <25645@cs.yale.edu>, blenko-tom@cs.yale.edu (Tom Blenko) writes:
> 
> If you think this issue (syntactic vs. semantic) is of less than
> earthshaking importance, I think Searle agrees with you. In fact, he
> expresses surprise in the Scientific American article that it has
> generated such confusion.  I think it could be answered by saying,
> "Well, we understood certain assumptions about the realization of the
> algorithm to be implicit," or, "Yes, this is may become an issue at
> some point, but for the purposes of our current research it is
> sufficient to assume some simplified connection, and that has been
> implicit in our discussion." I think this was just his opening skirmish
> -- Searle's main issue concerns intentional properties of intelligence
> and the problem of capturing these as part of some artifact.

Tom, I agree that there are two distinct issues in Searle's argument; one
concerned with semantic content, and the other concerned with "causal
powers".

But I see the semantic issue as primary for Searle, at least in terms of
his certainty that he's right, and the staying power of the problem.
Syntax really *doesn't* determine semantics.  So what can a program specify
besides the syntax of its I/O language?

Here's where I see the "implicit assumptions about the realization" coming
in.  It's implicit that the realization will have some "causal powers",
as David Chalmers pointed out.  I would add that it must have the specific
"causal power" to generate human-readable symbol-tokens, which undercuts
Putnam's "theorem" and Searle's claim that programs are "purely formal".

So now we have a real live implemented system, but all it does is shuffle
symbols!  We're still stuck in the Chinese room, until we know what it
means to "attach semantics to the symbols".   This is the big question.


Ken Presting   ("Krazy glue?")

kenp@ntpdvp1.UUCP (Ken Presting) (08/01/90)

In article <1607@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough) writes:
> In article <25618@cs.yale.edu>, blenko-tom@CS.YALE.EDU (Tom Blenko) writes:
> 
> > Searle's claim is precisely that this equivalence relation is not fine
> > enough -- that if two systems are extensionally (behaviorally)
> > equivalent, it might still be the case that one was "intelligent" and
> > one was not.
> 
> I think you are right about what Searle is claiming; that behavior is
> not a sufficient test for intelligence. However, my old argument is:
> what, if not behavior, allows one to infer that other *people* are
> intelligent?
> 

There are a lot of things that *could* be used to make that inference.
The first thing is the relation between behavior and the environment.
You may want to include that in the concept of "behavior", but it deserves
special mention, because the I/O behavior of a Turing machine is mostly
independent of its environment.

Then there is the fact that other people seem to be made out of the same
stuff that we are.  That is useful and relevant, even if it is not as
definitive as vitalists (not Searle) would urge.  The overwhelming majority
of organisms with human brains are indeed thinking things.  And vice versa.

Another thing is the system's past experience, if you happen to have any
knowledge of it.  If you happen to know that an organism with a human brain
is less than a month old, then you can reliably infer that it has little
intelligence, at least in the sense of knowing its way around in the world.
 
Evidence of all these types is objective and public, and has nothing to do
with anything as confusing as introspection.  There is no need to settle
too quickly for immediate I/O activity alone as a criterion for deciding
the success of AI, even if we are deliberately excluding criteria based on
the internals of a system.
 

Ken Presting  ("My brain hurts")

daryl@oravax.UUCP (Steven Daryl McCullough) (08/01/90)

In article <616@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> In article <1607@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough) writes:
> > ...what, if not behavior, allows one to infer that other *people* are
> > intelligent?
> > 
> 
> ...
> The first thing is the relation between behavior and the environment.
> You may want to include that in the concept of "behavior", but it deserves
> special mention, because the I/O behavior of a Turing machine is mostly
> independent of its environment.

To me, behavior necessarily includes interaction with the environment.
For a Turing machine, the idea of environment is purposely limited,
but it is not absent: the environment is the input tape, which the
TM's behavior certainly does depend on.
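
A toy simulator makes the dependence plain (my own sketch, in Python;
the machine and its table are invented for illustration).  The same
table, run on different tapes, behaves differently, so the tape really
does play the role of environment:

# A tiny Turing machine: (state, symbol) -> (symbol to write, move, next state).
# This table just walks right, flipping 0 <-> 1, until it reaches a blank.
TABLE = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

def run(tape):
    cells = list(tape) + ["_"]   # the input tape is the machine's whole environment
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = TABLE[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("0011"))   # -> 1100
print(run("1111"))   # -> 0000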

> ...
> Then there is the fact that other people seem to be made out of the
> same stuff that we are. The overwhelming majority of organisms with
> human brains are indeed thinking things.  And vice versa.

In the context of my original question, this appears to be circular
reasoning:

     Q: How can one know that other human beings are intelligent?
     A: Because they all have human brains.
     Q: But, how do you know that human brains indicate intelligence?
     A: Because all human beings have them.

In practice, how is this vicious circle broken? I claim it is by first
concluding, based on behavior alone, that most human beings are
intelligent, and then by asking what feature of human beings seems to
be responsible for this behavior. It seems to me that judging
intelligence from behavior must come first.
	
> Evidence of all these types is objective and public, and has
> nothing to do with anything as confusing as introspection.

I think you would see the role of introspection in all of this if you
would only use a little. 8^) The question that started all this Chinese
Room stuff off was: "Is behavior sufficient to determine whether a
system is intelligent?" The Strong AI position assumes that the answer
to this question is "yes". Your discussion of radical translation seems
to show your general agreement with this position. Yet, in my reading
of Searle, he is in disagreement with this position.

To me, the Chinese Room is an attempt to show that a system may
behaviorally show intelligence and yet not be intelligent. Searle's
argument depends critically on introspection: Searle claims that deep
down, we all know the difference between *really* understanding
something and being able to follow a set of rules to fake
understanding. If introspection didn't tell us that there was a
difference, why would it occur to us to make such a distinction?

If you ignore introspection, I would see no plausibility to Searle's
Chinese Room argument at all. As it is, I still don't find it
compelling for precisely the reason that the introspection of the man
in the room is irrelevant.

> There is no need to settle too quickly for immediate I/O activity
> alone as a criterion for deciding the success of AI, even if we are
> deliberately excluding criteria based on the internals of a system.  

You seem to have a much narrower notion of "behavior" than I have. To
me, a system's behavior is the relationship between the system's past
history and its future actions, and not simply "immediate I/O
activity". If you deliberately exclude criteria based on internals of
a system, then behavior is all that one has left.

Daryl McCullough

smoliar@vaxa.isi.edu (Stephen Smoliar) (08/02/90)

In article <616@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>In article <1607@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough)
>writes:
>> 
>> I think you are right about what Searle is claiming; that behavior is
>> not a sufficient test for intelligence. However, my old argument is:
>> what, if not behavior, allows one to infer that other *people* are
>> intelligent?
>> 
>
>There are a lot of things that *could* be used to make that inference.
>The first thing is the relation between behavior and the environment.
>You may want to include that in the concept of "behavior", but it deserves
>special mention, because the I/O behavior of a Turing machine is mostly
>independent of its environment.
>
>Then there is the fact that other people seem to be made out of the same
>stuff that we are.  That is useful and relevant, even if it is not as
>definitive as vitalists (not Searle) would urge.  The overwhelming majority
>of organisms with human brains are indeed thinking things.  And vice versa.
>
>Another thing is the system's past experience, if you happen to have any
>knowledge of it.  If you happen to know that an organism with a human brain
>is less than a month old, then you can reliably infer that it has little
>intelligence, at least in the sense of knowing its way around in the world.
> 
>Evidence of all these types is objective and public, and has nothing to do
>with anything as confusing as introspection.  There is no need to settle
>too quickly for immediate I/O activity alone as a criterion for deciding
>the success of AI, even if we are deliberately excluding criteria based on
>the internals of a system.
> 
However, the issue is not one of introspection but rather whether or not
behavior is a primary source of evidence.  Except for the argument based
on what stuff the intelligent agent is made out of, the sources of evidence
cited above still involve the observation and interpretation of behavior.
Daryl's question still seems to stand.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

kenp@ntpdvp1.UUCP (Ken Presting) (08/07/90)

In article <1620@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough) writes:
> . . . . It seems to me that judging
> intelligence from behavior must come first.

Daryl, you are simply overlooking an immense variety of possibilities, but
rather than list more of them, I want to discuss the CR from the perspective
of the *program* that generates the behavior.

> (Ken Presting) wrote:
> > Evidence of all these types is objective and public, and has
> > nothing to do with anything as confusing as introspection.
> 
> I think you would see the role of introspection in all of this if you
> would only use a little. 8^) The question that started all this Chinese
> Room stuff off was: "Is behavior sufficient to determine whether a
> system is intelligent?" The Strong AI position assumes that the answer
> to this question is "yes". Your discussion of radical translation seems
> to show your general agreement with this position. Yet, in my reading
> of Searle, he is in disagreement with this position.

Searle certainly thinks that radical interpretation is inadequate to
establish the meaning of words in a language (I appreciate David Chalmers
correcting me on this).  But he does *NOT* think that behavior in the
wide sense you advocate is inadequate to determine the intelligence of
a system.  He says so in the original BBS article, under "The Combination
Reply".  Searle is opposed to behaviorism, which is usually defined more
narrowly than the position you have taken.

But the CR is not based on behavior - it is based on programs.  Searle
conspicuously defines Strong AI in terms of its position on what programs
say about the systems that implement them.  If Searle is right (and I think
he is not) then the inadequacy of the Turing Test would follow.  IMO, the
TT needs to be more specific, but you, me, and Searle all agree that
empirical observations of behavior over time are sufficient to convince
any reasonable person of another's intelligence.

> 
> . . .  If introspection didn't tell us that there was a
> difference, why would it occur to us to make such a distinction?
> 
> If you ignore introspection, I would see no plausibility to Searle's
> Chinese Room argument at all. As it is, I still don't find it
> compelling for precisely the reason that the introspection of the man
> in the room is irrelevant.
> 

If I thought the argument had any dependence on introspection at all, I
would probably take no interest in it at all.  I agree that introspection
by the man in the room is irrelevant!

Let me try another example.  Suppose you and I each go to the library
and read a book about Chinese.  I read a book called "Introduction to
Chinese Syntax" and you read one called "Introduction to Chinese Semantics".
We do *not* need to perform any introspection to infer that I will learn
something about syntax but little about semantics, while you will learn
something about semantics but little about syntax.  Searle draws the 
inference that the man in the room cannot understand the symbols, because
the books which hold the program do not say anything about Chinese semantics. 

This is supposed to be so obvious that nobody could possibly think 
otherwise, and I think that's why Searle is so recalcitrant about the 
systems reply.  From his perspective, it very simply misses the point.
Searle thinks that the program contains information only about the syntax
of Chinese conversations, and therefore neither he nor any other implement
can acquire semantic information from the program.  If you grant that
understanding requires semantic information, then his conclusion follows.
(Recall that my position is that *some* programs *can* contain semantic
information.  I am NOT defending Searle.)

Let me put it another way.  Suppose we write a program for factoring large 
numbers.  It should be obvious that memorizing this program will not give
anyone the ability to calculate fast Fourier transforms.  That is the basic
argument structure of the Chinese Room Example - obvious and trivial.  Just
by assuming that the syntax and the semantics of a language are two different
bodies of knowledge, Searle sets the stage for a straightforward conclusion.
(This easy conclusion is independent of the biology/silicon "causal powers"
issue, which is a little more complicated.)
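
For concreteness, here is roughly the kind of program I have in mind (a
naive trial-division sketch of my own; a serious factoring program would
be far more elaborate).  Every line of it is about division and
remainders, and memorizing it tells you nothing whatsoever about Fourier
transforms:

def factor(n):
    # Naive trial division: return the prime factors of n in increasing order.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factor(1990))   # -> [2, 5, 199]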

So the issue becomes, what kind of program can contain semantic information?
The problem is that whatever program you write for any automaton, it is
equivalent to a language-recognition automaton, which by definition has
only syntactic information in it.  This is just mathematics, and you can
find it in Hopcroft and Ullman, "Formal Languages and Their Relation to
Automata".


> > There is no need to settle too quickly for immediate I/O activity
> > alone as a criterion for deciding the success of AI, even if we are
> > deliberately excluding criteria based on the internals of a system.  
> 
> You seem to have a much narrower notion of "behavior" than I have. To
> me, a system's behavior is the relationship between the system's past
> history and its future actions, and not simply "immediate I/O
> activity". If you deliberately exclude criteria based on internals of
> a system, then behavior is all that one has left.

To mathematically describe the operation of a program which is dependent
on more than immediate I/O activity, you cannot restrict yourself to
Turing machines operated in the usual way.  You need some sort of
permanent memory (which everyday computers always have).  I don't think
Searle can be blamed too much for assuming the properties which are
universally attributed to Turing machines in his argument.  TM programs
are purely formal, and TM's (and their implementations) are purely
syntactic devices.  That follows directly from Church's thesis, nothing
more.  No TM can do anything an unrestricted (type-0) grammar couldn't
do just as well.
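
If it helps, the contrast can be put directly in program terms (a sketch
of my own, not anything from Searle).  Compare a memoryless mapping from
immediate input to output with a device whose reply depends on everything
it has seen so far:

def memoryless(stimulus):
    # The reply depends only on the immediate input.
    return "Echo: " + stimulus

class HistoryBound:
    # The reply depends on the device's entire past history.
    def __init__(self):
        self.history = []
    def respond(self, stimulus):
        self.history.append(stimulus)
        return "Echo %d: %s" % (len(self.history), stimulus)

agent = HistoryBound()
print(memoryless("hello"), "/", memoryless("hello"))        # identical replies
print(agent.respond("hello"), "/", agent.respond("hello"))  # replies diverge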

I have tried to show that Searle's conclusion follows from his premises,
by adding a few steps that involve no introspection and no assumptions
of non-behavioral observations.  Nobody (except Husserl fans like
Stephen Smoliar) wants to base any conclusions on introspection.  Searle
certainly does not need introspection in his argument.


Ken Presting   ("Thou shalt not covet thy neighbor's program")

dave@cogsci.indiana.edu (David Chalmers) (08/08/90)

In article <619@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:

>But he does *NOT* think that behavior in the wide sense
>you advocate is inadequate to determine the intelligence of a system.  He says
>so in the original BBS article, under "The Combination Reply".  
> [...]
>but you, me, and Searle all agree that
>empirical observations of behavior over time are sufficient to convince
>any reasonable person of another's intelligence.

This is still a misstatement of Searle's position.  He is deeply
opposed to *any* behavioural criteria for intelligence.  I presume the
passage that you're referring to is the one that goes:

 "If we could build a robot whose behaviour was indistinguishable over a
  large range from human behaviour, we would attribute intentionality to it,
  pending some reason not to."

This passage has confused a few people into thinking that Searle really
does subscribe to some behavioural criteria -- but the most important part
of the passage is "pending some reason not to".  In the next couple of 
paragraphs, Searle spells out his position a little more clearly: that if
we discovered that all that was going on inside the robot was formal
symbol-processing, then we would cease to attribute any intentionality,
but instead would regard it as "an ingenious mechanical dummy".  "The
hypothesis that the dummy has a mind would now be unwarranted and unnecessary."

You're not the only one to place too much weight on this passage.  In a
"Continuing Commentary" in BBS in 1982, Yorick Wilks used this to
uncover an apparent "inconsistency" in Searle's position.  In reply, Searle
clarified the passage as follows:

 "[the passage] explains how we could be *fooled* into making false
  attributions of intentionality to robots"  (emphasis mine).

I think Searle's position is clear.

>I have tried to show that Searle's conclusion follows from his premises,
>by adding a few steps that involve no introspection and no assumptions
>of non-behavioral observations.  Nobody (except Husserl fans like
>Stephen Smoliar) wants to base any conclusions on introspection.  Searle
>certainly does not need introspection in his argument.

Actually, Searle's argument is all about introspection.  There might be other
arguments about the topic that aren't, but those arguments certainly aren't
Searle's.  As Searle frequently says: "in these discussions, always insist
on the first-person point of view."

The only trouble lies with the fact that Searle frequently phrases his
arguments in terms of "semantics" and "intentionality", rather than in terms
of phenomenology.  But this is a red herring: the arguments about "semantics"
go through only in virtue of Searle's idiosyncratic view that the right 
phenomenology (or consciousness) is a necessary prerequisite for true
intentionality.

There *is* an interesting argument about how syntax can determine semantics,
but it really has nothing to do with the Chinese Room argument, despite
Searle's protests.  The right answer to this question surely lies in some
form of the "Robot Reply" -- i.e. getting the right causal connection to the
world.  Notice that Searle's only answer to this reply is "but I still don't
feel any understanding" -- i.e. phenomenology-based semantics once
again.  Searle would have done us all a great favour if he could have stuck
to "consciousness" in the first place, without confusing the issue via
"intentionality".

At the bottom line, there are two quite separate Chinese Room problems: one
about consciousness (phenomenology), and the other about intentionality
(semantics).  These problems are quite separate -- the correct answer to the
first is the Systems Reply, and the correct answer to the second is the Robot
Reply.  One of the biggest sources of confusion in the entire literature
on the Chinese Room stems from Searle conflating these two issues.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.

"It is not the least charm of a theory that it is refutable."