[sci.philosophy.tech] more Chinese Room

pnf@cunixa.cc.columbia.edu (Paul N Fahn) (01/09/90)

I have always felt that there was something unfair and structurally
unsound in Searle's argument, and upon reading his recent article in 
Scientific American, I've put my finger on it.

In the recent article, the man in the Chinese room is Searle himself.
One of his conclusions is that after being in the Chinese room, he still
does not understand Chinese. He gives no arguments to support this 
conclusion, but simply states it. Putting himself in the room is unfair
because it puts Searle in an authoritative position to state what is
or isn't being "understood". Anyone else trying to argue that perhaps
the man does in fact understand Chinese is put at a structural dis-
advantage because no one would know better than Searle himself whether
or not he understands Chinese.
  I could just as easily say that I am in the Chinese room and that 
after a period of time I do indeed understand Chinese. How could Searle 
argue and tell me I don't? He would rightfully claim that my argument 
is unfair.

Let's say Searle uses a third-person man in the room. He still states
that the man does not understand Chinese after being in the room and
therefore, computers cannot understand. The (unstated) appeal of the
argument is "pretend you are in the room following the rules. you
wouldn't understand Chinese, right?". He is basically asking people to
try to identify with a computer cpu and then conclude non-understanding.
He doesn't state the argument this way because it is not a sound logical
argument.

The way the Chinese room problem *should* be presented is as an experiment:
  Take a man who doesn't understand Chinese and put him in a room with all
the necessary rules. Let him execute the rules, answering written questions
put to him by Chinese speakers. After two years we, the experimenters, gather
our results: does the man understand Chinese?
  We can argue about the answer, and try to devise criteria which, if 
satisfied, would convince us that he does or does not understand Chinese.
This is the point of the Turing test.

Searle however, simply states the results of the experiment as if it were a
premise: the man does not understand Chinese. He is in fact using circular
reasoning: because he does not understand Chinese, syntax is not enough to
get semantics. But what is his reason for concluding the man does not 
understand Chinese? simply his prior conviction that syntax is not enough 
for semantics.

The need for an external test (Turing or otherwise) is due to the fact that
we cannot directly know the man's internal mental state. While admitting 
that the man passes the Turing test, Searle does not present some other 
test which the man fails to pass. If Searle thinks that the Turing test 
is inadequate, let him devise another and argue why the man would fail 
this "better" test. "pretend you're the man" is not an adaquate test.

Basically, Searle's argument comes down to: "Pretend you're a 
computer. Do you understand Chinese?"  

Let us look at Searle's recent twist to the problem: the man memorizes
the rules and answers Chinese questions in public. We the experimenters
watch him do this for two years and then must decide whether he 
understands Chinese. A lot of people would conclude that he does indeed
understand Chinese, even if they "knew" that he was following rules.

--------------
Paul Fahn
pnf@cunixa.cc.columbia.edu

bill@twwells.com (T. William Wells) (01/10/90)

In article <2602@cunixc.cc.columbia.edu> pnf@cunixa.cc.columbia.edu (Paul N Fahn) writes:
: I have always felt that there was something unfair and structurally
: unsound in Searle's argument, and upon reading his recent article in
: Scientific American, I've put my finger on it.

I don't particularly agree with your views; however, there is
another, more serious, flaw in his argument.

Suppose that the assignment of meaning is a process rather than a
static relationship. Were that so, the Chinese room would be
irrelevant, since it only corresponds to a particular process.

Hence his assertion that he has demonstrated that strong AI is
false is simply false.

Just to add a little "balance", I don't find the arguments on the
other side particularly compelling, either. And, in fact, their
arguments fall to the same point: intelligence is very definitely
*not* an I/O mapping. It is a process.

My own view is that, until we have a reasonable idea of how
consciousness operates, arguing about whether computers can be
conscious is about as anachronistic as arguing about the number of
angels that can dance on the head of a pin.

---
Bill                    { uunet | novavax | ankh | sunvice } !twwells!bill
bill@twwells.com

ellis@chips.sri.com (Michael Ellis) (01/10/90)

> T. William Wells

>..there is another, more serious, flaw in his [Searle's]  argument.

>Suppose that the assignment of meaning is a process rather than a
>static relationship. Were that so, the Chinese room would be
>irrelevant, since it only corresponds to a particular process.

    This is hard to make sense of. In what way do either Searle or
    "Strong AI" imply that meaning-assignment is any of a {static
    relationship, process, particular process}? Are you saying that
    Searle thinks meaning assignment is a static relationship? Or are
    you saying that Strong AI makes that claim? Forgive me if I seem
    dense,  but I can't quite tell what you're getting at.

>Hence his assertion that he has demonstrated that strong AI is
>false is simply false.

    Perhaps you could clarify this criticism. 

>Just to add a little "balance", I don't find the arguments on the
>other side particularly compelling, either. And, in fact, their
>arguments fall to the same point: intelligence is very definitely
>*not* an I/O mapping. It is a process.

     Aren't instantiations of programs that map transducer inputs to
     effector outputs processes? Or do you have some special
     definition of "process" in mind?

>My own view is that, until we have a reasonable idea of how
>consciousness operates, arguing about whether computers can be
>conscious is about as anachronistic as arguing about the number of
>angels that can dance on the head of a pin.

    If somebody said that minds are just numbers or laboratory tables
    or planetary orbits, don't you think we'd have good reason to deny
    their claim? 

-michael

jeff@aiai.ed.ac.uk (Jeff Dalton) (01/11/90)

In article <2602@cunixc.cc.columbia.edu> pnf@cunixa.cc.columbia.edu (Paul N Fahn) writes:
>In the recent article, the man in the Chinese room is Searle himself.
>One of his conclusions is that after being in the Chinese room, he still
>does not understand Chinese. He gives no arguments to support this 
>conclusion, but simply states it.

I suspect Searle feels no argument is needed.  I tend to agree.  I'm
pretty sure that if I were running the Chinese room I wouldn't
understand Chinese, at least not right away.  But if I eventually
managed to understand it, it wouldn't be just because I followed the
instructions in the book.  I'd have to try to figure it out somehow.
And I don't think even that would be possible.  How would I ever know
that some Chinese symbol meant "tree", for example?  [I think this
is why some people think giving the room a camera and so on might
make a big difference.]

>                                   Putting himself in the room is unfair
>because it puts Searle in an authoritative position to state what is
>or isn't being "understood". Anyone else trying to argue that perhaps
>the man does in fact understand Chinese is put at a structural dis-
>advantage because no one would know better than Searle himself whether
>or not he understands Chinese.

I think you've hit on a clever argument, but I don't think it really
works.  I don't think the appeal of the C.R. argument depends on
Searle being an authority about himself (and you don't seem to think
so either, as you say below).  Strictly speaking, there may be a
logical dependency on the person in the room being Searle, because
Searle does after all say it that way.  [Doesn't he?  I haven't
looked at it for a while.]  But the Room's not worth much if it
shows something only about Searle, and I don't think anyone who
finds the C.R. convincing interprets it that way.

>  I could just as easily say that I am in the Chinese room and that 
>after a period of time I do indeed understand Chinese. How could Searle 
>argue and tell me I don't? He would rightfully claim that my argument 
>is unfair.

Well, if you said "I would understand Chinese", I'd think you were
just wrong, for the reasons I indicated above.  Just following the
instructions wouldn't cause you to understand Chinese.  You'd have to
at least do some extra work trying to figure things out.

>Let's say Searle uses a third-person man in the room. He still states
>that the man does not understand Chinese after being in the room and
>therefore, computers cannot understand. The (unstated) appeal of the
>argument is "pretend you are in the room following the rules. you
>wouldn't understand Chinese, right?". 

Just so.

>                                       He is basically asking people to
>try to identify with a computer cpu and then conclude non-understanding.
>He doesn't state the argument this way because it is not a sound logical
>argument.

Well, there are some other parts to the argument which amount to
saying that if there's understanding anywhere it has to be in the
person.  In any case, whether it's a sound logical argument is one of
the issues in question, isn't it?  You seem to be saying that it's
sound if Searle is the person in the room and not otherwise.  But your
interpretation for making it sound also makes it trivial, and we're
still left with the question of whether the non-trivial version is
correct.  I don't think anyone agrees with Searle because they agree
with the trivial version or because they think "well, Searle would
know whether or not Searle would understand."

>The way the Chinese room problem *should* be presented is as an experiment:
>  Take a man who doesn't understand Chinese and put him in a room with all
>the necessary rules. Let him execute the rules, answering written questions
>put to him by Chinese speakers. After two years we, the experimenters, gather
>our results: does the man understand Chinese?

That wouldn't be right, because the man could do arbitrary things in
the room, not just execute the instructions.  Nonetheless, I think
it's pretty clear the man would not understand Chinese.  If you think
the man might be able to translate Chinese to English or to explain in
English (or whatever his native language is) what someone said in
Chinese, a much better test than asking him to reply to Chinese in
Chinese, I'd like to hear any argument that makes that seem even
remotely plausible.

>  We can argue about the answer, and try to devise criteria which, if 
>satisfied, would convince us that he does or does not understand Chinese.
>This is the point of the Turing test.

The point of the Turing Test is not to answer whether a computer does
or does not understand but rather to substitute a different question
which we may find good enough.  That is, the whole point is to avoid
having to deal with the philosophical confusions, errors, and
prejudices that might otherwise come up.  But for anyone who cares
about anything other than the external behavior that can be
transmitted by typing, the TT isn't good enough.

>Searle however, simply states the results of the experiment as if it were a
>premise: the man does not understand Chinese. He is in fact using circular
>reasoning: because he does not understand Chinese, syntax is not enough to
>get semantics. But what is his reason for concluding the man does not 
>understand Chinese? simply his prior conviction that syntax is not enough 
>for semantics.

I conclude that the man in the Room, whether it's Searle or me or
anyone else who doesn't already understand Chinese, wouldn't
understand Chinese.  Well, I've tried to say something about that
above.  I don't have any conviction, prior or otherwise, that
syntax is not enough for semantics, so I don't think I'm using
one to reach my conclusion.  I don't think Searle is using that
either.

Ok, there isn't a logical argument here, but I don't think one's
necessary at this point.  Where the argument comes in is in trying
to show that if the person doesn't understand there isn't any
understanding going on at all.

>The need for an external test (Turing or otherwise) is due to the fact that
>we cannot directly know the man's internal mental state. While admitting 
>that the man passes the Turing test, Searle does not present some other 
>test which the man fails to pass. If Searle thinks that the Turing test 
>is inadequate, let him devise another and argue why the man would fail 
>this "better" test. "pretend you're the man" is not an adequate test.

I don't think we know enough to devise a good test for understanding
(in general) at this point.  However, if you want a test that the man
in the room would fail to pass, it's this: translate some Chinese into
English.  That's a variation of the TT, but it's not the variation
the Room is assumed to be able to pass.  Unfortunately, we can't do
the same sort of thing to see whether the man understands English
(or whatever the last language we got to is), and we can't do it
for the Room as a whole, because the room's supposed to understand
only Chinese.  So it doesn't really answer the basic question about
understanding.

>Basically, Searle's argument comes down to: "Pretend you're a 
>computer. Do you understand Chinese?"  

I think you're right here, more or less.

>Let us look at Searle's recent twist to the problem: the man memorizes
>the rules and answers Chinese questions in public. We the experimenters
>watch him do this for two years and then must decide whether he 
>understands Chinese. A lot of people would conclude that he does indeed
>understand Chinese, even if they "knew" that he was following rules.

No they wouldn't, because they'd sooner or later ask him to restate
some Chinese in another language.  Besides, you're assuming that a
lot of people would accept a Turing Test as adequate.  That's begging
the question as far as the C.R. is concerned.  Searle takes passing
the TT as an assumption and then aims to show that -- even so -- there's
no understanding.  Whether that's wrong -- whether there must be
understanding whenever the TT is passed -- can't just be assumed.

bloch@mandrill.ucsd.edu (Steve Bloch) (01/11/90)

pnf@cunixa.cc.columbia.edu (Paul N Fahn) writes:
>                                       He is basically asking people to
>try to identify with a computer cpu and then conclude non-understanding.

Let's give Searle this, for the moment: the CPU does not understand,
by analogy with the Anglophone in the Chinese room.  It's still the
wrong question, since Searle said he would prove that "computers
running programs cannot understand", a completely different
proposition.  Nobody's ever seriously claimed that the processor
itself somehow magically becomes intelligent when a certain program
is stuck into its memory; the claim (if we are to trust Searle's
statement of it) was that if a suitable computer, running a suitable
program, passes a Turing test, then it actually is intelligent.
The analogy of the Chinese room is invalid unless it includes the
whole system: Dr. Searle in his room, the book of rules (or just "the
rules", if Dr. Searle somehow manages to memorize enough formal rules
to completely specify the verbal behavior of a human being, which I
suspect exceeds the theoretical information content of a human brain),
and the window through which Chinese characters are passed.  And the
question of whether this system "understands" Chinese is no easier
than whether an AI program "understands" fairy tales.

jeff@aiai.UUCP (Jeff Dalton) writes:
>The point of the Turing Test is not to answer whether a computer does
>or does not understand but rather to substitute a different question
>which we may find good enough.
I'm sure Searle, and for that matter many AI researchers, would agree
with you: the Turing Test isn't sufficient to prove that a suitable
computer running a suitable program understands.
But he goes farther: his objective is to prove that a computer 
running a program CANNOT understand, although some other unspecified
kind of machine doing something else unspecified might.  His certainty
on this issue seems to me quite unwarranted, and sometimes downright
offensive.

Paul goes on:
>Let us look at Searle's recent twist to the problem: the man memorizes
>the rules and answers Chinese questions in public. We the experimenters
>watch him do this for two years and then must decide whether he 
>understands Chinese. A lot of people would conclude that he does indeed
>understand Chinese, even if they "knew" that he was following rules.

To which Jeff replies:
>No they wouldn't, because they'd sooner or later ask him to restate
>some Chinese in another language.  Besides, you're assuming that a
>lot of people would accept a Turing Test as adequate.

Don't we?  I'm using the Turing Test to conclude that both Paul and
Jeff are people, and I have no qualms about it.  Indeed, if Dr. Searle
were on this newsgroup I suspect he would do the same.

As for "they'd ask him to restate some Chinese in another language",
if they asked in Chinese, he would presumably answer in Chinese that
"I don't speak English."  If they asked in English (equivalent to
hitting BREAK on your terminal, falling into a ROM monitor, and
talking to the processor in machine language), this would require
throwing out part of the system that was passing the test, so the fact
that the new system fails indicates nothing about the old system.

********************************************************************

Another objection to Searle's article.  Searle makes much of the
distinction between a model and an actual object:

"a person does not get wet swimming in a pool full of ping-pong-ball
models of water molecules,"
"Simulation is not duplication."
"you could not run your car by doing a computer simulation of the
oxidation of gasoline, and you could not digest pizza by running the
program that simulates such digestion.  It seems obvious that a
simulation of cognition will similarly not reproduce the effects of
the neurobiology of cognition."

When you talk to someone on the phone, you are not actually hearing
the person's voice, but rather a simulation of it, carried out by
formal, mechanical processes, with a fair amount of digital signal
processing and no small amount of random noise introduced to boot.
Yet for practical purposes we treat this simulation as the reality,
because what we're interested in is communication of ideas, not the
physical presence of a person within earshot.  (In some cases there
isn't even a person at the other end, just a machine simulating a
person's voice straight into the wire.)  In other words, the
simulation preserves the part of reality we're interested in.
If I have an idea, it remains essentially the same idea whether I
speak it aloud, type it on a keyboard to store it in an ASCII file on
a magnetic disk, or write it in Chinese with a horsehair brush,
inkstone and inkblock on rice paper.  These representations are all
simulations of one another which preserve the essential part of the
idea, the real information.
By contrast, when you ask whether I've gotten wet, you're asking about
interactions between my skin and water molecules; the ping-pong-ball
model does not preserve interactions with real skin, unless it too is
modelled on the same scale.  When you ask whether a car has moved,
or whether I've digested a pizza, you're asking about the release of
energy (among other things), and a computer simulation of oxidizing
hydrocarbons (or carbohydrates) doesn't preserve the release of real
energy.

"Simulation is not duplication" is certainly true unless you have a
COMPLETE simulation of EVERY ASPECT of the simulated system.  But an
incomplete simulation CAN duplicate the aspects of the system you're
interested in, and if what we're interested in, in cognition, is
information flow, there's no reason to believe computer programs
can't simulate that (it is their specialty, after all).
And while it may "seem obvious that a simulation of cognition will
similarly not reproduce the effects of the neurobiology of cognition,"
I don't care because neurobiology isn't what I'm interested in.
Searle, on the other hand, apparently takes it as axiomatic that
cognition cannot occur without certain biochemical reactions.
Everything depends on what aspects of cognition you care about:
the Turing test is perhaps a little overly behavioristic, but I
think Searle's demand is at least as far in the other direction.

"The above opinions are my own.  But that's just my opinion."
Stephen Bloch
bloch%cs@ucsd.edu

ian@mva.cs.liv.ac.uk (01/12/90)

In article <1527@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>                                                 How would I ever know
> that some Chinese symbol meant "tree", for example?
> 
You could see how it fitted with other symbols, how certain symbols appeared
together, how certain rules treated it.  You may not be able to say that you
would know what a *particular* symbol meant or which symbol meant *tree*, but
I am sure that you would eventually work out the meanings of *some* symbols.

>                                                 Just following the
> instructions wouldn't cause you to understand Chinese.  You'd have to
> at least do some extra work trying to figure things out.
>
I think that you would have to do extra work in order to *not* understand. As
you began to recognise rules, common groups of symbols, etc., you (or at least I)
would begin to reorganise the room to make life easier. Since some
semantic knowledge is carried in the syntax of a language (for example cases in
German or particles in Japanese), this reorganisation would eventually lead you
to understand some of the meanings conveyed by some rules.
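
As a rough sketch of the sort of purely formal bookkeeping such a
reorganisation amounts to (the symbol stream below is invented for
illustration, not real Chinese), in Python:

from collections import Counter

# Invented stand-in for the stream of symbols passed into the room.
# The tokens are arbitrary; no real Chinese text is assumed.
stream = ["A", "B", "C", "D", "A", "E", "F", "D", "C", "E", "F", "D"]

# Purely syntactic bookkeeping: count which symbols occur next to which.
pair_counts = Counter(zip(stream, stream[1:]))

# Recurring pairs (here ("E", "F") and ("F", "D")) reveal distributional
# regularities the operator could record while reorganising the room,
# though nothing here says, by itself, what any symbol means.
for pair, n in pair_counts.most_common(3):
    print(pair, n)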

>                                                          If you think
> the man might be able to translate Chinese to English or to explain in
> English (or whatever his native language is) what someone said in
> Chinese, a much better test than asking him to reply to Chinese in
> Chinese, I'd like to hear any argument that makes that seem even
> remotely plausible.
> 
Rather than an argument, I will proffer an example of such a phenomenon.
From time to time, during human history, writings from long-extinct
civilisations have been found (for example Mayan codices, Runes or Egyptian
hieroglyphics).  All the information that the translators had to work with
were the rules they could deduce from the information.  With just this
syntactic knowledge, they deduced the semantic content.  Isn't this exactly
what Searle says cannot be done?  Code-breakers (for example Turing :-) must
have to do a similar task.

Ian Finch
---------

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/13/90)

From article <4921.25ad37f7@mva.cs.liv.ac.uk>, by ian@mva.cs.liv.ac.uk:
>...
>Rather than an argument, I will proffer an example of such a phenomenon.
>From time to time, during human history, writings from long-extinct
>civilisations have been found (for example Mayan codices, Runes or Egyptian
>hieroglyphics).  All the information that the translators had to work with
>were the rules they could deduce from the information.  With just this
>syntactic knowledge, they deduced the semantic content.  Isn't this exactly
>what Searle says cannot be done?  Code-breakers (for example Turing :-) must
>have to do a similar task.

I don't think it's possible to break a language code with only written
samples of the code.  The decipherers of ancient scripts have arrived at
meanings only through correspondences with known languages or pictorial
information.  For instance, in the decipherment of Linear B, Ventris
managed to arrive at the *pronunciation* of the script without having
much substantial information about meaning (only some conjectures about
place names) or the identity of the language transcribed.  A
confirmation of his decipherment was Bennet's discovery of a tablet with
a word which in Ventris' system came out ti-ri-po-do accompanied by a
picture of a tripod.  Without the correspondence to Greek or the
picture, it's hard to imagine how the meaning of this word could ever
have become known.

The relationship between script and pronunciation is systematic.  The
relationship between pronunciation and meaning for the primitive
units of a human language is not.

This might have been relevant to the CR question if semantic information
were not already incorporated into the Rules of the Room.  But of
course it must be, since the Room converses intelligently.

				Greg, lee@uhccux.uhcc.hawaii.edu

markh@csd4.csd.uwm.edu (Mark William Hopkins) (01/14/90)

In article <1527@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>                                                 How would I ever know
> that some Chinese symbol meant "tree", for example?

In article <4921.25ad37f7@mva.cs.liv.ac.uk> ian@mva.cs.liv.ac.uk writes:
>You could see how it fitted with other symbols, how certain symbols appeared
>together, how certain rules treated it.  You may not be able to say that you
>would know what a *particular* symbol meant or which symbol meant *tree*, but
>I am sure that you would eventually work out the meanings of *some* symbols.

This is where I can expand on my viewpoint a little more.  So I offer
this answer:

you'll know that *tree* means tree, because the rules for learning Chinese
would have TOLD you to go outside, hug a tree (which you would presumably
already know how to recognize) and say "tree" (in Chinese).

The rule linking the symbol to an actual neural-motor routine (and pattern
recognition routine) is a purely formal rule, captured (perhaps) in a
pushdown-transducer formalism (or, why not, in a finite-state machine
formalism -- just add stack operations and other data structure operations
as augmentations).

It links the term "tree" to a routine which processes purely formal symbols
that just happen to have been implemented in the architecture as actuator
or sensory signals (but that would have no bearing on the fact that the
intelligence is intelligent).

Ultimately, you'll link "tree" to other meanings, some of which may
already be one or more levels of indirection away from actual signal-processing
routines.  That is ... routines are first-class data structures in the
underlying architecture, and form the building blocks of symbols.
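
As a toy sketch of the kind of formalism being gestured at here (the states,
symbols, and routines are all invented for illustration, and the stack
augmentations are left out), in Python:

# Toy finite-state transducer whose outputs are first-class routines,
# standing in for sensory-motor actions.  Everything here is invented.

def hug_tree():
    return "motor routine: go outside and hug the tree"

def say_tree_word():
    return "speech routine: utter the (Chinese) word for 'tree'"

# Transition table: (state, input symbol) -> (next state, linked routine)
RULES = {
    ("start",   "LESSON_TREE"):  ("at_tree", hug_tree),
    ("at_tree", "PROMPT_SPEAK"): ("start",   say_tree_word),
}

def run(symbols, state="start"):
    """Apply the purely formal rules, firing whatever routine each one links to."""
    for sym in symbols:
        state, routine = RULES[(state, sym)]
        print(routine())
    return state

run(["LESSON_TREE", "PROMPT_SPEAK"])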

We could PROVE that the symbols we manipulate in our minds ARE meaningless
if we were able to "cross the wires" that link neural signals to
sensors and actuators -- so that willing yourself to move your arm would cause
you to start walking. :)  You'd break down in terrible confusion while you
tried to retrain yourself to adapt to the changed connections -- yet you'd
still be considered intelligent.

The lesson to be learned from that is that meaningfulness of symbols
has no bearing on intelligence.  Your brain doesn't care what the symbols
that correspond to signals do or where they come from -- i.e. it doesn't
care what the symbols ultimately mean.

schwuchow@uniol.UUCP (Michael Schwuchow) (01/15/90)

ian@mva.cs.liv.ac.uk writes:

>Rather than an argument, I will proffer an example of such a phenomenon.
>From time to time, during human history, writings from long-extinct
>civilisations have been found (for example Mayan codices, Runes or Egyptian
>hieroglyphics).  All the information that the translators had to work with
>were the rules they could deduce from the information.  With just this
>syntactic knowledge, they deduced the semantic content.  Isn't this exactly
>what Searle says cannot be done?  Code-breakers (for example Turing :-) must
>have to do a similar task.

>Ian Finch
>---------

IMHO the code-breaking of Mayan codices, Egyptian hieroglyphics and so on
is not only based on syntactic knowledge, but on known semantics too.
I will specify this a bit:
The Mayas and the Egyptians were humans too, so you can guess what they
had written about. Their culture is not totally lost, but relics were handed down.
So you can fix some words like king, duke, servant, slave; sun, water, rain,
moon, season; build, fight, govern; sow, grow, harvest, ... relatively
easily, because you can guess what a text could mean. Sometimes the words
are even pictures, which show what they mean.

Suppose you get a message from extraterrestrial, non-human intelligent
beings: some information transmitted in an unusual form. I think we could
not translate it from the syntax alone. A translation would presuppose that
there are parts included like
}Hey you out there! Are you intelligent too? Would you like to send letters?
}Send them to address ...
And it might be that a translation is possible which includes the
information of these statements. But how should we know we are right?


And what do you suppose they would think about us if we sent them back some
Chinese poems?

thinking (at least i think so)
Micha
--
------Better a specialist dilettante than a universal idiot---------
! Nickname: michel                     UUCP: schwuchow@uniol.UUCP  !
! Realname: Michael Schwuchow (biological version)                 !
! Position: Oldenburg, FRG             EARN: 122685@DOLUNI1        !
--------------------------------------------------------------------

ian@mva.cs.liv.ac.uk (01/17/90)

In article <1585@uniol.UUCP>, schwuchow@uniol.UUCP (Michael Schwuchow) writes:
> IMHO the code-breaking of Mayan codices, Egyptian hieroglyphics and so on
> is not only based on syntactic knowledge, but on known semantics too.
> The Mayas and the Egyptians were humans too, so you can guess what they
> had written about. Their culture is not totally lost, but relics were handed down.
> So you can fix some words like king, duke, servant, slave; sun, water, rain,
> moon, season; build, fight, govern; sow, grow, harvest, ... relatively
> easily, because you can guess what a text could mean.
> 
But, you would need to know something of Chinese culture to perform
translation.  One verb in English may be translated in several ways in
Chinese, dependent on the status of the participants.  To translate this,
there would be rules describing the relationships between kings, dukes,
servants and slaves.  Thus, some semantic knowledge must be included in the
syntactic knowledge.

Ian
---

kp@uts.amdahl.com (Ken Presting) (01/19/90)

In article <4941.25b48ec4@mva.cs.liv.ac.uk> ian@mva.cs.liv.ac.uk writes:
>In article <1585@uniol.UUCP>, schwuchow@uniol.UUCP (Michael Schwuchow) writes:
>> IMHO the code-breaking of Mayan codices, Egyptian hieroglyphics and so on
>> is not only based on syntactic knowledge, but on known semantics too.
>> The Mayas and the Egyptians were humans too, so you can guess what they
>> had written about. Their culture is not totally lost, but relics were handed down.
>> So you can fix some words like king, duke, servant, slave; sun, water, rain,
>> moon, season; build, fight, govern; sow, grow, harvest, ... relatively
>> easily, because you can guess what a text could mean.
>> 
>But, you would need to know something of Chinese culture to perform
>translation.  One verb in English may be translated in several ways in
>Chinese, dependent on the status of the participants.  To translate this,
>there would be rules describing the relationships between kings, dukes,
>servants and slaves.  Thus, some semantic knowledge must be included in the
>syntactic knowledge.

The Chinese Room argument doesn't really depend on the operator being
ignorant of Chinese.  Of course, Searle makes a big deal out of it, but
IMHO that is a symptom of his argument's being directed more toward our
prejudices than toward our reason.

Suppose the rule book for transforming input into output were written in
Chinese, and the operator were a native speaker.  The process occurring in
the room would still be entirely syntactic.   Searle shouldn't need any
more than this to draw his intended conclusion - that the operation of
the room lacks crucial attributes of mental processes, namely semantics.

The situation remains the same if a Chinese speaker memorizes Chinese
rules and uses them to transform verbal inputs into spoken outputs.  The
transformation is still based entirely on syntax.

To object to the Chinese Room argument on the grounds that the room is
not really semantics-free, it's necessary to present examples of semantic
operations entering into the procedures specified in the rules.  Consider a rule
such as "If the input is <xxxx>, then look at your watch and select symbol
<x> based on the hour and symbol <y> based on the minute".  Such a rule
has semantic content because the terms "hour", "minute", and "your watch"
do not refer to symbols or symbol-types.
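
A minimal sketch of that contrast (the tokens and rules below are invented,
not anything from Searle's rule book): the first rule is defined over symbol
shapes alone, while the second has to consult something that is not a symbol
at all.  In Python:

from datetime import datetime

# Purely syntactic rule: marks in, marks out; only the shape of the
# input symbol matters.  The tokens are invented placeholders.
SYNTACTIC_RULES = {
    "<greeting>":  "<greeting-reply>",
    "<thank-you>": "<you're-welcome>",
}

def syntactic_step(symbol):
    return SYNTACTIC_RULES[symbol]

# The rule described above: pick the output symbols by looking at your
# watch.  "Hour" and "minute" refer to the world, not to symbol types,
# so this rule smuggles in semantic content.
def watch_rule(symbol):
    now = datetime.now()
    return "<%d>" % now.hour, "<%d>" % now.minute

print(syntactic_step("<greeting>"), watch_rule("<greeting>"))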

jct@cci632.UUCP (John Thompson) (02/10/90)

In article <7758@sdcsvax.UCSD.Edu> bloch@thor.UUCP (Steve Bloch) writes:
> [ after some deletions ] :
>>
>>	1 ) "Thinking", as done by the human brain, is likely NOT
>>algorithmic
>
>Show me one shred of experimental evidence for that statement.
>And don't try to exclude parallelism; that's a hot area in current
>algorithmic complexity theory.

Please excuse what is probably naivete on my part.

I will agree that there is no evidence that deductive reasoning ( I use
reasoning rather than thinking because it seems that there is some semantic 
confusion about the meaning of "thinking" ) is not algorithmically based. Deductive
reasoning, by definition, follows rules and therefore is an algorithm.

I don't see a way that inductive reasoning, which by the definitions I am used
to does not always follow rules ( i.e. the jump without sufficient data ), can
be reduced to an algorithm. Can anyone out there enlighten me as to where I am 
wrong with this?

>machine with no loudspeaker.) But it preserves the aspects of
>speech we're interested in: semantic content.  Similarly, I maintain
>that the aspects of "thinking" we're interested in are not biological
>or chemical, and so they need not be preserved by a simulation in
>order for that simulation to be good.  If you insist that thinking IS
>primarily a biochemical phenomenon, thus assuming your conclusion that
>only brains can give rise to it, I suspect the vast majority of human
>beings will agree with me.

I maintain that human thinking is so inextricably linked into the biological
matrix from which it springs that to attempt to PRACTICALLY ( emphasis is
deliberate ) simulate such "thinking" will require either :

1) A level of complexity to the simulation that is indistinguishable from
   a biological matrix. By this I mean in hardware OR software or both.

Or :

2) A simplified simulation that does not closely approximate the required
   level of "thought".

Finally, and not intended as a flame, I do not see the need for the "vast
majority" of humanity to agree with your points. The vast majority of human
beings in this world are not capable ( by education or lack of ) to understand
these points, and can not affect the validity (or lack of) in your arguements.
Again, I am not flaming, just confused by the line of reasoning and wish to
have it explained.

================================================================================
	And home sings me of sweet things,    |  All opinions are my own. That's
	My life there has its own wings,      |  because no one will haul them
	Fly me o'er the mountain,             |  away at the price I'll pay.
	Though I'm standin' still ...
		-Bonnie Raitt                            John C. Thompson

================================================================================

escher@ucscb.UCSC.EDU (of Dreams and Shadows) (02/12/90)

In article <33868@cci632.UUCP> jct@ccird3.UUCP (John Thompson) writes:
>
>I don't see a way that inductive reasoning, which by the definitions I am used
>to does not always follow rules ( i.e. the jump without sufficient data ), can
>be reduced to an algorithm. Can anyone out there enlighten me as to where I am 
>wrong with this?

	Inductive reasoning -- and I prefer as you do the term reasoning over
thinking -- is simply taking as given the conclusion and constructing a set of
simplifications that lead you back to specific atoms. This can be dealt with
by a recursive-like process, or any algorithm capable of searching the 
"tree" backwards and applying deductive tests to it.
	Also... there is such a thing as an indirect mode of proof where one
ASSUMES the negation of what they wish to prove and derives a contradiction,
thereby showing that assumption to be wrong and its negation, the original
claim, to be right. This sort of "weak" proof corresponds roughly to quite a
few so-called "inductive" arguments.
	[no clever end block yet, but we're working on it]


	

rjones@a.gp.cs.cmu.edu (Randolph Jones) (02/13/90)

In article <33868@cci632.UUCP> jct@ccird3.UUCP (John Thompson) writes:
>I don't see a way that inductive reasoning, which by the definitions I am used
>to does not always follow rules ( i.e. the jump without sufficient data ), can
>be reduced to an algorithm. Can anyone out there enlighten me as to where I am 
>wrong with this?

Inductive reasoning can definitely be accomplished with the use of rules.  
These rules are just not necessarily truth preserving.  For example, the rule
"The sun has risen every morning that I have experienced --> The sun will
 rise tomorrow morning" 
may or may not be true, but it is a rule that we humans use often to make
a useful induction.  If you are interested in algorithmic approaches to
inductive learning and reasoning, just look in the proceedings of the
Machine Learning conferences and workshops.  You will find a vast amount of 
research on this subject.
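
To make the sunrise example concrete, here is a toy sketch of such a
non-truth-preserving rule (the observations are invented), in Python:

# Enumerative induction as a rule: if every observed morning had a
# sunrise, conclude (defeasibly) that the sun will rise tomorrow.
# The rule is perfectly algorithmic, yet not truth-preserving: the
# conclusion can be false even if every observation is true.

observations = [("morning %d" % i, "sun rose") for i in range(1, 10001)]

def induce_sunrise(obs):
    if obs and all(outcome == "sun rose" for _, outcome in obs):
        return "The sun will rise tomorrow morning."
    return "No generalization licensed."

print(induce_sunrise(observations))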