[comp.ai] more Chinese Room

pnf@cunixa.cc.columbia.edu (Paul N Fahn) (01/09/90)

I have always felt that there was something unfair and structurally
unsound in Searle's argument, and upon reading his recent article in 
Scientific American, I've put my finger on it.

In the recent article, the man in the Chinese room is Searle himself.
One of his conclusions is that after being in the Chinese room, he still
does not understand Chinese. He gives no arguments to support this 
conclusion, but simply states it. Putting himself in the room is unfair
because it puts Searle in an authoritative position to state what is
or isn't being "understood". Anyone else trying to argue that perhaps
the man does in fact understand Chinese is put at a structural
disadvantage because no one would know better than Searle himself whether
or not he understands Chinese.
  I could just as easily say that I am in the Chinese room and that 
after a period of time I do indeed understand Chinese. How could Searle 
argue and tell me I don't? He would rightfully claim that my argument 
is unfair.

Let's say Searle uses a third-person man in the room. He still states
that the man does not understand Chinese after being in the room and
therefore, computers cannot understand. The (unstated) appeal of the
argument is "pretend you are in the room following the rules. you
wouldn't understand Chinese, right?". He is basically asking people to
try to identify with a computer cpu and then conclude non-understanding.
He doesn't state the argument this way because it is not a sound logical
argument.

The way the Chinese room problem *should* be presented is as an experiment:
  Take a man who doesn't understand Chinese and put him in a room with all
the necessary rules. Let him execute the rules, answering written questions
put to him by Chinese speakers. After two years we, the experimenters, gather
our results: does the man understand Chinese?
  We can argue about the answer, and try to devise criteria which, if 
satisfied, would convince us that he does or does not understand Chinese.
This is the point of the Turing test.

Searle, however, simply states the result of the experiment as if it were a
premise: the man does not understand Chinese. He is in fact using circular
reasoning: because he does not understand Chinese, syntax is not enough to
get semantics. But what is his reason for concluding the man does not
understand Chinese? Simply his prior conviction that syntax is not enough
for semantics.

The need for an external test (Turing or otherwise) is due to the fact that
we cannot directly know the man's internal mental state. While admitting 
that the man passes the Turing test, Searle does not present some other 
test which the man fails to pass. If Searle thinks that the Turing test 
is inadequate, let him devise another and argue why the man would fail 
this "better" test. "pretend you're the man" is not an adaquate test.

Basically, Searle's argument comes down to: "Pretend you're a 
computer. Do you understand Chinese?"  

Let us look at Searle's recent twist to the problem: the man memorizes
the rules and answers Chinese questions in public. We the experimenters
watch him do this for two years and then must decide whether he 
understands Chinese. A lot of people would conclude that he does indeed
understand Chinese, even if they "knew" that he was following rules.

--------------
Paul Fahn
pnf@cunixa.cc.columbia.edu

bill@twwells.com (T. William Wells) (01/10/90)

In article <2602@cunixc.cc.columbia.edu> pnf@cunixa.cc.columbia.edu (Paul N Fahn) writes:
: I have always felt that there was something unfair and structurally
: unsound in Searle's argument, and upon reading his recent article in
: Scientific American, I've put my finger on it.

I don't particularly agree with your views; however, there is
another, more serious, flaw in his argument.

Suppose that the assignment of meaning is a process rather than a
static relationship. Were that so, the Chinese room would be
irrelevant, since it only corresponds to a particular process.

Hence his assertion that he has demonstrated that strong AI is
false is simply false.

Just to add a little "balance", I don't find the arguments on the
other side particularly compelling, either. And, in fact, their
arguments fall to the same point: intelligence is very definitely
*not* an I/O mapping. It is a process.

My own view is that, until we have a reasonable idea of how
consciousness operates, arguing about whether computers can be
conscious is about as anachronistic as arguing about the number of
angels that can dance on the head of a pin.

---
Bill                    { uunet | novavax | ankh | sunvice } !twwells!bill
bill@twwells.com

ellis@chips.sri.com (Michael Ellis) (01/10/90)

> T. William Wells

>..there is another, more serious, flaw in his [Searle's]  argument.

>Suppose that the assignment of meaning is a process rather than a
>static relationship. Were that so, the Chinese room would be
>irrelevant, since it only corresponds to a particular process.

    This is hard to make sense of. In what way do either Searle or
    "Strong AI" imply that meaning-assignment is any of a {static
    relationship, process, particular process}? Are you saying that
    Searle thinks meaning assignment is a static relationship? Or are
    you saying that Strong AI makes that claim? Forgive me if I seem
    dense,  but I can't quite tell what you're getting at.

>Hence his assertion that he has demonstrated that strong AI is
>false is simply false.

    Perhaps you could clarify this criticism. 

>Just to add a little "balance", I don't find the arguments on the
>other side particularly compelling, either. And, in fact, their
>arguments fall to the same point: intelligence is very definitely
>*not* an I/O mapping. It is a process.

     Aren't instantiations of programs that map transducer inputs to
     effector outputs processes? Or do you have some special
     definition of "process" in mind?

>My own view is that, until we have a reasonable idea of how
>consciousness operates, arguing about whether computers can be
>conscious is about as anachronistic as arguing about the number of
>angels that can dance on the head of a pin.

    If somebody said that minds are just numbers or laboratory tables
    or planetary orbits, don't you think we'd have good reason to deny
    their claim? 

-michael

jeff@aiai.ed.ac.uk (Jeff Dalton) (01/11/90)

In article <2602@cunixc.cc.columbia.edu> pnf@cunixa.cc.columbia.edu (Paul N Fahn) writes:
>In the recent article, the man in the Chinese room is Searle himself.
>One of his conclusions is that after being in the Chinese room, he still
>does not understand Chinese. He gives no arguments to support this 
>conclusion, but simply states it.

I suspect Searle feels no argument is needed.  I tend to agree.  I'm
pretty sure that if I were running the Chinese room I wouldn't
understand Chinese, at least not right away.  But if I eventually
managed to understand it, it wouldn't be just because I followed the
instructions in the book.  I'd have to try to figure it out somehow.
And I don't think even that would be possible.  How would I ever know
that some Chinese symbol meant "tree", for example?  [I think this
is why some people think giving the room a camera and so on might
make a big difference.]

>                                   Putting himself in the room is unfair
>because it puts Searle in an authoritative position to state what is
>or isn't being "understood". Anyone else trying to argue that perhaps
>the man does in fact understand Chinese is put at a structural
>disadvantage because no one would know better than Searle himself whether
>or not he understands Chinese.

I think you've hit on a clever argument, but I don't think it really
works.  I don't think the appeal of the C.R. argument depends on
Searle being an authority about himself (and you don't seem to think
so either, as you say below).  Strictly speaking, there may be a
logical dependency on the person in the room being Searle, because
Searle does after all say it that way.  [Doesn't he?  I haven't
looked at it for a while.]  But the Room's not worth much if it
shows something only about Searle, and I don't think anyone who
finds the C.R. convincing interprets it that way.

>  I could just as easily say that I am in the Chinese room and that 
>after a period of time I do indeed understand Chinese. How could Searle 
>argue and tell me I don't? He would rightfully claim that my argument 
>is unfair.

Well, if you said "I would understand Chinese", I'd think you were
just wrong, for the reasons I indicated above.  Just following the
instructions wouldn't cause you to understand Chinese.  You'd have to
at least do some extra work trying to figure things out.

>Let's say Searle uses a third-person man in the room. He still states
>that the man does not understand Chinese after being in the room and
>therefore, computers cannot understand. The (unstated) appeal of the
>argument is "pretend you are in the room following the rules. you
>wouldn't understand Chinese, right?". 

Just so.

>                                       He is basically asking people to
>try to identify with a computer cpu and then conclude non-understanding.
>He doesn't state the argument this way because it is not a sound logical
>argument.

Well, there are some other parts to the argument which amount to
saying that if there's understanding anywhere it has to be in the
person.  In any case, whether it's a sound logical argument is one of
the issues in question, isn't it?  You seem to be saying that it's
sound if Searle is the person in the room and not otherwise.  But your
interpretation for making it sound also makes it trivial, and we're
still left with the question of whether the non-trivial version is
correct.  I don't think anyone agrees with Searle because they agree
with the trivial version or because they think "well, Searle would
know whether or not Searle would understand."

>The way the Chinese room problem *should* be presented is as an experiment:
>  Take a man who doesn't understand Chinese and put him in a room with all
>the necessary rules. Let him execute the rules, answering written questions
>put to him by Chinese speakers. After two years we, the experimenters, gather
>our results: does the man understand Chinese?

That wouldn't be right, because the man could do arbitrary things in
the room, not just execute the instructions.  Nonetheless, I think
it's pretty clear the man would not understand Chinese.  If you think
the man might be able to translate Chinese to English or to explain in
English (or whatever his native language is) what someone said in
Chinese, a much better test than asking him to reply to Chinese in
Chinese, I'd like to hear any argument that makes that seem even
remotely plausible.

>  We can argue about the answer, and try to devise criteria which, if 
>satisfied, would convince us that he does or does not understand Chinese.
>This is the point of the Turing test.

The point of the Turing Test is not to answer whether a computer does
or does not understand but rather to substitute a different question
which we may find good enough.  That is, the whole point is to avoid
having to deal with the philosophical confusions, errors, and
prejudices that might otherwise come up.  But for anyone who cares
about anything other than the external behavior that can be
transmitted by typing, the TT isn't good enough.

>Searle, however, simply states the result of the experiment as if it were a
>premise: the man does not understand Chinese. He is in fact using circular
>reasoning: because he does not understand Chinese, syntax is not enough to
>get semantics. But what is his reason for concluding the man does not
>understand Chinese? Simply his prior conviction that syntax is not enough
>for semantics.

I conclude that the man in the Room, whether it's Searle or me or
anyone else who doesn't already understand Chinese, wouldn't
understand Chinese.  Well, I've tried to say something about that
above.  I don't have any conviction, prior or otherwise, that
syntax is not enough for semantics, so I don't think I'm using
one to reach my conclusion.  I don't think Searle is using that
either.

Ok, there isn't a logical argument here, but I don't think one's
necessary at this point.  Where the argument comes in is in trying
to show that if the person doesn't understand there isn't any
understanding going on at all.

>The need for an external test (Turing or otherwise) is due to the fact that
>we cannot directly know the man's internal mental state. While admitting 
>that the man passes the Turing test, Searle does not present some other 
>test which the man fails to pass. If Searle thinks that the Turing test 
>is inadequate, let him devise another and argue why the man would fail 
>this "better" test. "pretend you're the man" is not an adequate test.

I don't think we know enough to devise a good test for understanding
(in general) at this point.  However, if you want a test that the man
in the room would fail to pass, it's this: translate some Chinese into
English.  That's a variation of the TT, but it's not the variation
the Room is assumed to be able to pass.  Unfortunately, we can't do
the same sort of thing to see whether the man understands English
(or whatever the last language we got to is), and we can't do it
for the Room as a whole, because the room's supposed to understand
only Chinese.  So it doesn't really answer the basic question about
understanding.

>Basically, Searle's argument comes down to: "Pretend you're a 
>computer. Do you understand Chinese?"  

I think you're right here, more or less.

>Let us look at Searle's recent twist to the problem: the man memorizes
>the rules and answers Chinese questions in public. We the experimenters
>watch him do this for two years and then must decide whether he 
>understands Chinese. A lot of people would conclude that he does indeed
>understand Chinese, even if they "knew" that he was following rules.

No they wouldn't, because they'd sooner or later ask him to restate
some Chinese in another language.  Besides, you're assuming that a
lot of people would accept a Turing Test as adequate.  That's begging
the question as far as the C.R. is concerned.  Searle takes passing
the TT as an assumption and then aims to show that -- even so -- there's
no understanding.  Whether that's wrong -- whether there must be
understanding whenever the TT is passed -- can't just be assumed.

bloch@mandrill.ucsd.edu (Steve Bloch) (01/11/90)

pnf@cunixa.cc.columbia.edu (Paul N Fahn) writes:
>                                       He is basically asking people to
>try to identify with a computer cpu and then conclude non-understanding.

Let's give Searle this, for the moment: the CPU does not understand,
by analogy with the Anglophone in the Chinese room.  It's still the
wrong question, since Searle said he would prove that "computers
running programs cannot understand", a completely different
proposition.  Nobody's ever seriously claimed that the processor
itself somehow magically becomes intelligent when a certain program
is stuck into its memory; the claim (if we are to trust Searle's
statement of it) was that if a suitable computer, running a suitable
program, passes a Turing test, then it actually is intelligent.
The analogy of the Chinese room is invalid unless it includes the
whole system: Dr. Searle in his room, the book of rules (or just "the
rules", if Dr. Searle somehow manages to memorize enough formal rules
to completely specify the verbal behavior of a human being, which I
suspect exceeds the theoretical information content of a human brain),
and the window through which Chinese characters are passed.  And the
question of whether this system "understands" Chinese is no easier
than whether an AI program "understands" fairy tales.

jeff@aiai.UUCP (Jeff Dalton) writes:
>The point of the Turing Test is not to answer whether a computer does
>or does not understand but rather to substitute a different question
>which we may find good enough.
I'm sure Searle, and for that matter many AI researchers, would agree
with you: the Turing Test isn't sufficient to prove that a suitable
computer running a suitable program understands.
But he goes farther: his objective is to prove that a computer 
running a program CANNOT understand, although some other unspecified
kind of machine doing something else unspecified might.  His certainty
on this issue seems to me quite unwarranted, and sometimes downright
offensive.

Paul goes on:
>Let us look at Searle's recent twist to the problem: the man memorizes
>the rules and answers Chinese questions in public. We the experimenters
>watch him do this for two years and then must decide whether he 
>understands Chinese. A lot of people would conclude that he does indeed
>understand Chinese, even if they "knew" that he was following rules.

To which Jeff replies:
>No they wouldn't, because they'd sooner or later ask him to restate
>some Chinese in another language.  Besides, you're assuming that a
>lot of people would accept a Turing Test as adequate.

Don't we?  I'm using the Turing Test to conclude that both Paul and
Jeff are people, and I have no qualms about it.  Indeed, if Dr. Searle
were on this newsgroup I suspect he would do the same.

As for "they'd ask him to restate some Chinese in another language",
if they asked in Chinese, he would presumably answer in Chinese that
"I don't speak English."  If they asked in English (equivalent to
hitting BREAK on your terminal, falling into a ROM monitor, and
talking to the processor in machine language), this would require
throwing out part of the system that was passing the test, so the fact
that the new system fails indicates nothing about the old system.

********************************************************************

Another objection to Searle's article.  Searle makes much of the
distinction between a model and an actual object:

"a person does not get wet swimming in a pool full of ping-pong-ball
models of water molecules,"
"Simulation is not duplication."
"you could not run your car by doing a computer simulation of the
oxidation of gasoline, and you could not digest pizza by running the
program that simulates such digestion.  It seems obvious that a
simulation of cognition will similarly not reproduce the effects of
the neurobiology of cognition."

When you talk to someone on the phone, you are not actually hearing
the person's voice, but rather a simulation of it, carried out by
formal, mechanical processes, with a fair amount of digital signal
processing and no small amount of random noise introduced to boot.
Yet for practical purposes we treat this simulation as the reality,
because what we're interested in is communication of ideas, not the
physical presence of a person within earshot.  (In some cases there
isn't even a person at the other end, just a machine simulating a
person's voice straight into the wire.)  In other words, the
simulation preserves the part of reality we're interested in.
If I have an idea, it remains essentially the same idea whether I
speak it aloud, type it on a keyboard to store it in an ASCII file on
a magnetic disk, or write it in Chinese with a horsehair brush,
inkstone and inkblock on rice paper.  These representations are all
simulations of one another which preserve the essential part of the
idea, the real information.
By contrast, when you ask whether I've gotten wet, you're asking about
interactions between my skin and water molecules; the ping-pong-ball
model does not preserve interactions with real skin, unless it too is
modelled on the same scale.  When you ask whether a car has moved,
or whether I've digested a pizza, you're asking about the release of
energy (among other things), and a computer simulation of oxidizing
hydrocarbons (or carbohydrates) doesn't preserve the release of real
energy.

"Simulation is not duplication" is certainly true unless you have a
COMPLETE simulation of EVERY ASPECT of the simulated system.  But an
incomplete simulation CAN duplicate the aspects of the system you're
interested in, and if what we're interested in, in cognition, is
information flow, there's no reason to believe a computer program 
can't simulate that (it is their specialty, after all).
And while it may "seem obvious that a simulation of cognition will
similarly not reproduce the effects of the neurobiology of cognition,"
I don't care because neurobiology isn't what I'm interested in.
Searle, on the other hand, apparently takes it as axiomatic that
cognition cannot occur without certain biochemical reactions.
Everything depends on what aspects of cognition you care about:
the Turing test is perhaps a little overly behavioristic, but I
think Searle's demand is at least as far in the other direction.

"The above opinions are my own.  But that's just my opinion."
Stephen Bloch
bloch%cs@ucsd.edu

ian@mva.cs.liv.ac.uk (01/12/90)

In article <1527@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>                                                 How would I ever know
> that some Chinese symbol meant "tree", for example?
> 
You could see how it fitted with other symbols, how certain symbols appeared
together, how certain rules treated it.  You may not be able to say that you
would know what a *particular* symbol meant or which symbol meant *tree*, but
I am sure that you would eventually work out the meanings of *some* symbols.

>                                                 Just following the
> instructions wouldn't cause you to understand Chinese.  You'd have to
> at least do some extra work trying to figure things out.
>
I think that you would have to do extra work in order to *not* understand. As
you began to recognise rules, common groups of symbols, etc., you (or at least I)
would begin to reorganise the room to make life easier. Since some
semantic knowledge is carried in the syntax of a language (for example cases in
German or particles in Japanese), this organisation would eventually lead you
to understand some of the meanings conveyed by some rules.

>                                                          If you think
> the man might be able to translate Chinese to English or to explain in
> English (or whatever his native language is) what someone said in
> Chinese, a much better test than asking him to reply to Chinese in
> Chinese, I'd like to hear any argument that makes that seem even
> remotely plausible.
> 
Rather than an argument, I will proffer an example of such a phenomenon.
From time to time, during human history, writings from long-extinct
civilisations have been found (for example Mayan codices, Runes or Egyptian
hieroglyphics).  All the information that the translators had to work with
were the rules they could deduce from the information.  With just this
syntactic knowledge, they deduced the semantic content.  Isn't this exactly
what Searle says cannot be done?  Code-breakers (for example Turing :-) must
have to do a similar task.

Ian Finch
---------

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/13/90)

From article <4921.25ad37f7@mva.cs.liv.ac.uk>, by ian@mva.cs.liv.ac.uk:
>...
>Rather than an argument, I will proffer an example of such a phenomenon.
>From time to time, during human history, writings from long-extinct
>civilisations have been found (for example Mayan codices, Runes or Egyptian
>hieroglyphics).  All the information that the translators had to work with
>were the rules they could deduce from the information.  With just this
>syntactic knowledge, they deduced the semantic content.  Isn't this exactly
>what Searle says cannot be done?  Code-breakers (for example Turing :-) must
>have to do a similar task.

I don't think it's possible to break a language code with only written
samples of the code.  The decipherers of ancient scripts have arrived at
meanings only through correspondences with known languages or pictorial
information.  For instance, in the decipherment of Linear B, Ventris
managed to arrive at the *pronunciation* of the script without having
much substantial information about meaning (only some conjectures about
place names) or the identity of the language transcribed.  A
confirmation of his decipherment was Bennet's discovery of a tablet with
a word which in Ventris' system came out ti-ri-po-do accompanied by a
picture of a tripod.  Without the correspondence to Greek or the
picture, it's hard to imagine how the meaning of this word could ever
have become known.

The relationship between script and pronunciation is systematic.  The
relationship between pronunciation and meaning for the primitive
units of a human language is not.

This might have been relevant to the CR question if semantic information
were not already incorporated into the Rules of the Room.  But of
course it must be, since the Room converses intelligently.

				Greg, lee@uhccux.uhcc.hawaii.edu

markh@csd4.csd.uwm.edu (Mark William Hopkins) (01/14/90)

In article <1527@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>                                                 How would I ever know
> that some Chinese symbol meant "tree", for example?

In article <4921.25ad37f7@mva.cs.liv.ac.uk> ian@mva.cs.liv.ac.uk writes:
>You could see how it fitted with other symbols, how certain symbols appeared
>together, how certain rules treated it.  You may not be able to say that you
>would know what a *particular* symbol meant or which symbol meant *tree*, but
>I am sure that you would eventually work out the meanings of *some* symbols.

This is where I can expand on my viewpoint a little more.  So I offer
this answer:

You'll know that *tree* means tree, because the rules for learning Chinese
would have TOLD you to go outside, hug a tree (which you would presumably
already know how to recognize) and say "tree" (in Chinese).

The rule linking the symbol to an actual neural-motor (and pattern-recognition)
routine is a purely formal rule, captured (perhaps) in a pushdown-transducer
formalism (or why not in a finite-state-machine formalism -- just add stack
operations and other data-structure operations in the augmentations).

It links the term "tree" to a routine which processes purely formal symbols
that just happen to have been implemented in the architecture as actuator
or sensory signals (but that has no bearing on the fact that the
intelligence is intelligent).

Ultimately, you'll link "tree" to other meanings, some of which may
already be one or more levels of indirection to actual signal processing
routines.  That is ... routines are first-class data structures in the
underlying architecture, and form the building blocks of symbols.
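
To make this concrete, here is a rough sketch in Python (a toy of my own;
the routine names are hypothetical stand-ins for actuator and sensor
signals) of such a rule table:

    # Routines are first-class data: the table maps symbols to routines and
    # to further symbols, possibly through several levels of indirection.

    def walk_outside():          # stand-in for an actuator signal sequence
        print("walking to the yard")

    def recognize_tree():        # stand-in for a pattern-recognition routine
        print("tree pattern matched")

    def say_shu():               # stand-in for uttering the Chinese word
        print("shu4")            # "tree" in Mandarin, written in pinyin

    RULES = {
        "TREE":     [walk_outside, recognize_tree, "SAY_TREE"],
        "SAY_TREE": [say_shu],
    }

    def execute(symbol):
        # A purely formal interpreter: it follows the table without caring
        # whether an entry is wired to sensors, actuators, or more symbols.
        for step in RULES[symbol]:
            if callable(step):
                step()
            else:
                execute(step)

    execute("TREE")

The interpreter treats "TREE" exactly like any other symbol; that some
entries happen to drive actuators is invisible to it.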

We could PROVE that the symbols we manipulate in our minds ARE meaningless
if we were able to "cross the wires" that link neural signals to
sensors and actuators -- so that willing yourself to move your arm would cause
you to start walking. :)  You'd break down in terrible confusion while you
tried to retrain yourself to adapt to the changed connections -- yet you'd
still be considered intelligent.

The lesson to be learned from that is that meaningfulness of symbols
has no bearing on intelligence.  Your brain doesn't care what the symbols
that correspond to signals do or where they come from -- i.e. it doesn't
care what the symbols ultimately mean.

schwuchow@uniol.UUCP (Michael Schwuchow) (01/15/90)

ian@mva.cs.liv.ac.uk writes:

>Rather than an argument, I will proffer an example of such a phenomenon.
>From time to time, during human history, writings from long-extinct
>civilisations have been found (for example Mayan codices, Runes or Egyptian
>hieroglyphics).  All the information that the translators had to work with
>were the rules they could deduce from the information.  With just this
>syntactic knowledge, they deduced the semantic content.  Isn't this exactly
>what Searle says cannot be done?  Code-breakers (for example Turing :-) must
>have to do a similar task.

>Ian Finch
>---------

IMHO the code-breaking of Mayan codices, Egyptian hieroglyphics and so on
is not only based on syntactic knowledge, but on known semantics too.
I will specify this a bit:
The Mayas and the Egyptians were humans too, so you can make suppositions
about what they had written. Their culture is not totally lost; relics were
handed down. So you can fix some words like king, duke, servant, slave; sun,
water, rain, moon, season; build, fight, govern; sow, grow, harvest, ...
relatively easily, because you can guess what a text could mean. Sometimes
the words are even pictures, which show what they mean.

Suppose you get a message from extraterrestrial non-human intelligent
beings: some information transmitted in an unusual form. I think we could
not translate it by the syntax alone. A translation would presuppose that
it includes parts like
}Hey you out there! Are you intelligent too? Would you like to send letters?
}Send them to address ...
And it might be that a translation is possible that includes the
information of these statements. But how should we know we are right?


And what would you suppose they would think about us if we sent them back
some Chinese poems?

thinking (at least i think so)
Micha
--
------Lieber ein Fachdilettant, als ein Universalidiot--------------
! Nickname: michel                     UUCP: schwuchow@uniol.UUCP  !
! Realname: Michael Schwuchow (biological version)                 !
! Position: Oldenburg, FRG             EARN: 122685@DOLUNI1        !
--------------------------------------------------------------------

ian@mva.cs.liv.ac.uk (01/17/90)

In article <1585@uniol.UUCP>, schwuchow@uniol.UUCP (Michael Schwuchow) writes:
> IMHO the code-breaking of Mayan codices, Egyptian hieroglyphics and so on
> is not only based on syntactic knowledge, but on known semantics too.
> The Mayas and the Egyptians were humans too, so you can make suppositions
> about what they had written. Their culture is not totally lost; relics were
> handed down. So you can fix some words like king, duke, servant, slave; sun,
> water, rain, moon, season; build, fight, govern; sow, grow, harvest, ...
> relatively easily, because you can guess what a text could mean.
> 
But, you would need to know something of Chinese culture to perform
translation.  One verb in English may be translated in several ways in
Chinese, dependant on the status of the participants.  To translate this,
there would be rules describing the relationships between kings, dukes,
servants and slaves.  Thus, some semantic knowledge must be included in the
syntactic knowledge.

Ian
---

kp@uts.amdahl.com (Ken Presting) (01/19/90)

In article <4941.25b48ec4@mva.cs.liv.ac.uk> ian@mva.cs.liv.ac.uk writes:
>In article <1585@uniol.UUCP>, schwuchow@uniol.UUCP (Michael Schwuchow) writes:
>> IMHO the code-breaking of Mayan codices, Egyptian hieroglyphics and so on
>> is not only based on syntactic knowledge, but on known semantics too.
>> The Mayas and the Egyptians were humans too, so you can make suppositions
>> about what they had written. Their culture is not totally lost; relics were
>> handed down. So you can fix some words like king, duke, servant, slave; sun,
>> water, rain, moon, season; build, fight, govern; sow, grow, harvest, ...
>> relatively easily, because you can guess what a text could mean.
>> 
>But, you would need to know something of Chinese culture to perform
>translation.  One verb in English may be translated in several ways in
>Chinese, dependent on the status of the participants.  To translate this,
>there would be rules describing the relationships between kings, dukes,
>servants and slaves.  Thus, some semantic knowledge must be included in the
>syntactic knowledge.

The Chinese Room argument doesn't really depend on the operator being
ignorant of Chinese.  Of course, Searle makes a big deal out of it, but
IMHO that is a symptom of his argument's being directed more toward our
prejudices than toward our reason.

Suppose the rule book for transforming input into output were written in
Chinese, and the operator were a native speaker.  The process occurring in
the room would still be entirely syntactic.   Searle shouldn't need any
more than this to draw his intended conclusion - that the operation of
the room lacks crucial attributes of mental processes, namely semantics.

The situation remains the same if a Chinese speaker memorizes Chinese
rules and uses them to transform verbal inputs into spoken outputs.  The
transformation is still based entirely on syntax.

To object to the Chinese Room argument on the grounds that the room is
not really semantics-free, it's necessary to present examples of semantic
operations entering into the procedures specified in the rules.  Consider a
rule such as "If the input is <xxxx>, then look at your watch and select symbol
<x> based on the hour and symbol <y> based on the minute".  Such a rule
has semantic content because the terms "hour", "minute", and "your watch"
do not refer to symbols or symbol-types.
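
As a sketch (my own toy, in Python, with made-up symbols), the contrast
between the two kinds of rule looks like this:

    import time

    def syntactic_rule(inp):
        # Purely syntactic: the output depends only on the shape of the
        # input string -- on symbols and symbol-types, nothing else.
        return "xyz" if inp == "xxxx" else "zzz"

    def semantic_rule(inp):
        # "Look at your watch": the output depends on something the terms
        # of the rule refer to (the actual time of day), not on symbols
        # alone -- so the rule has semantic content.
        now = time.localtime()
        if inp == "xxxx":
            return "<%d><%d>" % (now.tm_hour, now.tm_min)
        return "zzz"

The Chinese Room, as Searle describes it, contains only rules of the
first kind.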

jct@cci632.UUCP (John Thompson) (02/10/90)

In article <7758@sdcsvax.UCSD.Edu> bloch@thor.UUCP (Steve Bloch) writes:
> [ after some deletions ] :
>>
>>	1 ) "Thinking", as done by the human brain, is likely NOT
>>algorithmic
>
>Show me one shred of experimental evidence for that statement.
>And don't try to exclude parallelism; that's a hot area in current
>algorithmic complexity theory.

Please excuse what is probably naivete on my part.

I will agree that deductive reasoning (I use reasoning rather than thinking
because it seems that there is some semantic confusion on the meaning of
"thinking") is algorithmically based. Deductive reasoning, by definition,
follows rules and therefore is an algorithm.

I don't see a way that inductive reasoning, which by the definitions I am used
to does not always follow rules (i.e. the jump without sufficient data), can
be reduced to an algorithm. Can anyone out there enlighten me as to where I am
wrong with this?

>machine with no loudspeaker.) But it preserves the aspects of
>speech we're interested in: semantic content.  Similarly, I maintain
>that the aspects of "thinking" we're interested in are not biological
>or chemical, and so they need not be preserved by a simulation in
>order for that simulation to be good.  If you insist that thinking IS
>primarily a biochemical phenomenon, thus assuming your conclusion that
>only brains can give rise to it, I suspect the vast majority of human
>beings will agree with me.

I maintain that human thinking is so inextricably linked into the biological
matrix from which it springs that to attempt to PRACTICALLY (emphasis is
deliberate) simulate such "thinking" will require either:

1) A level of complexity to the simulation that is indistinguishable from
   a biological matrix. By this I mean in hardware OR software or both.

Or :

2) A simplified simulation that does not closely approximate the required
   level of "thought".

Finally, and not intended as a flame, I do not see the need for the "vast
majority" of humanity to agree with your points. The vast majority of human
beings in this world are not able (by education or the lack of it) to
understand these points, and cannot affect the validity (or lack thereof)
of your arguments.  Again, I am not flaming, just confused by the line of
reasoning and wish to have it explained.

================================================================================
	And home sings me of sweet things,    |  All opinions are my own. That's
	My life there has its own wings,      |  because no one will haul them
	Fly me o'er the mountain,             |  away at the price I'll pay.
	Though I'm standin' still ...
		-Bonnie Raitt                            John C. Thompson

================================================================================

escher@ucscb.UCSC.EDU (of Dreams and Shadows) (02/12/90)

In article <33868@cci632.UUCP> jct@ccird3.UUCP (John Thompson) writes:
>
>I don't see a way that inductive reasoning, which by the definitions I am used
>to does not always follow rules (i.e. the jump without sufficient data), can
>be reduced to an algorithm. Can anyone out there enlighten me as to where I am
>wrong with this?

	Inductive reasoning -- and I prefer, as you do, the term reasoning over
thinking -- is simply taking the conclusion as given and constructing a set of
simplifications that lead you back to specific atoms. This can be dealt with
by a recursive-like process, or any algorithm capable of searching the 
"tree" backwards and applying deductive tests to it.
	Also... there is such a thing as an indirect mode of proof, where one
ASSUMES the negation of what one wishes to prove and derives a contradiction,
thereby showing the assumption to be wrong and its negation -- the original
claim -- to be right. This sort of "weak" proof corresponds roughly to quite
a few so-called "inductive" arguments.
	[no clever end block yet, but we're working on it]


	

rjones@a.gp.cs.cmu.edu (Randolph Jones) (02/13/90)

In article <33868@cci632.UUCP> jct@ccird3.UUCP (John Thompson) writes:
>I don't see a way that inductive reasoning, which by the definitions I am used
>to does not always follow rules (i.e. the jump without sufficient data), can
>be reduced to an algorithm. Can anyone out there enlighten me as to where I am
>wrong with this?

Inductive reasoning can definitely be accomplished with the use of rules.  
These rules are just not necessarily truth preserving.  For example, the rule
"The sun has risen every morning that I have experienced --> The sun will
 rise tomorrow morning" 
may or may not be true, but it is a rule that we humans use often to make
a useful induction.  If you are interested in algorithmic approaches to
inductive learning and reasoning, just look in the proceedings of the
Machine Learning conferences and workshops.  You will find a vast amount of 
research on this subject.
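
As a toy sketch (my own illustration, in Python, not from any of those
proceedings), such a non-truth-preserving rule might look like:

    def induce(mornings_observed):
        # Enumerative induction: if every observed case is positive,
        # generalize.  The rule is useful but not truth-preserving --
        # a single future counterexample can falsify the conclusion.
        if mornings_observed and all(mornings_observed):
            return "The sun will rise tomorrow morning"
        return "No generalization licensed"

    print(induce([True] * 10000))   # a useful, though fallible, induction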

kohout@wam.umd.edu (Robert C. Kohout) (02/14/90)

I must say that I am somewhat surprised to see this Searle discussion
continuing. I haven't looked at this group in several months, and I 
thought the Chinese Room had died a merciful death. Insofar as I see
that it is still a topic of conversation, I would like to make 
the following points:

1) In "Minds, Brains and Science", Searle makes the following comment in
the opening paragraph of his second chapter, entitled 'Can Computers Think?':

    "The prevailing view in philosophy, psychology and artificial 
    intelligence is one which emphasizes the analogies between the
    functioning of the brain and the functioning of digital
    computers. According to the most extreme version of this view,
    the brain is just a digital computer and the mind is just a
    computer program. One could summarize this view - I call it 
    'strong artificial intelligence', or 'strong AI' - by saying that
    the mind is to the brain, as the program is to the computer 
    hardware."

Do any of you actually agree with this summation? In particular, I want
to point out that Searle equates mind with program. In all my time reading
this group, I don't recall a single instance of such a statement. We may
believe a lot of things, but do any AI practitioners/aficionados out there
actually think that mind/brain = program/hardware?

2) Even given this notion of the relationship between mind and brain,
which I regard as more or less of a straw man, Searle manages to bungle
his argument. That is, later in the chapter, when he presents his now
famous CR argument, Searle puts the human in the place of the hardware,
and then points out that it is difficult to claim that such a person
could understand Chinese. Remember, Searle claims to be trying to debunk
the mind/brain = program/hardware notion. He posits a human who interprets
a set of instructions about which he knows nothing. That is, he equates
a human with the machine and the set of instructions with the program.
Thus, by his own (albeit poor) analogy, one should not expect the human
to understand, any more than one would expect the brain (vis a vis the Mind)
to understand. Searle can't even shoot down his own weakly constructed
creation.

3) All Searle really shows is something that we all know already. No
matter how grandiose our program, the bare metal of the digital computer
as we currently know it will never itself become aware. Big deal. The
Chinese Room might start a lot of great, go-nowhere discussions, but it
proves very, very little.


R.Kohout

weyand@csli.Stanford.EDU (Chris Weyand) (02/14/90)

kohout@wam.umd.edu (Robert C. Kohout) writes:

[Good summation of Searle's ideas deleted]


>3) All Searle really shows is something that we all know already. No
>matter how grandiose our program, the bare metal of the digital computer
>as we currently know it will never itself become aware. Big deal. The
>Chinese Room might start a lot of great, go-nowhere discussions, but it
>proves very, very little.

Wait a minute: are you saying that Searle is showing that the computer as
a hardware architecture cannot be aware, or that no computer + program can
be aware?
Yes I would agree that we all know the former is true.  Just like a dead brain
can't be aware (as far as I know :->).  But if you mean that Searle has
shown the latter then I think he would have proven a lot!

Chris Weyand
weyand@csli.Stanford.Edu

kp@uts.amdahl.com (Ken Presting) (02/14/90)

In article <1990Feb13.225830.13432@wam.umd.edu> kohout@wam.umd.edu (Robert C. Kohout) writes:
>I must say that I am somewhat surprised to see this Searle discussion
>continuing. I haven't looked at this group in several months, and I 
>thought the Chinese Room had died a merciful death.

When Scientific American published Searle v Churchlands, we got going
again.

>        ...        One could summarize this view - I {Searle} call it
>    'strong artificial intelligence', or 'strong AI' - by saying that
>    the mind is to the brain, as the program is to the computer 
>    hardware."
>
>Do any of you actually agree with this summation? In particular, I want
>to point out that Searle equates mind with program. In all my time reading
>this group, I don't recall a single instance of such a statement. We may
>believe a lot of things, but do any AI practitioners/afficiandos out there
>actually think that mind/brain = program/hardware?

Sure, I think it's close enough.  Searle does seem to me to underestimate
the significance of *running* the program (:-).  If you count I/O devices
as hardware, then hardware and programs *better* be the terms of the
equation.

If I were to modify Searle's ratio, I would say (apologies to Wirth):

          mind/brain = (data structures)/(hardware+algorithm)

Searle talks almost exclusively about programs, seldom mentioning data.
It does bug me that Searle gives the CR no scratch paper.  It's just a
finite state machine.  But most of his argument doesn't depend on that,
or on the details of the analogy.


>2) Even given this notion of the relationship between mind and brain,
>which I regard as more or less of a straw man, Searle manages to bungle
>his argument. That is, later in the chapter, when he presents his now
>famous CR argument, Searle puts the human in the place of the hardware,
>and then points out that it is difficult to claim that such a person
>could understand Chinese. Remember, Searle claims to be trying to debunk
>the mind/brain = program/hardware notion. He posits a human who interprets
>a set of instructions about which he knows nothing. That is, he equates
>a human with the machine and the set of instructions with the program.
>Thus, by his own (albeit poor) analogy, one should not expect the human
>to understand, any more than one would expect the brain (vis a vis the Mind)
>to understand. Searle can't even shoot down his own weakly constructed
>creation.

If I follow you, you are saying that it's irrelevant that the person does
not understand Chinese.  And anyway, we didn't need an argument to tell
us that the person wouldn't understand - that's exactly what we would
expect based on the role the person has in the analogy.  After all, the
person is being compared to the Brain, and it's the Mind that understands.

The problem with this maneuver is that Searle is using his observation
(that the person doesn't understand) to make a very different point.
If you haven't seen the Scientific American article, you (may) want to
get a copy.  I think Searle's presentation there is very systematic and
well organized.

Searle uses the CR to support what he calls Axiom 3: "Syntax by itself
is neither constitutive of nor sufficient for semantics."  He goes on to
say "At one level this principle is true by definition."  So why does he
need to support it with the Chinese Room?  I'll get to that in a minute.

Suppose you program a Turing machine to accept some formal language, say
all the wff's in a first-order logic that use some finite set of
predicate letters and constants.  This TM program defines the syntax of
the language, and is entirely equivalent to some set of re-writing rules.
Now, neither the TM nor the equivalent grammar has anything at all to do
with the assignments of predicate letters to subsets in a model, nor the
assignments of constants to elements in a model.  That's Axiom 3.

The Chinese Room is like a Turing machine that doesn't just accept the
language.  After it accepts a wff, it prints out a new wff.  Searle's
point is that this TM is *also* completely independent of any semantics
for the language in question.  Of course, this sort of TM is what most
people have in mind for passing the Turing test, so Searle can conclude
that passing the Turing test does not require any knowledge of semantics,
that is, understanding.

Presumably Searle isn't up on automata theory, and had no better way to
state his idea than with a cute story.
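
For concreteness, here is a minimal sketch (mine, in Python, with a toy
propositional grammar standing in for the first-order wff's) of such an
accept-and-reply machine.  Note that nothing in it mentions a model, truth,
or reference -- which is Axiom 3 in miniature:

    def is_wff(s):
        # Accept wffs of a toy grammar: p, q, r, ~W, (W&W), (W|W).
        if s in ("p", "q", "r"):
            return True
        if s.startswith("~"):
            return is_wff(s[1:])
        if s.startswith("(") and s.endswith(")"):
            body, depth = s[1:-1], 0
            for i, c in enumerate(body):
                if c == "(":
                    depth += 1
                elif c == ")":
                    depth -= 1
                elif c in "&|" and depth == 0:
                    return is_wff(body[:i]) and is_wff(body[i + 1:])
        return False

    def respond(s):
        # After accepting a wff, print out a new wff -- a purely
        # syntactic transformation, independent of any interpretation.
        return "~" + s if is_wff(s) else None

    print(respond("(p&~q)"))    # -> ~(p&~q)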

>3) All Searle really shows is something that we all know already. No
>matter how grandiose our program, the bare metal of the digital computer
>as we currently know it will never itself become aware. Big deal. The
>Chinese Room might start a lot of great, go-nowhere discussions, but it
>proves very, very little.

What it shows is that we'd better get clear on how computers are different
from TM's, or spruce up the Turing test, or both!

It certainly does *not* show that we have no choice but connectionism.

hwajin@wrs.wrs.com (Hwa Jin Bae) (02/23/90)

In article <1990Feb16.220511.27647@Neon.Stanford.EDU> arm@Neon.Stanford.EDU (Alexander d Macalalad) writes:
>I don't know too much about the brain, but I think there is evidence
>for special areas in the brain dedicated to speech, and any damage in
>these speech centers could severely impair a person's ability to talk
>and/or understand.  Further, a person has to be exposed to language by
>a certain age, or else it will be next to impossible to learn any
>language.  Please correct me if I have these facts wrong.

At least one counter-example to this would be Helen Keller.
Studies show that if the left hemisphere of a brain is damaged
after the appearance of language but before the age of 8 or 9, the
child nearly always recovers language in a period ranging from a few
months to three years [note: this is not a well-understood phenomenon].

You'll recall that the region of one's brain named after Paul Broca
(located in the frontal lobe above the lateral fissure) seems to be
closely related to speech.  If this region is destroyed speech becomes
slow and hesitant and the sounds of languages are badly produced.
This area, along with the area named after Carl Wernicke, seems to
be the most important as far as one's linguistic abilities
are concerned.  But, as always, there are other areas involved
as well -- nothing is so simple and finite.  For example, there 
are bundles of nerve fibres (called tracts) which connect the language
areas to each other and to other parts of the cerebral cortex on
the same or the opposite side of the brain.

The idea that one can definitely identify corresponding
physical parts for anything that is non-physical also seems
to be based on the same old Cartesian world view:
everything has to be clearly categorized, labeled and analysed
down to the smallest components.  There is always a clear distinction
between things material and things spiritual.

Unfortunately this type of thinking hinders further progress
in many disciplines of study.  As Newtonian physics was blinded
by the basis established by Descartes, current sciences are
terminally handicapped by the followers of the Cartesian philosophy.

As I've mentioned several weeks back and some others more
recently, Searle's entire argument is based on the premise that
there are in fact two distinct components -- syntax and semantics
which are naturally separate and cannot be reconciled.  This
assumption is taken without any investigation.  I dare you to
find any attempt by Searle, in many of his redundant articles
on this tiresome Chinese Room nonsense, to re-evaluate this
basic condition of his argument -- is it truly appropriate
for us to insist on this division between syntax and semantics
in our search for the understanding of the mind?  Everything
is connected at a certain level.  One cannot safely 
assume any division of any abstractions.  Quantum physicists
certainly feel better at ease with this lack of
fine lines -- as in, "it seems to be both wave and particle at
the same time."

The mind seems to be both syntax and semantics at the same time.
-- 
Hwa Jin Bae (hwajin@wrs.com)
Wind River Systems

kp@uts.amdahl.com (Ken Presting) (02/24/90)

In article <857@wrs.wrs.com> hwajin@wrs.wrs.com () writes:
>The idea that one can definitely identify corresponding
>physical parts for anything that is non-physical also seems
>to be based on the same old Cartesian world view:
>everything has to be clearly categorized, labeled and analysed
>down to the smallest components.  There is always a clear distinction
>between things material and things spiritual.

Let's lay off Descartes, OK?  It was Aristotle who invented Categories,
and Kant who is responsible for the (small) role they still have in
current philosophy.  Descartes was *opposed* to this view.

Descartes' method involved pure deductions (read "calculations") from
assertions such as "I'm Thinking", and had very little to do with objects
of any sort (though he did draw the famous distinction, which I will not
repeat, because I don't like it either).

Note that Searle accuses the Strong AI'ers of Cartesianism.  He is *right*
that those who suppose pure calculation can constitute a mind have a
position resembling Descartes.  He is *wrong* to suppose that anyone
holds such a position.

For the purposes of the debate over the potential of AI,
Descartes' most important contribution is the invention of analytic
geometry.  "That's completely irrelevant" you say?  Quite right.

>As I've mentioned several weeks back and some others more
>recently, Searle's entire argument is based on the premise that
>there are in fact two distinct components -- syntax and semantics
>which are naturally separate and cannot be reconciled.  This
>assumption is taken without any investigation.  I dare you to
>find any attempt by Searle, in many of his redundant articles
>on this tiresome Chinese Room nonsense, to re-evaluate this
>basic condition of his argument --

Scientific American, Jan. 1990, page 31, leftmost column.

>                                   is it truly appropriate
>for us to insist on this division between syntax and semantics
>in our search for the understanding of the mind?  Everything
>is connected at a certain level.  One cannot safely 
>assume any division of any abstractions.  Quantum physicists
>certainly feel better at ease with this lack of
>fine lines -- as in, "it seems to be both wave and particle at
>the same time."
>
>The mind seems to be both syntax and semantics at the same time.

Searle is not saying that syntactic information is ever encoded separately
from semantic information in any natural or artificial language
understanding system.  He is *only* saying:

   IF you write a program that looks at input strings and nothing else,

   THEN that program will not contain enough information to figure out
   what the strings mean.

Your objections actually *strengthen* Searle's position.  He believes,
as you do, that semantic and syntactic information are completely mixed
in human brains.  What he does not believe is that computers ever do
anything other than look at strings.  He concludes from this that
computer programs can never contain any semantic information.

Searle's problem is that he does not understand what compilers do.  To
put it better, he does not understand why they are useful.  Compilers
transform strings of symbols into other strings.  Compilers contain both
syntactic and semantic information about programming languages.  Now
suppose that you know lots about microprocessors, but nothing about
software.  Would you be able to figure out the semantics of a programming
language just by looking at the compiler?

Well, if you knew the instruction set for the target processor, it would
be tedious, but you could figure out that function references mean "push
the stack and jump" and eventually you'd completely understand the source
language.  If you didn't know the instruction set, but you could watch the
object programs run on a real computer, then it would be even more tedious
to figure out the semantics of the programming language, but you could
still do it eventually.

But if all you had was the compiler code itself (let's suppose that it's
a cross-compiler), and samples of source and object files, you'd be hard
pressed to figure out some of the details like number representation
formats and interrupt bit masks.  In general, all the implementation-
dependent headaches that are so carefully spelled out in compiler manuals
are the places where it's not enough to know which symbols get changed
how, you have to know how the program interacts with the real live hardware.
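
A toy illustration (mine; the source tokens, target machine, and instruction
set are all made up) of why knowing the target matters:

    # A one-token-per-instruction "compiler" for an invented source language
    # and an invented target machine.  Given sample translations *plus*
    # knowledge of what the target instructions do, you can recover source
    # semantics -- e.g. that a function reference means "push the stack and
    # jump".  Given only the table itself, "PUSH pc; JMP" is just a string.

    COMPILE = {
        "add":  "ADD  r0, r1",
        "call": "PUSH pc; JMP",
        "ret":  "POP  pc",
    }

    def compile_source(tokens):
        return [COMPILE[t] for t in tokens]

    print(compile_source(["call", "add", "ret"]))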

So Searle is wrong that programs cannot contain semantic information, but
he is right that if the occupant of the Chinese Room can only look at the
program and not at what's going on outside, he may never figure out the
language he's "speaking".

arm@Neon.Stanford.EDU (Alexander d Macalalad) (02/24/90)

In article <857@wrs.wrs.com> hwajin@wrs.wrs.com () writes:
>At least one counter-example to this would be Helen Keller.
>Studies show that if the left hemisphere of a brain is damaged
>after the appearance of language but before the age of 8 or 9, the
>child nearly always recovers language in a period ranging from a few
>months to three years [note: this is not a well-understood phenomenon].

I agree that the brain can compensate to a certain extent for damage
to certain areas by shifting the affected functions to other areas.
Perhaps my brain/hardware analogy was too strong.  My point was that
there is an area in the brain, though not necessarily in the same
location for everyone, which is crucial for language understanding
as we perceive it.  In other words, when Searle says that the person
in the Chinese room cannot understand Chinese even after memorizing
all of the rules, he means that the person is not using his language
centers to interpret Chinese.  This does not rule out the possibility
that understanding is taking place elsewhere in his brain, in a fashion
that he does not recognize as understanding taking place.

>The idea that one can definitely identify corresponding
>physical parts for anything that is non-physical also seems
>to be based on the same old Cartesian world view:
>everything has to be clearly categorized, labeled and analysed
>down to the smallest components.  There is always a clear distinction
>between things material and things spiritual.

This point confuses me.  If I understand correctly, your general
point is that it is wrong to map language understanding to a 
specific part of the brain.  I agree: as I understand it, current
theories move away from this compartmentalized view of the brain.
Let me make my point more carefully.  The way in which we normally
understand language, which has in some sense been hardwired during
childhood, is different from how we might understand language given
the rules of the Chinese room, so different that WE WOULD NOT
RECOGNIZE THE UNDERSTANDING AS UNDERSTANDING.

I'm not sure what this has to do with Descartes, though.  I've
always thought that his major contribution to philosophy was the
separation of mind and matter, which is certainly not what I'm
proposing.  Analytic thought in general is much older than Descartes.
Look at Aristotle, for example.  Perhaps you were thinking of Kant,
whose distinction between the phenomena and the noumena can be seen
in the distinction of the observer and his or her observations.  In
any case, I don't see what any of this has to do with spirituality.

>Unfortunately this type of thinking hinders further progress
>in many disciplines of studies.  As Newtonian physics was blinded
>by the basis established by Descartes, current sciences are
>terminally handicapped by the followers of the Cartesian philosophy.

Again, some strong comments against Descartes.  I always thought that
Newtonian physics was based on Euclidean geometry and the idea that
space and time had an absolute frame of reference, both of which
predate Descartes.  And I certainly don't see how current sciences
are "handicapped."

>-- 
>Hwa Jin Bae (hwajin@wrs.com)
>Wind River Systems

Alex Macalalad

hwajin@wrs.wrs.com (Hwa Jin Bae) (02/25/90)

In article <0czk02el8b1301@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>In article <857@wrs.wrs.com> hwajin@wrs.wrs.com () writes:
>>The idea that one can definitely identify a corresponding
>>physical parts of anything that is non-physical seems to
>>be also based on the same old Cartesian model of world view;
>>everything has to be clearly categorized, labeled and analysed
>>to the littlest components.  There is always a clear distinction
>>between things material and things spiritual.
>
>Let's lay off Descartes, OK?  It was Aristotle who invented Categories,
>and Kant who is responsible for the (small) role they still have in
>current philosophy.  Descartes was *opposed* to this view.

You seem to be missing the point of my reference to the categorization.
It seems a certain hangup with the word "categorization" has triggered
some kind of automatic association, and with it this type of response.
The context in which this word is used clearly indicates that the usage
was intended to emphasize the analytic methodology typically employed
by Descartes and his followers.  His methodology consisted in breaking
up thoughts and problems into pieces and in arranging these in their
logical order.  This methodology is largely considered to be one of his
greatest contributions to modern science.  Why don't you try reading
the original sentence quoted here again.  I never said he invented
categorization (as if it were possible to invent such a thing, as you
are so quick to credit Aristotle with doing).  This type of "lay off"
reaction, based on a rather misguided interpretation, is not the style
of discussion in which you wish to engage, I hope.

Nevertheless, Descartes certainly was *not* *opposed* to this view
of categorizing things.  He placed particular emphasis on the
fundamental division between two independent and separate realms:
that of mind, res cogitans, and that of matter, res extensa.  Where do
you get this idea that he was *opposed* to it?  Within each of these
domains he did maintain a certain consistency in his efforts to unify
things, but the basis of his philosophy was dualistic.  This is rather
basic history, don't you agree?

>Descartes' method involved pure deductions (read "calculations") from
>assertions such as "I'm Thinking", and had very little to do with objects
>of any sort (though he did draw the famous distinction, which I will not
>repeat, because I don't like it either).

Thus the famous saying, "Cogito, ergo sum."  His methodology was
certainly based on mathematical reasoning, but one must not hasten
to claim that it had very little to do with objects of any sort.
That's quite a claim.  To Descartes the material universe was a
machine and nothing but a machine.  Nevertheless he did extend his
mechanistic view of matter to living organisms and explained at great
length how the motions and various biological functions of the body
could be reduced to mechanical operations, in order to show that
living organisms were nothing but automata.  His thinking was very
much involved with the "objects" (let's not get into what "object"
means, since that will start another long discussion) in this way.
His view became the conceptual framework for seventeenth-century
science and influenced the Newtonian universe.

>Note that Searle accuses the Strong AI'ers of Cartesianism.  He is *right*
>that those who suppose pure calculation can constitute a mind have a
>position resembling Descartes.  He is *wrong* to suppose that anyone
>holds such a position.

Searle accuses the Strong AI'ers (I hate this term) of accusing others
of Cartesianism while not willing to admit his own Cartesianism.  :-)
True?  I'm pretty sure you'd agree.

>For the purposes of the debate over the potential of AI,
>Descartes' most important contribution is the invention of analytic
>geometry.  "That's completely irrelevant" you say?  Quite right.
>

Huh?  The signal-to-noise ratio is extremely low around here.

>>As I've mentioned several weeks back and some others more
>>recently, Searle's entire argument is based on the premise that
>>there are in fact two distinct components -- syntax and semantics
>>which are naturally separate and cannot be reconciled.  This
>>assumption is taken without any investigation.  I dare you to
>>find any attempt by Searle, in many of his redundant articles
>>on this tiresome Chinese Room nonsense, to re-evaluate this
>>basic condition of his argument --
>
>Scientific American, Jan. 1990, page 31, leftmost column.
>

Fortunately, one of the guys around here at work put that
dreadful copy of SA in our rest room, so I took a look at it
again to refresh my memory.  I assume you're referring to his
remark that "The argument rests on the distinction between the 
formal symbol manipulation that is done by the computer and the
mental contents biologically produced by the brain, a distinction
I have abbreviated -- I hope not misleadingly -- as the distinction 
between syntax and semantics."  Evidently he realizes that he
made an important mistake in his original statement of the
Chinese Room argument, and here he is trying to repair the
damage by backing out and claiming that he has merely "abbreviated"
the rather manipulative distinction.  How convenient!  A pathetic
attempt at saving face.  I remember now why I felt
sick as I read his article for the first time.

>                                    is it truly appropriate
>>for us to insist on this division between syntax and semantics
>>in our search for the understanding of the mind?  Everything
>>is connected at a certain level.  One cannot safely 
>>assume any division of any abstractions.  Quantum physicists
>>certainly feel better at ease with this lack of
>>fine lines -- as in, "it seems to be both wave and particle at
>>the same time."
>>
>>The mind seems to be both syntax and semantics at the same time.
>
>Searle is not saying that syntactic information is ever encoded separately
>from semantic information in any natural or artificial language
>understanding system.  He is *only* saying:
>
>   IF you write a program that looks at input strings and nothing else,
>
>   THEN that program will not contain enough information to figure out
>   what the strings mean.
>
>Your objections actually *strengthen* Searle's position.  He believes,
>as you do, that semantic and syntactic information are completely mixed
>in human brains.  What he does not believe is that computers ever do
>anything other than look at strings.  He concludes from this that
>computer programs can never contain any semantic information.
>
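
(For concreteness, the kind of program the IF clause describes can be
sketched in a few lines of C.  The rule table below is entirely
hypothetical -- just big enough to show the shape of the thing.  The
program maps input strings to output strings by lookup and never
consults what any string denotes:

#include <stdio.h>
#include <string.h>

/* A pure "strings in, strings out" responder.  Each rule pairs an
 * input string with an output string.  The entries are made up; the
 * program behaves identically whatever the strings mean, which is
 * exactly the situation the IF clause describes.                   */
struct rule { const char *in, *out; };

static const struct rule rules[] = {
    { "ni hao",    "ni hao" },
    { "ni lei ma", "bu lei" },
};

int main(void)
{
    char buf[256];
    while (fgets(buf, sizeof buf, stdin)) {
        buf[strcspn(buf, "\n")] = '\0';        /* strip the newline  */
        const char *reply = "qing zai shuo";   /* default response   */
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (strcmp(buf, rules[i].in) == 0)
                reply = rules[i].out;
        puts(reply);   /* plausible output, zero semantics consulted */
    }
    return 0;
}

Whether such a table could ever be made big enough is the whole
debate; the sketch only shows what "looking at input strings and
nothing else" means.)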

What do you think my objections are?  My objection is that Searle
has never attempted an honest re-evaluation of his pseudo-analyses
of the current status of AI and the potential of AI.  His intention
is to abolish "useless" research projects "that waste valuable grant
money".  That is a totally shortsighted point of view.  Aside from
that, some of his trivial points, hardly worth arguing about but
still too erroneous to overlook, tend to catch one's attention
and demand some kind of comment. 

He says one thing to avoid having to explain his errors, but still
continues to use the old set of axioms to illustrate his CR.  Syntax
does not equal semantics, therefore all the computer-based research
of mind is useless.  That's what he wants you to think, no matter
how he phrases it.

> [useless harangue about compiler example pointed out by others
>  repeated here but deleted for brevity.]
>
>So Searle is wrong that programs cannot contain semantic information, but
>he is right that if the occupant of the Chinese Room can only look at the
>program and not at what's going on outside, he may never figure out the
>language he's "speaking".

So what?  What's the point of this argument?  

In any case this type of thinking is also part of what Cartesian thinking
is all about.  The notion that all can be explained by mathematical
analysis and logical deduction of observed phenomena.  The mathematical
certainty accepted thusly is the basis of all knowledge from this point
of view.  As we know now this is a profoundly incorrect concept as evidenced
by recent advances in physics and Godel's Incompleteness theorem.

As a side note, the Chinese character illustrated in the SA article
means horse, but the writing is distorted by the artist in a rather
amusing sort of way.  The people who ran the article evidently did not
understand Chinese either.

-- 
Hwa Jin Bae (hwajin@wrs.com)
Wind River Systems

hwajin@wrs.wrs.com (Hwa Jin Bae) (02/25/90)

In article <1990Feb24.023246.29073@Neon.Stanford.EDU> arm@Neon.Stanford.EDU (Alexander d Macalalad) writes:
>I'm not sure what this has to do with Descartes, though.  I've
>always thought that his major contribution to philosophy was the
>separation of mind and matter, which is certainly not what I'm
>proposing.  Analytic thought in general is much older than Descartes.
>Look at Aristotle, for example.  Perhaps you were thinking of Kant,
>whose distinction between the phenomena and the noumena can be seen
>in the distinction of the observer and his or her observations.  In
>any case, I don't see what any of this has to do with spirituality.

Huh?  Man, I'm tired of repeating the same responses to the same
questions.  You seem to be quoting Ken Presting's remarks on my
previous posting.  Read my response to his.  Ken's reference to Kant's
a priori "categories" -- as in the category of substance and
the category of causality -- was based on his wrong interpretation of
my comment.  [I'm still curious how he ended up jumping onto the
word "category" and missed the whole point of my reference to Descartes'
methodology.]  You're talking about something totally unrelated.

-- 
Hwa Jin Bae (hwajin@wrs.com)
Wind River Systems

rshelby@ms.uky.edu (Richard Shelby) (02/26/90)

hwajin@wrs.wrs.com (Hwa Jin Bae) =  > >>  and  >
kp@amdahl.uts.amdahl.com (Ken Presting) =  > >

> >>The idea that one can definitely identify a corresponding
> >>physical parts of anything that is non-physical seems to
> >>be also based on the same old Cartesian model of world view;

This is definitely not Descartes' view; Descartes continually insisted
that the non-physical is simple and indivisible, hence cannot be broken
into parts (whether corresponding to physical parts or not).  Aristotle,
on the other hand, did argue that a proper method of inquiry for non-physical
`objects' is the analysis of the object into constituents, even if the
constituents do not exist in reality.  You were perhaps misled because
Descartes does undertake a functional analysis of mind, but this is
function only and has nothing to do with constituent parts.

[bulk of discussion deleted] . . .
> You seem to be missing the point of my reference to the categorization.
  . . . 

I think *I* must be, because I'm not following your reasoning at this point.

> ...  Why don't you try reading the original sentence quoted here again. ...

I did, and I still don't follow you.  Could you make your argument a little
more explicit (perhaps by not omitting steps) and perhaps use slightly
different words?

> Nevertheless, Descartes certainly was *not* *opposed* to this view
> of categorizing things.  . . .

What exactly is *this* view of categorizing things?  If you mean the
Aristotelian view, then, yes, Descartes was opposed.

>        . . .  To Descartes the material universe was a machine and
> nothing but a machine.  Nevertheless he did extend his mechanistic
> view of matter to living organisms and explained at great length
> how the motions and various biological functions of the body could
> be reduced to mechanical operations, in order to show that living organisms
> were nothing but automata.

Technically, Descartes thought only non-human living organisms were automata;
humans, he argued, are something more (i.e. they have minds).

> . . .
> >Note that Searle accuses the Strong AI'ers of Cartesianism.  He is *right*
> >that those who suppose pure calculation can constitute a mind have a
> >position resembling Descartes.  He is *wrong* to suppose that anyone
> >holds such a position.  . . .

Descartes' (functional) analysis of mind includes much more than calculation,
although it *is* true that many of his followers have ignored the other
mental functions, such as perception and intuition.  However, it is still
wrong to say that Descartes held the position that "pure calculation can
constitute a mind".

[much discussion deleted]
> In any case this type of thinking is also part of what Cartesian thinking
> is all about.  The notion that all can be explained by mathematical
> analysis and logical deduction of observed phenomena.

This is perhaps true, but *if* it is, it is solely because of Descartes'
theory of human perception, or perception as a mental act.  If by "logical
deduction of observed phenomena" you mean what the positivists meant, then
your statement is totally false.

>                                                        The mathematical
> certainty accepted thusly is the basis of all knowledge from this point
> of view.  As we know now this is a profoundly incorrect concept as evidenced
> by recent advances in physics and Godel's Incompleteness theorem.
 
Two comments:  1) Strictly speaking, Descartes posits mathematical certainty
as derivative, not fundamental.  The fundamental certainty is something 
non-mathematical, although it may perhaps be most easily and clearly seen
in mathematical examples.  2) Cartesian certainty is not about the body
of theorems deduced from a set of axioms, but rather about the processes of
deduction, perception, intuition, etc.  Godel's incompleteness theorem is
about formal systems and does not cast any doubt on mathematical reasoning.

In discussions such as these, it is better to refer to Descartes'
followers than to Descartes.  As is the case with all great thinkers,
the followers often misunderstand, misread, misrepresent or mis-whatever
the original thinker's views, and perhaps all that both of you have said
is true of some follower of Descartes.  The main problem with what has
been said of Descartes is that it is an extreme oversimplification.
Descartes' philosophy may be wrong, but it is not simplistic.

-- 
Richard L. Shelby                    rshelby@ms.uky.edu
Department of Health Services        rshelby@ukma.BITNET
University of Kentucky               {rutgers,uunet}!ukma!rshelby

hwajin@wrs.wrs.com (Hwa Jin Bae) (02/27/90)

In article <14319@s.ms.uky.edu> rshelby@ms.uky.edu (Richard Shelby) writes:
>This is definitely not Descartes' view; Descartes continually insisted
>that the non-physical is simple and indivisible, hence cannot be broken
>into parts (whether corresponding to physical parts or not).  

Where did this point come up?  In any case, as irrelevant as it is to
our discussion, Descartes did argue that mind is non-physical and 
thus served as a token of his existence -- he actually doubted
the existence of all physical objects, but was unable to doubt the
existence of himself as a thinking being.  However, he also realized
that the dualism faces considerable philosophical problems -- especially
that of "causal interaction".  He recognized that in many cases mind
and body intermingle to form a kind of unit.  As much as he wanted
to insist on the indivisibility of the mind, he had to devote great efforts to
the problem of interaction between mind and body -- somewhat incompatible
substances working together in mystical ways.  Is mind then truly
indivisible?  He seems to have been quite confused.

>Aristotle,
>on the other hand, did argue that a proper method of inquiry for non-physical
>`objects' is the analysis of the object into constituents, even if the
>constituents do not exist in reality.  You were perhaps misled because
>Descartes does undertake a functional analysis of mind, but this is
>function only and has nothing to do with constituent parts.
>

I still don't understand this fetish with Aristotle.  The emphasis
of my original posting was on the dualistic methodology of Cartesian
thinking systems.  I never even mentioned Aristotle.  The problem seems
to be the automatic and casual association between the words
"categorization" and "object" and Aristotle.  Yes, yes, we've all
learned the Aristotelian system of logic, where one can categorize and
characterize objects by classification, via "specific difference" after
the initial grouping.  Nevertheless, there's no need to drag this into
our discussion.  This thread is becoming too philosophy-oriented as it
is.  Besides, it's not related to what I was saying, as I've explained
over and over.  My emphasis was not at all on categorization, but on
the dualistic view of the mind-body.

>Technically, Descartes thought only non-human living organisms were automata;
>humans, he argued, are something more (i.e. they have minds).
>

Notice your misunderstanding: human living organisms are not automata
because they have minds.  Human living organisms also have physiology
which was considered to be automata by Descartes and that was precisely
what I was talking about: the biological foundation of the Cartesian
school of thought.  Of course he argued that humans have something 
more (i.e. mind) but the point is that this was not the issue brought
up in my posting.  It's really astounding to see this kind of sidetracking.
Read what's written, as is.  Don't free associate random concepts with
particular key words.  

Follow-ups to talk.philosophy.misc please.
-- 
Hwa Jin Bae (hwajin@wrs.com)
Wind River Systems

rshelby@ms.uky.edu (Richard Shelby) (02/28/90)

Hwa Jin Bae has written on Descartes; another poster and I have tried to correct
some of his misunderstandings.  This is my final contribution to the 
exchange.  M. Bae states that the discussion is becoming too philosophically
oriented for this group; if so, I apologize for wasting the readers' time.
The general problem is that M. Bae lacks an understanding of Descartes'
work and refuses to believe that others may be correct or have a better
grasp of Descartes.

In article <861@wrs.wrs.com>, hwajin@wrs.wrs.com (Hwa Jin Bae) writes:
> In article <14319@s.ms.uky.edu> rshelby@ms.uky.edu (Richard Shelby) writes:
> >This is definitely not Descartes' view; Descartes continually insisted
> >that the non-physical is simple and indivisible, hence cannot be broken
> >into parts (whether corresponding to physical parts or not).  
> Where did this point come up?  In any case, as irrelevant as it is to

The point came up because you said that Descartes searched for parts
of the mind analogous to parts of physical objects.

> our discussion, Descartes did argue that mind is non-physical and 
> thus served as a token of his existence -- he actually doubted

Mind did not serve as a token of his existence; it *is* his existence.

> the existence of all physical objects, but was unable to doubt the
> existence of himself as a thinking being.  However, he also realized

Descartes never "actually doubted" the existence of physical objects;
your phrasing shows a very shallow reading of the Meditations.

> that the dualism faces considerable philosophical problems -- especially
> that of "causal interaction".  He recognized that in many cases mind
> and body intermingle to form a kind of unit.  As much as he wanted
> to insist on the indivisibility of the mind, he had to devote great efforts to
> the problem of interaction between mind and body -- somewhat incompatible
> substances working together in mystical ways.  Is mind then truly
> indivisible?  He seems to have been quite confused.

No, you are confused.  Indivisibility does not imply non-interaction; it
simply implies a lack of parts.

> >Aristotle, on the other hand, did argue that a proper method of inquiry
> >for non-physical `objects' is the analysis of the object into constituents,
> >even if the constituents do not exist in reality.   . . .
> I still don't understand this fetish with Aristotle.  The emphasis

Aristotle is brought up because you keep imputing Aristotle's ideas
to Descartes.

> of my original posting was on the dualistic methodology of Cartesian
> thinking systems.  I never even mentioned Aristotle.  . . .

> >Technically, Descartes thought only non-human living organisms were automata;
> >humans, he argued, are something more (i.e. they have minds).
> Notice your misunderstanding: human living organisms are not automata
> because they have minds.  Human living organisms also have physiology

I fear it is not I who misunderstand Descartes, and on that I'll stake
several years of graduate study in philosophy of mind and several
readings of the Meditations, the Discourse on Method, and Descartes'
commentators.

> which was considered to be automata by Descartes and that was precisely
> what I was talking about: the biological foundation of the Cartesian
> school of thought.  Of course he argued that humans have something 

Descartes is quite explicit that thoughts are *not* biological in origin.

> more (i.e. mind) but the point is that this was not the issue brought
> up in my posting.  It's really astounding to see this kind of sidetracking.
> Read what's written, as is.  Don't free associate random concepts with

You should take your admonition to heart in reference to Descartes' works.

> particular key words.  
> Follow-ups to talk.philosophy.misc please.

Nothing further from me in this series; I've grown tired of trying to
correct your misunderstandings.  

-- 
Richard L. Shelby                    rshelby@ms.uky.edu
Department of Health Services        rshelby@ukma.BITNET
University of Kentucky               {rutgers,uunet}!ukma!rshelby

hwajin@wrs.wrs.com (Hwa Jin Bae) (03/02/90)

In article <14352@s.ms.uky.edu> you write:
>Hwa Jin Bae has written on Descartes; another and I have tried to correct
>some of his misunderstandings.  This is my final contribution to the 
>exchange.  M. Bae states that the discussion is becoming too philosophically
>oriented for this group; if so, I apologize for wasting the readers' time.
>The general problem is that M. Bae lacks an understanding of Descartes'
>work and refuses to believe that others may be correct or have a better
>grasp of Descartes.
>

i lack understanding of descartes' work but i do not refuse to believe
that others may be correct or have a better grasp of descartes.  i'm
perfectly willing to give you a chance to prove to me that you indeed
possess better understanding of his work.  since you seem to think that
you do, perhaps it is so.  i'm not yet convinced of that.  if you want
to condemn someone for ignorance you're entitled to your opinion, but
please do not accuse me of refusing to believe that anyone else might
have a better understanding of descartes than i.  feel free to quote
any remark that i made which suggested that i did.  i certainly don't
recall having said anything that suggested my supposed arrogance.  i
don't believe that i have full understanding of descartes.  i was
explaining what i understood of him -- N.B. *my* understanding, as
yours is *yours*, and who is to say which is truly correct? -- not
condemning others of ignorance or refusing to listen to me, as you've
done here.  it saddens me to think that anyone who has read the
meditations can bring himself down to doing such things in public out
of sheer misunderstanding.

nevertheless, if i offended in any way that might have caused this kind
of response, i apologize for that.  i will not, however, apologize for
my "ignorance" and "distorted" understanding.  my opinions and 
my interpretations of others' works are my own, and you cannot force-
feed me your views without rigorous and convincing argument.
we have not gone through that.  you've stated some facts which seemed
very fragmented and inconsistent to me, and i stated a few bits of 
my understanding that were differently conceived/interpreted.

>In article <861@wrs.wrs.com>, hwajin@wrs.wrs.com (Hwa Jin Bae) writes:
>> In article <14319@s.ms.uky.edu> rshelby@ms.uky.edu (Richard Shelby) writes:
>> >This is definitely not Descartes' view; Descartes continually insisted
>> >that the non-physical is simple and indivisible, hence cannot be broken
>> >into parts (whether corresponding to physical parts or not).  
>> Where did this point come up?  In any case, as irrelevant as it is to
>
>The point came up because you said that Descartes searched for parts
> of the mind analogous to parts of physical objects.
>

it's a pity that i do not possess a photographic memory.  i agree that,
overall, descartes did say that mind is indivisible, but i recall a few
places in the meditations that caught my attention, which described his
ideas of setting up relationships between mind and body -- only brief
references.  that was what i was referring to, and it was not even
relevant to my discussion.  i don't know why i said anything about it
at all, now that i think of it.  it was not needed in the discussion,
and it seems to have been blown out of proportion.  i think the
blurriness between descartes' insistence on the indivisibility of mind
and his later work on mind-body relationships is, in general, quite
confusing for lay readers such as myself.  

clarification: it was 1:00 am when i wrote that sentence about the
corresponding mind-body problems.  unlike some, i do not have a
super-human power of consistency.

>> our discussion, Descartes did argue that mind is non-physical and 
>> thus served as a token of his existence -- he actually doubted
>
>Mind did not serve as a token of his existence; it *is* his existence.
>

"token of his existence" is almost a straight quote from descartes --
though here again my memory may fail me.

besides, what's the point?  what's the meaning of this staccato remark?

>> the existence of all physical objects, but was unable to doubt the
>> existence of himself as a thinking being.  However, he also realized
>
>Descartes never "actually doubted" the existence of physical objects;
>your phrasing shows a very shallow reading of the Meditations.
>

oh yes.  i know i'm shallow.  i'm also superficial.  i know i am.
do you know you are too?  yes, you too.  we all are.  someone once
wrote that knowing that you're shallow is profound.  8-)

descartes did "actually doubt" the existence of physical objects.  i'll
try and look up the part where he does that.  if my memory is not
completely bankrupt and full of parity errors, i believe i've read him
actually say that.  i didn't make that one up.  perhaps i was reading
the words as written (i.e. superficially).

>> that the dualism faces considerable philosophical problems -- especially
>> that of "causal interaction".  He recognized that in many cases mind
>> and body intermingle to form a kind of unit.  As much as he wanted
>> to insist on the indivisibility of the mind, he had to devote great efforts to
>> the problem of interaction between mind and body -- somewhat incompatible
>> substances working together in mystical ways.  Is mind then truly
>> indivisible?  He seems to have been quite confused.
>
>No, you are confused.  Indivisibility does not imply non-interaction; it
>simply implies a lack of parts.
>

indivisibility does not mean lack of parts.  it kind of implies that
it's pretty damn hard to break the thing apart without errors.

i did not say indivisibility implied non-interaction.  i said that the
concept of the indivisibility of mind, his work on the interaction
between mind and body, and the fact that he considered body to be
divisible confuse me -- or perhaps he was confused.  that's what i said.

>> >Aristotle, on the other hand, did argue that a proper method of inquiry
>> >for non-physical `objects' is the analysis of the object into constituents,
>> >even if the constituents do not exist in reality.   . . .
>> I still don't understand this fetish with Aristotle.  The emphasis
>
>Aristotle is brought up because you keep imputing Aristotle's ideas
>to Descates.
>

correction... it was ken presting who started the talk about aristotle.
i never brought it up.  once it was brought up, i made only the briefest
comments about it (one sentence, i think) to actively discourage any
more side-tracking.

>> of my original posting was on the dualistic methodology of Cartesian
>> thinking systems.  I never even mentioned Aristotle.  . . .
>
>> >Technically, Descartes thought only non-human living organisms were automata;
>> >humans, he argued, are something more (i.e. they have minds).
>> Notice your misunderstanding: human living organisms are not automata
>> because they have minds.  Human living organisms also have physiology
>
>I fear it is not I who misunderstand Descartes, and I'll stake several
>years of graduate study in philosophy of mind and several readings of
>the Meditations, Discourse on Method and Descartes' commentators on it.
>

hey.  i'm an amateur.  i read the stuff out of pure joy and interpret 
it as i read it.  i do not study the damn things.  i just read to
get the feel, the flow, and get my shit together, that's all.  when
given a chance, i'll exhibit my ignorance to others -- perhaps if i'm
lucky i'll get some enlightenment from people like you who seem to
have spent more time with it.  but sometimes, scorn is all i get.
oh well.  you win some, you lose some.

>> which was considered to be automata by Descartes and that was precisely
>> what I was talking about: the biological foundation of the Cartesian
>> school of thought.  Of course he argued that humans have something 
>
>Descartes is quite explicit that thoughts are *not* biological in origin.
>

agreed.  "the biological foundation of the cartesian school of thought"
means nothing that descartes himself did.  note the lack of his name in
that phrase.  i explicitly avoided using his name and spoke of the
cartesian school of thought.  nor was i talking about the origin of
thoughts -- be it biological or not.  i was talking about one aspect of
the cartesian school of thinking that supposes that mind is also a kind
of automaton.

>Richard L. Shelby                    rshelby@ms.uky.edu
>Department of Health Services        rshelby@ukma.BITNET
>University of Kentucky               {rutgers,uunet}!ukma!rshelby

-- 
Hwa Jin Bae (hwajin@wrs.com)
Wind River Systems

rshelby@ms.uky.edu (Richard Shelby) (03/03/90)

In article <863@wrs.wrs.com>, hwajin@wrs.wrs.com (Hwa Jin Bae) writes:
> In article <14352@s.ms.uky.edu> you write:
> >Hwa Jin Bae has written on Descartes; another and I have tried to correct
> >some of his misunderstandings.  This is my final contribution to the 
> >exchange.  M. Bae states that the discussion is becoming too philosophically
> >oriented for this group; if so, I apologize for wasting the readers' time.

Okay, I lied; I have a further comment, but it is not specific to Descartes.

> >The general problem is that M. Bae lacks an understanding of Descartes'
> >work and refuses to believe that others may be correct or have a better
> >grasp of his work.
> 
> i lack understanding of descartes' work but i do not refuse to believe
> that others may be correct or have a better grasp of descartes.  i'm
> perfectly willing to give you a chance to prove to me that you indeed
> possess better understanding of his work.  since you seem to think that
> you do, perhaps it is so.  i'm not yet convinced of that.  if you want
> to condemn someone for ignorance you're entitled to your opinion, but
> please do not accuse me of refusing to believe that anyone else might
> have a better understanding of descartes than i.  . . .  i don't believe
> that i have full understanding of descartes.  i was explaining what i
> understood of him -- N.B. *my* understanding, as yours is *yours*, and
> who is to say which is truly correct? -- not condemning others of
> ignorance or refusing to listen to me, as you've done here.  it saddens
> me to think that anyone who has read the meditations can bring himself
> down to doing such things in public out of sheer misunderstanding.

Perhaps I've been rude; if so, I apologize to all in the newsgroup because
one shouldn't be rude.  Perhaps also I am arrogant.  I do know that I
often become impatient in these discussions; I will explain why.

>                                                my opinions and 
> my interpretations of others' works are my own, and you cannot force-
> feed me your views without rigorous and convincing argument.
> we have not gone through that.  you've stated some facts which seemed
> very fragmented and inconsistent to me, and i stated a few bits of 
> my understanding that were differently conceived/interpreted.
  [much deleted]
> hey.  i'm an amateur.  i read the stuff out of pure joy and interpret 
> it as i read it.  i do not study the damn things.  i just read to
> get the feel, the flow, and get my shit together, that's all.  

This is a general problem to which philosophy is often subjected.  People
make remarks along the lines of, "I read it and get *my* *own*
understanding/interpretation".  What would be the response if, for
example, I said, "I've read Euclid's argument that there is no largest
prime, and I think he's saying that after a certain point (in the
sequence of integers) *all* larger integers are prime"?  Surely this
would not be tolerated (I sketch Euclid's actual argument below).  It
ought also not be tolerated in regard to much of philosophy.  There
*is* an understanding of Descartes' work which stretches from Descartes
to the present, which does not mean that there are *no* points of
controversy concerning his work.  For example, the relationship of the
Cogito to action statements such as "Ambulo, ergo sum" ("I walk,
therefore I am") is unclear.  However, other points are simply *not*
controversial.  It is also the case that misunderstandings recur:
having taught a few courses in introductory philosophy will expose one
to a set of misunderstandings which come up over and over.  I get quite
impatient when I am told "*my* understanding, as yours is *yours*, who
is to say which is truly correct?"  There *are* correct and incorrect
interpretations.  This may come as a surprise, but philosophy is a
discipline.
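
For the record, Euclid's actual argument, in standard modern form (a
sketch, not a quotation of Euclid):

  Suppose $p_1, p_2, \dots, p_n$ were all the primes, and let
  $N = p_1 p_2 \cdots p_n + 1$.  Since $N > 1$, it has some prime
  factor $q$; but dividing $N$ by any $p_i$ leaves remainder $1$,
  so $q \ne p_i$ for every $i$.  Hence no finite list contains all
  the primes -- which says nothing whatever about all integers past
  some point being prime.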

> Hwa Jin Bae (hwajin@wrs.com)
> Wind River Systems


-- 
Richard L. Shelby                    rshelby@ms.uky.edu
Department of Health Services        rshelby@ukma.BITNET
University of Kentucky               {rutgers,uunet}!ukma!rshelby