[sci.lang] Chinese Room

MIY1@PSUVM.BITNET (N Bourbaki) (09/26/89)

In article <15157@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>In article <567@ariel.unm.edu> bill@wayback.unm.edu (william horne) writes:
>
>>This example is relevant to AI, because it questions the validity of the
>>Turing Test as a test of "understanding", as well as questioning the
>>legitimacy of rule-based systems as models of intelligence.
>
>One serious flaw in the Chinese Room Problem is that it relies on the
>so-called 'conduit metaphor' (originally described by Michael Reddy in A.
>Ortony's _Metaphor_and_Thought_, Cambridge U. Press, 1979).  That metaphor
>assumes that meaning is essentially contained in the linguistic expression.

>  The conduit metaphor
>is very powerful and useful as a means of illuminating the behavior of
>language, but, like all analogies, it breaks down.  Those who deal with real
>language to language translation know that there is no one-to-one match
>between expressions in one language and those in another.

But this difficulty would affect the native Chinese speaker and the
Chinese Room Demon equally.  That is one premise of Searle's
argument - the "mechanical" system is presumed to be just as competent
(not necessarily perfect) at translation as the "understanding" system.

Searle would have you believe that the "mechanical" system lacks
true understanding because it lacks "intentionality".  But this
begs the question, and leads immediately to the "other minds" problem.
Searle acknowledges this objection in _Minds, Brains, and Programs_,
but shrugs it off as only being worth "a short reply", basically that
cognitive states are not created equal, and that systems which exhibit
intentionality are more worthy of being described as "understanding"
than formal symbol-manipulating systems.
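
To see what "formal symbol manipulation" amounts to in the thought
experiment, here is a toy sketch in Python (mine, not Searle's -- his
rule book is given in English prose, and the entries below are
invented for illustration):

    # A toy Chinese Room demon.  The "rule book" is a bare lookup
    # table keyed on the shape of the incoming string; nothing in it
    # records what any symbol means, only which squiggle follows which.
    RULE_BOOK = {
        "你好吗": "我很好",
        "你叫什么名字": "我没有名字",
    }

    def demon(squiggles):
        # Purely formal: match the input's shape against the table and
        # hand back whatever string the matching rule dictates.  When
        # no rule matches, emit a stock symbol string (a hypothetical
        # catch-all meaning "please say that again").
        return RULE_BOOK.get(squiggles, "请再说一遍")

    # From outside, the room appears to answer questions in fluent
    # Chinese:
    print(demon("你好吗"))

Scaling the table up, or compiling it into an arbitrarily clever
program, changes nothing in kind; that is exactly the intuition
Searle's scenario trades on.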

The gist of his conundrum is not to validate (or invalidate) any particular
linguistic theory, but to attack so-called "strong AI".  I don't find it a
very convincing argument.  It seems too much like vitalism -- that
there is something special about brains that cannot be duplicated by
artificial means.

N. Bourbaki

ellis@chips.sri.com (Michael Ellis) (09/30/89)

> N Bourbaki  >> Rick Wojcik

>>The conduit metaphor
>>is very powerful and useful as a means of illuminating the behavior of
>>language, but, like all analogies, it breaks down.  Those who deal with real
>>language to language translation know that there is no one-to-one match
>>between expressions in one language and those in another.

>But this difficulty would affect the native Chinese speaker and the
>Chinese Room Demon equally.  That is one premise of Searle's
>argument - the "mechanical" system is presumed to be just as competent
>(not necessarily perfect) at translation as the "understanding" system.

>Searle would have you believe that the "mechanical" system lacks
>true understanding because it lacks "intentionality".  

    As far as I can tell, Searle *is* a mechanist (I have also heard
    him called a "weird sort of dialectical materialist"). He
    believes that the mind is "caused by" the neurophysiological
    mechanism of the brain, and that eventually there will be a purely
    scientific (read "physicalistic and mechanistic" here) account of
    the mind.

    I noticed the scare quotes: "intentionality". If you aren't
    familiar with this concept, you might try reading Brentano, Husserl,
    Sartre, Dreyfus, Searle, Putnam and Dennett for different
    treatments. 

    I think it is fair to say that, for Searle, understanding
    presupposes intentionality practically by definition.

>But this begs the question, and leads immediately to the "other minds"
>problem.

    Maybe it is begging the question to assume that minds exist, that
    you got one, that I got one, that everybody's got one, that minds
    are the paradigm case of something that understands, and that it
    is the mind's ability to understand that we want to know more about.

    If you don't accept this, you and John Searle aren't talking the
    same language.

>Searle acknowledges this objection in _Minds, Brains, and Programs_,
>but shrugs it off as only being worth "a short reply", basically that
>cognitive states are not created equal, and that systems which exhibit
>intentionality are more worthy of being described as "understanding"
>than formal symbol-manipulating systems.

    If by "cognitive state" you mean something that is formally
    equivalent to the execution state of a computer, Searle is saying
    that such a thing is totally irrelevant to the question of mind.

    He does not deny that a computer might be conscious.

    He is saying that, if and when a conscious machine is built, its
    understanding would not arise merely by virtue of its running the
    right program.

>The gist of his conundrum is not to validate (or invalidate) any particular
>linguistic theory, but to attack so-called "strong AI".  I don't find it a
>very convincing argument.  It seems too much like vitalism -- that
>there is something special about brains that cannot be duplicated by
>artificial means.

    Searle might be wrong, but not for the reasons you offer, since
    you don't quite seem to be arguing against anything Searle said.

    As for attacks on theories of language, Searle says unkind things
    about Skinner, Chomsky/Fodor, and Quine.

    As to vitalism, Searle says unkind things about that, too. Are
    intentionalistic theories just vitalism in disguise? I do not
    think so, but I suppose there are many who might disagree.
    Vitalism was a philosophical and scientific dead end. Is the same
    true of intentionality? Well, the topic seems to be showing up
    more these days in scientifically minded Anglo-American thought. I
    am not competent to judge whether or not intentionality can be
    made into a rigorous concept, but I suspect that the question
    currently hinges on future developments in intensional logics.
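
    To make that concrete (a textbook example, not Searle's): a
    context is intensional when substituting co-referring terms can
    change truth value.  Writing B_l for "Lois believes that", the
    inference below fails, although it would be valid in any
    extensional logic:

        % Referential opacity: identity does not license substitution
        % inside the belief operator B_l.
        \[
          a = b,\quad B_l(\mathit{Flies}(a))
          \;\not\vdash\; B_l(\mathit{Flies}(b))
        \]
        % Lois believes Superman flies, and Superman = Clark Kent, yet
        % it does not follow that Lois believes Clark Kent flies.

    An extensional logic validates that substitution outright;
    capturing its failure rigorously is the technical problem alluded
    to above.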

-michael