[sci.lang] What's the Chinese room problem?

rwojcik@bcsaic.UUCP (Rick Wojcik) (09/24/89)

In article <567@ariel.unm.edu> bill@wayback.unm.edu (william horne) writes:

>This example is relevant to AI, because it questions the validity of the
>Turing Test as a test of "understanding", as well as questioning the
>legitimacy of rule based systems as models of intelligence.

One serious flaw in the Chinese Room Problem is that it relies on the
so-called 'conduit metaphor' (originally described by Michael Reddy in A.
Ortony's _Metaphor_and_Thought_, Cambridge U. Press, 1979).  That metaphor
assumes that meaning is essentially contained in the linguistic expression.  A
logical consequence of this belief is that one can devise a set of principles
for translating from one language into another without losing any of the
semantic 'stuff' that a linguistic expression conveys.  The conduit metaphor
is very powerful and useful as a means of illuminating the behavior of
language, but, like all analogies, it breaks down.  Those who deal with real
language-to-language translation know that there is no one-to-one match
between expressions in one language and those in another.  An alternative view
of linguistic communication is to assume that linguistic expressions merely
help to shape the flow of mental pictures (alas, another metaphor :-) that
constitute the end product of communication.  Therefore, there is no necessary
one-to-one correspondence between linguistic expressions in one language and
those in another.  The trick to translation is to construct expressions in the
target language that evoke the same thoughts as those in the source language.
And this may even be impossible without modification of the target language
(i.e. the creation of new words to fit new experiences).  So I claim that the
Chinese room problem rests on incorrect assumptions about the nature of
language and understanding.
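
To see what the conduit metaphor amounts to when taken literally, here is a
deliberately crude sketch of it (my own toy illustration, in Python; the
phrase table is invented).  If meaning really lived inside the expression, a
fixed symbol-to-symbol table would be all a translator ever needed:

# A caricature of the conduit metaphor: translation as pure symbol
# substitution, with no reference to the thoughts the words evoke.
PHRASE_TABLE = {
    "thank you": "merci",
    "good morning": "bonjour",
}

def conduit_translate(expression):
    """Pretend the meaning is carried inside the expression itself."""
    if expression in PHRASE_TABLE:
        return PHRASE_TABLE[expression]
    # The metaphor breaks down right here: most expressions have no
    # context-free, one-to-one equivalent waiting in a table.
    raise KeyError("no one-to-one match for %r" % expression)

print(conduit_translate("thank you"))                    # 'merci'
try:
    conduit_translate("don't use that tone of voice with me")
except KeyError as err:
    print(err)

The table itself carries no semantic 'stuff' at all; whatever the receiver
reconstructs has to come from somewhere else.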


-- 
Rick Wojcik   csnet:  rwojcik@atc.boeing.com	   
              uucp:   uw-beaver!bcsaic!rwojcik 

sp299-ad@violet.berkeley.edu (Celso Alvarez) (09/26/89)

In article <15157@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:

>. . .  The trick to translation is to construct expressions in the
>target language that evoke the same thoughts as those in the source language.

Much more than thoughts are evoked by language.  How do you translate
the signalling of identity, roles, and social relationships?

>And this may even be impossible without modification of the target language
>(i.e. the creation of new words to fit new experiences).  So I claim that the
>Chinese room problem rests on incorrect assumptions about the nature of
>language and understanding.

I'm not familiar with the Chinese room problem, but where do you/Searle
leave the question of interpretation?  There is more to language than
understanding.

Celso Alvarez
sp299-ad@violet.berkeley.edu

rwojcik@bcsaic.UUCP (Rick Wojcik) (09/29/89)

Celso Alvarez (CA) writes:
me>. . .  The trick to translation is to construct expressions in the
me>target language that evoke the same thoughts as those in the source
me>language. 

CA> Much more than thoughts are evoked by language.  How do you translate
CA> the signalling of identity, roles, and social relationships?

I think that such concepts have to be represented as thought structures, since
they have an impact on language structure.  But your question may be filed
under my general question: Just what do 'Chinese Room' debaters think a
translation is?  What criteria do you use to judge that a translation from one
language to another is successful?  My position is that there is no such thing
as translation in an absolute sense.  A seemingly trivial example is the
translation of expressions that refer to language-specific grammatical
structure.  Thus, there is no way to translate French 'tutoyer' directly into
English.  You must rely on circumlocution.  It means roughly 'use the intimate
2nd person singular form of the verb'.  But a practical translator might
render the French equivalent of 'Don't tutoyer me' in English as 'Don't use
that tone of voice with me', or some such thing.  It is difficult to say what
makes one such translation better than another, and people can get into heated
arguments over such questions.

N. Boubaki (NB) writes:
me> ...Those who deal with real
me> language-to-language translation know that there is no one-to-one match
me> between expressions in one language and those in another.
NB> But this difficulty would affect the native Chinese speaker and the
NB> Chinese Room Demon equally.   That is one premise of Searle's
NB> argument - the "mechanical" system is presumed to be just as competent
NB> (not necessarily perfect) at translation as the "understanding" system.

I know, but I think that Searle, like most of us, has implicitly adopted the
conduit metaphor in his conceptualization of the problem.  He really believes
that there is some absolute sense whereby an expression in one language
corresponds to one in another.  This seems clear from his insistence that the
translation itself be 'mechanical'--in other words, symbol manipulation.
Those involved in translation understand that the translation process requires
editing and revision.  Who determines that the "mechanical" system is "just as
competent" if there is no mechanical basis for judging competence?  Yet that is
just what you need in order to bring about translation: you need to mechanize
the ability to judge and revise.  That would be tantamount to
mechanizing the understanding process, since it is only by understanding
expressions in two different languages that you can judge their equivalence.
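
To fix ideas, here is the shape of the loop I am describing (a sketch of my
own; judge_equivalence is a placeholder for exactly the capability in dispute,
not a claim that anyone knows how to write it):

# An idealized translate-judge-revise loop.  The argument above is that
# the judging step cannot be written without mechanizing understanding
# itself; every routine here is a placeholder, not a real component.

def draft_translation(source_text):
    return source_text            # stand-in for any mechanical first pass

def judge_equivalence(source_text, candidate):
    # Requires understanding expressions in BOTH languages.
    raise NotImplementedError("this is where understanding comes in")

def revise(candidate, criticism):
    return candidate              # likewise a placeholder

def translate(source_text, max_rounds=3):
    candidate = draft_translation(source_text)
    for _ in range(max_rounds):
        verdict = judge_equivalence(source_text, candidate)
        if verdict == "equivalent":
            return candidate
        candidate = revise(candidate, verdict)
    return candidate

All of the interesting work is hidden inside the judge; the rest is
bookkeeping.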

I want to be careful to distinguish modern Machine Translation efforts, which
do not attempt to automate the revision process (rather they attempt to
facilitate it), from an ideal MT system, which would require mechanized
understanding to do its job properly.  So I agree with you that Searle
ultimately begs the question.  The question is whether or not 'understanding'
is a mechanizable process.  He either assumes that it is not, or he doesn't
have a proper conception of the nature of translation.

Ray Allis (RA) writes:
RA>It seems to me your position is in fact very close to Searle's.  The problem
RA>I have with his little parable is that he pretends that the output from
RA>the Chinese room is satisfactory (or rather lets us assume so).  I believe
RA>that if the room does not "understand" Chinese, and he argues that it does
RA>not, the output will not be satisfactory...

From my above remarks, you should see that I am closer to your viewpoint than
Searle's.  In fact, I find myself largely in agreement with most of what you
said.  I would only quibble on the issue of whether or not modern NLP efforts,
including MT, are futile.  The pragmatic purpose of such work is to increase
human efficiency in language-intensive work on computers.  There are many good
things you can do without addressing the need for full language understanding.
MT (really Machine-Assisted Translation) can improve the output of a human
translator, even though the MAT system may produce some pretty bad
translations.  Our grammar-checking system is proving useful in the writing of
aircraft maintenance manuals.  But this takes us away from the philosophical
question of whether or not you can mechanize language understanding.

-- 
Rick Wojcik   csnet:  rwojcik@atc.boeing.com	   
              uucp:   uw-beaver!bcsaic!rwojcik 

rwojcik@bcsaic.UUCP (Rick Wojcik) (09/29/89)

In article <822kimj@yvax.byu.edu> kimj@yvax.byu.edu writes:
>Could you elaborate on what you mean by "the semantic stuff"?  Say I translate
>"kick the bucket" into "die" in Chinese.  Does the translation lose what
>you call "the semantic stuff"?

I want to recommend to you Ronald Langacker's tour de force _Foundations_of_
Cognitive_Grammar_, v. 1, Stanford U. Press, 1987.  I particularly call your
attention to the discussion on and around p. 93, where he lays out a clear
distinction between literal and figurative senses.  He argues quite
convincingly that you can take neither a purely compositional, nor a purely
conventional, approach to meaning.  I do not know how his work, and that of
other 'cognitive grammarians', will end up affecting the world of computational
linguistics, but it does help to point up many areas for future research.  I
do not think that there is any precise way to translate 'kick the bucket' into
Chinese, and I don't think that the opening scene of the movie 'It's a Mad,
Mad, Mad, Mad World' can be properly understood by Chinese speakers, even in
its dubbed version.  (That scene has a great sight gag involving the 'kick
the bucket' idiom.)  Semantic stuff is very often lost when idioms get
translated.  But it
is the compositionally-derived stuff that gets lost, not the conventional
stuff.  
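
To make the point concrete, a toy sketch (mine, in Python; the idiom table is
invented).  The conventional sense survives the lookup; the compositional
reading, the one the sight gag trades on, is simply thrown away:

# The idiom table carries only the conventionalized sense; the literal,
# compositionally-derived image (a foot striking a bucket) is discarded.
IDIOMS = {
    "kick the bucket": "die",
}

def translate_phrase(phrase, word_table=None):
    if phrase in IDIOMS:
        return IDIOMS[phrase]                  # conventional sense survives
    # Otherwise fall back to a word-by-word (compositional) rendering.
    word_table = word_table or {}
    return " ".join(word_table.get(w, w) for w in phrase.split())

print(translate_phrase("kick the bucket"))     # 'die'; the sight gag is lost
print(translate_phrase("kick the ball"))       # takes the compositional path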



-- 
Rick Wojcik   csnet:  rwojcik@atc.boeing.com	   
              uucp:   uw-beaver!bcsaic!rwojcik 

dmocsny@uceng.UC.EDU (daniel mocsny) (10/01/89)

In article <15336@bcsaic.UUCP>, rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
> What criteria do you use to judge that a translation from one
> language to another is successful?

How about: the same criteria (note: plural) we use to judge whether
native language speakers can communicate with each other successfully.

> My position is that there is no such thing
> as translation in an absolute sense.

Two languages aren't even necessary. Two people who speak the "same"
language can misunderstand each other. Two translation steps are already
going on there---from the speaker's thoughts into a serial symbol
string, and then from the string to the hearer's thoughts. If the hearer's
thoughts differ substantially from the speaker's, then the translation 
has failed.

However, I think absolute translation *must* be possible in principle,
unless we believe that the human mind has an infinite information
content. That is, if we view communication as a thought-transfer
between two thinkers, then some finite serial data stream must
represent the thoughts of the speaker in sufficient detail to allow
the hearer to reconstruct them with arbitrary accuracy.  We may not
know how to move thoughts from one person to another as one would copy
files between computers, but the materialist assumption says it must
be possible. (If the brain turns out to be not a very convenient
medium to "write" on, then one might have to resort to physically
reconstructing features of the sender's brain in the recipient. "Let
me give you a piece of my mind..." This won't be an easy trick, but it
can't be impossible.)
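
The principle is just the ordinary one behind copying any finite structure.
A trivial sketch in Python (the 'thought' dictionary is an invented stand-in
for whatever the real representation would be):

import json

# If a thought has finite information content, then some finite serial
# symbol string reconstructs it exactly (the file-copy analogy above).
thought = {"concept": "tutoyer", "person": 2, "register": "intimate"}

stream = json.dumps(thought)          # speaker: thought into serial string
reconstructed = json.loads(stream)    # hearer: serial string into thought

assert reconstructed == thought       # lossless, in principle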

>  A seemingly trivial example is the
> translation of expressions that refer to language-specific grammatical
> structure.  Thus, there is no way to translate French 'tutoyer' directly into
> English. You must rely on circumlocution.  It means roughly 'use the intimate
> 2nd person singular form of the verb'.  But practical translators might take
> an equivalent French expression to 'Don't tutoyer me' into English as 'Don't
> use that tone of voice with me', or some such thing.  But it is difficult to
> say what makes one such translation better than another.  People can get into
> heated arguments over such questions.

I'm not sure what you mean by "directly." Perhaps you should use
"concisely."  After all, we are English/American speakers here, and
Lo! we can certainly grasp some idea of the action "tutoyer" refers to
from your brief description. Since we start off without the necessary
concepts, you simply have to hand us the underlying knowledge hierarchy
for us to understand. That doesn't make your translation "bad."  It
simply means you can't lop off the top floor of a skyscraper, ship it
across town, and expect it to float the same distance above bare
ground. 

You can speak concisely when you share a large base of common
knowledge/experience with your hearer. If you don't, then you must
recursively expand your high-level expressions until you reach the
level of your listener's available knowledge. Consider how differently
you might describe what you did at work today to your boss, to a
coworker, to a casual friend, and to your mother.
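
A rough sketch of that recursive expansion (mine, in Python; the glossary and
the listener's knowledge sets are invented for illustration):

# Expand a high-level expression until it bottoms out in knowledge the
# listener already shares.
GLOSSARY = {
    "tutoyer":
        "address someone with the intimate 2nd person pronoun",
    "address someone with the intimate 2nd person pronoun":
        "say 'tu' rather than 'vous'",
}

def expand(term, listener_knows):
    """Keep expanding until we reach something the listener knows."""
    if term in listener_knows or term not in GLOSSARY:
        return term
    return expand(GLOSSARY[term], listener_knows)

# A listener who only has the low-level tu/vous fact:
print(expand("tutoyer", {"say 'tu' rather than 'vous'"}))
# A listener who already has the high-level concept needs no expansion:
print(expand("tutoyer", {"tutoyer"}))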

Thus translating a static word-string from one language into another
should be, in general, about as hard as inferring a person's hair
color by observing the mud they have tracked onto a carpet. A person's
language is essentially a set of high-level pointers into the large
knowledge network they cart around in their head. Without an accurate
model of that network, translating those pointers into another
language (with all its different cultural baggage) will be tough.
In any case, the "goodness" of the translation must always depend
on the recipient.

Dan Mocsny
dmocsny@uceng.uc.edu

sp299-ad@violet.berkeley.edu (Celso Alvarez) (10/05/89)

In article <15336@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:

CA> Much more than thoughts are evoked by language.  How do you translate
CA> the signalling of identity, roles, and social relationships?

RW>I think that such concepts have to be represented as thought structures,
RW>since they have an impact on language structure.

If you're talking about (mental) typifications of social relationships,
that's one thing.  Typifications generate expectations which underlie the
production and interpretation of talk.  But the signalling of identity
etc. is a situated process, and by this I mean that there may be no
matching between typifications and actual behavior.  Additionally,
certain linguistic markers of social dimensions are inherently ambiguous
(this ambiguity may be of a different kind than lexical ambiguity).
I don't know how you can translate social markers unless you establish
certain universals (or, at least, certain generalizable transcultural
principles) of socio-interactional meanings.  I don't think you can do
this without incorporating context as a variable in those universals.
And I'm not sure that even then you can account for the dynamic
reconstitution of those general principles (thought structures?)
in and through context, during the course of an interaction.

Celso Alvarez
sp299-ad@violet.berkeley.edu

rwojcik@bcsaic.UUCP (Rick Wojcik) (10/07/89)

In article <1989Oct5.080214.7683@agate.berkeley.edu> sp299-ad@violet.berkeley.edu (Celso Alvarez) writes:
>CA> Much more than thoughts are evoked by language.  How do you translate
>CA> the signalling of identity, roles, and social relationships?
>RW>I think that such concepts have to be represented as thought structures,
>RW>since they have an impact on language structure.

>If you're talking about (mental) typifications of social relationships,
>that's one thing.  Typifications generate expectations which underlie the

All I meant was that anything which influences linguistic structure ipso facto
has to be represented as some kind of thought structure.  However you want to
represent those thought structures is open to debate.  I agreed with your
implicit point that they are not represented well in modern linguistic theory.
But I don't think that my statement should have generated any controversy. 


-- 
Rick Wojcik   csnet:  rwojcik@atc.boeing.com	   
              uucp:   uw-beaver!bcsaic!rwojcik 

sp299-ad@violet.berkeley.edu (Celso Alvarez) (10/09/89)

In article <15578@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>>RW>I think that such concepts [as social identity, etc.] have to be
>>RW>represented as thought structures,
>>RW>since they have an impact on language structure.

CA>If you're talking about (mental) typifications of social relationships,
CA>that's one thing.  Typifications generate expectations which underly the

RW>All I meant was that anything which influences linguistic structure ipso
RW>facto has to be represented as some kind of thought structure.  However
RW>you want to represent those thought structures is open to debate.

That's why I opened the debate.  Am I sure that I do want to represent those
categories/typifications as thought structures?  Is it analytically or
heuristically productive to work with a notion such as `thought
structure' to help explain linguistic behavior?  (That is what we are talking
about here, isn't it?)  Between linguistic action and cognition there's
still a missing link, both in Searle and beyond Searle, in
discourse analysis or ethnomethodology.

RW>I agreed with your implicit point that they are not represented
RW>well in modern linguistic theory.  But I don't think that my statement
RW>should have generated any controversy. 

And I agree with your view on translation, however
different our approaches may be.  But I'm not that interested in helping
to fill the holes in modern linguistic theory.
In other, socio-interactionally oriented linguistic disciplines, yes.

It's not my intention to create unnecessary controversy.  I'm just
trying to translate your language (`thought structures', social
relationships as `notions' and not actions) into mine.

Celso Alvarez
sp299-ad@violet.berkeley.edu