[comp.ai] Eliza and the Chinese Room Argument

thom@dewey.soe.berkeley.edu (Thom Gillespie) (03/03/89)

Dear Jim,

		Eliza worked because it tried to simulate a Rogerian shrink: there was no
		domain, a wonderful area for an expert system, wouldn't you say, Jim? Are
		you trying to carry on the tradition? Read "Computer Power and Human
		Reason: From Judgment to Calculation" by Weizenbaum. Better yet, why not
		write an expert program to understand it for you?
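
		(Roughly how little machinery that takes, sketched below in Python;
		the patterns and wording are illustrative only, not Weizenbaum's
		actual DOCTOR script.)

		import re

		# The whole "therapist" is pattern matching plus echoing the
		# patient's own words back in Rogerian fashion.  No domain
		# knowledge anywhere.
		REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

		RULES = [
		    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
		    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
		    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
		    (re.compile(r"(.*)", re.I), "Please go on."),  # catch-all
		]

		def reflect(fragment):
		    # Swap first person for second so the words can be echoed back.
		    return " ".join(REFLECTIONS.get(w, w)
		                    for w in fragment.lower().split())

		def respond(sentence):
		    # Answer with the template of the first rule that matches.
		    for pattern, template in RULES:
		        match = pattern.match(sentence)
		        if match:
		            return template.format(*(reflect(g) for g in match.groups()))

		print(respond("I am worried about my thesis"))
		# -> How long have you been worried about your thesis?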

Thom Gillespie

		while you and I have lips and voices
		which are for kissing and to sing with
		who cares if a one-eyed son of a bitch
		invents an instrument to measure spring with 
				-- e e cummings

curry@hplchm.HP.COM (Bo Curry) (03/07/89)

>/ hplchm:comp.ai / geddis@polya.Stanford.EDU (Donald F. Geddis) / 12:47 am  Mar  4, 1989 /
>True enough, but then you are defining the "computer" to be the dumb processor
>that interprets the rules.  No one claimed that the processor (by itself)
>*did* understand.  I still haven't heard a satisfactory rebuttal to the
>"Systems Reply", namely that (Searle + Rules) understands, whereas just
>(Searle) doesn't.  [To use your analogy:  (Computer Processor + Symbolic
>Rules) understands, but just (Computer Processor) doesn't.]
>

OK, let's follow up on this.  Let me see if I can paraphrase Harnad interpreting
Searle:

1.  Searle does not understand Chinese.
    I agree, because *not only* does he so claim, *but also* he fails
    to pass the Chinese TT.  Although neither of these facts alone would
    suffice (since the first would not convince me, and the second would
    fail to convince Harnad), together they prove the case. 

2.  The Room (Searle + rules) passes the TT and *seems* to understand
    Chinese.
    I agree.

3.  However, the Room doesn't *really* understand, since
    "Searle is doing everything [the Room] is doing, and *he* claims
    not to understand" (!).

    I don't see how 3. follows at all from the rest of the argument.
    It obviously has a powerful intuitive appeal to Harnad and others.
    Let me try to show how intuition is misleading these deep thinkers.

    The foundation of the intuition that Searle's opinion about
    the Room's understanding is definitive seems to be:

4.  Other than Searle, there is *nothing* in the Room (except for
    blackboard, chalk, and a few slips of paper upon which the
    rules are inscribed).  Since we all know, from our experience,
    that blackboards and slips of paper don't understand anything,
    nothing is left except Searle himself.
    Searle has suggested that even these meager props would be
    unnecessary, since he could *memorize* the rules!

    Now, let's look at this intuitive argument in a bit more detail,
    to test its plausibility.  Perhaps, in his mind's eye, Searle
    sees the rules as consisting of a Chinese dictionary, a grammar,
    perhaps a thesaurus, and a few rules of usage such as
    "Never split an infinitive", or some such.  In any case,
    Searle's (envisioned) set of rules is clearly compact enough that
    he imagines *memorization* of the rules to be possible.  This idea
    (or even the idea that the rules, written on paper, might fit
    into a room) is, I would submit, ludicrous to anyone who has actually
    attempted to design a program to understand natural language.

    Consider what it would require for a Room to be able to pass the TT.
    It must be able to use language well enough to convince a native
    speaker.  It must therefore know the denotations, connotations,
    and normal (physical, cultural, etymological, etc) associations for
    a large subset of Chinese words, phrases, proverbs, etc.  In order
    to carry on a conversation about, say, riots in Tibet, the Room would
    have to understand Tibetan Buddhism, its relationship to Chinese
    Buddhism and to Confucianism, the political and historical
    relations between Tibet and China, and thousands of other facts
    and relationships.  All this knowledge, for thousands of possible
    topics of conversation, must be represented in the rules before
    the Room can hope to satisfy Searle's premise.  I submit
    that this knowledge base is essentially isomorphic to a large part
    of Searle's brain, and that it would be clearly impossible for
    him to "internalize" it (as explicit memorized rules).  Think
    about the size and complexity of these rules.  Will they fit in
    a room?  If punched out on Hollerith cards and laid end to end,
    would they reach from Earth to Sirius?  How many centuries will
    Searle require, while interpreting these rules sequentially, to
    respond to the simplest question posed to the Room in Chinese?
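
    (Putting rough numbers on that last image: a back-of-the-envelope
    sketch in Python, assuming a standard 80-column punched card about
    7 3/8 inches long and Sirius at about 8.6 light-years.)

    # How many 80-column Hollerith cards, laid end to end, would reach Sirius?
    card_length_m = 7.375 * 0.0254       # 7 3/8 inch card, in metres
    light_year_m = 9.46e15               # metres per light-year
    sirius_m = 8.6 * light_year_m        # distance to Sirius, roughly

    cards = sirius_m / card_length_m
    chars = cards * 80                   # characters of "rules" on that many cards

    print("%.1e cards, about %.1e characters of rules" % (cards, chars))
    # -> 4.3e+17 cards, about 3.5e+19 characters of rules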

    Once the complexity of the hypothesized rule set is fully grasped,
    it becomes clear that intuitions about the "obvious" lack of
    understanding embodied in "a few slips of paper" are seriously
    misleading.  Admit that this intuition may be mistaken, and Searle's
    (and Harnad's) argument disappears.

    Bo (still waiting for the PC version) Curry
    curry%hplchm@hplabs.HP.COM 

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/13/89)

In article <280003@hplchm.HP.COM> curry@hplchm.HP.COM (Bo Curry) writes:
>    Admit that this intuition may be mistaken, and Searle's
>    (and Harnad's) argument disappears.
Why should I?

Admit that gravity may not exist, and then what disappears?

I think you should argue your case.

If people's "intuitions" say the room/rules have no understanding,
then why doubt them?  What's the gain?

If your AI systems "work", all well and good.  But don't demand that
people call black white in the process.  If AI folk spent less time
trying to redefine everyday language, people might trust them more.

There is no quicker way to lose people's trust than to abuse language.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

throopw@agarn.dg.com (Wayne A. Throop) (03/21/89)

> gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
> If people's "intuitions" say the room/rules have no understanding,
> then why doubt them?  What's the gain?

Well... many "people's intuitions" say that Eliza already understands.
Why doubt *them*?  The gain in doubting people who think Eliza understands
is that we don't cheapen what we mean by "understanding".  The gain in
doubting people who think the CR shows that the room/rules have no
understanding even in principle is that we don't arbitrarily anthropomorphize
what we will accept as an "understanding entity".

> If your AI systems "work", all well and good.  But don't demand that
> people call black white in the process.  If AI folk spent less time
> trying to redefine everyday language, people might trust them more.

This situation doesn't arise in the CR.  In fact, the CR's premise is
that "people's intuition" from outside the room leads them to think
the room understands, and "people's intuition" once they've seen inside
the room leads them to think otherwise.  So, we aren't asking to call
black white.  We are asking whether black should be defined functionally
(in terms of the light it reflects) or structurally (in terms of the
pigments it is made of).
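
(In programming terms, the same distinction is roughly interface versus
implementation.  A toy sketch in Python, with made-up names and numbers,
just to show the two tests coming apart:)

    class CarbonBlackPaint:
        pigment = "carbon"
        def reflectance(self):
            return 0.02

    class NanotubeCoating:              # different construction, same behaviour
        pigment = "carbon nanotubes"
        def reflectance(self):
            return 0.001

    def is_black_functionally(obj):
        return obj.reflectance() < 0.05      # judged by what it does

    def is_black_structurally(obj):
        return obj.pigment == "carbon"       # judged by what it is made of

    for thing in (CarbonBlackPaint(), NanotubeCoating()):
        print(type(thing).__name__,
              is_black_functionally(thing),
              is_black_structurally(thing))
    # CarbonBlackPaint True True
    # NanotubeCoating True False   <- "black" depends on which definition you pick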

--
"Who would be fighting with the weather like this?"
        "Only a lunatic."
                "So you think D'Artagnian is involved?"
                        --- Porthos, Athos, and Aramis.
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw