[comp.ai] Chinese Room

oliphant@dvinci.USask.CA (Mike Oliphant) (03/04/89)

I'm currently taking a first year philosophy class in which
the topic of Searle's Chinese room argument came up.  Here are
my thoughts on the matter:

Suppose, for the sake of argument, that we accept Searle's Chinese
room example as being possible - i.e. that it is possible for a
machine to appear to external examination to understand when in reality
it does not understand.  This implies that we could have two machines
that react to their environment in exactly the same way, but one does
so with understanding while the other does not.  Now we must ask the
question "Just what can it be that the 'understanding' machine possesses
that makes it different?".  Since these two machines behave in exactly
the same manner, this difference cannot cause any external effects.

This causes a problem: the idea that understanding has no external
effects is not consistent with my concept of understanding.  This problem becomes
more apparent when we ask the question "Why do I believe in something
called 'understanding'?".  It seems to me that most people believe in
'understanding' due to their own introspection.  We believe in understanding
and we argue about the nature of understanding because we have this
internal experience that we call understanding.  If this is the case,
hasn't this phenomenon of understanding caused something external?  Hasn't
it caused me to ponder this subject and write this message?  I believe
it has.

So, what possible conclusions can we draw from this?  What I conclude
is that it is not possible to have two machines that display
identical behaviour where one possesses 'understanding' (in the
intuitive sense of the word) and the other does not.  Understanding
is an integral part of behaviour, and if you duplicate the behaviour
you also duplicate the phenomenon we call understanding.
Of course, this conclusion is still based upon the assumption that the
Chinese room scenario is plausible.  Many will argue that it is not
possible to construct a machine that even simulates understanding,
but that is a separate (but very important) issue.


There are undoubtedly flaws in my reasoning.  I encourage any
comments/criticism so that I can locate them and (hopefully)
patch them up.

			-Mike Oliphant

bwk@mbunix.mitre.org (Barry W. Kort) (03/05/89)

Mike Oliphant raises an interesting wrinkle on the Chinese Room.

He posits two entities with identical observable behavior, except
that one has subjective understanding while the other is a mere
automaton, like the Chinese Room, with no subjective understanding.

Now, in the Chinese Room thought experiment, we give the Room a
story, and then ask it questions about the story.

Suppose we ask, "Did you understand the story?"

Does the Room mechanically answer (dishonestly), "Yes, I understood it."?

According to Mike's scenario, the Room must respond identically to its
model.  So far, no problem.
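
A toy sketch may make the "mere automaton" side of this concrete.  What
follows is only an illustration of the kind of mechanical answering being
imagined here -- a fixed lookup from question-symbols to canned
answer-symbols -- and the rules and replies in it are invented, not
anything Searle or Mike proposed.  In Python:

    # The "Room" as a rule table: match the incoming symbols, emit the
    # canned reply.  No story, no memory, no "I" -- just a lookup.
    CANNED_REPLIES = {
        "did you understand the story?": "Yes, I understood it.",
        "what was the story about?": "It was about a man in a restaurant.",
    }

    def room_reply(question):
        # Unrecognized symbol strings get a stock deflection.
        return CANNED_REPLIES.get(question.strip().lower(),
                                  "Please ask me about the story.")

    print(room_reply("Did you understand the story?"))
    # -> Yes, I understood it.

However the table is filled in, its "Yes" is produced without reference
to any story at all; the question is whether that can count as a
dishonest answer.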

Now we give both entities another story.  The story is a Chinese
translation of Searle's "Chinese Room Thought Experiment".  We
again ask both entities if they understood.

The one-who-understands, says, "Hmmm.  That is a thought-provoking
story.  I'm not sure I fully understand it."

Does the automaton answer likewise?  How can the automaton's response
be a deception?   What is the Room doing with the symbols for "you"
and "I"?

--Barry Kort

dvm@yale.UUCP (Drew Mcdermott) (03/10/89)

I want to weigh into the Chinese-room discussion on the side of Engelson,
Geddis, et al., and opposite Harnad.

I think Engelson et al. have put the case pretty clearly that Searle+rules
may well be a different entity than Searle by himself.  As I said in
my original response to Searle, if all N employees of a corporation stand
in a room, then the room contains N+1 legal persons.  If Searle+rules is
a person, then wherever Searle goes two persons go.  It's true that this
is a piece of dogma; I don't know of any compelling argument in favor of
it.  On the other hand, I don't know of any compelling argument against it.
Searle's argument is basically to state the consequences of the "strong AI"
theory and invite one to find them absurd.  One need not.  It's a draw.

The whole argument is reminiscent of Ned Block's argument about qualia.
(In "Troubles about Functionalism.")  His argument was: If functionalism
is true, then we could get China to be conscious by having all the Chinese
simulate information processors in the right way.  This is absurd, so
functionalism (and hence stronger positions, like strong AI) are false.
But this argument, like Searle's, is just a clash of intuitions.  If
you can imagine that China is conscious, the argument fails.  I can,
I think.  (I wonder who the Chinese use for their examples?)

I've already lost Harnad, but others may find the following more specific
remarks illuminating.  As Engelson (I believe) said, if subjective
experience is the hallmark of understanding, then the question is whether
Searle+rules has the right kind of subjective experience.  (As an aside,
I believe there is such a subjective experience, but it's not clear it's
the hallmark of anything except a decision to stop thinking about something.)
It seems to me there are four positions one could take:

  1 Yes, obviously Searle+rules is the sort of entity that could have
    this experience.  [I hope it's obvious that the rules I and the
    others are envisioning do a good deal more than the Schankian
    script appliers Searle was describing.  I will grant that those
    rules wouldn't have any experiences.]

  2 No, obviously nothing of that sort could ever have the experience

  3 Perhaps they could, but only the system itself could ever know, and
    we probably can't trust what it says.  Having it type out "I'm getting 
    that warm glow of comprehension" might not tell us much.
    
  4 It's an empirical question whether a given system of this sort has
    experiences, and we won't know until we have more information about
    what experience is at the physiological or computational level.
    (Or even what level is appropriate.)

It seems to me that the safest position is number 4, but I am in the grip
of an ideology, so I lean toward number 1.  Anyone who likes position 2
is also in the grip of an ideology, though.  I would bet that position 3
is attractive to most people, but it seems intrinsically dualist to me,
so I am puzzled by people who like (3) but claim not to be dualists.  I
can't decide whether Harnad's MTE or MPE positions correspond to 2, 3, or 4.
Perhaps MTE=4 and MPE=3. 
                                        -- Drew McDermott

ray@bcsaic.UUCP (Ray Allis) (03/22/89)

(Sorry this reply took so long, but our USENET feed disappeared for about
a week.)

> From: mike@arizona.edu (Mike Coffin)
> Message-ID: <9690@megaron.arizona.edu>
> Organization: U of Arizona CS Dept, Tucson
> 
> From article <10704@bcsaic.UUCP>, by ray@bcsaic.UUCP (Ray Allis):
> > Is there some evidence
> > that would cause you to re-inspect your conviction that the "Systems Reply"
> > is sufficient?   What part of the process by which I came to see the
> > inadequacy of the symbol processing approach can I explain more clearly?

I asked the question hoping to get something that would help me improve my
own arguments.  Thanks for the clear statement of your position.  

> Sure.  Convince me that no symbol-pushing engine can simulate pieces
> of my brain, if you make the pieces small enough.  My belief in the
> systems reply is based exactly on this:
> 
> 1) I have in my possession a system that seems to understand and think:
>    my brain.  (My wife might argue about that...)

O.k., I'm willing to give you the benefit of the doubt. :-)

> 2) The brain (and the rest of the body) is made up of physical parts:
>    electrons, atoms, molecules, cells, organs, etc.

Agreed.  No one but a theoretical physicist would quibble. :-)

> 3) I see no reason, in principle, that such parts can't be simulated
>    to any desired precision, given powerful enough computers.  Not
>    necessarily Turing machines; we may need random bits.

I don't either, except there may be *some* limit on precision.

> 4) Given such simulators, I see no reason, in principle, that I can't
>    begin replacing my biological parts with simulated parts.

The problem here is a shift in the meaning of the word "simulation" between
statement 3) and statement 4).

     (NOTE: I'm talking about DIGITAL COMPUTER simulations here.
      Wall panelling of "simulated" walnut is still a real material,
      though one might more precisely call it a poor duplicate.  And
      a "mental model" is (arguably) non-physical, though maybe it
      should be called a mental simulation.  A distinction must be
      made as to whether we are discussing a physical thing or a symbol system.)

I have no doubt we might be able to *duplicate* part or all of the brain, and
thereby (we believe) the mind, but "simulate" is not "duplicate".

+ You can *simulate* something: a *symbolic* process.
+ You can *model* something:  i.e. produce an abstracted form.  This can be
  either physical or symbolic.

     (Models are characteristically "reduced", i.e. smaller and less
      complex than the thing modelled.  They do not have all the
      properties of the original.)

+ You can *duplicate* something: a *physical* process.

At issue here is the distinction between simulation and implementation or
duplication.  You want to actually IMPLEMENT parts of your brain, but in
different media.  One case is a physical *duplication* of parts of your brain,
but substituting silicon or supercooled whatever for the biological components;
the other case is not physical but symbolic; that is, constructed from the
*relationships among the members of a set of symbols*.  Such a simulation is
disconnected from the physical universe. 

Not incidentally, your argument illustrates the very problem I'm trying to point
out: the *logic* is valid, but the argument's relevance to reality is determined
by the MEANING of the statements.  This is exactly the problem with symbol
systems.  Denotation and connotation (meaning) do not enter into the manipulation
of symbol systems.  (BTW, that's also their great advantage; I don't want to give
the impression I think symbolic reasoning should be discarded!)

>    Obviously I will need some chemical peripherals to interface the
>    two systems.

There's a big difference between a model airplane and a simulated airplane, or
a simulated leg and a prosthetic.  A simulated air crash won't kill anyone or
break a plane.  That's why pilots are trained in simulators.  A simulated leg
does an amputee little direct good; it's for the doctor's benefit to analyze
the situation.  The patient needs a *duplicate*, even a poor one.  Simulated
bells don't ring.  

In fact, you *can't* replace your real brain with a simulation.

That's why we have to build supercolliders.  Simulation doesn't provide all
the information we need.  In fact, digital computer simulation, like deductive
logic (which it is), doesn't provide anything which was not "built in" to it by
design.  If you try to pour 11 oz. of wine into a 10 oz. glass, physical reality
constrains the result.  Not so with a simulation.  The DESIGN of the (symbolic)
simulation constrains it.  THERE IS NO, repeat NO, "real" relationship between
a symbol and its denotation; the relationship exists only in a mind.  The
simulation (all simulations, in fact) is unaffected by the real world!  A simulated
wing is not affected by sunlight unless we specify such an effect in the design of
the simulation.  There is *NO* possibility of effects from surprise factors.
So, if we didn't think of everything, the simulation will not mimic reality.
(It fails in proportion to our ignorance!)
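
(A minimal sketch of that point, in Python, with everything in it
invented for illustration: the "glass" below overflows only if the
designer remembered to write the overflow rule in.  Physical reality
needs no such reminder.)

    # A simulated wine glass knows nothing about overflow unless the
    # designer builds that constraint in.
    class NaiveGlass:
        def __init__(self, capacity_oz):
            self.capacity_oz = capacity_oz
            self.contents_oz = 0.0

        def pour(self, amount_oz):
            # No overflow rule was specified, so the simulation happily
            # "holds" 11 oz. in a 10 oz. glass.
            self.contents_oz += amount_oz

    class ConstrainedGlass(NaiveGlass):
        def pour(self, amount_oz):
            # Overflow happens here only because we thought to model it.
            self.contents_oz = min(self.contents_oz + amount_oz,
                                   self.capacity_oz)

    naive, constrained = NaiveGlass(10.0), ConstrainedGlass(10.0)
    naive.pour(11.0)
    constrained.pour(11.0)
    print(naive.contents_oz, constrained.contents_oz)   # 11.0 10.0

The sunlight-on-the-wing example is the same story: any factor left out
of the design simply does not exist for the simulation.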

You'll agree we can't build an atom from symbols; symbols are not real things.
What's important about a symbol is its *meaning*, denotation and connotation.
Some symbols denote other symbols; the marks you're reading now denote sounds
which denote ideas.  But, at bottom, symbols are associated with non-symbols.
The non-symbols are what we compare to detect similarity and difference, to
discover analogy... to think.

> 5) Given that the simulations are accurate enough, I see no reason that
>    at some point in the process of replacement I will cease to
>    understand: e.g., that with 23.999% of my brain simulated, I understand,
>    but with 24.000% I cease understanding.
> -- 
> Mike Coffin				mike@arizona.edu
> Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
> Tucson, AZ  85721			(602)621-2858

Ray Allis  ray@atc.boeing.com  or bcsaic!ray

ray@bcsaic.UUCP (Ray Allis) (03/22/89)

> From: lee@uhccux.uhcc.hawaii.edu (Greg Lee)
> Subject: Re: Chinese Room Argument
> 
> From article <10704@bcsaic.UUCP>, by ray@bcsaic.UUCP (Ray Allis):
> # ...
> # Human "natural" languages are not symbol systems; nothing useful can be
> # done with only the symbols.  It's the meanings that are important.
> # Directly translating from the symbols of one language (e.g. English) to
> # the symbols of another (e.g. Chinese) without recourse to denotations
> # and connotations is nonsensical.  (This really isn't arguable, is it?)
> 
> It's not nonsensical at all.  Lots of people have had the experience
> of translating an article in a language they don't know with a
> dictionary.  Not fully, but for some types of articles you can
> get most of the gist.  For that matter, when you use a dictionary
> for your own language it's mostly just symbol-symbol correspondences
> you're finding out about.  Though dictionaries do commonly have
> some encyclopedic information, too.
 
I originally wrote a reply thanking you for pointing out that translation is
possible without recourse to denotations; purely symbol to symbol.  But as I
think about it more, I think your example doesn't count as translation. It's
just a simple substitution code using words rather than characters.  Meaning
is still attached by a perceiver outside the process.
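
To make "substitution code" concrete, here is a minimal sketch; the toy
dictionary entries are invented, and nothing in the program ever touches
what any of the words mean:

    # Word-for-word "translation" as pure symbol substitution from a
    # toy English-to-French table.  Unknown tokens pass through unchanged.
    TOY_DICTIONARY = {
        "the": "le", "dog": "chien", "eats": "mange", "bread": "pain",
    }

    def substitute(sentence):
        return " ".join(TOY_DICTIONARY.get(word, word)
                        for word in sentence.lower().split())

    print(substitute("The dog eats bread"))   # -> le chien mange pain

Note that the output is not even good French (a speaker would supply
"du pain"); whatever sense it makes is supplied by a perceiver outside
the process, which is exactly the point.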

When I use a dictionary, it's almost never a symbol-symbol correspondence,
unless I'm looking for plural formation.  The dictionary problem always has
been "Where do you start?"  "What's the word for the notion of caretaker,
when used in the context of a guardian of a corporation's assets?"  "What's
the word for the center thing in a flower, that is the receiver of pollen
for fertilization?"  You can't look up a symbol by its *meaning*.

I had thought that the example of translation between two languages would
make my point most emphatically.  I actually take a stronger position: that
natural language is not the symbol set alone, that we think with the
referents, denotations, connotations ... and the tokens are just pointers and
handles for the experience we are really communicating.

> # Thinking and understanding have to do with (non-symbolic) physical and
> # chemical events in our central nervous system (brain).  ...
> 
> That can hardly be so, since such events can be taken as symbols
> for the states of the world that evoke them.

Such events can be taken as *representations* of the states of the world
that evoke them, but symbolizing occurs only in minds as a subjective
experience.  

> 
> 		Greg, lee@uhccux.uhcc.hawaii.edu

Ray Allis  ray@atc.boeing.com   bcsaic!ray

dmocsny@uceng.UC.EDU (daniel mocsny) (03/25/89)

In article <10884@bcsaic.UUCP>, ray@bcsaic.UUCP (Ray Allis) writes:
> Simulated bells don't ring.  

Mechanical simulation is certainly a valid sound-synthesis technique.
It is not very widespread yet because it demands considerable
computation, but a simulated bell need not sound any different to your
ears than a real bell.

We know nothing of the "real" world other than what our sensory organs
encode onto our nerves and send to our brain. I believe that informed
opinion considers the bandwidth of our nervous system to be finite.
Therefore, in principle, no theoretical barriers forbid the
possibility of creating a simulation indistinguishable from "reality."
That we cannot yet do so means that our simulations are incomplete.
Must they always remain so?

I think when you mentioned the simulated bell, you were probably
imagining that the output would be in the form of a list of numbers
describing the average position and velocity over time of a set of
finite elements partitioning the bell. By itself, a list of numbers
doesn't "ring." But is this list of numbers a "simulation" of the
bell? I think we would more properly say the list of numbers simulates
a set of measurements one might make of a bell. If we fit the bell
with strain gauges, accelerometers, and transducers, then strike the
bell with a hammer, we can record a list of numbers describing the
bell's behavior. This list of numbers, and not the "ring," was the
goal of the simulation. If we want the list of numbers to ring, we
must extract from them an appropriate waveform and send it through a
loudspeaker.

We can already record the sound of the ringing bell directly, with
exceptional fidelity, yielding a digitized waveform. The waveform is
just a list of several hundred thousand finite-precision integers.
Creating those integers from a simulation is not an impossible
problem.
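
As a minimal sketch of "creating those integers" (a crude stand-in for a
real finite-element model, with partial frequencies, amplitudes, and
decay rates simply invented rather than measured from any bell):

    # Model a bell-like tone as a sum of exponentially decaying sine
    # partials, and write the resulting finite-precision integers out
    # as a 16-bit WAV file that a loudspeaker can turn back into sound.
    import math
    import struct
    import wave

    RATE = 44100                        # samples per second
    PARTIALS = [                        # (frequency Hz, amplitude, decay 1/s)
        (440.0, 1.00, 1.5),
        (587.0, 0.60, 2.0),
        (880.0, 0.40, 3.0),
        (1320.0, 0.25, 4.5),
    ]
    NORM = sum(a for _, a, _ in PARTIALS)

    def sample(t):
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, a, d in PARTIALS)
        return s / NORM                 # keep the value within [-1, 1]

    frames = b"".join(struct.pack("<h", int(32767 * sample(n / RATE)))
                      for n in range(3 * RATE))     # three seconds

    with wave.open("simulated_bell.wav", "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)             # 16-bit samples
        out.setframerate(RATE)
        out.writeframes(frames)

Play the file through a loudspeaker and the list of numbers does,
audibly, ring.  A measured modal model of a real bell would differ only
in where its numbers come from.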

After all, a computer, according to Stephen Wolfram, is a physical
system and it obeys physical laws. A simulation is nothing more than
instructing one physical system to behave something like another
physical system.

> Ray Allis  ray@atc.boeing.com  or bcsaic!ray

Dan Mocsny
dmocsny@uceng.uc.edu

MIY1@PSUVM.BITNET (N Bourbaki) (09/26/89)

In article <15157@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>In article <567@ariel.unm.edu> bill@wayback.unm.edu (william horne) writes:
>
>>This example is relevant to AI, because it questions the validity of the
>>Turing Test as a test of "understanding", as well as questioning the
>>legitimacy of rule based systems as models of intelligence.
>
>One serious flaw in the Chinese Room Problem is that it relies on the
>so-called 'conduit metaphor' (originally described by Michael Reddy in A.
>Ortony's _Metaphor_and_Thought_ Cambridge U. Press 1979).  That metaphor
>assumes that meaning is essentially contained in the linguistic expression.

>  The conduit metaphor
>is very powerful and useful as a means of illuminating the behavior of
>language, but, like all analogies, it breaks down.  Those who deal with real
>language to language translation know that there is no one-to-one match
>between expressions in one language and those in another.

But this difficulty would affect the native Chinese speaker and the
Chinese Room Demon equally.   That is one premise of Searle's
argument - the "mechanical" system is presumed to be just as competent
(not necessarily perfect) at translation as the "understanding" system.

Searle would have you believe that the "mechanical" system lacks
true understanding because it lacks "intentionality".  But this
begs the question, and leads immediately to the "other minds" problem.
Searle acknowledges this objection in _Minds, Brains, and Programs_,
but shrugs it off as only being worth "a short reply", basically that
cognitive states are not created equal, and that systems which exhibit
intentionality are more worthy of being described as "understanding"
than formal symbol-manipulating systems.

The gist of his conundrum is not to validate (or invalidate) any particular
linguistic theory, but to attack so-called "strong AI".  I don't find it a
very convincing argument.  It seems too much like vitalism -- that
there is something special about brains that cannot be duplicated by
artificial means.

N. Bourbaki

ellis@chips.sri.com (Michael Ellis) (09/30/89)

> N Bourbaki  >> Rick Wojcik

>>The conduit metaphor
>>is very powerful and useful as a means of illuminating the behavior of
>>language, but, like all analogies, it breaks down.  Those who deal with real
>>language to language translation know that there is no one-to-one match
>>between expressions in one language and those in another.

>But this difficulty would affect the native Chinese speaker and the
>Chinese Room Demon equally.   That is one premise of Searle's
>argument - the "mechanical" system is presumed to be just as competent
>(not necessarily perfect) at translation as the "understanding" system.

>Searle would have you believe that the "mechanical" system lacks
>true understanding because it lacks "intentionality".  

    As far as I can tell, Searle *is* a mechanist (I have also heard
    him called a "weird sort of dialectical materialist"). He
    believes that the mind is "caused by" the neurophysiological
    mechanism of the brain, and that eventually there will be a purely
    scientific (read "physicalistic and mechanistic" here) account of
    the mind.

    I noticed the scare quotes: "intentionality". If you aren't
    familiar with this concept, you might try reading Brentano, Husserl,
    Sartre, Dreyfus, Searle, Putnam and Dennett for different
    treatments. 

    I think it is fair to say that, for Searle, understanding
    presupposes intentionality practically by definition.

>But this begs the question, and leads immediately to the "other minds"
>problem.

    Maybe it is begging the question to assume that minds exist, that
    you got one, that I got one, that everybody's got one, that minds
    are the paradigm case of something that understands, and that it
    is the mind's ability to understand that we want to know more about.

    If you don't accept this, you and John Searle aren't talking the
    same language.

>Searle acknowledges this objection in _Minds, Brains, and Programs_,
>but shrugs it off as only being worth "a short reply", basically that
>cognitive states are not created equal, and that systems which exhibit
>intentionality are more worthy of being described as "understanding"
>than formal symbol-manipulating systems.

    If by "cognitive state" you mean something that is formally
    equivalent to the execution state of a computer, Searle is saying
    that such a thing is totally irrelevant to the question of mind.

    He does not deny that a computer might be conscious.

    He is saying that, if and when a conscious machine is built,
    its understanding would not be caused by virtue of running the
    right program.

>The gist of his conundrum is not to validate (or invalidate) any particular
>linguistic theory, but to attack so-called "strong AI".  I don't find it a
>very convincing argument.  It seems too much like vitalism -- that
>there is something special about brains that cannot be duplicated by
>artificial means.

    Searle might be wrong, but not for the reasons you offer, since
    you don't quite seem to be arguing against anything Searle said.

    As for attacks on theories of language, Searle says unkind
    things about Skinner, Chomsky/Fodor, and Quine.

    As to vitalism, Searle says unkind things about that, too. Are
    intentionalistic theories vitalism in disguise?  I do not think
    so, but I suppose there are many who might disagree. Vitalism was
    a philosophical and scientific dead end. Is the same true of
    intentionality? Well, the topic seems to be showing up more these
    days in scientifically minded Anglo-American thought. I am not
    competent to judge whether or not intentionality can be made into
    a rigorous concept, but I suspect that the question currently
    hinges on future developments in intensional logics.

-michael

David.Combs@p0.f30.n147.z1.FIDONET.ORG (David Combs) (01/16/90)

I feel the Chinese Room implies subtleties which aid whatever Searle
feels are valid arguments against Strong AI.  The Chinese Room
implies language as a prerequisite to thought and understanding.
I find it hard to "understand" what it was like not to possess the
concept of being able to ride a bicycle.  At one time I did understand
what it was like.  Similarly, at one time I did not possess the concept
of language -- and yet I maintain that I did possess intelligence and
that I was capable of understanding.  If I were placed in the Chinese
Room at birth, the Chinese Room would invariably have failed the
Turing test.  And as far as I know, it would be impossible to apply
any test to a newborn which would prove the existence of intelligence
to any degree of accuracy, at least not by direct interaction with
the infant.  Language is an add-on feature, a command shell, if you
will -- completely separate from the intelligence -- and so I
assert that testing for "understanding" or "thought" should exclude
the search for things not inherent to intelligence.
 
--Dsc--

--  
David Combs - via FidoNet node 1:147/10
UUCP: ...apple.com!uokmax!metnet!30.0!David.Combs
INTERNET: David.Combs@p0.f30.n147.z1.FIDONET.ORG