[comp.ai] The mind extends beyond the Chinese Room

throopw@agarn.dg.com (Wayne A. Throop) (03/14/89)

> harnad@elbereth.rutgers.edu (Stevan Harnad)
> [...Searle...] just said that there
> must be properties that the brain has that symbol-crunchers lack. He
> was perfectly right to say that, and his Chinese Room Argument was
> quite valid, as far as it went.

He may or may not have been "right", but his CR argument is nowhere
near conclusive, nor is it clearly valid.  To contend that the issue
of the validity of the CR argument is settled is, bluntly, absurd.

> cam@edai.ed.ac.uk (Chris Malcolm)
> It is a nice argument by Gregory Bateson from the heyday of
> cybernetics to the effect that mind extends beyond the brain and even
> beyond the boundary of the creature [...]
> further developed by the biologist Maturana, and used by Winograd and
> Flores, in the concept of "structural coupling" between a creature and
> its environment.

It is indeed an attractive notion to assume that the boundary of the
mind isn't at any one particular place.  But this notion doesn't
conflict with the position that Searle is, nevertheless, wrong, and
that his CR argument does not show what he purports it to show.
This is because the Chinese Room is as well connected to its
environment as some humans are, say a person with an extreme case of
ALS.  The CR effectively has memories of interaction with the world,
stored as part of its rules, and it has a very low-bandwidth
connection with the world of some sort, not different in principle
from that of a person possessing only limited pressure (touch)
sensation.
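
To make this concrete, here is a toy sketch of the point, in C.  The
rule table, the symbols, and the one-character-at-a-time channel are
all my own invention for illustration, nothing of Searle's: the table
stands in for the rulebook plus the stored memories of interaction,
and the stdin/stdout loop stands in for the narrow sense pathway.

/* Toy sketch of the CR as a rule-lookup loop.  The "rulebook"
   table stands in for the room's rules (and its stored memories
   of past interaction); the one-character-at-a-time channel
   stands in for a narrow sense pathway.  Symbols are invented. */
#include <stdio.h>

struct rule { int in; int out; };

static struct rule rulebook[] = {
    { 'a', 'x' },
    { 'b', 'y' },
    { 'c', 'z' },
};

int main(void)
{
    int c, i;
    int n = sizeof rulebook / sizeof rulebook[0];

    /* One symbol in, one symbol out: a very low bandwidth
       channel, but a channel to the world nonetheless. */
    while ((c = getchar()) != EOF) {
        for (i = 0; i < n; i++) {
            if (rulebook[i].in == c) {
                putchar(rulebook[i].out);
                break;
            }
        }
    }
    return 0;
}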

"But" I hear the objection "the CR's communication is coded and
symbolic, where the human's is direct, and only interpreted to be
symbolic."  But there is no genuine difference, because the CR's is no
more fundamentally symbolic than any other sense pathway.  Examined
closely, we can see that the human sensation of touch is "encoded"
every bit as much as the symbols passed into the CR, or we can see
that the CR's symbols are "just squobles on paper" which the CR
proceeds to interpret.
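
Again a toy sketch, with made-up thresholds and symbol names: a
"touch" pathway quantizes a continuous pressure reading into one of
a handful of discrete symbols before anything downstream ever sees
it.

/* Toy sketch: a "touch" pathway is encoded every bit as much as
   the Room's squiggles.  A continuous pressure value is quantized
   into one of a few discrete symbols; the thresholds and the
   symbols are made up for illustration. */
#include <stdio.h>

static char encode_touch(double pressure)
{
    if (pressure < 0.1) return '.';   /* no contact  */
    if (pressure < 0.5) return 'l';   /* light touch */
    if (pressure < 2.0) return 'f';   /* firm touch  */
    return 'P';                       /* pain        */
}

int main(void)
{
    double samples[] = { 0.0, 0.3, 1.5, 4.2 };
    int i;

    /* All that reaches the "mind" downstream is a symbol stream,
       no different in kind from slips of paper passed into a
       room. */
    for (i = 0; i < 4; i++)
        putchar(encode_touch(samples[i]));
    putchar('\n');
    return 0;
}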


Searle appeals to anthropomorphism to convince the listener that
there is no mind but Searle's own in the scenario.  But he has not
shown this to be so; he has merely appealed to emotion and intuition.
Of course, nobody has shown convincingly that in such a scenario
there IS such a mind, but Searle's claim to have ruled one out is
just not so.


Also, the familiar point that a simulation of a storm in a weather
computer is no more a real storm than a simulation of a mind in a
computer is a real mind is misleading.  The simulated mind in the
computer is connected to the real world by that computer's I/O (and
by the "memory" embodied in the rules of the simulation) in ways
that the simulation of the storm is not.  If the simulation of the
storm actually could interact with the world to produce tides,
rainfall, winds, and so on and on, then the claim that this storm
which toppled the tree onto my house was "just a simulation" would
be moot, just as the contention that a mind that passed the LTT with
every external evidence of understanding nevertheless didn't
understand because it was "just a simulation" would be moot.


( Note, by the way, that Searleists tend to say "just" and "mere" and
  "only" and so on and on to refer to deterministic symbolic processing.
  In my opinion, this is an attempt (either conscious or unconscious) to
  bias the listener by using loaded language.  )

--
"This is just the sort of thing that people never believe."
                              --- Baron Munchausen
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw