[comp.ai] THE MIND EXTENDS BEYOND THE SKIN

cam@edai.ed.ac.uk (Chris Malcolm) (03/09/89)

THE MIND EXTENDS BEYOND THE BRAIN AND BODY
------------------------------------------

I would like to introduce a new perspective to the Searle and
symbol-grounding argument, which I hope will clarify some of the
points which many of those disputing with Harnad fail to recognise,
due to having too much experience of computers, and too little with
real creatures, whether natural (biological) or artificial (robots).
It is a nice argument by Gregory Bateson from the heyday of
cybernetics to the effect that mind extends beyond the brain and even
beyond the boundary of the creature (Searle's argument is a corollary,
as is Harnad's "robotic functionalist" rebuttal). This is a theme
further developed by the biologist Maturana, and used by Winograd and
Flores, in the concept of "structural coupling" between a creature and
its environment.

I belong to that small but annoying company of roboticists who think
that these considerations have direct implications for the
architecture of even the simple and incompetent (relative to natural
creatures) kinds of robots we can build with today's technology. In
other words, there are some ways of building robots which are never
going to work, and the issues underlying Searle's argument, as so well
explicated by Harnad, contain useful pointers to this.

I will quote Bateson in detail, since the paper from which this nice
argument comes is not otherwise of general (comp.ai) interest, and
since it is clear that most comp.ai disputants never read papers
anyway, even when they are central to the dispute :-)

Gregory Bateson:

We can assert that _any_ ongoing ensemble of events and objects which
has the appropriate complexity of causal circuits and the appropriate
energy relations will surely show mental characteristics.  It will
_compare_, that is, be responsive to _difference_ (in addition to
being affected by the ordinary physical "causes" such as impact or
force).  It will "process information" and will inevitably be
self-corrective either towards homeostatic optima or toward the
maximisation of certain variables.

A "bit" of information is definable as a difference which makes a
difference.  Such a difference, as it travels and undergoes successive
transformation in a circuit, is an elementary idea.

But, most relevant in the present context, we know that no part of
such an internally interactive system can have unilateral control over
the remainder or over any other part.  The mental characteristics are
inherent or immanent in the ensemble as a _whole_.

Even in very simple self-corrective systems, this holistic character
is very evident. In the steam engine with a "governor", the very word
"governor" is a misnomer if it is taken to mean that this part of the
system has unilateral control. The governor is, essentially, a sense
organ or transducer which receives a transform of the _difference_
between the actual running speed of the engine and some ideal or
preferred speed. This sense organ transforms these differences into
differences in some efferent message, for example, to a fuel supply or
a brake. The behavior of the governor is determined, in other words,
by the behavior of the other parts of the system, and indirectly by
its own behavior at a previous time.

The holistic and mental character of the system is most clearly
demonstrated by this last fact, that the behavior of the governor
(and, indeed, of every part of the causal circuit) is partially
determined by its own previous behavior. Message material (i.e.
successive transforms of difference) must pass around this total
circuit, and the _time_ required for the message material to return to
the place from which it started is a basic characteristic of the whole
system. The behavior of the governor (or any other part of the
circuit) is thus in some degree determined not only by its own
immediate past, but by what it did at a time which precedes the
present by the interval necessary for the message to complete the
circuit.  There is thus a sort of determinative _memory_ in even the
simplest cybernetic circuit.

The stability of the system (i.e., whether it will act
self-correctively or oscillate or go into runaway) depends upon the
relation between the operational product of all the transformations of
difference around the circuit and upon this characteristic time. The
"governor" has no control over these factors. Even a human governor in
a social system is bound by the same limitations. He is controlled by
information from the system and must adapt his own actions to its time
characteristics and to the effects of his own past action.

Thus in no system which shows mental characteristics can any part have
unilateral control over the whole. In other words, __the mental
characteristics of the system are immanent, not in some part, but in
the system as a whole__.

The significance of this conclusion appears when we ask, "Can a
computer think?" or, "Is the mind in the brain?"  And the answer to
both questions will be negative unless the question is focussed upon
one of the few mental characteristics which are contained within the
computer or the brain.  A computer is self-corrective in regard to
some of its internal variables. It may, for example, include
thermometers or other sense organs which are affected by differences
in its working temperature, and the response of the sense organ to
these differences may affect the action of a fan which in turn
corrects the temperature. We may therefore say that the computer
exhibits mental characteristics in regard to its internal temperature.
But it would be incorrect to say that the main business of the
computer --- the transformation of input differences into output
differences [i.e. symbol crunching]--- is "a mental process".  The
computer is only an arc of a larger circuit which always includes a
man and an environment from which information is received and upon
which efferent messages from the computer have effect.  This total
system, or ensemble, may legitimately be said to show mental
characteristics. It operates by trial and error and has creative
character.

Similarly, we may say that "mind" is immanent in those circuits of the
brain which are complete within the brain. Or that mind is immanent in
circuits which are complete within the system, brain _plus_ body. Or,
finally, that mind is immanent in the larger system, man _plus_
environment.

In principle, if we desire to explain or understand the mental aspect
of any biological event, we must take into account the system - that
is, the network of _closed_ circuits, within which that biological
event is determined.  But when we seek to explain the behavior of a
man or any other organism, this "system" will usually _not_ have the
same limits as the "self" - as this term is commonly (and variously)
understood.

[....]

Consider a blind man with a stick.  Where does the blind man's self
begin?  At the tip of the stick? At the handle of the stick?  Or at
some point halfway up the stick?  These questions are nonsense, because
the stick is a pathway along which differences are transmitted under
transformation, so that to draw a delimiting line _across_ this
pathway is to cut off a part of the systemic circuit which determines
the blind man's locomotion.

[....]

The total self-corrective unit which processes information, or, as I
say, "thinks" and "acts" and "decides", is a _system_ whose boundaries
do not at all coincide with the boundaries either of the body or of
what is popularly called the "self" or "consciousness"; and it is
important to notice that there are _multiple_ differences between the
thinking system and the "self" as popularly conceived:

	1.  The system is not a transcendent entity as the "self" is
	commonly supposed to be.
	
	2.  The ideas are immanent in a network of causal pathways
	along which transforms of difference are conducted.  The
	"ideas" of the system are in all cases at least binary in
	structure.  They are not "impulses" but "information".
		
	3.  This network of pathways is not bounded with
	consciousness but extends to include the pathways of all
	unconscious mentation - both autonomic and repressed, neural
	and hormonal.
		
	4.  The network is not bounded by the skin but includes all
	external pathways along which information can travel.  It
	also includes those effective differences which are immanent
	in the "objects" of such information.  It includes the
	pathways of sound and light along which travel transforms of
	differences originally immanent in things and other people -
	and especially _in our own actions_.
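
To make Bateson's governor example concrete, here is a minimal Python
sketch of such a circuit --- my own toy illustration, not anything from
Bateson's paper, with the gain, delay, and speeds chosen arbitrarily.  The
correction applied to the engine now is a transform of a difference sensed
one circuit-time earlier, so the loop's present behaviour is partly
determined by its own past behaviour, and whether it self-corrects,
oscillates, or runs away is a property of the whole loop, not of the
"governor" part alone.

from collections import deque

def governor_loop(gain=0.2, circuit_time=3, steps=60, preferred_speed=100.0):
    """Toy engine-plus-governor circuit (illustrative only).

    The correction applied now was computed from the speed sensed
    circuit_time ticks ago, so the loop's present behaviour is partly
    determined by its own earlier behaviour: a determinative "memory"
    in even the simplest cybernetic circuit."""
    speed = 80.0                               # engine starts below the preferred speed
    in_transit = deque([0.0] * circuit_time)   # messages still travelling round the loop
    trace = []
    for _ in range(steps):
        # the governor responds to a *difference*, not to the speed as such
        in_transit.append(gain * (preferred_speed - speed))
        speed += in_transit.popleft()          # correction arrives one circuit-time late
        trace.append(speed)
    return trace

# Modest gain: the circuit as a whole settles near the preferred speed.
print([round(s, 1) for s in governor_loop()[-3:]])
# Larger gain, same delay: the very same circuit oscillates and runs away.
# Stability belongs to the loop as a whole, not to the "governor" part.
print([round(s, 1) for s in governor_loop(gain=0.8)[-3:]])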
		
REFERENCES

Gregory Bateson, "The Epistemology of Cybernetics", a section of his
paper "The Cybernetics of 'Self': A Theory of Alcoholism", Psychiatry,
Vol. 34, No. 1, pp. 1-18, 1971; reprinted in "Steps to an Ecology of
Mind", Ballantine Books, New York, 1972.

Stevan Harnad, "Minds, Machines and Searle", Journal of Experimental and
Theoretical Artificial Intelligence, Vol. 1, pp. 5-25, 1989.

H.R. Maturana and F.J. Varela, "The Tree of Knowledge: The Biological
Roots of Human Understanding", New Science Library, Shambhala, Boston,
Mass., 1988.

T. Winograd and F. Flores, "Understanding Computers and Cognition",
Ablex Publishing, Norwood, N.J., 1986.


---------------------------------------------------------------------------
Chris Malcolm, Department of Artificial Intelligence, Edinburgh University.
---------------------------------------------------------------------------

hassell@boulder.Colorado.EDU (Christopher Hassell) (03/11/89)

In article <305@edai.ed.ac.uk> cam@edai.ed.ac.uk (Chris Malcolm) writes:
# THE MIND EXTENDS BEYOND THE BRAIN AND BODY
# ------------------------------------------
# 
# I would like to introduce a new perspective to the Searle and
# symbol-grounding argument, which I hope will clarify some of the
# points which many of those disputing with Harnad fail to recognise,
# due to having too much experience of computers, and too little with
# real creatures, whether natural (biological) or artificial (robots).

It is debatable whether either would give a *pure* observation point.

# It is a nice argument by Gregory Bateson from the heyday of
# cybernetics to the effect that mind extends beyond the brain and even
# beyond the boundary of the creature (Searle's argument is a corollary,
# as is Harnad's "robotic functionalist" rebuttal). This is a theme
# further developed by the biologist Maturana, and used by Winograd and
# Flores, in the concept of "structural coupling" between a creature and
# its environment.
  
# I belong to that small but annoying company of roboticists who think
# that these considerations have direct implications for the
# architecture of even the simple and incompetent (relative to natural
# creatures) kinds of robots we can build with today's technology. In
# other words, there are some ways of building robots which are never
# going to work, and the issues underlying Searle's argument, as so well
# explicated by Harnad, contain useful pointers to this.

I still find predictions about what *won't* work to be misleading
and impotent at later stages.  Even proofs such as von Neumann's
program-tracing problem really only state that not ALL programs are
machine-traceable ... (without knowing ALL of their states).  This provides
no real benefit and has only been extrapolated to mean "NO programs..."

# I will quote Bateson in detail, since the paper from which this nice
# argument comes is not otherwise of general (comp.ai) interest, and
# since it is clear that most comp.ai disputants never read papers
# anyway, even when they are central to the dispute :-)

BUT of course.  What else is Usenet for except for us uneducated dopes
who come up with the nerve-racking answers :->?
  
# Gregory Bateson:
  
# We can assert that _any_ ongoing ensemble of events and objects which
# has the appropriate complexity of causal circuits and the appropriate
The measurement of complexity is far from the whole requirement, as is known.

# energy relations will surely show mental characteristics.  It will
# _compare_, that is, be responsive to _difference_ (in addition to
# being affected by the ordinary physical "causes" such as impact or
# force).  It will "process information" and will inevitably be
# self-corrective either towards homeostatic optima or toward the
# maximisation of certain variables.
  
  [..]
  
# But, most relevant in the present context, we know that no part of
# such an internally interactive system can have unilateral control over
# the remainder or over any other part.  The mental characteristics are
# inherent or immanent in the ensemble as a _whole_.

# Even in very simple self-corrective systems, this holistic character
# is very evident. In the steam engine with a "governor", the very word
# "governor" is a misnomer if it is taken to mean that this part of the
# system has unilateral control. The governor is, essentially, a sense

Control is entirely a misnomer in itself.  In a deterministic system
what controls what???  What can EVER "control" what else?  The governor
is "controlled" by speed, and from this there is never a "peak" in ANY
deterministically closed system.

# organ or transducer which receives a transform of the _difference_
# between the actual running speed of the engine and some ideal or
# preferred speed. This sense organ transforms these differences into

    [..]

# The holistic and mental character of the system is most clearly
# demonstrated by this last fact, that the behavior of the governor
# (and, indeed, of every part of the causal circuit) is partially
# determined by its own previous behavior. Message material (i.e.

    [..]  <memory exists by a feedback loop>

# "governor" has no control over these factors. Even a human governor in
# a social system is bound by the same limitations. He is controlled by
# information from the system and must adapt his own actions to its time
# characteristics and to the effects of his own past action.
  
# Thus in no system which shows mental characteristics can any part have
# unilateral control over the whole. In other words, __the mental
# characteristics of the system are immanent, not in some part, but in
# the system as a whole__.
  
# The significance of this conclusion appears when we ask, "Can a
# computer think?" or, "Is the mind in the brain?"  And the answer to
# both questions will be negative unless the question is focussed upon
# one of the few mental characteristics which are contained within the
# computer or the brain.  A computer is self-corrective in regard to

    [..]

# But it would be incorrect to say that the main business of the
# computer --- the transformation of input differences into output
# differences [i.e. symbol crunching]--- is "a mental process".  The
# computer is only an arc of a larger circuit which always includes a
# man and an environment from which information is received and upon
# which efferent messages from the computer have effect.  This total
# system, or ensemble, may legitimately be said to show mental
# characteristics. It operates by trial and error and has creative
# character.

Given what he goes on to say, you might want to ask what ANY definition of
a "mind" might be.  I find the limit of the skin to be quite appropriate,
because we MUST deal with it ourselves, and because we are trying to develop
a computer that can itself deal WITHIN such a framework.
  
  ...  <About the mind as a _closed_-loop entity only>

# Consider a blind man with a stick.  Where does the blind man's self
# begin?  At the tip of the stick? At the handle of the stick?  Or at
# some point halfway up the stick?  These questions are nonsense, because
# the stick is a pathway along which differences are transmitted under
# transformation, so that to draw a delimiting line _across_ this
# pathway is to cut off a part of the systemic circuit which determines
# the blind man's locomotion.
  
I have always considered that there must be a definition of EXTERNAL
interaction before one can speak of a "good" reaction-to-the-world or a
"bad" one.  Internal interaction is not, by itself, "thought"; thought is
properly defined as that which "molds" itself to the rest of the system,
the external system.

# [....]
  
# The total self-corrective unit which processes information, or, as I
# say, "thinks" and "acts" and "decides", is a _system_ whose boundaries
# do not at all coincide with the boundaries either of the body or of
# what is popularly called the "self" or "consciousness"; and it is
# important to notice that there are _multiple_ differences between the
# thinking system and the "self" as popularly conceived:

The popular definitions are tautological: they are defined to be what they are.
 ...

# 	3.  This network of pathways is not bounded with
# 	consciousness but extends to include the pathways of all
# 	unconscious mentation - both autonomic and repressed, neural
# 	and hormonal.

There definitely are such "intelligences", but because they are neither
selected against nor "taught" in any uniform or meaningful manner, they
cannot be said to "learn" in the best and most likely sense of the word.
  		
# 	4.  The network is not bounded by the skin but includes all
# 	external pathways along which information can travel.  It
# 	also includes those effective differences which are immanent
# 	in the "objects" of such information.  It includes the
# 	pathways of sound and light along which travel transforms of
# 	differences originally immanent in things and other people -
# 	and especially _in our own actions_.

There should ALWAYS be considered to be different abstractions of where
"smartness" can lie.  But though a mountain may be considered to "sense" its
upwelling and erosion, its growth and precipitation, no net loop of any kind
arises from this.  The information flow is either too non-interactive (as in
a mountain) or too random (as in mob psychology, with no complex internal
communication) to produce any sort of sensible self-modifying behavior.  The
self-communication just isn't there.

Man himself has a beautiful version of this, but only through the channels of
evolution and culture, as well as the practiced sciences.  (Though he impedes
them quite thoroughly sometimes :-/).
  		
# REFERENCES
  
   [excluded, but in previous article]
# ---------------------------------------------------------------------------
# Chris Malcolm, Department of Artificial Intelligence, Edinburgh University.
# ---------------------------------------------------------------------------

If I am what is considered to be "intelligence" <hopefully>, then there is
no need or means to place it "outside" of me.

### C>H> ###

bwk@mbunix.mitre.org (Barry W. Kort) (03/12/89)

In article <7337@boulder.Colorado.EDU> hassell@monarch.Colorado.EDU
(Christopher Hassell) writes:

 > [ From article <305@edai.ed.ac.uk> cam@edai.ed.ac.uk, Chris Malcolm
 > introduces Bateson's example of a speed governor on a steam
 > engine and comments on the function of a governor or controller. ]

 > Control is entirely a misnomer in itself.  In a deterministic system
 > what controls what???  What can EVER "control" what else?  The governor
 > is "controlled" by speed, and from this there is never a "peak" in ANY
 > deterministically closed system.

I think the term "regulator" more accurately captures the function
of the "speed governor".  In feedback control systems, we know that
the control loop has two essential components:  the Observer,
which monitors the current output state, and the Controller,
which applies an error-correcting adjustment to the system input.

Central to the operation of a feedback loop is a "goal state"
(ideal speed, say) around which perturbations and error-correcting
adjustments are computed.

So the self-regulating system controls itself, responding to the
random winds which would otherwise deflect the system from its
desired course.
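
To make these roles concrete, here is a minimal Python sketch of such a
loop (my own illustration; the class names, gain, goal value, and
disturbance range are just assumptions mirroring the description above).
The Observer reports the current output state, the Controller computes an
error-correcting adjustment toward the goal state, and the closed loop
holds the system near that goal despite the random "winds".

import random

class Observer:
    """Monitors the current output state."""
    def read(self, state):
        return state

class Controller:
    """Computes an error-correcting adjustment toward the goal state."""
    def __init__(self, goal, gain=0.5):
        self.goal = goal
        self.gain = gain

    def adjust(self, observed):
        return self.gain * (self.goal - observed)

def regulate(steps=50, goal=100.0, start=80.0):
    """Closed loop: observe the state, compute the correction, and apply it
    together with a random disturbance that would otherwise deflect the system."""
    observer, controller = Observer(), Controller(goal)
    state = start
    for _ in range(steps):
        wind = random.uniform(-2.0, 2.0)   # the random "winds"
        state += controller.adjust(observer.read(state)) + wind
    return state

random.seed(1)
print(round(regulate(), 1))   # stays close to the goal state despite the winds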

--Barry Kort

jbn@glacier.STANFORD.EDU (John B. Nagle) (03/14/89)

In article <7337@boulder.Colorado.EDU> hassell@monarch.Colorado.EDU (Christopher Hassell) writes:
>In article <305@edai.ed.ac.uk> cam@edai.ed.ac.uk (Chris Malcolm) writes:
># Gregory Bateson:
># We can assert that _any_ ongoing ensemble of events and objects which
># has the appropriate complexity of causal circuits and the appropriate
># energy relations will surely show mental characteristics.  

      Yes, he can definitely assert that.  One can assert anything.

      The argument that follows is basically over definitions, not
content.  Such arguments are inherently futile.

      Could we talk about something else now?

					John Nagle

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/14/89)

In article <7337@boulder.Colorado.EDU> hassell@monarch.Colorado.EDU
(Christopher Hassell) writes:
>In article <305@edai.ed.ac.uk> cam@edai.ed.ac.uk (Chris Malcolm) writes
(quoting Gregory Bateson):
>
># But it would be incorrect to say that the main business of the
># computer --- the transformation of input differences into output
># differences [i.e. symbol crunching]--- is "a mental process".  The
># computer is only an arc of a larger circuit which always includes a
># man and an environment from which information is received and upon
># which efferent messages from the computer have effect.  This total
># system, or ensemble, may legitimately be said to show mental
># characteristics. It operates by trial and error and has creative
># character.
>
>Given what he goes on to say, you might want to ask what ANY definition of
>a "mind" might be.  I find the limit of the skin to be quite appropriate,
>because we MUST deal with it ourselves, and because we are trying to develop
>a computer that can itself deal WITHIN such a framework.
>
Actually, this reminds me of the old puzzle about whether a tree which falls
when no one hears it makes a sound.  It would seem that what Bateson is saying
is that "mind" is a process which EMERGES from interactions "beyond the skin."
Thus, it goes one step beyond the "systems argument" which Searle attempts to
object to.  Searle wants to argue about whether or not a collection of simple
agents (call them neurons or symbol processors or whatever) can have properties
AS A COLLECTION which none of the components have as individual members of the
collection.  Anyone who has worked with systems knows that such properties do
indeed exist;  but both Searle and Harnad would have us believe that
"understanding" is too sacrosanct to be such a property.

Bateson takes another approach which seems a little less awestruck with the
need to distinguish man from machine.  He is saying, as I understand it, that
"mind" cannot emerge merely from the interactions of components within an
individual's body but, rather, from the interactions of that body and its
components with other exterior entities.  In other words, Bateson seems
to be buying into the "systems argument;"  but he wants to extend the
boundaries of the system beyond those considered by Searle . . . outside
the man in the room, outside the room itself, to include the room and anything
(human or otherwise) which might interact with it.

I, for one, happen to like this new spin on things.  I regard it as further
evidence of how careful we have to be when we approach a word like
"understand."  It seems as if Bateson is saying that asking whether
or not the man in the room understands Chinese is a silly question,
and asking whether or not the "system" of the room and all its interacting
contents understands Chinese is no better.  My reading of Bateson is that
he feels the only appropriate question to ask is whether or not there is
a manifestation of understanding in the interactions that a Chinese person
has with the room.  Since Searle seems to have deliberately set up his GEDANKEN
experiment to assure such a manifestation, there does not seem to be much room
(pun intended?) for argument.

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/15/89)

ON THE VARIETIES OF FUNCTIONALISM: WIDE AND NARROW, SYMBOLIC AND ROBOTIC

smoliar@vaxa.isi.edu (Stephen Smoliar) of
USC-Information Sciences Institute wrote:

" Searle wants to argue about whether or not a collection of simple
" agents (call them neurons or symbol processors or whatever) can have
" properties AS A COLLECTION which none of the components have as
" individual members of the collection. Anyone who has worked with
" systems know that such properties do indeed exist; but both Searle and
" Harnad would have us believe that "understanding" is too sacrosanct to
" be such a property.

It absolutely astonishes me how a simple point can persistently fail to
register if all of one's resources are committed to its contrary. I
will repeat, patiently, for the Nth time, that "call them neurons or
symbol processors or whatever" simply is not good enough, because it is
just the critical difference between the two that's at issue here!
Supposing one were talking about the critical materials needed to get
electrical conduction and someone said "call them metal or rubber or
whatever"! Or the critical function needed to perform work, and someone
said "call it energy or entropy or whatever." I could go on and on.

There is nothing sacrosanct about "understanding" (though humility
dictates that we acknowledge that the mind/body problem has been
proving to be a rather tough nut to crack). Both Searle and I believe
that it can be done by a physical SYSTEM (e.g., the brain). The point
is that we have been giving reasons (Searle, one reason, I several) why
a symbol-crunching system is the WRONG KIND OF SYSTEM to generate
understanding. What is the response to these arguments? Endless
repetition of the claim that the problem with Searle's Argument is that
he underestimates SYSTEMS -- as if he would deny that even neural systems
could understand.

This is just wasted words. If you want to rebut Searle, stick to what
SYMBOL SYSTEMS (implemented in symbol-crunchers) can and can't do, and
why. Don't hand-wave about "systems" in general. And while you're at it,
try to address directly the points I've been raising about the specific
limits of symbolic vs. nonsymbolic "systems."

" Bateson... [says instead] that "mind" cannot emerge merely from the
" interactions of components within an individuals body but, rather, from
" the interactions of that body and its components with other exterior
" entities.

I made no reply to the Bateson-related postings because they seemed too
metaphorical and remote: Bateson is an anthropologist, not a cognitive
modeler. But even philosophers (who are likewise not mind-modelers,
but often quite good at point out the silliness of some of the latter's
shenanigans), in struggling with the problem of meaning and understanding,
have proposed two kinds of "functionalism" (the position that mental
states are functional states, and that meaning is a functional relation
between words and states of affairs in the world). One form of functionalism
is "narrow" or "skin-and-in" functionalism, according to which meaning
is something that goes on between the ears of the candidate, and the
rest is just the causal history of the candidate in the world. Certain
"twin-earth" koans by Hilary Putnam and others have led some
philosophers to prefer a "wide" functionalism, according to which the
critical functional relation involved in meaning something includes a
wider "system" than the candidate himself: a wide functional "state"
includes the candidate plus objects and states of affairs in the
world.

One of the features of "wide" functionalism is that meaning something
or knowing something has little to do with knowing that you mean
something or know something. This is simply not pertinent to the subjective
state I have been emphasizing in these postings -- namely, the
subjective EXPERIENCE of understanding, meaning, or knowing something
-- which is surely as "narrow" a "skin-and-in" state as pain is. There
are no "twin-pain" problems. Hence I reject wide functionalism as
either flummery or changing the subject. The task of the mind modeler is
to produce a candidate that will pass the Total Turing Test (TTT).
The only functions such a mind modeler needs to worry about
are the internal ones. The rest is just a matter of generating the
right outputs to the inputs. (And once the mind modeler succeeds, we
must accept that the TTT-capable candidate understands -- or at least that
we have no better reason to believe it does than we do to believe that
any other person but ourself does -- and that we can never expect to be
the wiser.)

There are at least two "narrow" functionalisms available: I've dubbed
them "symbolic functionalism," according to which the critical internal
function for passing the TTT and having a mind is just symbol-crunching,
and "robotic functionalism," according to which the critical internal
functions will be nonsymbolic ones, in which "dedicated" symbolic
function is grounded bottom-up and is not isolable as an independent
functional module.

" this reminds me of the old puzzle about whether a tree which falls
" when no one hears it makes a sound...

As usual, those who use this old koan about the mind/body problem to
lampoon philosophers are really just displaying that they don't
understand it. ("Angels on a pin" is at least a coherent retort, though
it too usually signals that the complainant has been missing
something...)


REF:    Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

mike@arizona.edu (Mike Coffin) (03/16/89)

From article <Mar.15.10.06.52.1989.29883@elbereth.rutgers.edu>, by harnad@elbereth.rutgers.edu (Stevan Harnad):

> I will repeat, patiently, for the Nth time, that "call them neurons
> or symbol processors or whatever" simply is not good enough, because
> it is just the critical difference between the two that's at issue
> here!  [...]  Both Searle and I believe that it can be done by a
> physical SYSTEM (e.g., the brain). The point is that we have been
> giving reasons (Searle, one reason, I several) why a
> symbol-crunching system is the WRONG KIND OF SYSTEM to generate
> understanding. [...]

I will repeat, patiently, that Searle has given
NO reason why a symbol crunching system is the wrong kind.  He has
attempted a form of proof called reductio ad absurdum.  The problem is
that he never reaches an absurdity.  He starts with the premise that a
symbol-crunching system can appear to understand.  He derives the
"absurdity" that one piece of the symbol cruncher doesn't understand.  
But that is only absurd IF you've already decided that understanding
is not an emergent property --- i.e., IF you have accepted the
conclusion Searle is trying to prove.
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/16/89)

In article <18163@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>      The argument that follows is basically over definitions, not
>content.  Such arguments are inherently futile.

I see your type a lot, and I still can't work out where this idea comes
from that there is a content separable from our use of language.  I
presume that every research project you've worked on has been driven by
a set of futile definitions?  Don't be a jock about language.

A Scottish man has been charged with raping his wife.  His legal
advisors objected on the grounds that, in law, one cannot 'rape' one's
wife.  They have been overruled.

Strange how major social movements are based on something as futile as
arguments over language.

I find your comment exceptionally shallow and naive.  I think you
should expand so we can have the benefit of your wondrous epistemology.
Regale us with your hard and fast line between definition and content.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert