[comp.ai] Sci. American AI debate: No Contest

harnad@phoenix.Princeton.EDU (Stevan Harnad) (01/05/90)

It's never clear to me how seriously to take these comp.ai discussions,
but if there really is anyone out there who is interested in a deeper
analysis of the recent Scientific American article by the Churchlands,
here's one. These were comments on an earlier draft, but no changes seem
to have been made, so they apply to the published version too. For the
record, although his Sci. Am. paper was not the most cogent version
of Searle's position, I don't think the Churchland rebuttals work,
so Searle's Argument comes out on top again.
-----
To:    P & P Churchland
From:  Stevan Harnad
         
    THINKING THE UNTHINKABLE, OR, RUN THAT BY ME AGAIN, FASTER

Hi Pattie and Paul:

Thanks for sending me your Scientific American draft. I've seen
Searle's companion draft too. Here are a few comments:

(1) Unfortunately, in suggesting that Searle is the one who is begging
the question or assuming the truth of the hypothesis that syntax alone
can constitute semantics, you seem to have the logic reversed: In fact,
Searle's the one who's TESTING that hypothesis and answering that question;
and the Chinese-Room thought-experiment shows that the hypothesis fails
the test and the answer is no! It is the proponents of the "systems
reply" -- which merely amounts to a reiteration of the hypothesis in
the face of Searle's negative evidence -- who are begging the question.

By the way, in endorsing the systems reply in principle, as you do
(apparently only because of its counterintuitiveness, and the fact that
other counterintuitive things have, in the history of science, turned
out to be correct after all), you leave out Searle's very apt RESPONSE
to the counterintuitive idea that the "system" consisting of him plus
the room and its contents might still be understanding even if he
himself is not: He memorizes the rules, and henceforth he IS all there
is to the system, yet still he doesn't understand Chinese. (And I hope
you won't rejoin with the naive hackers' multiple-personality gambit at
this point, which is CLEARLY wanting to save the original hypothesis at
any counterfactual price: There is no reason whatsoever to believe that
simply memorizing a bunch of symbols and symbol manipulation rules and
then executing them is one of the etiologies of multiple personality
disorder!)

As to the speed factor: Yes, that is one last holdout, if it is in
fact true that Searle could never pass the Chinese TT in real time. But
that's at the price of being prepared to believe that the difference
between having and not having a mind is purely a function of speed!
The phenomenon of phase transitions in physics notwithstanding, that
sounds like a fast one, too fast for me to swallow, at any rate.
Besides, once he's memorized the rules (PLEASE don't parallel the speed
argument with a capacity argument too!), it's not clear that Searle
could not manage a good bit of the symbol manipulation in real time
anyway. The whole point of this exercise, after all, is to show that
thinking can't be just symbol manipulation -- at ANY speed.

I don't know about you, but I've never been at all happy with
naive hackers' claims that all there is to mind is the usual stuff, but
(1) faster, (2) bigger, and (3) more "complex." I think the real
punchline's going to turn out to be a good bit more substantive than
this hand-waving about just (a lot) more of the same...

(2) All your examples about the groundlessness of prior skepticism
in the face of physical theories of sound, light and life were (perhaps
unknowingly) parasitic on subjectivity. Only now, in mind-modeling, is
the same old problem finally being confronted on its home turf. But all
prior bets are off, since those were all away-games. The buck, as Tom
Nagel notes, stops with qualia. I'll show this specifically with the
example below.

(3) It is ironic that your example of light = oscillating
electromagnetic radiation should also hinge on speed (frequency). You
say that Searle, in a parallel "simulation," would be waving the magnet
much too slowly, and would then unjustly proclaim "Luminous room, my
foot, Mr. Maxwell. It's pitch black in here!" But here you see how all
these supposedly analogous forms of skepticism are actually parasitic on
subjectivity (with shades of Locke's primary and secondary qualities):
Because of course the only thing missing is the VISIBILITY of light at
the slow frequency. It made perfect sense, and could have been pointed
out all along, that, if fast electromagnetic oscillations really are
light, then it might only be visible to the eye in some of its
frequency ranges, and invisible but detectable by other instruments in
other frequency ranges.

That story is perfectly tenable, and in no way analogous to Searle's
Argument, because it is objective: It's not "what it's like to see
light" (a subjective, "secondary" quality) that the Maxwellian
equation of light with EM radiation is trying to explain, it's the
objective physical property that, among other things, happens to be the
normal cause of the subjective quality of seeing light. The mistake
the sceptics were making is clearly NOT the same as Searle's. They
were denying an objective-to-objective equation: One set of objective
physical properties (among them the power to cause us to see light) was
simply being shown to be the same as another set of objective physical
properties. No one was trying to equate THE SUBJECTIVE QUALITY OF LIGHT
ITSELF with something objective. (Not until lately, that is.)

So, whereas concerns about subjectivity might indeed have been the
source of the earlier scepticism, all that scepticism was simply
misplaced. It was much ado about nothing. Ditto with sound and life:
Subjectivity, though lurking in each case, was really never at issue.
As Nagel puts it, one set of appearances was simply being replaced by
(or eliminated in favor of) another, in the new view, however
surprising the new appearances might have appeared. But no one was
really trying to replace APPEARANCES THEMSELVES by something else, by
the stuff of which all appearances would then allegedly be made: THAT
would have been a harder nut to crack.

But that's the one modern mind-modeling is up against, and Nagel is
right that this is another ball game altogether (my "home-game" analogy
was an understatement -- and the metaphors are as mixed as nuts by
now...). So no analogies carry over. It's not that the rules have
changed. It's just that nothing remotely like this has ever come up
before. So, in particular, you are NOT entitled to help yourself to the
speed analogy in trying to refute Searle's Chinese Room Argument.
Because whereas it would have been no problem at all for Maxwell to
"bite the bullet" and claim that very slow EM oscillation was still
light, only it wasn't visible, one CANNOT say that very slow
symbol-manipulation is still thinking only it's... what?
"Unthinkable?" You took the words out of my mouth.

(4) Your point about the immunity of parallel processes to the Chinese
Room Argument (unlike similar points about speed, capacity or
complexity) has somewhat more prima facie force because it really is
based on something Searle can't take over all by himself, the way he
could with symbol manipulation. On the face of it, Searle couldn't BE
the parallel system that was passing the TT in Chinese in the same way
he could BE the serial symbol system, so he could not take the next
step and show that he would not be understanding Chinese if he were
(and hence that neither would the system he was duplicating).

This is why I suggested to Searle that his "Chinese Gym" Argument fails
to have the force of his original Chinese Room Argument, and is indeed
vulnerable to a "systems reply." It's also why I suggested the
"three-room" argument to Searle, which is completely in the
spirit of the original Chinese Room Argument and puts the burden of
evidence or argument on the essential parallelist, where it belongs.
Here is the critical excerpt from my comments on an earlier draft by
Searle:

> So I respectfully recommend that you jettison the Chinese Gym Argument
> and instead deal with connectionism by turning the Chinese Room
> Argument on its head, as follows. Suppose there are three rooms:
> 
> (1) In one there is a real Net (implemented as physical units, with
> real physical links, real excitatory/inhibitory interconnections,
> real parallel distributed processing, real backpropping, etc.) that
> could pass the Turing Test in Chinese (Chinese symbols in, Chinese
> symbols out).
> 
> (2) In the second there is a computer simulation of (1) that likewise
> passes the TT in Chinese.
> 
> (3) In the third is Searle, performing ALL the functions of (2),
> likewise passing the Chinese TT (while still not understanding, of
> course).
> 
> Now the connectionists have only two choices:
> 
> Either they must claim that all three understand Chinese (in which case
> they are back up against the old Chinese Room Argument), or the
> essentialists among them will have to claim that (1) understands but (2)
> and (3) do not -- but without being able to give any functional reason 
> whatsoever why.

So this is what parallelism is up against. I also went on to query the
Connectionists on this, as follows (and received multiple replies, most
along the lines of the 1st two, which I include below):

> From:    Stevan Harnad
> To:      connectionists@cs.cmu.edu
> Subject: Parallelism, Real vs. Simulated: A Query
> 
>   "I have a simple question: What capabilities of PDP systems do and
>   do not depend on the net's actually being implemented in parallel,
>   rather than just being serially simulated? Is it only speed and
>   capacity parameters, or something more?"
> ----------------------------------------------------------------
> 
> (1) From: skrzypek@CS.UCLA.EDU (Dr. Josef Skrzypek)
> Cc: connectionists@cs.cmu.edu
> 
> Good (and dangerous) question. Applicable to Neural Nets in general
> and not only to PDP.
> 
> It appears that you can simulate anything that you wish. In principle
> you trade computation in space for computation in time. If you can
> make your time-slices small enough and complete all of the necessary
> computation within each slice, there seems to be no reason to have
> neural networks. In reality, simulation of synchronized, temporal
> events taking place in a 3D network that allows for feedback pathways
> is rather cumbersome.
> 
> (2) From: Michael Witbrock <mjw@cs.cmu.edu>
> 
> I believe that none of the properties depend on parallel implementation.
> 
> There is a proof of the formal equivalence of continuous and discrete
> finite state automata, which I believe could be transformed to prove the
> formal equivalence of parallel and serially simulated pdp models.

Except for some equivocal stuff on "asynchronous" vs "synchronous"
processes, about which some claimed one thing and others claimed the
opposite, most respondents agreed that the parallel and serial
implementations were equivalent. Hence it is not true, as you write,
that parallel systems are "not threatened by [Searle's] Chinese Room
argument." They are, although someone may still come up with a
plausible reason why, although the computational difference is
nonexistent, the implementational difference is an essential one.
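
To make the equivalence claim concrete, here is a minimal sketch in
Python (a toy, entirely made-up net; nothing from the replies above) of
how a serial loop reproduces a synchronous "parallel" update exactly,
simply by double-buffering the activations so that every unit computes
from the old state:

import math

def serial_step(weights, state):
    # One synchronous update of a fully connected net, computed serially.
    # weights[i][j] is the connection from unit j to unit i.  Because
    # new_state is built from the old state only, the result is identical
    # to what simultaneously updating (parallel) hardware would compute.
    new_state = []
    for row in weights:
        net_input = sum(w * s for w, s in zip(row, state))
        new_state.append(1.0 / (1.0 + math.exp(-net_input)))  # logistic unit
    return new_state

# Three units, arbitrary weights, a few "time slices" of serial simulation.
W = [[0.0, 0.5, -0.3],
     [0.2, 0.0, 0.7],
     [-0.6, 0.4, 0.0]]
s = [0.1, 0.9, 0.5]
for _ in range(3):
    s = serial_step(W, s)
print(s)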

And that may, logically speaking, turn out to be (one of the)
answer(s) to the question of which of the "causal powers" of the brain
are actually relevant (and necessary/sufficient) for producing a mind.
I think Searle's Argument (and my Symbol Grounding Problem) have
effectively put pure symbol manipulation out of contention. I don't
think "the same, only faster, bigger, or more complex" holds much hope
either. And parallelism stands a chance only if someone can show what
it is about its implementation in that form, rather than in fast serial
symbolic form, that is critical. My own favored candidate for the "relevant"
property, however, namely, sensory grounding, and sensory transduction
in particular, has the virtue of not only being, like parallelism,
invulnerable to the Chinese Room Argument (as I showed in "Minds, Machines
and Searle"), but also being a natural candidate for a solution to the
Symbol Grounding Problem, thereby, unlike parallelism, wearing the
reason WHY it's critical on its sleeve, so to speak.

(5) Finally, you write "We, and Searle, reject the Turing Test as a
sufficient condition for conscious intelligence." In this I must disagree
with both (or rather, all three) of you: The logic goes like this. So
far, only pure symbol crunching has been disqualified as a candidate
for being the sufficient condition for having a mind. But don't forget
that it was only a conjecture (and in my mind always a counterfactual
one) that the standard (language-only) Turing Test (only symbols in,
and symbols out), the TT, could be successfully passed by a pure symbol
cruncher. Searle's argument shows that IF the TT could be passed by
symbol crunching alone, THEN, because of the Chinese Room Argument, it
would not have a mind, and hence the TT is to be rejected.

Another possibility remains, however, which is that it is impossible to
successfully pass the TT with symbol crunching alone. The truth may
instead be that any candidate that could pass the TT would already have
to have and draw upon the causal power to pass the TTT, the Total
Turing Test, which includes all of our robotic, sensorimotor capacities
in the real world of objects. Now the TTT necessarily depends on
transduction, which is naturally and transparently immune to Searle's
Chinese Room Argument. Hence there is no reason to reject the TTT
(indeed, I would argue, there's no alternative to the TTT, which,
perhaps expanded to include neural function -- the "TTTT"? -- is simply
equivalent to empiricism!). And if a necessary condition for passing the
TT is the causal power to pass the TTT, then there's really no reason
left for rejecting the TT either.

Stevan Harnad

References:

Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
and Theoretical Artificial Intelligence 1:

Harnad, S. (1990) The Symbol Grounding Problem. Physica D (In Press)

Preprints are available by email.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) (01/06/90)

In response to harnad@phoenix.Princeton.EDU (Stevan Harnad)

I think you (and Searle) need to define EXACTLY what you mean by symbol
manipulation. The functional equivalence (other than speed and
efficiency) of parallel and single-processor architectures seems to my
limited knowledge to truly exist. However, I consider the functionality
of parallel systems to be very different from symbol manipulation.
Furthermore, it seems to me that even single-processor systems can do
more than just symbol manipulation.

If you define symbol manipulation to be exactly what single-processor
systems can do, then you have trivially shown that if symbol
manipulation itself cannot do it, then single-processor systems cannot.
But the Chinese room argument which accomplishes this seems to use
symbol manipulation in a way which makes it much narrower than the full
range of things that can be done with a single-processor system.

Which brings us to the point I made a few posts back, I think to
this group. The point is that most of the Searle article rests on
definitions. Here the one in question is symbol manipulation. The more
critical one, I think, is understanding. Also, intelligence is a key
one. The whole conclusion of the argument is that the system can't
'understand' or won't be 'intelligent.' But these things are never
defined.

In fact, Searle seems to have his definitions already in his head and
they seem to be defined precisely so that his argument is true, which
makes the argument trivial. You seem to have decided the fate already
too:

In article <12679@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
>I don't know about you, but I've never been at all happy with
>naive hackers' claims that all there is to mind is the usual stuff, but
>(1) faster, (2) bigger, and (3) more "complex." I think the real
>punchline's going to turn out to be a good bit more substantive than
>this hand-waving about just (a lot) more of the same...

The reason everyone disagrees about the Searle article(s)/ideas is that
he doesn't provide any definitions and so people must supply their own.
Those that hold to the "naive hackers' claims" have definitions in their
heads which cause them to disagree with Searle. Those who share your
views have definitions in their heads which cause them to agree with
Searle. Thus, I don't think Searle's arguments actually changed anyone's
mind about anything.

-Karl		kpfleger@phoenix.princeton.edu
		kpfleger@pucc (bitnet)

daryl@oravax.UUCP (Steven Daryl McCullough) (01/06/90)

In article <12679@phoenix.Princeton.EDU>, harnad@phoenix.Princeton.EDU
(Stevan Harnad) writes:

> [...stuff deleted...]
> By the way, in endorsing the systems reply in principle, as you do
> (apparently only because of its counterintuitiveness, and the fact that
> other counterintuitive things have, in the history of science, turned
> out to be correct after all), you leave out Searle's very apt RESPONSE
> to the counterintuitive idea that the "system" consisting of him plus
> the room and its contents might still be understanding even if he
> himself is not: He memorizes the rules, and henceforth he IS all there
> is to the system, yet still he doesn't understand Chinese.

> (And I hope
> you won't rejoin with the naive hackers' multiple-personality gambit at
> this point, which is CLEARLY wanting to save the original hypothesis at
> any counterfactual price: There is no reason whatsoever to believe that
> simply memorizing a bunch of symbols and symbol manipulation rules and
> then executing them is one of the etiologies of multiple personality
> disorder!)
> [...stuff deleted...]

Stevan, I'm not surprised that you hope they don't bring up the
"multiple-personality gambit", since (it seems to me) it makes a
shambles of Searle's argument.

Consider my version of the strong AI claim:
A mind is a process, produced by executing a program.

It is obvious that a single physical system (a computer) can execute
more than one program simultaneously, so a consequence of my version
of strong AI is that a physical system, such as a brain, can
simultaneously be associated with more than one mind. If Searle
memorizes the Chinese room rules, then according to strong AI, he is
producing a Chinese mind, in addition to his "normal" mind. When you ask
him if he understands Chinese, you receive the correct answer from his
normal mind; the normal Searle *doesn't* understand Chinese. That doesn't
prove that there is not a second mind associated with the physical system
of Searle plus rules which does understand Chinese.
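
As a toy illustration of that claim (a made-up example, nothing from
Searle or the Churchlands): a single serial substrate can host two
independent computations whose states are completely insulated from one
another, so querying one tells you nothing about the other.

def make_counter():
    # "Program A": keeps its own private state, counting up.
    state = {"n": 0}
    def step():
        state["n"] += 1
    return state, step

def make_doubler():
    # "Program B": a different program with its own private state.
    state = {"x": 1}
    def step():
        state["x"] *= 2
    return state, step

prog_a_state, prog_a_step = make_counter()
prog_b_state, prog_b_step = make_doubler()

# One physical process interleaves the steps of both programs.
for _ in range(5):
    prog_a_step()
    prog_b_step()

print("program A:", prog_a_state)  # knows nothing of B's computation
print("program B:", prog_b_state)  # knows nothing of A's computation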

The argument that the "multiple-personality" (or, rather,
multiple-mind) idea is naive is no argument. If the articles by Searle
and the Churchlands in Scientific American represent the best and
wisest thoughts on the subject, then the whole area has simply not
progressed beyond the naive stage.

The argument that
> There is no reason whatsoever to believe that
> simply memorizing a bunch of symbols and symbol manipulation rules and
> then executing them is one of the etiologies of multiple personality
> disorder!)

carries no weight with respect to Searle's position.
Searle's claim in the Scientific American article is that the strong
AI position could be proven false by his Chinese room thought
experiment. To attack Searle's position it is not necessary to give
any evidence in favor of strong AI; it is only necessary to show that
Searle's argument has holes in it, that he has not proved strong AI to be
impossible. In order for Searle to claim that he has "disproved"
strong AI, he either must come up with a set of definitions and axioms
that everyone can agree with, and show that the falsity of strong AI
is a logical consequence of those definitions and axioms, or he must
by exhaustion answer every objection (naive or not, supported by
evidence or not) to his "proof".

Daryl McCullough

mike@cs.arizona.edu (Mike Coffin) (01/06/90)

From article <12679@phoenix.Princeton.EDU>, by harnad@phoenix.Princeton.EDU (Stevan Harnad):
> (1) Unfortunately, in suggesting that Searle is the one who is begging
> the question or assuming the truth of the hypothesis that syntax alone
> can constitute semantics you seem to have the logic reversed: In fact,
> Searle's the one who's TESTING that hypothesis and answering that question;
> and the Chinese-Room thought-experiment shows that the hypothesis fails
> the test and the answer is no! It is the proponents of the "systems
> reply" -- which merely amounts to a reiteration of the hypothesis in
> the face of Searle's negative evidence -- who are begging the question.

Of course, we naive hackers don't see any negative evidence.  Searle
tests the hypothesis and finds that he must either reject it or adopt a
point of view he finds counterintuitive.  We don't find that too
surprising because he has set up a thought experiment that is well
beyond the bounds of human experience and thus not likely to yield
intuitive results.

Moreover, I (at least) am not arguing that symbol pushing can produce
intelligence.  I merely argue that Searle has not proven that it
can't.  This is very different. (My own view is that we don't know
enough about intelligence to have the slightest idea what might be
necessary to create it.)

> By the way, in endorsing the systems reply in principle, as you do
> (apparently only because of its counterintuitiveness, and the fact that
> other counterintuitive things have, in the history of science, turned
> out to be correct after all), you leave out Searle's very apt RESPONSE
> to the counterintuitive idea that the "system" consisting of him plus
> the room and its contents might still be understanding even if he
> himself is not: He memorizes the rules, and henceforth he IS all there
> is to the system, yet still he doesn't understand Chinese. (And I hope
> you won't rejoin with the naive hackers' multiple-personality gambit at
> this point, which is CLEARLY wanting to save the original hypothesis at
> any counterfactual price: There is no reason whatsoever to believe that
> simply memorizing a bunch of symbols and symbol manipulation rules and
> then executing them is one of the etiologies of multiple personality
> disorder!)

As I have pointed out before, not all of us think the systems response
is counterintuitive.  I think that complicated systems almost always
display surprising properties that none of their components do.  Our
intuition is easily fooled when we apply it to subjects well outside
everyday experience.

Also, we don't postulate a multiple personality in the sense of human
pathology.  We postulate that Searle, after memorizing an absolutely
astonishing number of complicated rules and executing them with
immense speed and precision---a situation *far* outside current
experience---might exhibit quite amazing and unexpected properties.
In fact, he might simultaneously claim to be Searle, blindly executing
rules, and also exhibit an entirely different super-personality.
"Super" in the sense of being above, not cohabitant.  This surprises
me no more than the fact that the computer on my desk, which consists
of little more than a power supply and some flip-flops, seems, when
the right incantations are given, to exhibit behavior that follows the
semantics of any of dozens of computer languages.  The pieces are
blindly following the rules of physics; the whole seems almost to
have a mind of its own. 

-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

gilham@csl.sri.com (Fred Gilham) (01/06/90)

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) writes:
------
Which brings us to the point I made a few posts back, I think to
this group. The point is that most of the Searle article rests on
definitions. Here the one in question is symbol manipulation. The more
critical one, I think, is understanding. Also, intelligence is a key
one. The whole conclusion of the argument is that the system can't
'understand' or won't be 'intelligent.' But these things are never
defined.
------

I interpret Searle's argument as follows:

There is something we (humans) do, called understanding.  We usually
know when we do it, and we can often say when others do or do not
understand.  We can also give examples of, say, a computer system
``not understanding'', as when it bills someone for $0.00 and causes
increasingly threatening letters to be sent demanding payment.

So we then assume that we have a system based on manipulating and
transforming what we will loosely call symbols, or what I would call
physical patterns.  Let's say (counterfactually) that this system can
take written input from a Chinese speaker and then produce written output
that the speaker would interpret as being an intelligent reply.  If this
were a program running on a computer, it would pass the Turing test.

Now if someone emulates this system by hand, manipulating the patterns
in an equivalent way, this person can legitimately claim that he
doesn't understand Chinese, yet he can follow the rules and produce
the desired results.  Assuming we are using books and paper, the only
physical apparatus existing is the books, paper, and the person who
doesn't understand Chinese.  The question is, what understands
Chinese?  Searle claims that there is nothing there to understand
Chinese.

If one claims that the ``system'' somehow understands Chinese, Searle
says that you can do away with the books and paper, and memorize the
rules.  The person doing the manipulation will still not understand
Chinese, and there is no other physical entity left to be a candidate
for understanding.

From this, Searle concludes that the Chinese room does not understand,
that is, it does not share in the experience that we are talking about
when we use the word understand.

Thus, I don't think it is necessary to pin down the word `understand'
any more carefully.

I would also say that a computer doesn't manipulate symbols.  It
transforms one physical state or set of physical patterns into another
physical state.  As far as I know, it does not do anything other than
this.  We (thinking humans) interpret those physical states as symbols
in an arbitrary fashion.  We then go on to derive meaning from those
symbols.  Thus we interpret the computer's behavior to be pertinent to
the process we call thinking.  I see little difference between this
and the idea that a saw knows about architecture because it performs
functions we find useful when building buildings.

-Fred Gilham    gilham@csl.sri.com

jones@fornax.UUCP (John Jones) (01/06/90)

Searle's Chinese Room argument appears to be based on sleight-of-hand:
we are presented with a system, containing a man and a book of rules, 
that produces appropriate responses to an arbitrarily large range of
questions.
Searle invites us to locate where the 'understanding' takes place.  Naturally we 
look first at the man -- whoever heard of a book understanding anything --
but the man assures us, through introspection, that he understands neither the
questions nor the answers.  Searle concludes that no understanding is taking
place.

One of the beauties of this argument is that it allows Searle to avoid any
definition of 'understanding'.  We know what it is like to understand something,
so introspection provides the sole and universal test for the presence of
understanding.  If we are faced with an entity of a new type, we have simply
to imagine ourselves in the place of that entity and introspect.

This criterion works well enough when the imaginative leap is small.
We can imagine ourselves into the place of the man in the Chinese room
easily enough, and we're quite confident in saying he doesn't understand.
The criterion discourages us from looking at the book -- we have no idea how
to put ourselves in the place of a book -- so we pass over the remarkable part 
of the system.  For this book is unlike any book that exists in the world.
It contains everything an intelligent adult Chinese knows about himself and
the world -- so it's a big book, perhaps comparable in size with the Library
of Congress -- but, far more remarkable, it contains rules that will allow
the reader to find the knowledge needed to answer an uninterpreted string
of signs and to compose that knowledge into another uninterpreted string
of appropriate length and form.  We have no idea what such rules might look
like.  And the only tool Searle offers us for evaluating the properties of
this book is introspection.

To make the adequacy of introspection more plausible, Searle has the man 
memorise the book.  Now we have a man who has memorised something he
doesn't understand.  We can imagine ourselves into his position easily enough,
we introspect, and, sure enough, we don't understand.

Other critics have pointed out that the man lacks the memory capacity to memorise
the book, and that, even if he were to memorise it, he could not respond to 
questions in real time.  Both these points are true, but irrelevant.  The reason
Searle's argument breaks down is that the man-plus-book is an entity unlike any
we have ever encountered, and we can no more imagine ourselves into his position 
than we can imagine ourselves to be a bat.

The inadequacy of imagination-plus-introspection as a criterion for understanding
can be demonstrated with less exotic examples than the Chinese room.
Consider one of the split-brain patients described by Sperry.
Asked to describe an object that he is allowed to handle but not to see,
the patient can write down that it is a pipe, but states verbally that he
has no idea what it is.  This behaviour is no more extraordinary than might be
expected from the man-plus-book; Searle's criterion is no more helpful
in one case than in the other.

We have two alternative criteria for understanding; an operational criterion, 
such as the Turing test; and the imagination-introspection criterion proposed
by Searle.  It is, of course, meaningless to ask which criterion represents
true understanding.  I suggest, though, that by limiting us to entities we can
readily imagine ourselves in the place of, Searle guarantees that our use of the 
word 'understanding' will always be parochial and anthropocentric.  For this reason
I would prefer an operational criterion, and the Turing test seems as reasonable an
operational criterion as one could wish for.

                                                               John Dewey Jones

daryl@oravax.UUCP (Steven Daryl McCullough) (01/06/90)

In article <GILHAM.90Jan5103910@cassius.csl.sri.com>, gilham@csl.sri.com (Fred
Gilham) writes:

> If one claims that the ``system'' somehow understands Chinese, Searle
> says that you can do away with the books and paper, and memorize the
> rules.  The person doing the manipulation will still not understand
> Chinese, and there is no other physical entity left to be a candidate
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> for understanding.
  ^^^^^^^^^^^^^^^^^

I may be departing from the orthodox strong AI position for saying
this, but it seems to me that it is not *physical entities* that can be
said to understand or not to understand, but *running programs*. Since a
program has to run on some physical entity, such as a brain, or a
computer, then you may wonder what difference it makes whether one
associates an understanding mind with the physical entity or with the
running program. Well, the biggest difference is that there may be
more than one mind associated with a single physical entity. I don't
see how Searle's tactic of doing away with the books and paper and
memorizing the rules himself changes anything; there are still two
programs running: the one hard-wired into Searle's brain, which of
course doesn't understand, and the one programmed into the rules he
memorized, which may or may not understand.

In order for Searle's argument against the systems reply to have any
force, he needs to show that it is impossible for more than one system
(candidate for understanding, as you put it) to be associated with a
given physical entity. He seems to take the one-to-one correspondence
between minds and physical entities for granted, although it is certainly
not obvious to me.

Daryl McCullough
The Crux of the Bisquit is the Apostrophe

hougen@umn-cs.CS.UMN.EDU (Dean Hougen) (01/06/90)

In article <12679@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
>record, although his Sci. Am. paper was not the most cogent version
>of Searle's position, I don't think the Churchland rebuttals work,
>so Searle's Argument comes out on top again.

 *see below*

>Searle's the one who's TESTING that hypothesis and answering that question;
>and the Chinese-Room thought-experiment shows that the hypothesis fails
>the test and the answer is no! 

According to the Searleans, but ...

>It is the proponents of the "systems
>reply" -- which merely amounts to a reiteration of the hypothesis in
>the face of Searle's negative evidence -- who are begging the question.

If all they were doing was reiterating the hypothesis, then, of course,
they would be begging the question.  But they aren't.  Instead they (and
I am one of them) are simply saying that Searle's perspective on the 
question is flawed and therefore he has no negative evidence.  (BTW, to
beg the question they would have to hold that the system was indeed
thinking without giving reasons for believing that it was.  It is very
possible, however, to give the systems reply, "If it is the case that
it is not Searle, but the system as a whole, that is thinking, then
Searle's lack of knowledge of what he was a part of is not surprising,"
without actually claiming that "We KNOW that the system would be thinking,"
and thereby begging the question.  Alternately, reasons could be given for
believing that the system was thinking - a question answered is not a 
question begged.)

>By the way, in endorsing the systems reply in principle, as you do
>(apparently only because of its counterintuitiveness, and the fact that
>other counterintuitive things have, in the history of science, turned
>out to be correct after all),

 *see below*

>you leave out Searle's very apt RESPONSE
>to the counterintuitive idea that the "system" consisting of him plus
>the room and its contents might still be understanding even if he
>himself is not: He memorizes the rules, and henceforth he IS all there
>is to the system, yet still he doesn't understand Chinese. 

Ah, but why should he understand what he is doing?  Why is his perspective
magical (or ideal, perfect, preferred, etc.)?  His perspective has not
changed, only the little bits of paper and pencil have gone away.  The
system is riding atop his conscious, English-speaking level.  Why should
he have greater access to that than he does to the lower levels of his
mind's operation?  Can Searle tell you through introspection how he calls
up memories?  Can he tell you how it is that he learns?  He has no access
to these other levels of his mind's activity; why this one?  This is what
Searle fails to mention in his very un-apt response, and so it, like his
original argument, fails.

>(And I hope
>you won't rejoin with the naive hackers' multiple-personality gambit at
>this point, which is CLEARLY wanting to save the original hypothesis at
>any counterfactual price: There is no reason whatsoever to believe that
>simply memorizing a bunch of symbols and symbol manipulation rules and
>then executing them is one of the etiologies of multiple personality
>disorder!)

Of course not, we're not talking about multiple personality disorders here.
*see below*

>As to the speed factor: Yes, that is one last holdout, if it is in
>fact true that Searle could never pass the Chinese TT in real time. But
>that's at the price of being prepared to believe that the difference
>between having and not having a mind is purely a function of speed!

There is really no reason for me to go on.  If the Chinese room has not
made it this far, no rebuttals of the Churchlands' arguments will save it
now.  I did want to mention, however, that I agree with this point.  As I
asked in a recent article, why would anyone believe that the difference
between thinking and not thinking is purely a function of speed?

>light, only it wasn't visible, one CANNOT say that very slow
>symbol-manipulation is still thinking only it's... what?
>"Unthinkable?" You took the words out of my mouth.

Very rude, sticking words into their mouths like that just so you
could accuse them of having made your point for you.  *see below*

>Stevan Harnad

--------------------------------------------------------------------------
What follows is a general flame about the tactics used by Hard-core-
Searleans in making their points.  It does have some slight value to
those of you who are not terribly convinced by the logic used by Searleans
but come away with a feeling that they may be right.  This is what all
those *see below* markers were about.

Starting with Searle himself in his now famous (for some reason) paper,
Hard-core-Searleans have apparently found mere logic to be insufficient
to sway others to their cause, and have resorted to psychological tricks
and manipulations to win their points.

In said paper, Searle takes a number of steps to try to get the reader to
empathize with his position in the Chinese Room.  First off, he makes it
a CHINESE room, because he knows that few of the readers in his target
audience will know how to read Chinese, and that all things Chinese have 
an air of inscrutability to many westerners.  Many of us feel that we might
never learn Chinese through regular methods, to say nothing of having
unexplained rules in front of us.  Second, he pretends that the room would
be very simple, with tiny pieces of paper, etc.  "Of course no understanding
is going on here, understanding is complex," we are to tell ourselves.  
Third he places himself in the room so that he can write in first-person
how "I understand nothing of Chinese," etc.  It is like reciting a religeous
literagy just to read through the paper.  And there is more.

Other Searleans have followed suit.  Take a look at the *see below* markers
in this text.  They are places where Harnad threw rabbit punches.  Why they
are psychological attacks and not logical ones should be quite obvious.

Keith Gunderson, here at the U of MN is quite the same way.  His course here
on minds, brains, and machines seemed 90+% nonsense to try to get you to
accept his positions regarding the Searle-battle.  I went into the course
knowing a good deal about the subject matter, (having had a similar course
taught by an excellent prof. named Bill Robinson at Iowa State University)
and quite excited about getting the chance to argue the ins and outs of the
Searle-battle with one of his followers.  But instead I got someone who
would rather joke about Searle's opponents than to discuss rationally their
or Searle's positions.  He even went so far as to coin the word
"outCHURCHLANDish" and used it constantly to cut off discussion.
"You aren't really going to try to support that outCHURCHLANDish position,
are you?  Ha, ha, ha."  The rest of the class time was spent trying to
buddy-up to class members so that they would accept his positions without
really looking at them too hard.  Quite disgusting, all-in-all.

Dean Hougen

Credentials:  If any of you are interested, I do not pretend to be any
great philosopher.  I do have an undergraduate minor in philosophy, and
I took enough courses there (including some terrific courses taught by 
the spectacular philosopher (and friend) E.D. "Doc" Klemke) to know the
difference between philosophy and some of the junk we have been given by
the Hard-core-Searleans.  (The meat of the Chinese Room argument is phil,
the dressing is garbage.)

Disclaimer:  I am afraid that I may have upset some people with this
article to the degree that for the first time I think I should include
a disclaimer, so here it is.  I speak only for myself.  I do not
represent any other person or organization in these writings, and
they should not be misconstrued as such.  I wouldn't have said it if I
didn't think it was true.
--
"They say your stupid, that you haven't got a brain.
 They say you'll listen to anything, that you're just bein' trained.
 (There's something inside your head.  Grey matter, grey matter.)
	- Oingo Boingo

tanner@cheops.cis.ohio-state.edu (Mike Tanner) (01/08/90)

[I tried to post this right after reading Harnad's note, but our news poster
blew up somehow.  I think my point has been raised since then, but it's
important so I'll say it again anyway.]

In article <12679@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:

>   Searle's Argument comes out on top again.

Searle doesn't have an argument.  He has an assertion -- understanding can
never arise from formal symbol manipulation -- which he takes to be obviously
true and proceeds to talk all around it for page after page.

Whether it's true or not, so what?

As an argument against AI it's powerless.  It confuses a model of computation
with the real thing.  Programs running on real computers are *not* formal
symbol manipulators.

-- mike

smoliar@vaxa.isi.edu (Stephen Smoliar) (01/08/90)

In article <18053@umn-cs.CS.UMN.EDU> hougen@umn-cs.cs.umn.edu (Dean Hougen)
writes:
>
>Keith Gunderson, here at the U of MN is quite the same way.  His course here
>on minds, brains, and machines seemed 90+% nonsense to try to get you to
>accept his positions regarding the Searle-battle.  I went into the course
>knowing a good deal about the subject matter, (having had a similar course
>taught by an excellant prof. named Bill Robinson at Iowa State University)
>and quite excited about getting the chance to argue the ins and outs of the
>Searle-battle with one of his followers.  But instead I got someone who
>would rather joke about Searle's opponents than to discuss rationally their
>or Searle's positions.

It should come as no surprise that Searle, himself, behaved pretty much the
same way when he led a seminar at UCLA last year.  The rabbit punches came
fast and furious there, as if to encourage the audience to add their own.
(I still have no idea who in the audience thought he was making a contribution
by attacking LISP for its parentheses, but that will give you an idea of the
sort of arena that Searle now fosters and probably encourages.)

My own feeling is that David West has done the best job so far of putting a
finger on where the problem lies.  He made the observation that we are trying
to use the word "understanding" as if it denotes some entity which is
boolean-valued, atomic, veridical, and static.  Any of these properties
may be open to question.  After all, a good deal of Wittgenstein's
PHILOSOPHICAL INVESTIGATIONS was devoted to questioning whether or
not those properties could be applied to a denotation of the word
"game."  If we can't get out of the woods with an apparently simple
word like "game," are we not a bit arrogant to assume that we have
a grasp on a word like "understanding?"

This is ultimately one of the messages of Minsky's SOCIETY OF MIND (although I
do not think that Minsky was able to summarize the position as well as West
did).  There is no reason to assume that all the words we use admit of
denotations as simple as those for words like "circle" or "square."  Words
like "understand," "learn," and "self" are particularly deceptive and are
the last sorts of objects which should be tried by the courts of our
intuitions.  The last time this argument erupted, it was because I accused
Searle of playing fast and loose with his terminology, at which point Stevan
Harnad took extreme umbrage.  Since then, I have seen nothing to persuade me
to change my position.  Instead, I have seen a better characterization than
I could provide of what it would mean to treat one's terminology with more
respect.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"For every human problem, there is a neat, plain solution--and it is always
wrong."--H. L. Mencken

robertso@ug.cs.dal.ca (01/09/90)

In article <11274@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:

>In article <18053@umn-cs.CS.UMN.EDU> hougen@umn-cs.cs.umn.edu (Dean Hougen)
>writes:

>>                                         . . . But instead I got someone who
>>would rather joke about Searle's opponents than to discuss rationally their
>>or Searle's positions.
>
>It should come as no surprise that Searle, himself, behaved pretty much the
>same way when he led a seminar at UCLA last year. . .


I'll second that!  I saw Searle giving essentially this very paper at Concordia
University a few years ago.  At the time I was enrolled in a Philosophy of Mind
course at McGill which was taught by a guy named Charles Travis, a classmate
of Searle's from Berkeley.  So naturally, I went expecting at least some good
food for philosophical thought.  Instead, what I saw was Bob Hope meets
Wittgenstein - a smattering of philosophical allusion bolstered by much
irrelevant humour when the going actually got rough.


Speaking of Wittgenstein, . . .

  [Searle's argument trades on attributing certain semantic properties
   to the word "understanding" which are unintuitive and/or unsupported.
>Wittgenstein's] PHILOSOPHICAL INVESTIGATIONS was devoted to questioning whether
>or not those properties could be applied to a denotation of the word
>"game."  If we can't get out of the woods with an apparently simple
>word like "game," are we not a bit arrogant to assume that we have
>a grasp on a word like "understanding?"


Wittgenstein's target, in both the family resemblance and private language 
arguments, was a long standing philosophical tradition to the effect that
ALL bits of language admit to a (simple enough to be useful) semantic
reduction - Empirical Positivism.  Wittgenstein offered a radically different
view of how we come to acquire language, and so also of what we, as the ones
who USE it, can say ABOUT it  (especially about MENTAL terms, like
"understanding").

I would be the first to cite these arguments in support of the idea that 
machines, that is artifices, can (in principle) think, at least as well as
we can digest pizza.  (A small taste of Searle's desperate wit).  But
Wittgenstein did NOT question that we can and do reliably USE linguistic
terms.  Just Because they don't lend themselves to criterial analyses does not
 imply that we don't "know what they mean", at least insofar as we seem to
 communicate adequately with them (albeit after much effort, at times...).
That is why Searle can indeed help himself to the term 
"understanding" without defining it.  Of course, if he implicitly uses it
as having semantic properties which no one else thinks it does, as is charged,
then he still has to give a good defense of his analysis of the term (which
he doesn't).

I think that the best way to get at Searle's dubious assumptions about
certain mental predicates is not to demonstrate how they don't fit well
with the human evidence, but instead to reveal how they don't fit any BETTER
with the MACHINE evidence than several alternatives might.  It seems that
Searle designed his Chinese Room with the following target in mind:  Anyone
who thinks that a computer could ever think or understand anything must be
committed to maintaining that it is the computer's CPU which is doing the
thinking, or understanding, or whatever.  Advances in PDP architectures
provided the first hint that, yes, although a Turing Machine can emulate
anything, that does not mean that its SYNTACTIC components, i.e., tape states
(or transistor states), are SEMANTICALLY interpretable as having the same
macro-properties as that which it computes.

Does that mean that the room as a whole knows Chinese?  Well, it sounds
pretty absurd to me.  Rooms don't know anything, right?  (See?  We can 
discard BLATANT abuses of language without an accurate analysis to back us
up.  We just have to agree that they sound stupid.)  But that doesn't mean that
the only alternative is to presume that a computer and its CPU are semantically
indistinguishable.  Here I think that the details of the case are too 
simplistic to even give an "in principle" representation of exactly what is at
issue.

In short, Searle's argument, if we are to interpret it charitably as being one,
succeeds quite well in discrediting a certain philosophical stance towards
machines - but it is a stance that is far from monopolizing current research,
and one which I hope is not too widespread among those of us who would love to
make, among other things, a machine that could speak Chinese.

=================

 - Chris Robertson          Don't look back.  You may trip and hit your head.

 

smoliar@vaxa.isi.edu (Stephen Smoliar) (01/09/90)

In article <1990Jan9.064320.2131@ug.cs.dal.ca> robertso@ug.cs.dal.ca.UUCP ()
writes:
>
>Wittgenstein's target, in both the family resemblance and private language 
>arguments, was a long standing philosophical tradition to the effect that
>ALL bits of language admit to a (simple enough to be useful) semantic
>reduction - Empirical Positivism.  Wittgenstein offered a radically different
>view of how we come to acquire language, and so also of what we, as the ones
>who USE it, can say ABOUT it  (especially about MENTAL terms, like
>"understanding").
>
>I would be the first to cite these arguments in support of the idea that 
>machines, that is artifices, can (in principle) think, at least as well as
>we can digest pizza.  (A small taste of Searle's desperate wit).  But
>Wittgenstein did NOT question that we can and do reliably USE linguistic
>terms.  Just because they don't lend themselves to criterial analyses does not
>imply that we don't "know what they mean", at least insofar as we seem to
>communicate adequately with them (albeit after much effort, at times...).
>That is why Searle can indeed help himself to the term 
>"understanding" without defining it.  Of course, if he implicitly uses it
>as having semantic properties which no one else thinks it does, as is charged,
>then he still has to give a good defense of his analysis of the term (which
>he doesn't).
>
This is a good point.  I think the key phrase is "communicate adequately."  If
Searle were to accept Wittgenstein's view of how we use words like
"understand," then he would be painting himself into a corner, since
all we would be talking about is how we communicate about observed behavior.
My guess is that he WANTS to assume that we can perform semantic reduction
on the word "understand" and then excuse himself for the fact that he will
not offer up his own version of that reduction.  This may make for a clever
debating technique, but it doesn't help the rest of us much when it comes to
communicating adequately while using the word "understanding."

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"For every human problem, there is a neat, plain solution--and it is always
wrong."--H. L. Mencken

jeff@aiai.ed.ac.uk (Jeff Dalton) (01/10/90)

In article <12702@phoenix.Princeton.EDU> kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) writes:
>Which brings us to the point I made a few posts back, I think to
>this group. The point is that most of the Searle article rests on
>definitions. Here the one in question is symbol manipulation. The more
>critical one, I think, is understanding. Also, intelligence is a key
>one. The whole conclusion of the argument is that the system can't
>'understand' or won't be 'intelligent.' But these things are never
>defined.

Suppose there's a conversation taking place in Chinese and someone
asks "do you understand Chinese?"  Would you find this a difficult
question to answer?  Would you need a carefully articulated definition
of "understand" first?  I don't think you would.  I know I wouldn't.

This doesn't mean we can forever put off considering just what we mean
by "understand".  But I don't think you will ever understand why the
Chinese Room argument appeals to anyone if you insist that no one can
form a valid opinion until all the terms have been carefully defined.

I think anyone who wants a definition should try defining it themselves.
I don't think it's all that easy to do, and it's easy to get it wrong.
That is, it's easy to end up with a definition that doesn't really
match your notion of understanding (some things you wouldn't consider
understanding satisfy your definition, or some things that you would
don't).

>In fact, Searle seems to have his definitions already in his head and
>they seem to be defined precisely so that his argument is true, which
>makes the argument trivial. You seem to have decided the fate already
>too:

Well, that's the problem with definitions: accusations of begging the
question.  My suspicion about those who are demanding definitions is
that they hope the definition will let them settle the issue in a
fairly straightforward way.  But if the problem really is a hard one,
what then?  

The danger is that we'll end up arguing about whether various things
should be called "understanding" when what we should be looking at is
whether there are any interesting differences between humans and machines.

>The reason everyone disagrees about the Searle article(s)/ideas is that
>he doesn't provide any definitions and so people must supply their own.

I think there's more to it than that.  For example, some people seem
to feel that having the right behavior is all that could ever be asked
as evidence of understanding.  In that case, I'd be tempted to say
they have a losing definition of "understanding", but maybe they're
just not interested in other senses of the word.  Who am I to say what
they should be interested in?

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (01/10/90)

In article <16603@megaron.cs.arizona.edu> mike@cs.arizona.edu (Mike Coffin) writes:
>Moreover, I (at least) am not arguing that symbol pushing can produce
>intelligence.  I merely argue that Searle has not proven that it
>can't.  This is very different. (My own view is that we don't know
>enough about intelligence to have the slightest idea what might be
>necessary to create it.)

I agree.  I think it's at least very difficult to decide this question
now, when we've never seen a program that can pass the Turing Test and
have no idea what it would look like or how it would work or even if
it's possible to have one at all.

jeff@aiai.ed.ac.uk (Jeff Dalton) (01/10/90)

In article <199@fornax.UUCP> jones@lccr.UUCP (John Jones) writes:
>One of the beauties of this argument is that it allows Searle to avoid any
>definition of 'understanding'.  We know what it is like to understand
>something, so introspection provides the sole and universal test for the
>presence of understanding.  If we are faced with an entity of a new type,
>we have simply to imagine ourselves in the place of that entity and
>introspect.

I think you're right as far as Searle not needing a definition is
concerned.  In the Chinese room story, he's just using our common
sense notion of understand, as in "do you understand Chinese?".

I think it's clear that the person in the C.R. wouldn't find that he
understood Chinese.  I don't think that this conclusion requires that
we accept introspection as a sole and universal test, however.  All
we're being asked to do is to imagine ourself in the place of a
person, not an arbitrary entity of a new type.  (Of course, Searle
tries to get us to think that whether or not the person understands
shows whether or not there's any understanding at all, but that's
another matter.)

>The inadequacy of imagination-plus-introspection as a criterion for
>understanding can be demonstrated with less exotic examples than the Chinese
>room.  Consider one of the split-brain patients described by Sperry.
>Asked to describe an object that he is allowed to handle but not to see,
>the patient can write down that it is a pipe, but states verbally that he
>has no idea what it is.  This behaviour is no more extraordinary than might
>be expected from the man-plus-book; Searle's criterion is no more helpful
>in one case than in the other.
>
>We have two alternative criteria for understanding; an operational criterion, 
>such as the Turing test; and the imagination-introspection criterion proposed
>by Searle.  It is, of course, meaningless to ask which criterion represents
>true understanding.  [...]

I don't think that's all there is to it.  First, I don't think the
Turing test is a test for understanding at all.  Maybe it will turn
out that it really does test for understanding, but that has to be
shown.  Otherwise, understanding would just mean having certain
behavior; and if that's all there is to it, it turns out not to be very
interesting after all (because it doesn't connect to our internal
experience) -- at least to me.  I'd want to ask about something else,
consciousness perhaps.

Second, is introspection the only alternative?  Perhaps it is now,
when we don't have any machines that can pass the Turing Test and
don't know all that much about how the human mind works.  But maybe
when we know more we will find other interesting, relevant differences
between humans and Turing-capable machines.

As an example of something that's roughly the sort of thing I have in
mind, consider dreams.  By introspection, what we know is that,
sometimes, when we wake, we can to a certain extent remember dreams.
We don't know whether the dream took place over a period of time or
just appeared all at once as a memory.  But later, we discover the
link between dreams and REM sleep, and then we have some other
evidence (although not absolutely conclusive evidence) that dreams
don't appear all at once.  Once we know more, new kinds of evidence
can become available.  [Some people would count this sort of thing as
behavior too, but it's not the sort of behavior considered by the
Turing Test.]

As another example, consider two programs that play Chess.  They might
have more or less the same behavior but work quite differently inside.
Perhaps one is just a brute force search while the other tries to
emulate human play by using structures representing goals and plans.
A significant internal difference doesn't have to show up in behavior,
or at least not in such a way that we could be sure which program was
which without looking inside.
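
A toy illustration of that last point (not chess; both procedures below
are made up): two programs with identical input-output behavior but
quite different internal organization, so only looking inside tells
them apart.

def fib_brute(n):
    # Naive exhaustive recursion -- the "brute force" style.
    if n < 2:
        return n
    return fib_brute(n - 1) + fib_brute(n - 2)

def fib_planned(n):
    # Iterative version organized completely differently inside.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Behaviorally indistinguishable on every input we try:
assert all(fib_brute(n) == fib_planned(n) for n in range(20))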

What I'm trying to suggest is that at some point we may be able to
look inside both humans and machines and find relevant differences.
Maybe it will be clear from this that the machines are just faking it,
or maybe it will be clear that they should count as understanding
after all, even if they get there in a somewhat different way.

Of course, it may be that nothing of the sort becomes clear at all.
But I think we still have to accept it as a possibility.

-- Jeff

robertso@ug.cs.dal.ca (01/10/90)

  In response to my earlier posting, Stephen Smoliar writes:

>. . . if Searle were to accept Wittgenstein's view of how we use words like
>"understand," then he would be painting himself into a corner, since
>all we would be talking about is how we communicate about observed behavior.
>My guess is that he WANTS to assume that we can perform semantic reduction
>on the word "understand" and then excuse himself for the fact that he will
>not offer up his own version of that reduction.  This may make for a clever
>debating technique, but it doesn't help the rest of us much when it comes to
>communicating adequately while using the word "understanding."

  Yes.

  Interestingly enough, his talk at Concordia seemed to emphasize that he DOES
  want to subscribe to a non-reductionistic account of, among others, mental
  predicates.  The well-worn analogy was solubility - a property instantiated
  globally, not locally.

  The problem is that he has foisted reductionism upon the would-be Strong-AI
  defender, whether asked for or not.  He CHARGES strong-AI with the assumption
  that language, (and mind), are REDUCIBLE.  But that's not the only option!
  Of course his arguments work well against reductionistic models of cognition,
  but he leaves untreated several other enticing models (like Connectionism).

  ===================

  - Chris Robertson                     Hey . . . that's my lunch.

dhw@itivax.iti.org (David H. West) (01/10/90)

Someone *please* create talk.philosophy.searle, so that this
discussion can be moved there.  The present situation in comp.ai is as 
if sci.math.num-analysis were swamped with discussions of the
impossibility of angle-trisection.

-David West       dhw@iti.org