[net.philosophy] Rosen on Searle

tmoody@sjuvax.UUCP (T. Moody) (10/24/85)

[]

>> I originally asked whether anyone disputed my claim that the human
>> mind is not equivalent to a turing machine. After all the negative
>> response, I would like to change my question to:
>> 
>> *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
>>  NOT EQUIVALENT TO A TURING MACHINE?* [query by Tom Tedrick]

>I could care less about the exact type of machine that the human mind really
>is, but I have no disagreement with the notion that the mind and brain are
>represented as some sort of machine. [Rosen]

Indeed, Rich Rosen should have no disagreement, since as long as "the
exact type of machine" is not specified, agreement or disagreement
would be without content.  As long as one is careless about the exact
type of machine, *anything* can be represented as some sort of
machine.

>To throw yet another bone into this mix, I will quote from the oft-misquoted
>(at least here) John Searle, from his "Minds, Brains, and Programs":
 [Rosen, quoted material from Searle omitted]

To my knowledge, the only people in this newsgroup who have been
quoting Searle are Michael Ellis and me.  I have checked my archives against
Searle's papers; neither of us has misquoted him.

________________________________
|  "Could instantiating a program, the right program of course,
|  by itself be a sufficient condition of understanding?"
|
|   This I think is the right question to ask, though it is usually
| confused with one of the earlier questions, and the answer to it is no.
|
|  "Why not?"
|
|    Because the formal symbol manipulations themselves don't have
|   any intentionality...
|______________________________ [Searle, quoted by Rosen]

>I think at this point Searle destroys his own argument.  By saying that these
>things have "no intentionality", he is denying the premise made by the person
>asking the question, that we are talking about "the right program".  Moreover,
>Hofstadter and Dennett both agreed (!!!!) that Searle's argument is flawed.
>"He merely asserts that some systems have intentionality by virtue of their
>'causal powers' and that some don't.  Sometimes it seems that the brain is
>composed of 'the right stuff', but other times it seems to be something else.
>It is whatever is convenient at the moment."  (Sound like any other conversers
>in this newsgroup?)  "Minds exist in brains and may come to exist in programmed
>machines.  If and when such machines come about, their causal powers will
>derive not from the substances they are made of, *but* *from* *their* *design*
>*and* *the* *programs* *that* *run* *in* *them*.  [ITALICS MINE]  And the way
>we will know they have those causal powers is by talking to them and listening
>carefully to what they have to say."  Readers of this newsgroup should
>take note of how a non-presumptive position is built, and of how someone
>quoted right and left in this newsgroup doesn't even agree halfheartedly with
>the notions of those quoting him. [Rosen]

Now, let's look at Rich Rosen's argument.  The claim that formal
symbol manipulations lack intentionality is the *conclusion* of
Searle's arguments, which he recaps at the end of the paper.  Far
from destroying his argument, Searle is merely summarizing its
conclusions, in order to distinguish them from other positions.  The
"right program" does *not* mean "the program that has intentionality";
it means "the program that passes the Turing Test."  The whole point
of Searle's argument, of course, is that passing the Turing Test is
not a sufficient condition of intentionality.

It's true that Hofstadter and Dennett do not accept Searle's
arguments.  Rich Rosen proceeds to quote some of Hofstadter's
responses, from _The_Mind's_I_.  Presumably, Rosen agrees with
Hofstadter.  But Hofstadter's arguments are weak.  Rather than "merely
asserting" that some systems possess intentionality in virtue of their
causal powers, Searle has written several books on the subject (one
was written after _The_Mind's_I_).  Note that the purpose of Searle's
"Minds, Brains, and Programs" was not to develop a general theory of
intentionality, but to criticize the notion that intentionality is
just a matter of instantiating a Turing Machine program.  Hofstadter's
insinuation that Searle vacillates on whether minds need to be
embodied in neural stuff is a straw man.  Searle makes no such claim.
The last two sentences of Hofstadter, quoted by Rosen,
cannot be called counterarguments; they are mere counterassertions.
Rich Rosen offers no arguments of his own.  Indeed, he never clearly
states just what it is that he is claiming about this Turing Machine
issue.

I will grant that Hofstadter does offer *some* arguments in his
remarks, but Rosen has not mentioned a single one of them.  Rosen also
claims that those of us who have quoted Searle (Ellis and me) do so in
defense of positions that Searle would reject.  Rosen does not name
names, nor does he identify those positions, but it sure sounds good,
doesn't it?

In short, the substantive content of Rosen's comments on Searle and
the relation of Turing Machines to minds is vanishingly close to
zero.
 

Todd Moody                 |  {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department      |
St. Joseph's U.            |         "I couldn't fail to
Philadelphia, PA   19131   |          disagree with you less."