[mod.ai] yet more wrangling on Searle, Turing, ...

cugini@NBS-VMS.ARPA ("CUGINI, JOHN") (10/16/86)

>  Date: 10 Oct 86 13:47:46 GMT
>  From: rutgers!princeton!mind!harnad@think.com  (Stevan Harnad)
>  Subject: Re: Searle, Turing, Symbols, Categories
>                             
>  It is not always clear which of the two components a sceptic is
>  worrying about. It's usually (ii), because who can quarrel with the
>  principle that a veridical model should have all of our performance
>  capacities? Now the only reply I have for the sceptic about (ii) is
>  that he should remember that he has nothing MORE than that to go on in
>  the case of any other mind than his own. In other words, there is no
>  rational reason for being more sceptical about robots' minds (if we
>  can't tell their performance apart from that of people) than about
>  (other) peoples' minds.

This just ain't so... if we know, as we surely do, that the internals
of the robot (electronics, metal) are quite different from those
of other passersby (who presumably have regular ole brains), we might
well be more skeptical that robots' "consciousness" is the same as
ours.  Briefly, I know:

1. that I have a brain
2. that I am conscious, and what my consciousness feels like
3. that I am capable of certain impressive types of performance,
   like holding up my end of an English conversation.

It seems very reasonable to suppose that 3 depends on 2 depends
on 1.  But 1 and 3 are objectively ascertainable for others as
well.  So if a person has 1 and 3, and a robot has 3 but NOT 1,
I certainly have more reason to believe that the person has 2 than
that the robot does.  One (rationally) believes other people are
conscious BOTH because of their performance and because their
internal stuff is a lot like one's own.

I am assuming here that "mind" implies consciousness, i.e., that you are
not simply defining "mind" as a set of external capabilities.  If you
are, then of course, by (poor) definition, only external performance
is relevant.  I would assert (and I think you would agree) that to
state "X has a mind" is to imply that X is conscious.
               
>  ....So, since we have absolutely no intuitive idea about the functional
>  (symbolic, nonsymbolic, physical, causal) basis of the mind, our only
>  nonarbitrary basis for discriminating robots from people remains their
>  performance.

Again, we DO have some idea about the functional basis for mind, namely
that it depends on the brain (at least more than on the pancreas, say).
This is not to deny that there might be other bases, but for now
ALL the minds we know of are brain-based, and it's just not
dazzlingly clear whether this is an incidental fact or something
more deeply entrenched.

>  I don't think there's anything more rigorous than the total turing
>  test ... Residual doubts about it come from
>  four sources, ... (d) misplaced hold-outs for consciousness.
>  
>  Finally, my reply to (d) [mind bias] is that holding out for
>  consciousness is a red herring. Either our functional attempts to
>  model performance will indeed "capture" consciousness at some point, or
>  they won't. If we do capture it, the only ones that will ever know for
>  sure that we've succeeded are our robots. If we don't capture it,
>  then we're stuck with a second level of underdetermination -- call it
>  "subjective" underdetermination -- to add to our familiar objective
>  underdetermination (b)...[i.e.,]
>  there may be a further unresolvable uncertainty about whether or not
>  they capture the unobservable basis of everything (or anything) that is
>  subjectively observable.
>  
>  AI, robotics and cognitive modeling would do better to learn to live
>  with this uncertainty and put it in context, rather than holding out
>  for the un-do-able, while there's plenty of the do-able to be done.
>  
>  Stevan Harnad
>  princeton!mind!harnad

I don't quite understand your reply.  Why is consciousness a red herring
just because it adds a level of uncertainty?  

1. If we suppose, as you do, that consciousness is so slippery that we
will never know more about its basis in humans than we do now, one
might still want to register the fact that our basis for believing in
the consciousness of competent robots is shakier than our basis for
believing in that of humans.  This reservation does not preclude the
writing of further Lisp programs.

2. But it's not obvious to me that we will never know more than we do
now about the relation of brain to consciousness.  Even though any
correlations will ultimately be grounded on one side by introspection
reports, it does not follow that we will never know, with reasonable
assurance, which aspects of the brain are necessary for consciousness
and which are incidental.  A priori, no one knows whether, e.g.,
being-composed-of-protein is incidental or not.  I believe this is
Searle's point when he says that the brain may be as necessary for
consciousness as mammary glands are for lactation.  Now at some level
of difficulty and abstraction, you can always engineer anything with
anything, i.e., make a computer out of play-doh.  But the "multi-
realizability" argument has force only if it's obvious (which it
ain't) that the structure of the brain at a fairly high level (e.g.,
neuron networks rather than molecules), high enough to be duplicated
by electronics, is what's important for consciousness.


John Cugini <Cugini@NBS-VMS>
------