[comp.ai] 1st person experience

ellis@unix.SRI.COM (Michael Ellis) (03/30/89)

> Wayne A. Throop  >> Gilbert Cockton

>The gain in doubting people that think Eliza understands
>is that we don't cheapen what we mean by "understanding".  The gain in
>doubting people that think the CR shows that the room/rules have no
>understanding even in principle is that we don't arbitrarily anthropomorphize
>what we will accept as an "understanding entity".

    One goal is indeed to make artifacts which can perform tasks that
    currently "require understanding". The success of this kind of
    research program is immune to CR considerations, just as the
    success of chess computers is judged by the games they play and
    not by whether they really think like us.
    
    But there is another question at hand, one which you refuse to even
    acknowledge, Wayne: Just what is the *human* thought process?
    What is that strange stuff (some of us call "understanding") that
    reveals itself to us in such an incorrigibly first-person fashion?

    In spite of its stunning success, computer chess technology
    has taught us practically nothing about how human chess players
    do it, and I fear that symbol-crunching gadgets from the folks
    who make "Machines Who Think" will have little else to add,
    no matter how operationally they might resemble you.

    IMHO, phenomenological introspectivity is something that absolutely
    must be explicitly designed into the artifact. It's got to have qualia.
    Feelings, pain, consciousness. These are the *explicanda*; they are what
    we (or at least some of us) hope to have accounted for. 

    No phenomenology, no mind, no understanding. Just another toy doll that
    technicians from blighted backgrounds anthropomorphically project
    mind onto.

>> If your AI systems "work", all well and good.  But don't demand that
>> people call black white in the process.  If AI folk spent less time
>> trying to redefine everyday language, people might trust them more.

>This situation doesn't arise in the CR.  In fact, the CR's premise is
>that "people's intuition" from outside the room leads them to think
>the room understands, and "people's intuition" once they've seen inside
>the room leads them to think otherwise.  

    Searle's premises here are that:

	An entity is conscious if and only if it is like something
	to be that entity. 

	If you *are* the entity in question, your consciousness is the only
	one that can possibly be present: Same stuff, same consciousness.
	There is only one way to be the same thing. (This is my inference
	and not something I recall Searle saying, so I could be wrong).

    The systems response is equivalent to asserting: 

	There is more than one way to be the entity in question.
	The same stuff, "been" in different ways (first qua intentional
	system, second qua symbol cruncher), can give rise to distinctly
	different consciousnesses.
	There are different ways to be the same thing.

    Maybe the systems response is true. It's a wild leap of faith, as I've
    remarked before, one not justified by any argument from its
    advocates.
   
>So, we aren't asking to call
>black white.  We are asking whether black should be defined functionally
>(in terms of the light it reflects) or structurally (in terms of which
>pigments it is constructed of).

    There's a third possibility that you have either forgotten
    or overlooked, Wayne:

	Black is a qualitative experience that is revealed to
	us via direct 1st person experience.

    Just how do you deal with qualia functionally or structurally, Wayne?
    Am I correct in inferring that you think we must, for ideological
    reasons, ban qualia from the study of the mind? 

-michael

throopw@agarn.dg.com (Wayne A. Throop) (04/11/89)

> ellis@unix.SRI.COM (Michael Ellis)
> But there is another question at hand, one which you refuse to even
> acknowledge, Wayne: Just what is the *human* thought process?
> What is that strange stuff (some of us call "understanding") that
> reveals itself to us in such an incorrigibly first-person fashion?

I don't know.  I doubt that anyone does.  Searle and Harnad and others
claim that, whatever it is, its -- what to call it -- physical substrate
must have certain vague "causal powers" that computers lack.  I find
their arguments largely based on appeals to various intuitions, and their
purported insights about understanding vague and none too useful.  I am
unable to see any practical difference between "causal powers" and a
sort of nouveau "vital fluid".

> In spite of its stunning success, computer chess technology
> has taught us practically nothing about how human chess players
> do it, and I fear that symbol-crunching gadgets from the folks
> who make "Machines Who Think" will have little else to add,
> no matter how operationally they might resemble you.

But is it worthless to have found out that human players don't seem to
have some ultra-fast evaluator of positions working below the level of
the conscious mind, and that instead they have a much more subtle
method?  I would not count results from even lowly chess research as
useless, and even the negative result pointed out above hardly means
that computers cannot ever evaluate chess positions as humans do.

> Feelings, pain, consciousness. These are the *explicanda*; they are what
> we (or at least some of us) hope to have accounted for.

This seems a non sequitur.  I have no disagreement, and don't see
why anybody would have thought I did.

> If you *are* the entity in question, your consciousness is the only
> one that can possibly be present: Same stuff, same consciousness.
> There is only one way to be the same thing. (This is my inference
> and not something I recall Searle saying, so I could be wrong).

Yes, this is something that Searle seems to be saying, and it seems
to me to be incorrect.

> The systems response is equivalent to asserting: 
>   There is more than one way to be the entity in question.

I'd put it that there is more than one be-able entity represented
in the "stuff" inside the CR, but yes I agree.  Michael proceeds
to characterize one as an "intentional system" and the other as
a "symbol cruncher", which is begging the question of whether
the "intentional system" IS a symbol cruncher (or whether the
"symbol cruncher" CAN BE an intentional system), which is what
the whole scenario was supposed to (but in my opinion fails to)
settle in the first place.

> Maybe the systems response is true. It's a wild leap of faith, as I've
> remarked before, one not justified by any argument from its
> advocates.

Huh?  Where's the leap of faith involved here?  From Freud to
transactional analysis to neurological evaluations of multiple
personalities to work with split-brain epileptic patients, it seems to
me that there is a lot of work which shows that it is quite plausible
indeed for there to be many "persons" to "be" inside a single human
body.

In fact, even in "qualia", in the how-it-seems-to-the-self, there are
many instances where there are split viewpoints, multiple "to-be"s
available in a single body, in a single memory.  From out-of-body
experiences to the simple feeling of watching "yourself" do something
which the observer-you feels disconnected from.

So why should the CR be any different?  Again, why is this a "leap of
faith"?

> Just how do you deal with qualia functionally or structurally, Wayne?

IMHO, the study of the mind is not yet advanced enough to conclusively
explain qualia in any way at all, let alone functionally or structurally.

> Am I correct in inferring that you think we must, for ideological
> reasons, ban qualia from the study of the mind?

No.  That is, I hold no ideology which requires such a ban, nor in
fact do I think such a ban is required or desirable in principle.

--
If someone tells me I'm not really conscious, I don't marvel about
how clever he is to have figured that out... I say he's crazy.
          --- Searle (paraphrased) from an episode of PBS's "The Mind"
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw