[mod.ai] brains vs. TTT as criteria for mind/consciousness

cugini@nbs-vms.UUCP ("CUGINI, JOHN") (12/04/86)

***  WARNING  ***  WARNING  ***  WARNING  ***  WARNING  ***  WARNING  ***
***  
***  Philosophobes (Sophophobes?) beware, industrial-strength
***  metaphysics dead ahead.  The faint of heart should skip 
***  forward about 350 lines...
***  
*************************************************************************

Recall that the main issue here is how important a criterion
brainedness (as opposed to performance/the TTT) is for mindedness.
My main reason for asserting its importance is that I take "mind" to
mean, roughly, "conscious intelligence", where consciousness is
epitomized by such things as seeing colors, feeling pain, and
intelligence by playing chess, catching mice.  No one has objected
strenuously to this definition, so I'll assume we kind of agree.
While performance/TTT can be decisive evidence for intelligence, it
doesn't seem to me to be nearly as strong evidence for consciousness
out of context, ie when applied to non-brained entities.  So in the
following I will try to assess in exactly what manner brains and/or
performance provide evidence for consciousness.

I had earlier written that one naively knows that his mind causes his
performance and scientifically knows that his brain causes his mind,
and that *both* of these provide justifiable bases for induction to
other entities.

S. Harnad, in reply, writes:

> Now on to the substance of your criticism. I think the crucial points
> will turn on the difference between what you call "naively know" and
> "scientifically know." It will also involve (like it or not) the issue
> of radical scepticism, uncertainty and the intersubjectivity and validity of
> inferences and correlations. ...
>
> Scientific knowing is indirect and inferential. It is based on
> inference to the best explanation, the weight of the evidence, probability,
> Popperian (testability, falsifiability) considerations, etc. It is the
> paradigm for all empirical inquiry, and it is open to a kind of
> radical scepticism (scepticism about induction) that we all reasonably
> agree not to worry about...
>
> What you call "naive knowing," on the other hand (and about which you
> ask "*how* do I know this?") is the special preserve of 1st-hand,
> 1st-person subjective experience. It is "privileged" (no one has
> access to it but me), direct (I do not INFER from evidence that I am
> in pain, I know it directly), and it has been described as
> "incorrigible" (can I be wrong that I am feeling pain?). ..
>
> You say that I "naively know" that my performance
> is caused by my mind and I "scientifically know" that my mind is caused
> by my brain. ...Let me translate that: I know directly that my
> performance is caused by my mind, and I infer that my
> mind is caused by my brain. I'll go even further (now that we're
> steeped in phenomenology): It is part of my EXPERIENCE of my behavior
> that it is caused by my mind. [I happen to believe (inferentially) that
> "free will" is an illusion, but I admit it's a phenomenological fact
> that free will sure doesn't FEEL like an illusion.] We do not experience our
> performance in the passive way that we experience sensory input. We
> experience it AS something we (our minds) are CAUSING. (In fact, that's
> probably the source of our intuitions about what causation IS. I'll
> return to this later.)
>
> So there is a very big difference between my direct knowledge that my
> mind causes my behavior and my inference (say, in the dentist's chair)
> that my brain causes my mind. ...So, to put it briefly,
> what I've called the "informal component" of the Total Turing Test --
> does the candidate act as if it had a mind (i.e., roughly as I would)? --
> appeals to precisely those intuitions, and not the inferential kind, about
> brains, etc.
>
> In summary: There is a vast difference between knowing causes
> directly and inferring them; subjective phenomena are unique and
> radically different from other phenomena in that they confer this
> direct certainty; and inferences about other minds (i.e., about
> subjective phenomena in others) are parasitic on these direct
> experiences of causation, rather than on ordinary causal inference,
> which carries little or no intuitive force in the case of mental
> phenomena, in ourselves or others. And rightly not, because mind is a
> private, direct, subjective matter, not something that can be
> ascertained -- even in the normal inductive sense -- by public,
> indirect, objective correlations.

Completely agreed that one's knowledge about one's own consciousness
is attained in a very different way than is "ordinary" knowledge.
The issue is how the provenance of this knowledge bears upon its
application to the inductive process for deciding who else has a
mind.  Rather than answer point-by-point, here is a scenario
which I think illustrates the issues:

Assume the following sequence of events:                    
A1. a rock falls on your foot (public external event)
B1. certain neural events occur within you (public internal event)
C1. you experience a pain "in your foot" (private)
D1. you get angry (private)
E1. some more neural events occur (public internal)
F1. you emit a stream of particularly evocative profanity (public
    external)

(a more AI-oriented account would be:
   A1'. someone asks you what 57+62 is
   B1'. neural events
   C1'. you "mentally" add the 7 and 2, etc.
   D1'. you decide to respond
   E1'. neural events
   F1'. you emit "119"
)

Now, how much do you know, and how do you know it?  Regarding the
mere existence and, to some level of detail, the quality, of these
events (ignoring any causal connections for the moment):

You know about A1 and F1 through "normal sensory means" of
finding out about the world.

You know about C1 and D1 through "direct incorrigible(?)
awareness" of your own consciousness (if you're not aware of
your own consciousness, who is?)

You know about B1 and E1 (upon reflection) only inferentially/
scientifically, via textbooks, microscopes, undergraduate courses...

Now, even though we know about these things in different ways,
they are all perfectly respectable cases of knowledge (not
necessarily certain, of course).  It's not clear why we should
be shy about extrapolating *any* of these chunks of knowledge
in other cases...but let's go on.

What do we know about the causal connections among these events?
Well, if you're an epiphenomalist, you probably believe something
like:

         C1,D1
         /
  A1 -> B1 -> E1 -> F1

the point being that mental events may be effects, but not causes,
especially of non-mental events.  If you're an interactionist:

  A1 -> B1 -> C1 -> D1 -> E1 -> F1

(Identity theorists believe B1=C1, E1=D1.  Let's ignore them
for now.  Although, for what it's worth, since they *identify*
neural and mental events, I assume that for them brainedness
would be, literally, the definitive criterion for mentality.)
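For concreteness, here is a toy sketch of the two pictures as directed
graphs (in Python; the event names come from the scenario above, but
the encoding and the code are only an illustration, nothing more):

    # The events of the scenario, tagged by how they are observable.
    EVENTS = {
        "A1": "public external",   # rock falls on your foot
        "B1": "public internal",   # neural events
        "C1": "private",           # pain "in your foot"
        "D1": "private",           # anger
        "E1": "public internal",   # more neural events
        "F1": "public external",   # evocative profanity
    }

    # Epiphenomenalism: mental events are effects, never causes.
    EPIPHENOMENAL = {"A1": ["B1"], "B1": ["C1", "D1", "E1"], "E1": ["F1"]}

    # Interactionism: mental events lie on the causal path to behavior.
    INTERACTIONIST = {"A1": ["B1"], "B1": ["C1"], "C1": ["D1"],
                      "D1": ["E1"], "E1": ["F1"]}

    def immediate_causes(event, picture):
        """Events that directly cause `event` under a given picture."""
        return [e for e, effects in picture.items() if event in effects]

    # The dispute in one line: does anything mental cause E1 (and so F1)?
    print(immediate_causes("E1", EPIPHENOMENAL))    # ['B1'] - brain only
    print(immediate_causes("E1", INTERACTIONIST))   # ['D1'] - the anger

Note that both pictures agree that B1 causes C1 - which, as will emerge
just below, is all the argument needs.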

Now, in either case, what is the basis for our belief in causation,
especially causation of and by C1 and D1?  This raises tricky
questions - what, in general, is the rational basis for belief in
causation?  Does it always involve an implicit appeal to a kind of
"scientific method" of experimentation, etc.?  Can we ever detect
causation in a single instance, without any knowledge of similar
types of events?  Does our feeling that we are causing some external
event have any value as evidence?

Fortunately, I think that we need to determine *neither* just what are
the rational grounds for belief in causation, *nor* whether the
epiphenomenal or interactionist picture is true.  It's enough just to
agree (don't we?) that B1 is a proximate (more than A1, anyway) cause
of C1, and that we know this.  Of course A1 is also a cause of C1,
via B1.

Now the only "fishy" thing about one's knowledge that B1 causes C1
is that C1 is a private event.  But again, so what?  If you're lying
on the operating table, and every time the neurosurgeon pokes you
at site X, you see a yellow patch, your inference about causal
connections is just as sound as if you walked in a room and
repeatedly flicked a switch to make the lights go on and off.
It's too bad that in the first case the "lights" are private, but
that in no way disbars the causation knowledge from being used
freely.  The main point here is that our knowledge that Bx's cause
Cx's is entirely untainted and projectible.  The mere fact that it is
ultimately grounded in our direct knowledge of our own experience in
no way disqualifies it (after all, isn't *all* knowledge ultimately
so grounded?). [more below on this]
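The form of that inference is nothing fancy.  A sketch, with invented
trials (each records: did the surgeon poke site X, and did I see the
yellow patch?):

    # Invented data: (poked at site X?, saw yellow patch?) per trial.
    trials = [(True, True), (False, False), (True, True),
              (False, False), (True, True), (False, False)]

    # The patch appears exactly when the poke occurs - the same perfect
    # covariation that licenses "the switch causes the light".
    print(all(poke == patch for poke, patch in trials))   # True

Whether the right-hand column records a light (public) or a yellow
patch (private) changes where the data come from, not the logic.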

Now then, suppose you see Mr. X undergoing a similar ordeal - A2, B2,
??, ??, E2, F2.  You can see, with normal sensory means, that A2 is
like A1, and that F2 is like F1 (perhaps somewhat less evocative, but
similar).  You can find out, with some trouble, that B2 is like B1 and
E2 is like E1.  On the basis of these observations, you fearlessly
induce that Mr. X probably had a C2 and D2 similar to your C1 and D1,
ie that he too is conscious, even though you can never observe C2 and
D2, either through the normal means you used for A2, B2.. or the
"privileged" means you used for C1 and D1.

Absent any one of these visible similarities, the induction is
weakened.  Suppose, for instance he had B2 but not A2 - well OK,
he was hallucinating a pain, maybe, but we're not as sure.
Suppose he had A2, but not B2 - gee, the thing dropped on his foot
and he yelled, but we didn't see the characteristic nerve firings..
hmmm (but at least he has a brain).

But now suppose we observe an AI-system:
    
A3.  a rock falls on its foot
BB3. certain electronic events occur within it
C3.  ??
D3.  ??
EE3. some more electronic events occur
F3.  it emits a stream of particularly evocative profanity

Granted A3 and F3 are similar to A1 and F1 - but you know that
BB3 is, in many ways, not similar to B1, nor EE3 to E1.  Of course,
in some structural ways, they may be similar/isomorphic/whatever
to B1 and E1, but not nearly as similar as B2 and E2 are (Mr. X's
neural events).  Surely your reasons for believing that C3, D3
exist/are similar to C1 and D1 are much weaker than for C2, D2,
especially given that we agree at least that B1 *caused* C1, and that
causation operates among relevantly similar events.  Surely it's a
much safer bet that B2 is relevantly similar to B1 than is BB3, no?
(even given the decidedly imperfect state of current brain science.
We needn't know exactly WHAT our brain events are like before we
rationally conclude THAT they are similar.  Eg, people in 1700, if
asked, probably believed that stars were somewhat similar to one
another in their internal structure and the way they worked, even
though they had no idea what that structure was.)  The point is that
brainedness supplies strong additional support to the hypothesis of
consciousness.
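To caricature the induction numerically (the weights and similarity
scores below are invented - only the ordering of the results matters):

    # How much each observable similarity counts toward inferring the
    # hidden C and D.  Proximate causes (B, E) outweigh distal A and F.
    WEIGHTS = {"A": 0.1, "B": 0.4, "E": 0.3, "F": 0.2}

    def support(similarity):
        """Weighted evidence for C- and D-like events, given 0..1 scores
        for how similar the candidate's A, B, E, F are to my own."""
        return sum(WEIGHTS[k] * similarity.get(k, 0.0) for k in WEIGHTS)

    mr_x = {"A": 1.0, "B": 0.9, "E": 0.9, "F": 0.8}   # brained; B2 ~ B1
    ai   = {"A": 1.0, "B": 0.2, "E": 0.2, "F": 0.8}   # BB3 only loosely ~ B1

    print(support(mr_x))   # roughly 0.89
    print(support(ai))     # roughly 0.40

Same A's and F's, but the weak B and E similarity drags the AI-system's
total down - that is all "strong additional support" amounts to here.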

In fact, I'd be inclined to argue that brainedness is probably
stronger evidence (for a conscious entity who knows himself to be
brained) for consciousness than performance:

1.  Proximate causation is more impressive than mediated causation.
Consider briefly what we would say about someone (a brained someone)
who lacked A and F, but had B and E, ie no outward stimulus or
response, but in whom we observed neural patterns very similar to
those normally characteristic of people feeling a sharp pain in their
foot (never mind the grammar).  If I were told that either he or the
AI-system (however sophisticated its performance) was in pain, and I
had to bet which one, I'd bet on him, because of the *proximate
causation* presumed to hold between B's and C's, but not established
at all between BB's and C's.

2.  Causation between B's and C's is more firmly established than
between D's and F's.  No one seriously doubts that brain events
affect one's state of consciousness.  Whether one's consciousness
counts as a cause of performance is an open question.  It certainly
feels as if it's true, but I know of no knock-down refutation of
epiphenomenalism.  You seem to equivocate, sometimes simply saying
we KNOW that our intentions cause performance, other times doubting.
But the TTT criterion depends by analogy on questionable D-F causation;
the brain criterion depends on the less problematic B-C causation.
                                             
3.  Induction is more firmly based on analogy from causes than effects.
If you believe in the scientific method, you believe "same cause ergo
same effect".  The same effect *suggests* the same cause, but doesn't
strictly imply it, especially when the effect is not proximate.
But the TTT criterion is based on the latter (weaker) kind of
induction, the brain criterion on the former.
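In Bayesian dress (invented numbers; the asymmetry is the point): let M
be "has a mind" and E be "produces mind-like performance".  "Same cause
ergo same effect" says P(E|M) is high.  The TTT needs the converse,
P(M|E), which also depends on how often the effect arises WITHOUT the
cause:

    p_m = 0.5               # prior that the candidate is minded (invented)
    p_e_given_m = 0.95      # P(E|M): minded things almost always pass
    p_e_given_not_m = 0.40  # P(E|not-M): unminded things sometimes pass too

    p_e = p_e_given_m * p_m + p_e_given_not_m * (1 - p_m)
    print(p_e_given_m * p_m / p_e)  # P(M|E) ~ 0.70: suggestive, not decisive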

> Consider ordinary scientific knowledge about "unobservables," say,
> about quarks ...Were you to subtract this inferred entity from the
> (complete) theory, the theory would lose its capacity to account for
> all the (objective) data. That's the only reason we infer
> unobservables in the first place, in ordinary science: to help
> predict and causally explain all the observables. A complete, utopian
> scientific theory of the "mind," in radical contrast with this, will
> always be just as capable of accounting for all the (objective) data
> (i.e., all the observable data on what organisms and brains do) WITH
> or WITHOUT positing the existence of mind(s)!

Well, not so fast... I agree that others' minds are unobservable in a
way rather different from quarks - more on this below.  The utopian
theory explains all the objective data, as you say, but of course
this is NOT all the data.  Quite right, if I discount my own
consciousness, I have no reason whatever to believe in that of
others, but I decline the antecedent, thank you.  All *my* data
includes subjective data and I feel perfectly serene concocting a
belief system which takes my own consciousness into account.  If the
objective-utopian theory does not, then I simply conclude that it is
incomplete wrt reality, even if not wrt, say, physics.

> In other words, the complete explanatory/predictive theory of organisms
> (and devices) WITH minds will be turing-indistinguishable from the
> complete explanatory/predictive theory of organisms (and devices)
> WITHOUT minds, that simply behave in every observable way AS IF they
> had minds.

So the TTT is in principle incapable of distinguishing between minded
and unminded entities?  Even I didn't accuse it of that.

If this theory does not explain the contents of my own consciousness,
it does not completely explain everything observable to me.
Look, you agree, I believe, that "events in the world" include a
large set S, publicly observable, and a lot of little sets P1, P2,
... each of which is observable only by one individual.  An
epistemological pain in the neck, I agree, but there it is. If
utopian theory explains S, but not P1, P2, why shouldn't I hazard a
slightly more ambitious formulation (eg, whenever you poke an X-like
site in someone's brain, they will experience a yellow patch...)?
Don't we, in fact, all justly believe statements exactly like this??

> That kind of inferential indeterminacy is a lot more serious than the
> underdetermination of ordinary scientific inferences about
> unobservables like quarks, gravitons or strings. And I believe that this
> amounts to a demonstration that all ordinary inferential bets (about
> brain-correlates, etc.) are off when it comes to the mind.

I don't get this at all ...

> The mind (subjectivity, consciousness, the capacity to have
> qualitative experience) is NEITHER an ordinary, intersubjectively
> verifiable objectively observable datum, as in normal science, NOR is
> it an ordinary unobservable inferred entity, forced upon us so that
> we can give a successful explanatory/predictive account of the
> objective data.  Yet the mind is undoubtedly real. We know that,
> noninferentially, for one case: our own.
 
I couldn't agree more.

> Perhaps I should emphasize that in the two "correlations" we are
> talking about -- performance/mind and brain/mind -- the basis for the
> causal inference is radically different. The causal connection between
> my mind and my performance is something I know directly from being the
> performer. There is no corresponding intuition about causation from
> being the possessor of my brain. That's just a correlation, depending
> for its causal interpretation (if any), on what theory or metatheory I
> happen to subscribe to. That's why nothing compelling follows from
> being told what my insides are made of.

Addressing the latter point first, I think there's nothing wrong
with pre-theoretic beliefs about causation.  If, every time I flip
the switch on the wall, the lights come on, I will develop a justified
true belief (=knowledge) about the causal links between the
switch and the light, even in the absence of any knowledge on my
part (or anyone else's for that matter) of how the thing works.

But the main issue here is the difference in the way we know about
the correlations.  I think this difference is just incidental.  We
are familiar with A and F type events, not so much with B and E
types, and so we develop intuitions regarding the former and not the
latter. If you had your brain poked by a neurosurgeon every day,
you'd quickly develop intuitions about brain-pokes and yellow
patches.  Conversely, if you were strapped down or paralyzed from
birth, you would not develop intuitions about your mind's causal
powers.

Further, one may *scientifically* investigate the causal connections
among B1, C1, D1, and E1, and between A1 and F1 as well, as long as
one is willing to take people's word for it that they're in pain, etc.
(and why not?).  Just because we usually find out about some
correlations in certain ways doesn't mean we can't find out about
them in others as well.

And even if the difference weren't incidental, it is unclear why
mysterious Cartesian-type intuitions about causation between Ds and
Fs are to be preferred to scientific inferential knowledge about Bs
and Cs as a basis for induction.

   "It may be nonsense, but at least it's clever nonsense" - Tom Stoppard

John Cugini <Cugini@NBS-VMS>
------