[mod.ai] philosophy of mind stuff

cugini@NBS-VMS.ARPA ("CUGINI, JOHN") (11/18/86)

Can't resist a few more go-rounds with S. Harnad.  Lest the size of these
messages increase exponentially, I'll try to avoid re-hashing old
issues and responding to side-issues...

> Harnad:
> I agree that scientific inference is grounded in observed correlations.
> But the primary correlation in this special case is, I am arguing, between
> mental states and performance. That's what both our inferences and our
> intuitions are grounded in. The brain correlate is an additional cue, but only
> inasmuch as it agrees with performance. 

> ...in ambiguous
> cases, behavior was and is the only rational arbiter. Consider, for
> example, which way you'd go if (1) an alien body persisted in behaving like a
> clock-like automaton in every respect -- no affect, no social interaction,
> just rote repetition -- but it DID have something that was indistinguishable
> (on the minute and superficial information we have) from a biological-like
> nervous system, versus (2) if a life-long close friend of yours had
> to undergo his first operation, and when they opened him up, he turned
> out to be all transistors on the inside. I don't set much store by
> this hypothetical sci-fi stuff, especially because it's not clear
> whether the "possibilities" we are contemplating are indeed possible. But
> the exercise does remind us that, after all, performance capacity is
> our primary criterion, both logically and intuitively, and its
> black-box correlates have whatever predictive power they may have
> only as a secondary, derivative matter. They depend for their
> validation on the behavioral criterion, and in cases of conflict,
> behavior continues to be the final arbiter.

I think I may have been tacitly conceding the point above, which I
now wish to un-concede.  Roughly speaking, I think my (everyone's)
epistemological position is as follows: I know I have a mind.  In
order to determine if X has a mind I've got to look for analogous
external things about X which I know are causally connected with mind
in *my own* case.  I naively know (and *how* do I know this??) that large
parts of my performance are an effect of my mind.  I scientifically
know that my mind depends on my brain.  I can know this latter
correlation even *without* performance correlates: eg, when the dentist
puts me under, I can directly experience my own loss of mind, which
results from the loss of whatever brain activity is involved.  (I hope it goes
without saying that all this knowledge is just regular old
reliable knowledge, but not necessarily certain - ie I am not
trying to respond to radical skepticism about our everyday and
scientific knowledge, the invocation of deceptive dentists, etc.)
                                                        
I'll assume that "mind" means, roughly, "conscious intelligence".
Also, assume throughout of course that "brain" is short-hand for
"brain activity known (through usual neuro-science techniques) to be
necessary for consciousness".

Now then, armed with the reasonably reliable knowledge that in my own
case, my brain is a cause of my mind, and my mind is a cause of my
performance, I can try to draw appropriate conclusions about others.
Let's take 4 cases:

1. X1 has brains and performance - ie another normal human.  Certainly
I have good reason to assume X1 has a mind (else why should similar
causes and effects be mediated by something so different from that
which mediates in my own case?)

2. X2 has neither brains nor performance - and no mind.
   
3. X3 has brains, but little/no performance - eg a case of severe
retardation.  Well, there doesn't seem to be much reason to believe that
X3 has intelligence, and so is disqualified from having a mind, given
our definition.  However, it is still reasonable to believe that
X3 might have consciousness, eg can feel pain, see colors, etc.

4. X4 has normal human cognitive performance, but no brains, eg the
ultimate AI system.  Well, no doubt X4 has intelligence, but the issue
is whether X4 has consciousness.  This seems far from obvious to me,
since I know in my own case that brain causes consciousness, which in
turn causes performance.  But I already know, in the case of X4, that the causal
chain starts out at a different place (non-brain), even if it ends up
in the same place (intelligent performance).  So I can certainly
question (rationally) whether it gets to performance "via
consciousness" or not.

If this seems too contentious, ask yourself: given a choice between
destroying X3 or X4, is it really obvious that the more moral choice
is to destroy X3?

Finally, a gedanken experiment (if ever there was one) - suppose
(a la sci-fi stories) they opened you up and showed you that you
really didn't have a brain after all, that you really did have
electronic circuits - and suppose it transpired that while most
humans had brains, a few, like yourself, had electronics.  Now,
never doubting your own consciousness, if you *really* found that
out, would you not then (rationally) be a lot more inclined to
attribute consciousness to electronic entities (after all, you know
what it feels like to be one of them) than to brained entities (who
knows what, if anything, it feels like to be one of them?)?
Even given *no* difference in performance between the two sub-types?
This shows that "similarity to one's own internal make-up" is always
going to be a valid criterion for consciousness, independent of
performance.

I make this latter point to show that I am a brain-chauvinist *only
insofar* as I know/believe that I *myself* am a brained entity (and
that my brain is what causes my consciousness).  This really
doesn't depend on my own observation of my own performance at all -
I'd still know I had a mind even if I never did any (external) thing
clever.

To summarize: brainedness is a criterion, not only via the indirect
path of: others who have intelligent performance also have brains,
ergo brains are a secondary correlate for mind; but also via the
much more direct path (which *also* justifies performance as a
criterion): I have a mind and in my very own case, my mind is
closely causally connected with brains (and with performance).

> As to CAUSATION -- well, I'm
> sceptical that anyone will ever provide a completely satisfying account
> of the objective causes of subjective effects. Remember that, except for
> the special case of the mind, all other scientific inferences have
> only had to account for objective/objective correlations (and [or,
> more aptly, via] their subjective/subjective experiential counterparts).
> The case under discussion is the first (and I think only) case of
> objective/subjective correlation and causation. Hence all prior bets,
> generalizations or analogies are off or moot.

I agree that there are some additional epistemological problems, compared
to the usual cases of causation.  But these don't seem all that daunting,
absent radical skepticism.  We already know which parts of the brain 
correlate with visual experience, auditory experience, speech competence,
etc.  I hardly wish to understate the difficulty of getting a full
understanding, but I can't see any problem in principle with finding
out as much as we want.  What may be mysterious is that at some level,
some constellation of nerve firings may "just" cause visual experience
(even as electric currents "just" generate magnetic fields).  But we are
always faced with brute-force correlation at the end of any scientific
explanation, so this cannot count against a brain-explanatory theory of mind.

> Perhaps I should repeat that I take the context for this discussion to
> be science rather than science fiction, exobiology or futurology. The problem
> we are presumably concerned with is that of providing an explanatory
> model of the mind along the lines of, say, physics's explanatory model
> of the universe. Where we will need "cues" and "correlates" is in
> determining whether the devices we build have succeeded in capturing
> the relevant functional properties of minds. Here the (ill-understood)
> properties of brains will, I suggest, be useless "correlates." (In
> fact, I conjecture that theoretical neuroscience will be led by, rather
> than itself leading, theoretical "mind-science" [= cognitive
> science?].) In sci-fi contexts, where we are guessing about aliens'
> minds or those of comatose creatures, having a blob of grey matter in
> the right place may indeed be predictive, but in the cog-sci lab it is
> not.

Well, I plead guilty to diverting the discussion into philosophy, and as
a practical matter, one's attitude in this dispute will hardly affect
one's day-to-day work in the AI lab.  One of my purposes is a kind of
pre-emptive strike against a too-grandiose interpretation of the
results of AI work, particularly with regard to claims about
consciousness.  Given a behavioral definition of intelligence, there
seems to be no reason why a machine can't be intelligent.  But if "mind"
implies consciousness, it's a different ball-game when it comes to
claiming that the machine "has a mind".

My as-yet-unarticulated intuition is that, at least for people, the
grounding-of-symbols problem, to which you are acutely and laudably
sensitive, inherently involves consciousness: ie, at least for us,
meaning requires consciousness.  And so the problem of shoehorning
"meaning" into a dumb machine at least raises the issue of how
this can be done without making it conscious (or, alternatively,
how to go ahead and make it conscious).  Hence my interest in your
program of research.

John Cugini <Cugini@NBS-VMS>
------