[comp.ai] Searle, Turing, Symbols, Categories: Reply to Cugini

harnad@mind.UUCP (Stevan Harnad) (11/21/86)

On mod.ai <8611200632.AA19202@ucbvax.Berkeley.EDU> "CUGINI, JOHN"
<cugini@nbs-vms.ARPA> wrote:

>	I know I have a mind.  In order to determine if X
>	[i.e., anyone else but myself]
>	has a mind I've got to look for analogous
>	external things about X which I know are causally connected with mind
>	in *my own* case. I naively know (and *how* do I know this??) that large
>	parts of my performance are an effect of my mind.  I scientifically
>	know that my mind depends on my brain.  I can know this latter
>	correlation even *without* performance correlates, eg, when the dentist
>	puts me under, I can directly experience my own loss of mind which
>	results from loss of whatever brain activity.  (I hope it goes
>	without saying that all this knowledge is just regular old
>	reliable knowledge, but not necessarily certain - ie I am not
>	trying to respond to radical skepticism about our everyday and
>	scientific knowledge, the invocation of deceptive dentists, etc.)

These questions and reflections are astute ones, and very relevant to
the issues under discussion. It is a matter of some ancillary interest
that the people who seem to be keeping their heads more successfully
in the debates about artificial intelligence and (shall we call it)
"artificial consciousness" are the more sceptical ones, as you reveal
yourself to be at the end of this module. The zealous advocates, on
the other hand, seem to be more prone to flights of
over-interpretative fancy, leaving critical judgment by the wayside.
(This is not to say that some of the more dogged critics haven't waxed
irrational in their turn too.)

Now on to the substance of your criticism. I think the crucial points
will turn on the difference between what you call "naively know" and
"scientifically know." It will also involve (like it or not) the issue
of radical scepticism, uncertainty, and the intersubjectivity and validity of
inferences and correlations. Now, I am neither an expert in, nor an advocate
of, phenomenological introspection, but if you will indulge me and do
a little of it here, I think you will notice that there is something very
different about "naive knowing" as compared to "scientific knowing."

Scientific knowing is indirect and inferential. It is based on
inference to the best explanation, the weight of the evidence, probability,
Popperian (testability, falsifiability) considerations, etc. It is the
paradigm for all empirical inquiry, and it is open to a kind of
radical scepticism (scepticism about induction) that we all reasonably
agree not to worry about, except insofar as noting that scientific
"knowledge" is not certain, but only highly likely on the evidence,
and is always in principle open to inductive "risk" or falsification
by future evidence. This is normal science, and if that were all there
was to the special case of the mind/body problem (or, more perspicuously,
the other-minds problem) then a lot of the matters we are discussing
here could be settled much more easily.

What you call "naive knowing," on the other hand (and about which you
ask "*how* do I know this?") is the special preserve of 1st-hand,
1st-person subjective experience. It is "privileged" (no one has
access to it but me), direct (I do not INFER from evidence that I am
in pain, I know it directly), and it has been described as
"incorrigible" (can I be wrong that I am feeling pain?). The
inferences we make (about the outside world, about inductive
regularities, about other minds) are open to radical scepticism, but
the phenomenological content of 1st-hand experience is different. This
makes "naive knowing" radically different from "scientific knowing."

(Let me add a quick parenthetical remark, but not pursue it unless
someone brings it up: Even our inferential knowledge depends on our
capacity for phenomenological experience. Put another way: we must
have direct experience in order to make indirect inferences, otherwise
the inferences would have no content, whether right or wrong. I
conjecture that this is significantly connected with what I've called
the "grounding" problem that lies at the root of this discussion. It
is also related to Locke's (inchoate) distinction between primary and
secondary qualities, turning his distinction on its head.)

Now let's go on. You say that I "naively know" that my performance
is caused by my mind and I "scientifically know" that my mind is caused
by my brain. (Let's not quibble about "cause"; the other words, such
as "determined by," "a function of," "supervenient on," or Searle's
notorious "caused-by-and-realized-in" are just vague ways of trying to
finesse a problematic and unique relationship otherwise known as the
mind/body problem. Let's just bite the bullet with "cause" and see
where that gets us.) Let me translate that: I know directly that my
performance is caused by my mind, and I infer that my
mind is caused by my brain. I'll go even further (now that we're
steeped in phenomenology): It is part of my EXPERIENCE of my behavior
that it is caused by my mind. [I happen to believe (inferentially) that
"free will" is an illusion, but I admit it's a phenomenological fact
that free will sure doesn't FEEL like an illusion.] We do not experience our
performance in the passive way that we experience sensory input. We
experience it AS something we (our minds) are CAUSING. (In fact, that's
probably the source of our intuitions about what causation IS. I'll
return to this later.)

So there is a very big difference between my direct knowledge that my
mind causes my behavior and my inference (say, in the dentist's chair)
that my brain causes my mind. [Even my rational inference (at the
metalevel) that my mind doesn't really cause my behavior, that that's
just an illusion, leaves the incorrigible phenomenological fact that I
know directly that that's not the way it FEELS.] So, to put it briefly,
what I've called the "informal component" of the Total Turing Test --
does the candidate act as if it had a mind (i.e., roughly as I would)? --
appeals to precisely those intuitions, and not the inferential kind, about
brains, etc. Note, however, that I'm not claiming we have direct
knowledge of other minds. That's just an inference. But it's not the
same kind of inference as the inference that there are, say, quarks, or
cosmic strings. We are appealing, in the informal TTT, to our
intuitions about subjectivity, not to ordinary, objective scientific
evidence (such as brain-correlates).

As a consequence (and again I invite you to do some introspection), the
intuitive force of the direct knowledge that I have (or am) a mind, and
that that causes my behavior, is of an entirely different order from my
empirical inference that I have a brain and that that causes my mind.
Consider, for example, that there are plenty of people who doubt that
their brains are the true causes of their minds, but very few (like
me) who venture to doubt that their minds cause their behavior; and I
confess that I am not very successful in convincing myself, because my
direct experience keeps contradicting my inference, incorrigibly.

In summary: There is a vast difference between knowing causes directly and
inferring them; subjective phenomena are unique and radically different from
other phenomena in that they confer this direct certainty; and
inferences about other minds (i.e., about subjective phenomena in
others) are parasitic on these direct experiences of causation, rather
than on ordinary causal inference, which carries little or no
intuitive force in the case of mental phenomena, in ourselves or
others. And rightly not, because mind is a private, direct, subjective
matter, not something that can be ascertained -- even in the normal
inductive sense -- by public, indirect, objective correlations.

If you want some reasons why the mind/body case is so radically
different from ordinary causal inference in science, here are two:

(1) Generalizations about correlates of having a mind
are, because of the peculiar nature of subjective, 1st-person
experience, always doomed to be based on an N = 1. We can have
intersubjective agreement about a meter-reading, but not about a
subject's experience. This already puts mind-science in a class by
itself. (One can even argue that the intersubjective agreement on
"objective" meter readings is itself parasitic on, or grounded in,
some turing-equivalence assumptions about other people's reports of
their experiences -- of meter readings!)

But, still more important and revealing: (2) Consider ordinary scientific
inferences about "unobservables," say, about quarks (if they should continue
to play an inferred causal role in the future, utopian, "complete"
explanatory/predictive theory in physics): Were you to subtract this
inferred entity from the (complete) theory, the theory would lose its
capacity to account for all the (objective) data. That's the only
reason we infer unobservables in the first place, in ordinary
science: to help predict and causally explain all the observables.
A complete, utopian scientific theory of the "mind," in radical
contrast with this, will always be just as capable of accounting
for all the (objective) data (i.e., all the observable data on what
organisms and brains do) WITH or WITHOUT positing the existence of mind(s)!

In other words, the complete explanatory/predictive theory of organisms
(and devices) WITH minds will be turing-indistinguishable from the
complete explanatory/predictive theory of organisms (and devices)
WITHOUT minds, that simply behave in every observable way AS IF they
had minds.

That kind of inferential indeterminacy is a lot more serious than the
underdetermination of ordinary scientific inferences about
unobservables like quarks, gravitons or strings. And I believe that this
amounts to a demonstration that all ordinary inferential bets (about
brain-correlates, etc.) are off when it comes to the mind.
The mind (subjectivity, consciousness, the capacity to have
qualitative experience) is NEITHER an ordinary, intersubjectively
verifiable objectively observable datum, as in normal science, NOR is
it an ordinary unobservable inferred entity, forced upon us so that we
can give a successful explanatory/predictive account of the objective
data.

Yet the mind is undoubtedly real. We know that, noninferentially, for
one case: our own. It is to THAT direct knowledge that the informal component
of the TTT appeals, and ONLY to that knowledge. Any further indirect
inferences, based on, say, correlations, depend ultimately for their
validation only on that direct knowledge, and are always secondary to
it, in that split inferences are always settled by an appeal to the
TTT criterion, not vice versa (or some third thing), as I shall try to
show below.

(The formal component of the TTT, on the other hand [i.e., the formal
computer-testing of a theory that purports to generate all of our
performance capacities], IS just a case of ordinary scientific
inference; here it is an empirical question whether brain correlates
will be helpful in guiding theory-construction. I happen to
doubt they will be helpful even there; not, at least until we
get much closer to TTT utopia, when we've all but captured
total performance capacity, and the fine-tuning [errors, reaction
times, response style, etc.] may begin to matter. There, as I've
suggested, the boundary between organism-performance and
brain-performance may break down somewhat, and microfunctional and
structural considerations may become relevant to the success and
verisimilitude of the performance modeling itself.)
                                                        
>	Now then, armed with the reasonably reliable knowledge that in my own
>	case, my brain is a cause of my mind, and my mind is a cause of my
>	performance, I can try to draw appropriate conclusions about others.

As I've tried to argue, these two types of knowledge are so different
as to be virtually incommensurable. In particular, your knowledge that
your mind causes your performance is direct and incorrigible, whereas
your knowledge that your brain causes your mind is indirect,
inferential, and parasitic on the former. Inferences about other minds
are NOT ordinary cases of scientific inference. The mind/body case is
special.

>	X3 has brains, but little/no performance - eg a case of severe
>	retardation.  Well, there doesn't seem much reason to believe that
>	X has intelligence, and so is disqualified from having mind, given
>	our definition.  However, it is still reasonable to believe that
>	X3 might have consciousness, eg can feel pain, see colors, etc.

For the time being, intelligence is as mind does. X3 may not be VERY
intelligent, but if he has any mind-like performance capacity (to pass
some variant of the TTT for some organism or other -- a tricky issue),
that amounts to having some intelligence. As discussed in another
module, intelligence may be a matter of degree, but having a mind
seems to be an all-or-none matter. Also, having a mind seems to be a
sufficient condition for having intelligence; if it's not also a
necessary condition, we have the radical indeterminacy I mentioned
earlier, and we're in trouble.

So the case of severe retardation seems to represent no problem.
Retarded people pass (some variant of) the TTT, and we have no trouble
assigning them minds. This is fine as long as they have some (shall we
call it "intelligible") performance capacity, and hence some
intelligence. Comatose people are another matter. But they may well
not have minds. (I might add that our inclination to assign a mind to
a person who is so retarded that his performance capacity is reduced
to vegetative functions such as blinking, breathing and swallowing,
could conceivably be an overgeneralization, motivated by considerations
of biological origins and humanitarian concerns.) I repeat, though,
that these special cases belong more to the domain of near-utopia
fine-tuning than the basic issue of whether it is performance or brain
correlates that should guide us in inferring minds in others. Certainly
neither TTT-enthusiasts nor brain-enthusiasts have any grounds for
feeling confident about their judgments in such ambiguous cases.

>	X4 has normal human cognitive performance, but no brains, eg the
>	ultimate AI system.  Well, no doubt X4 has intelligence, but the issue
>	is whether X4 has consciousness.  This seems far from obvious to me,
>	since I know in my own case that brain causes consciousness causes
>	performance.  But I already know, in the case of X4, that the causal
>	chain starts out at a different place (non-brain), even if it ends up
>	in the same place (intelligent performance).  So I can certainly
>	question (rationally) whether it gets to performance "via
>	consciousness" or not.
>	If this seems too contentious, ask yourself: given a choice between
>	destroying X3 or X4, is it really obvious that the more moral choice
>	is to destroy X3?

I don't think the moral choice is obvious in either case. However, I
don't think you're imagining this case sufficiently vividly. Let's make
it the one I proposed: A lifelong friend turns out to be a robot, versus
a human born (irremediably) with only vegetative function. These issues
are for the right-to-lifers; the alternatives imposed on us are too
hypothetical and artificial (akin to having to choose between saving
one's mother or father). But I think it's fairly clear which way I'd
go here. And what we know (or don't know) about brains has very little
to do with it.

>	Finally, a gedanken experiment (if ever there was one) - suppose
>	(a la sci-fi stories) they opened you up and showed you that you
>	really didn't have a brain after all, that you really did have
>	electronic circuits - and suppose it transpired that while most
>	humans had brains, a few, like yourself, had electronics.  Now,
>	never doubting your own consciousness, if you *really* found that
>	out, would you not then (rationally) be a lot more inclined to
>	attribute consciousness to electronic entities (after all you know
>	what it feels like to be one of them) than to brained entities (who
>	knows what, if anything, it feels like to be one of them?)? 
>	Even given *no* difference in performance between the two sub-types?
>	Showing that "similarity to one's own internal make-up" is always
>	going to be a valid criterion for consciousness, independent of
>	performance.

Frankly, although it might disturb me for other reasons, I think that
discovering I had complex, ill-understood electronic circuits inside my
head instead of complex, ill-understood biochemical ones would not
sway me one way or the other on the basic proposition that it is
performance alone that is responsible for my inferring minds in other
people, not my (or anyone else's) dim knowledge about their inner
structure or function. I agreed in an earlier module, though, that
such a demonstration would be a bit of a blow to the sceptics about robots
(which I am not) if they discovered THEMSELVES to be robots. On the
other hand, it wouldn't move an outside sceptic one bit. For example,
*you* would presumably be uninfluenced in your convictions about the
relevance of brain-correlates over and above performance if *I* turned
out to be X4. And that's just the point! Like it or not, the
1st-person stance retains center stage in the mind/body problem.

>	I make this latter point to show that I am a brain-chauvinist *only
>	insofar* as I know/believe that I *myself* am a brained entity (and
>	that my brain is what causes my consciousness).  This really
>	doesn't depend on my own observation of my own performance at all -
>	I'd still know I had a mind even if I never did any (external) thing
>	clever.

Yes. But the problem for *you* is whether *I* (or some other candidate)
have a mind, not whether *you* do. Moreover, no one suggested that the
turing test was the basis for knowing one has a mind in the 1st person
case. That problem is probably closer to the Cartesian Cogito, solved
directly and incorrigibly. The other-minds problem is the one we're
concerned with here.

Perhaps I should emphasize that in the two "correlations" we are
talking about -- performance/mind and brain/mind -- the basis for the
causal inference is radically different. The causal connection between
my mind and my performance is something I know directly from being the
performer. There is no corresponding intuition about causation from
being the possessor of my brain. That's just a correlation, depending
for its causal interpretation (if any), on what theory or metatheory I
happen to subscribe to. That's why nothing compelling follows from
being told what my insides are made of.

>	To summarize: brainedness is a criterion, not only via the indirect
>	path of: others who have intelligent performance also have brains,
>	ergo brains are a secondary correlate for mind; but also via the
>	much more direct path (which *also* justifies performance as a
>	criterion): I have a mind and in my very own case, my mind is
>	closely causally connected with brains (and with performance).

I would summarize it differently: In the 1st-person case, I know directly
that my performance is caused by my mind. I infer (from the correlation)
that my brain causes my mind. In the other-minds case I know nothing
directly; however, I am intuitively persuaded by performance similarity.
I have no intuitions about brains, but of course every confirmatory
cue helps; so if you also have a brain, my confidence is increased.
But split the ticket, and I'll go with performance every time. That
makes it seem as if performance is still the decisive criterion, and
brainedness is only a secondary correlate. 

Putting it yet another way: We have direct knowledge of the causal
connection between our minds and our performance and only indirect
inferences about the causal connection between our brains and our
minds (and performance). This parasitism is hence present in our
inferences about other minds too.

>	I agree that there are some additional epistemological problems,
>		[with subjective/objective causation, as opposed to
>		objective/objective causation, i.e., with the mind/body problem]
>	compared to the usual cases of causation.  But these don't seem
>	all that daunting, absent radical skepticism.

But "radical" scepticism makes an unavoidable, substantive appearance
in the contemporary scientific incarnation of the other-minds problem:
The problem of robot minds.

>	We already know which parts of the brain 
>	correlate with visual experience, auditory experience, speech
>	competence, etc. I hardly wish to understate the difficulty of
>	getting a full understanding, but I can't see any problem in
>	principle with finding out as much as we want.  What may be
>	mysterious is that at some level, some constellation of nerve
>	firings may "just" cause visual experience, (even as electric
>	currents "just" generate magnetic fields.)  But we are
>	always faced with brute-force correlation at the end of any scientific
>	explanation, so this cannot count against brain-explanatory theory of
>	mind.

There is not quite as much disagreement here as there may seem. We
agree on (1) the basic mystery in objective/subjective causation -- though I
disagree that it is no more mysterious than objective/objective
causation. Never mind. It's mysterious. I also agree that (2) I would
feel (negligibly) more confident in inferring that a candidate who
passed the TTT had a mind if it had a real brain than if it did not.
(I'd feel even more confident if it was my identical twin.) We agree
that (3) the brain causes the mind, that (4) the brain can be studied,
that (5) there are anatomical and physiological correlations
(objective/subjective), and that (6) these are very probably causal.

Where we may disagree is on the methodology for arriving at a causal theory
of mind. I don't think peeking-and-poking at the brain in search of
correlations is likely to generate a successful causal theory; I think
trial-and-error modeling of performance will, and that it will in fact
guide brain research, suggesting what functions to look for
implementations of, and how they cause performance. What I believe
will fall by the wayside in this brute-force correlative account --
I'm for correlations too, of course, except that I'm for
objective/objective correlations -- is subjectivity itself. For, on
all the observable evidence that will ever be available, the
complete theory of the mind -- whether implemented as a brain or as some
other artificial causal device -- will always be just as true of a
device actually having a mind as of a mindless device merely acting as
if it had a mind. And there will be no way of settling this, short of
actually BEING the device in question (which is no help to the rest of
us). If that's radical scepticism, it's come home to roost, and should
be accepted as a fact of life in mind-science. (I've dubbed this
"methodological epiphenomenalism" in the paper under discussion.)

You may feel more confident in attributing a mind to the
brain-implementation than to a synthetic one (though I can't imagine you'll
have good reasons, since they'll be functionally equivalent in every
observable and ostensibly relevant respect), but that too is a
question we will never be able to settle objectively.

(Let me add, in case it's not apparent, that performances such as
reporting "It hurts now" are perfectly respectable, objective data,
both for the brain-correlation investigator and the mind-modeler. So
whereas we can never investigate subjectivity directly except in our
own case, we can approximate its behavioral manifestations as closely
as the expressive power of introspective reports will allow. What's
not clear is how useful this aspect of performance modeling will be.)

>	Well, I plead guilty to diverting the discussion into philosophy, and as
>	a practical matter, one's attitude in this dispute will hardly affect
>	one's day-to-day work in the AI lab.  One of my purposes is a kind of
>	pre-emptive strike against a too-grandiose interpretation of the
>	results of AI work, particularly with regard to claims about
>	consciousness.  Given a behavioral definition of intelligence, there
>	seems no reason why a machine can't be intelligent.  But if "mind"
>	implies consciousness, it's a different ball-game, when claiming
>	that the machine "has a mind".

I plead no less guilty than you. Neither of us is responsible for the
fact that scepticism looms large in making inferences about other
minds and how they work, which is what cognitive science is about. I
do disagree, though, that these considerations are irrelevant to one's
research strategy. It does matter whether you choose to study the
brain directly, or to model it, or to model performance-equivalent
alternatives. Other issues in this discussion matter too: modeling
toy modules versus the Total Turing Test, symbolic modeling versus
robotic modeling, and the degree of attention focused on modeling
phenomenological reports.

I also agree, of course, about the grandiose over-interpretation of
which AI (and, lately, connectionism too) has been guilty. But in the
papers under discussion I try to propose principled constraints (e.g.,
robotic capacity, groundedness, nonmodularity and the Total Turing
Test) that might restrain such excesses, rather than merely scepticism
about artificial performance. I also try to sort out the empirical
issues from the methodological and metaphysical ones. And, as I've
argued in several iterations, "intelligence" is not just a matter of
definition.

>	My as-yet-unarticulated intuition is that, at least for people, the
>	grounding-of-symbols problem, to which you are acutely and laudably
>	sensitive, inherently involves consciousness, ie at least for us,
>	meaning requires consciousness.  And so the problem of shoehorning
>	"meaning" into a dumb machine at least raises the issue about how
>	this can be done without making them conscious (or, alternatively,
>	how to go ahead and make them conscious).  Hence my interest in your
>	program of research.

Thank you for the kind words. One of course hopes that consciousness
will be captured somewhere along the road to Utopia. But my
methodological epiphenomenalism suggests that this may be an undecidable
metaphysical problem, and that, empirically and objectively, total
performance capacity is the most we can know ("scientifically") that
we have captured.

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet