[comp.ai] Searle, Turing, Nagel

harnad@mind.UUCP (11/23/86)

On mod.ai, rjf@ukc.UUCP <8611071431.AA18436@mcvax.uucp>
Rob Faichney (U of Kent at Canterbury, Canterbury, UK) made
nonspecific reference to prior discussions of intelligence,
consciousness and Nagel. I'm not altogether certain that his
contribution was intended as a followup to the discussion that has
been going on lately under the heading "Searle, Turing, Categories,
Symbols," but since it concerns the issues of that discussion, I am
responding on the assumption that it was. R. Faichney writes:

>	[T. Nagel's] paper [See Mortal Questions, Cambridge University Press
>	1979, and The View From Nowhere, Oxford University Press 1986]
>	is not ... strictly relevant to a discussion of machine
>	intelligence, because what Nagel is concerned with is not intelligence,
>	but consciousness. That these are not the same, may be realised on a
>	little contemplation. One may be most intensely conscious while doing
>	little or no cogitation. To be intelligent - or, rather, to use
>	intelligence - it seems necessary to be conscious, but the converse
>	does not hold - that to be conscious it is necessary to be intelligent.
>	I would suggest that the former relationship is not a necessary one
>	either - it just so happens that we are both conscious and (usually)
>	intelligent. 

It would seem that if you believe that "to use intelligence...it seems
necessary to be conscious" then that amounts to agreeing that Nagel's
paper on consciousness is "relevant to a discussion of machine
intelligence." It is indisputable that intelligence admits of degrees,
both as a stable trait and as a fluctuating state. What is at issue in
discussions of the turing test is not the proposition that consciousness
is the same as intelligence. Rather, it is whether a candidate has
intelligence at all. It seems that consciousness in man is a sufficient
condition for being intelligent (i.e., for exhibiting performance that is
validly described as "intelligent" in the same way we would apply that
term to our own performance). Whether consciousness is a necessary
condition for intelligence is probably undecidable, and goes to the
heart of the mind/body problem and its attendant uncertainties.

The converse proposition -- that intelligence is a necessary condition for
consciousness -- is synonymous with the proposition that consciousness is
a sufficient condition for intelligence, and this is indeed being
claimed (e.g., by me). The argument runs like this: The issue in
turing-testing is sorting out intelligent performance from its unintelligent
look-alikes. As a completely representative example, consider my asking
you how much 2 + 2 is, and your replying "4" -- as compared to my writing
a computer program whose only function is to put out the symbol "4" whenever
it encounters the string of symbols "How much is 2 + 2?" (this is basically
Searle's point too). There you have it all in microcosm. If the word
"intelligence" has any meaning at all, over and above displaying ANY
arbitrary performance at all (including a rock sliding down a hill, or,
for that matter, a rock NOT sliding down a hill), then we need a principled
way of distinguishing these two cases. That's what the Total Turing
Test I've proposed is meant to do; it amounts to equating
intelligence with total performance capacities indistinguishable from
our own. This also coincides with our only basis for inferring that
anyone else but ourselves has a mind (i.e., is conscious).
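
To make the contrast concrete, here is a minimal illustrative sketch (in
C; the function names and the second, arithmetic-doing variant are
assumptions added purely for illustration, not anything proposed in this
discussion). Both programs answer "4" to "How much is 2 + 2?", and
nothing in that single exchange distinguishes the canned look-alike from
the one that actually does the arithmetic; only a wider range of
performance -- the point of the Total Turing Test, writ small -- tells
them apart:

    /* Hypothetical sketch: the canned responder described above,
       beside one that actually parses and computes.  Both print "4"
       to the single test question. */
    #include <stdio.h>
    #include <string.h>

    /* The look-alike: emits "4" only for the one magic string. */
    const char *canned_reply(const char *q)
    {
        return strcmp(q, "How much is 2 + 2?") == 0 ? "4" : "";
    }

    /* A responder that at least performs the addition it is asked about. */
    const char *computed_reply(const char *q, char *buf)
    {
        int a, b;
        if (sscanf(q, "How much is %d + %d", &a, &b) == 2) {
            sprintf(buf, "%d", a + b);
            return buf;
        }
        return "";
    }

    int main(void)
    {
        char buf[32];
        printf("%s\n", canned_reply("How much is 2 + 2?"));          /* 4  */
        printf("%s\n", computed_reply("How much is 2 + 2?", buf));   /* 4  */
        printf("[%s]\n", canned_reply("How much is 17 + 25?"));      /* [] */
        printf("%s\n", computed_reply("How much is 17 + 25?", buf)); /* 42 */
        return 0;
    }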

There is no contradiction between agreeing that intelligence admits
of degrees and that mind is all-or-none. The Total Turing Test does
not demand the performance capacity of Newton or Bach, only that of an
(undistinguished) person indistinguishable from any other person one might
know for a lifetime. Moreover, the Total Turing Test admits of
variants for other species, although this involves problems of ecological
knowledge and intuitions that humans may lack for any other species but
their own. It even admits of pathological variants of our own species
(retardation, schizophrenia, aphasia, paralysis, coma, etc. as discussed
in other iterations of this discussion, e.g., with J. Cugini) although
here too intuitions and validity probably break down.

>	Animals probably are conscious without being intelligent. Machines may
>	perhaps be intelligent without being conscious.  If these are defined 
>	separately, the problem of the intelligent machine becomes relatively
>	trivial (though that may seem too good to be true): an intelligent
>	machine is capable of doing that which would require intelligence in
>	a person, e.g. high-level chess.

Not too good to be true: Too easy. And it would fail to capture
almost all of our relevant pretheoretic generalizations or intuitions.
Animals ARE intelligent (in addition to being conscious), although, as usual,
their intelligence admits of degrees, and can only be validly assessed 
relative to their ecological or adaptive contexts (although even
relative to our own ecology, many other species display some degree of
intelligence). The machine intelligence problem -- which is the heart
of the matter -- cannot be settled so quickly and easily. Moreover,
the empirical question of what intelligence is cannot be settled by a
definition (remember "2 + 2 = 4" and the rolling stone, above). Many
intelligent people (with minds) can't play high-level chess, but no
machine can currently do EVERYTHING that the least intelligent of
these people can do. That's the burden of the Total Turing Test.

>	Nagel views subjectivity as irreducible to objectivity, indeed the
>	latter derives from the former, being a corrected and generalised
>	version of it. A maximally objective view of the world must admit
>	the reality of subjectivity.

Nagel is one of the few thinkers today who doesn't lapse into
arbitrary hand-waving on the issue of consciousness and its
"reducibility" to something else. Nagel's point is that there is
something it's "like" to have experience, i.e., to be conscious, and
that it's only open to the 1st person point of view. It's hence radically
unlike all other "objective" or "intersubjective" phenomena in science 
(e.g., meter-readings), which anyone else can verify as being independent of
one's "point of view" (although Nagel correctly reminds us that even
objectivity is parasitic on subjectivity). The upshot of his analysis
is that utopian scientific mind-science (cognitive science?)
-- that future complete theory that will predict and explain it all --
will be essentially "incomplete" in a way that utopian physics will not be:
Both will successfully predict and explain all their respective observable
(objective) data, but mind-science will be left with something
irreducible, hence unexplained.

For me, this is not a great problem, since I regard the mission of
devising a candidate that can pass the Total Turing Test to be an abundantly
profound and challenging one, and I regard its potential results -- a
functional explanation of the objective features of the mind -- as
sufficiently desirable and useful, so that the part it will FAIL to
explain does not bother me. That may well forever remain philosophy's
province. But I do keep reminding the overzealous that that utopian
mind science will be turing-indistinguishable from a mindless one. I
keep doing this for two reasons: First, because I believe that this
Nagelian point is correct, and worth keeping in mind. And second, because
I believe that attempts to capture or incorporate consciousness in cognitive
science more "directly" are utterly misguided, and lead in the direction of
highly subjective over-interpretations, hermeneutics and self-delusion,
instead of down the only objective scientific road to be traveled: modeling
lifesize performance capacity (i.e., the Total Turing Test). It is for
this reason that I recommend "methodological epiphenomenalism" as a
research strategy in cognitive science.

>	So what, really, is consciousness?  According to Nagel, a thing is
>	conscious if and only if it is like something to be that thing.
>	In other words, when it may be the subject (not the object!) of
>	intersubjectivity.  This accords with Minsky (via Col. Sicherman):
>	'consciousness is an illusion to itself but a genuine and observable
>	phenomenon to an outside observer...'  Consciousness is not
>	self-consciousness, not consciousness of being conscious, as some
>	have thought, but is that with which others can identify. This opens
>	the way to self-awareness through a hall of mirrors effect - I
>	identify with you identifying with me...  And in the negative mode
>	- I am self-conscious when I feel that someone is watching me.

The Nagel part is right, but unfortunately all the rest
(Minsky/Sicherman/hall-of-mirrors) has it all wrong, and is precisely
the type of lapse into hermeneutics and euphoria I warned against earlier.
The quote above (via the Colonel) is PRECISELY THE OPPOSITE of Nagel's
point. The only aspect of conscious experience that involves direct
observability is the subjective, 1st-person aspect (and the fact THAT I
am having a conscious experience is certainly no illusion since
Descartes at least, although what it tells me about the outside world may be,
at least since Hume). Let's call this private terrain Nagel-land.
The part others "can identify" is Turing-land: Objective, observable
performance (and its structural and functional substrates). Nagel's point
is that Nagel-land is not reducible to Turing-land.

Consciousness is the capacity to have subjective experience (or perhaps
the state of having subjective experience). The rest of the "mirrors"
business is merely metaphor and word-play; such subject matter may make for
entertaining and thought-provoking reading, as in Doug Hofstadter's books,
but it hardly amounts to an objective contribution to cognitive science.

>	It may perhaps be supposed that the concept of consciousness evolved
>	as part of a social adaptation - that those individuals who were more
>	socially integrated, were so at least in part because they identified
>	more readily, more intelligently and more imaginatively with others,
>	and that this was a successful strategy for survival. To identify with
>	others would thus be an innate behavioural trait.

Except that Nagel would no doubt suggest (and I would agree) that
there's no reason to believe that the asocial or minimally social
animals are not conscious too. But apart from that, there's a much
deeper reason why it is probably futile to try to make evolutionary
conjectures about the adaptive function of conscious experience:
According to standard evolutionary theory, the only traits that are
amenable to the kind of trial-and-error selection on the basis of
their consequences for the survival of the organism and propagation of its
genes are (what Nagel would call) OBJECTIVE traits: structure,
function and behavior. Standard evolutionary conjectures about the
putative adaptive function of consciousness are open to precisely the
same objection as the utopian mind-science spoken of earlier:
Evolution is blind to the difference between organisms that are
actually conscious and organisms that merely behave as if they were
conscious. Turing-indistinguishability again. On the other hand, recent
variants of standard evolutionary theory would be compatible with a
NON-selectional origin of consciousness, as an epiphenomenon.

(In pointing out the futility of adaptive scenarios for the origin of
consciousness, I am drawing on my own theoretical failures. I tried
that route in an earlier paper and only later realized that such
"Just-SO" stories suffer from even worse liabilities in speculations
about the evolutionary origins of consciousness than they do in
speculations about the evolutionary origins of behaviors; radically
worse liabilities, for the reason given above. Caveat Emptor.)

>	...When I suppose myself to be conscious, I am imagining myself
>	outside myself - taking the point of view of an (hypothetical) other
>	person.  An individual - man or machine - which has never communicated
>	through intersubjectivity might, in a sense, be conscious, but neither
>	the individual nor anyone else could ever know it.

I'm afraid you've either gravely misunderstood Nagel or left him far
behind here. When I feel a pain -- when I am in the qualitative state of
knowing what it's like to be feeling a pain -- I am not "supposing"
anything at all. I'm simply feeling pain. If I were not conscious, I
wouldn't be feeling pain, I'd just be acting as if I felt pain. The
same is true of you and of animals. There's nothing social about this.
Nor is "imagination" particularly involved (except perhaps in whatever
external attributions are made to the pain, such as, "there must be something
wrong with my tooth"). Even what is called clinically "imaginary" or
psychosomatic pain -- such as phantom-limb pain or hysterical pain --
is subjectively real, and that's the point: When I'm really feeling
pain, I'm not imagining I'm in pain; I AM in pain.

This is referred to by philosophers as the "incorrigibility" of 1st-person
experience.  Although it's not without controversy, it's useful to keep in
mind, because it's what's really at issue in the problem of artificial
minds. We are asking whether candidates have THAT sort of qualitative,
conscious experience. (Again, the "mirror" images about
self-consciousness, etc., are mere icing or fine-tuning, compared to
the more basic issue of whether or not, to put it bluntly, a machine
can actually FEEL pain, or merely ACTS as if it did.)

>	Subjectively, we all know that consciousness is real.  Objectively,
>	we have no reason to believe in it.  Because of the relationship
>	between subjectivity and objectivity, that position can never be
>	improved on.  Pragmatism demands a compromise between the two
>	extremes, and that is what we already do, every day, the proportion
>	of each component varying from one context to another.  But the
>	high-flown theoretical issue of whether a machine can ever be
>	conscious allows no mere pragmatism.  All we can say is that we do
>	not know, and, if we follow Nagel, that we cannot know - because the
>	question is meaningless.

Some crucial corrections that may set the whole matter in a rather different
light: Subjectively (and I would say objectively too), we all know that
OUR OWN consciousness is real. Objectively, we have no way of knowing
that anyone else's consciousness is real. Because of the relationship
between subjectivity and objectivity, direct knowledge of the kind we
have in our own case is impossible in any other. The pragmatic
compromise we practice every day with one another is called the Total
Turing Test: Ascertaining that others behave indistinguishably from our
paradigmatic model for a creature with consciousness: ourselves. We
were bound to come face-to-face with the "high-flown theoretical
issue" of artificial consciousness as soon as we went beyond everyday naive
pragmatic considerations and took on the burden of constructing a
predictive and explanatory causal theory of mind.

We cannot know directly whether any other organism OR device has a mind,
and, if we follow Nagel, our inferences are not meaningless, but in some
respects incomplete and undecidable.


-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

rjf@ukc.ac.uk (R.J.Faichney) (11/30/86)

In article <230@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

>On mod.ai, rjf@ukc.UUCP <8611071431.AA18436@mcvax.uucp>
>Rob Faichney (U of Kent at Canterbury, Canterbury, UK) made
>nonspecific reference ...

Sorry - the articles at issue were long gone, before I learned how to
use this thing.

>... I'm not altogether certain ... intended as a followup to ...
>"Searle, Turing, Categories, Symbols," but ...
>I am responding on the assumption that it was. 

It was not.  See below.

>... Whether consciousness is a necessary
>condition for intelligence is probably undecidable, and goes to the
>heart of the mind/body problem and its attendant uncertainties.

We have various ways of getting around the problems of inadequate definitions
in these discussions, but I think we've run right up against it here.  In
psychological circles, as you know, intelligence is notorious for being 
difficult to define.

>The converse proposition -- that intelligence is a necessary condition for
>consciousness is synonymous with the proposition that consciousness is
>a sufficient condition for intelligence, and this is indeed being
>claimed (e.g., by me). 

The problem here is whether we define intelligence as implying consciousness.
I am simply suggesting that if we (re)define intelligence as *not* implying
consciousness, we will lose nothing in terms of the utility of the concept
of intelligence, and may gain a great deal regarding our understanding of the
possibilities of machine intelligence and/or consciousness.

>If the word
>"intelligence" has any meaning at all, over and above displaying ANY
>arbitrary performance  at all... 

I'm afraid that I don't think it has very much meaning, beyond the naive,
relative usage of 'graduates tend to be more intelligent than non-graduates'.

>...the Total Turing Test...amounts to equating
>intelligence with total performance capacities ...
>... also coincides with our only basis for inferring that
>anyone else but ourselves has a mind (i.e., is conscious).
>There is no contradiction between agreeing that intelligence admits
>of degrees and that mind is all-or-none. 

But intelligence implies mind?  Where do we draw the line?  Should an IQ of
>= 40 mean that something is conscious, while < 40 denotes a mindless
automaton?  You say your Test allows for cross-species and pathological
variants, but surely this relative/absolute contradiction remains.

>>	Animals probably are conscious without being intelligent. Machines may
>>	perhaps be intelligent without being conscious. 
>Not too good to be true: Too easy. 

Granted.  I failed to make clear that I was proposing a (re)definition of
intelligence, which would retain the naive usage - including that animals are
(relatively) unintelligent - while dispensing with the theoretical problems.

>...the empirical question of what intelligence is cannot be settled by a
>definition...

Indeed, it cannot begin to be tackled without a definition, which is what
I am trying to provide.  My proposition does not settle the empirical
question - it just makes it manageable. 

>Nagel's point is that there is
>something it's "like" to have experience, i.e., to be conscious, and
>that it's only open to the 1st person point of view. It's hence radically
>unlike all other "objective" or "intersubjective" phenomena in science 
>(e.g., meter-readings)...

Surely intersubjectivity is at least as close to subjectivity as to
objectivity.  Instead of meter readings, take as an example the mother-
child relationship.  Like any other, it requires responsive feedback - in
this case cuddling, cooing, crying, smiling - and it is where the baby
learns to relate and communicate with others.  I say that one of its
*essential* characteristics is intersubjectivity.  Though the child does not
consciously identify with the adult, there is nevertheless an intrinsic
tendency to copy gestures, etc., which will be complemented and completed
at maturity by a (relatively) unselfish appreciation of the other person's
point of view.  This tendency is so profound, and so bound to our origins,
both ontogenic and phylogenic, that to ascribe consciousness to something
man-made, no matter how perfect its performance, will always require an
effort of will.  Nor could it ever be intellectually justified.

The ascription of consciousness says infinitely more about the ascriptor
than the ascriptee.  It means 'I am willing and able to identify with this
thing - I really believe that it is like something to be this thing.'  It
is inevitably, intrinsically spontaneous and subjective.  You may be willing
to identify with something which can do anything you can.  I am not.  And,
though this is obviously sheer guesswork, I'm willing to bet a lot of money
that the vast majority of people (*not* of AIers) would be with me.  And, if
you agree that it's subjective, why should anyone know better than the man
in the street?  (I'm speaking here, of course, about what people would do,
not what they think they might do - I'm not suggesting that the problem 
could be solved by an opinion poll!)

>>	So what, really, is consciousness?  According to Nagel... 
>>	This accords with Minsky (via Col. Sicherman):
>>	'consciousness is an illusion to itself but a genuine and observable
>>	phenomenon to an outside observer...'  
>The quote above (via the Colonel) is PRECISELY THE OPPOSITE of Nagel's
>point. The only aspect of conscious experience that involves direct
>observability is the subjective, 1st-person aspect... 
>Let's call this private terrain Nagel-land.
>The part others "can identify" is Turing-land: Objective, observable
>performance (and its structural and functional substrates). Nagel's point
>is that Nagel-land is not reducible to Turing-land.

The part others "can identify with" is Nagel-land.  People don't identify with
structural and functional substrates; they just know what it's like to be
people.  This fact does not belong to purely subjective Nagel-land or to
perfectly objective Turing-land.  It has some features of each, and
transcends both.  Consciousness as a fact is not directly observable - it
is direct observation.  Consciousness as a concept is not directly observable
either, but it is observable in a very special way which, for *practical*
purposes, is incorrigible: it is not testable, but our intuitions seem
perfectly workable.  It cannot examine itself ('...is an
illusion to itself...') but may quite validly be seen in others ('...a
genuine and observable fact to an outside observer...').

>... hardly amounts to an objective contribution to cognitive science.

I'm not interested in the Turing Test (see above) but surely to clarify
the limits of objectivity is an objective contribution.

>>	It may perhaps be supposed that the concept of consciousness evolved
>>	as part of a social adaptation...
>Except that Nagel would no doubt suggest (and I would agree) that
>there's no reason to believe that the asocial or minimally social
>animals are not conscious too. 

I said the *concept* of consciousness...

>>	...When I suppose myself to be conscious, I am imagining myself
>>	outside myself... 
>When I feel a pain -- when I am in the qualitative state of
>knowing what it's like to be feeling a pain -- I am not "supposing"
>anything at all. 

When I feel a pain I'm being conscious.  When I suppose etc., I'm thinking
about being conscious.  I'm talking here about thinking about it, because
in order to ascribe consciousness to a machine, we first have to think about
it, unlike our ascription of consciousness to each other.  Unfortunately,
such intrinsically subjective ascriptions are much more easily made via
spontaneity than via rationalisation.  I would say, in fact, that they may
only be spontaneous.

>Some crucial corrections that may set the whole matter in a rather different
>light: Subjectively (and I would say objectively too), we all know that
>OUR OWN consciousness is real. 

Agreed.

>Objectively, we have no way of knowing
>that anyone else's consciousness is real. 

Agreed.

>Because of the relationship
>between subjectivity and objectivity, direct knowledge of the kind we
>have in our own case is impossible in any other. 

Agreed.

>The pragmatic
>compromise we practice every day with one another is called the Total
>Turing Test: 

I call it natural, naive intersubjectivity.

>Ascertaining that others behave indistinguishably from our
>paradigmatic model for a creature with consciousness: ourselves. 

They may behave indistinguishably from ourselves, but it's not only snobs
who ask 'What do we know about their background?'.  That sort of information
is perfectly relevant.  Why disallow it?  And why believe that a laboratory-
constructed creature feels like I do, no matter how perfect its social
behaviour?  Where subjectivity is all, prejudice can be valid, even
necessary.  What else do we have?

>...a predictive and explanatory causal theory of mind.

Is not something that we can't get by without.

>...if we follow Nagel, our inferences are not meaningless, but in some
>respects incomplete and undecidable.

I may be showing my ignorance, but to me if something is (inevitably?)
'incomplete and undecidable', it's pretty nearly meaningless for most
purposes.

To sum up: there is actually quite a substantial area of agreement between
us, but I don't think that you go quite far enough.  While I cannot deny
that much may be learned from attempting computer and/or robot simulation
of human performance, there remains the fact that similar ends may be
achieved by different means; that a perfectly convincing robot might differ
radically from us in software as well as hardware.  In short, I think that
the computer scientists have much more to gain from this than the
psychologists.  As a former member of the latter category, and a present
member of the former (though not an AIer!), I am not complaining.


-- 
Robin Faichney

UUCP:  ...mcvax!ukc!rjf             Post: RJ Faichney,
                                          Computing Laboratory,
JANET: rjf@uk.ac.ukc                      The University,
                                          Canterbury,
Phone: 0227 66822 Ext 7681                Kent.
                                          CT2 7NF