[mod.ai] son of yet more wrangling on Searle, Turing, Quine, Hume, ...

cugini@NBS-VMS.ARPA ("CUGINI, JOHN") (10/28/86)

Warning: the following message is long and exceeds the FDA maximum
daily recommended dosage of philosophizing.  You have been warned.

This is the exchange that kicked off this whole diatribe:

>>> Harnad: there is no rational reason for being more sceptical about robots'
>>> minds (if we can't tell their performance apart from that of people)
>>> than about (other) peoples' minds.

>> Cugini: One (rationally) believes other people are conscious BOTH because
>>       of their performance and because their internal stuff is a lot like
>>       one's own.

> This is a very important point and a subtle one, so I want to make
> sure that my position is explicit and clear: I am not denying that
> there exist some objective data that correlate with having a mind
> (consciousness) over and above performance data. In particular,
> there's (1) the way we look and (2) the fact that we have brains. What
> I am denying is that this is relevant to our intuitions about who has a
> mind and why. I claim that our intuitive sense of who has a mind is
> COMPLETELY based on performance, and our reason can do no better. These
> other correlates are only inessential afterthoughts, and it's irrational
> to take them as criteria.

This riposte seems implausible on the face of it.  You seem to want
to pretend that we know absolutely nothing about the basis of thought
in humans, and to "suppress" all evidence based on such knowledge.
But that's just wrong.  Brains *are* evidence for mind, in light of
our present knowledge.

> My supporting argument is very simple: We have absolutely no intuitive
> FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
> since spun an implementable brain theory from our introspective
> armchairs.) Consequently, our belief that brains are evidence of minds and
> that the absence of a brain is evidence of the absence of a mind is based
> on a superficial black-box correlation. It is no more rational than
> being biased by any other aspect of appearance, such as the color of
> the skin, the shape of the eyes or even the presence or absence of a tail.

Hoo hah, you mean to say that belief based on "black-box correlation"
is irrational in the absence of a fully-supporting theoretical
framework?  Balderdash.  People in, say, 1500 AD were perfectly rational
in predicting tides based on the position of the moon (and vice-versa)
even though they hadn't a clue as to the mechanism of interaction.
If you keep asking "why" long enough, *all* science is grounded on
such brute-fact correlation (why do like charges repel, etc.) - as
Hume pointed out a while back.

> To put it in the starkest terms possible: We wouldn't know what device
> was and was not relevantly brain-like if it was staring us in the face
> -- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass
> the Total Turing Test). That's the only thing our intuitions have to
> go on, and our reason has nothing more to offer either.

Except in the case of actual other brains (which are, by definition,
relevantly brain-like).  The only skepticism open to one is that
one's own brain is unique in its causal powers - possible, but hardly
the best rational hypothesis.

> People were sure (as sure as they'll ever be) that other people had
> minds long before they ever discovered they had brains. I myself believed
> the brain was just a figure of speech for the first dozen or so years of
> my life. Perhaps there are people who don't learn or believe the news
> throughout their entire lifetimes. Do you think these people KNOW any
> less than we do about what does or doesn't have a mind? ...
 
Let me re-cast Harnad's argument (perhaps in a form unacceptable to him):
We can never know any mind directly, other than our own, if we take
the concept of mind to be something like "conscious intelligence" -
ie the intuitive (and correct, I believe) concept, rather than
some operational definition, which has been deliberately formulated
to circumvent the epistemological problems.  (Harnad, to his credit,
does not stoop to such positivist ploys.)  But the only external
evidence we are ever likely to get for "conscious intelligence"
is some kind of performance.  Moreover, the physical basis for
such performance will be known only contingently, ie we do not
know, a priori, that it is brains, rather than automatic dishwashers,
which generate mind, but rather only as an a posteriori correlation.
Therefore, in the search for mind, we should rely on the primary
criterion (performance), rather than on such derivative criteria
as brains.

I pretty much agree with the above account except for the last sentence,
which prohibits us from making use of derivative criteria.  Why should
we limit ourselves so?  Since when is that part of rationality?
No, the fact is we do have more reason to suppose mind of other
humans than of robots, in virtue of an admittedly derivative (but
massively confirmed) criterion.  And we are, in this regard, in an
epistemological position *superior* to those who don't/didn't know
about such things as the role of the brain, ie we have *more* reason
to believe in the mindedness of others than they do.  That's why
primitive tribes (I guess) make the *mistake* of attributing
mind to trees, weather, etc.  Since raw performance is all they
have to go on, seemingly meaningful activity on the part of any
old thing can be taken as evidence of consciousness.  But we
sophisticates have indeed learned a thing or two, in particular, that
brains support consciousness, and therefore we (rationally) give the
benefit of the doubt to any brained entity, and the anti-benefit to
un-brained entities.  Again, not to say that we might not learn about
other bases for mind - but that hardly disparages brainedness as a
rational criterion for mindedness.

Another point, which I'll just state rather than argue for, is that
even performance is only *contingently* a criterion for mind - ie,
it so happens, in this universe, that mind often expresses itself
by playing chess, etc., just as it so happens that brains cause
minds.  And so there's really not much difference between relying on
one contingent correlate (performance) rather than another (brains)
as evidence for the presence of mind.
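(If it helps to put toy numbers on "more reason to believe," here is a
back-of-the-envelope Bayesian sketch, in Python for concreteness.  All
of the probabilities are invented purely for illustration, and treating
the two indicators as independent given mindedness is of course a
simplification.  The only point is that a second, independently
confirmed correlate (brains) raises rational confidence above what
performance alone supports - which is all the "derivative criterion"
argument needs.)

    # Toy Bayesian sketch: two fallible indicators of mind.
    # All probabilities below are invented, for illustration only.

    prior_mind = 0.5                      # prior P(mind)

    # Likelihoods: P(indicator | mind) and P(indicator | no mind)
    p_perform_given_mind   = 0.95         # minded things usually perform well
    p_perform_given_nomind = 0.10         # mindless things rarely do
    p_brain_given_mind     = 0.99         # minded things (so far) have brains
    p_brain_given_nomind   = 0.05

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: P(H | E) for a single piece of evidence E."""
        num = p_e_given_h * prior
        return num / (num + p_e_given_not_h * (1 - prior))

    # Performance alone:
    p1 = posterior(prior_mind, p_perform_given_mind, p_perform_given_nomind)

    # Performance plus a brain (conditional independence assumed):
    p2 = posterior(p1, p_brain_given_mind, p_brain_given_nomind)

    print("P(mind | performance)         = %.3f" % p1)   # ~0.905
    print("P(mind | performance & brain) = %.3f" % p2)   # ~0.995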

> >  Why is consciousness a red herring just because it adds a level
> >  of uncertainty?
> 
> Perhaps I should have said indeterminacy. If my arguments for
> performance-indiscernibility (the turing test) as our only objective
> basis for inferring mind are correct, then there is a level of
> underdetermination here that is in no way comparable to that of, say,
> the unobservable theoretical entities of physics (say, quarks, or, to
> be more trendy, perhaps strings). Ordinary underdetermination goes
> like this: How do I know that your theory's right about the existence
> and presence of strings? Because WITH them the theory succeeds in
> accounting for all the objective data (let's pretend), and without
> them it does not. Strings are not "forced" by the data, and other
> rival theories may be possible that work without them. But until these
> rivals are put forward, normal science says strings are "real" (modulo
> ordinary underdetermination).

> Now try to run that through for consciousness: How do I know that your
> theory's right about the existence and presence of consciousness (i.e.,
> that your model has a mind)? "Because its performance is
> turing-indistinguishable from that of creatures that have minds." Is
> your theory dualistic? Does it give consciousness an independent,
> nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit
> the objective data just as well (indeed, turing-indistinguishably)
> without consciousness? "Well..."

> That's indeterminacy, or radical underdetermination, or what have you.
> And that's why consciousness is a methodological red herring.

I admit, I have trouble following the line of argument above.  Is this
Quine's "it's real if it's a term in our best-confirmed theories"
approach?  But I think Quine is quite wrong, if that is his
assertion.  I know consciousness (my own, at least) exists, not as
some derived theoretical construct which explains low-level data
(like magnetism explains pointer readings), but as the absolutely
lowest rock-bottom datum there is.  Consciousness is the datum,
not the theory - it is the explicandum, not the explicans (hope
I got that right).  It's true that I can't directly observe the
consciousness of others, but so what?  That's an epistemological
inconvenience, but it doesn't make consciousness a red herring.

> I don't know what you mean, by the way, about always being able to
> "engineer anything with anything at some level of abstraction." Can
> anyone engineer something to pass the robotic version of the Total
> Turing Test right now? And what's that "level of abstraction" stuff?
> Robots have to do their thing in the real world. And if my
> groundedness arguments are valid, that ain't all done with symbols
> (plus add-on peripheral modules).

The engineering remark was to reinforce the idea that, perhaps,
being-composed-of-protein might not be as practically incidental
as many assume.  Frinstance, at some level of difficulty, one can
get energy from sunlight "as plants do."  But the issues are:
do we get energy from sunlight in the same way?  How similar do
we demand that the processes be?  It might be easy to be as
efficient as plants in getting energy from sunlight through
non-biological technology.  But if we're interested in simulation at
a lower level of abstraction, eg, photosynthesis, then, maybe, a
non-biological approach will be impractical.  The point is we know we
can simulate human chess-playing abilities with non-biological
technology.  Should we just therefore declare the battle for mind won,
and go home?  Or ask the harder question: what would it take to get a
machine to play a game of chess like a person does, ie, consciously?

BTW, I quite agree with your more general thesis on the likely
inadequacy of symbols (alone) to capture mind.

John Cugini <Cugini@NBS-VMS>
------