[mod.ai] Please post on mod.ai -- first of 4

harnad@mind.UUCP (Stevan Harnad) (10/26/86)

In Message-ID: <8610190504.AA08059@ucbvax.Berkeley.EDU> on mod.ai
CUGINI, JOHN <cugini@nbs-vms.ARPA> replies to my claim that

>> there is no rational reason for being more sceptical about robots'
>> minds (if we can't tell their performance apart from that of people)
>> than about (other) people's minds.

with the following:

>	One (rationally) believes other people are conscious BOTH because
>	of their performance and because their internal stuff is a lot like
>	one's own.

This is a very important point and a subtle one, so I want to make
sure that my position is explicit and clear: I am not denying that
there exist some objective data that correlate with having a mind
(consciousness) over and above performance data. In particular,
there's (1) the way we look and (2) the fact that we have brains. What
I am denying is that this is relevant to our intuitions about who has a
mind and why. I claim that our intuitive sense of who has a mind is
COMPLETELY based on performance, and our reason can do no better. These
other correlates are only inessential afterthoughts, and it's irrational
to take them as criteria.

My supporting argument is very simple: We have absolutely no intuitive
FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
since spun an implementable brain theory from our introspective
armchairs.) Consequently, our belief that brains are evidence of minds and
that the absence of a brain is evidence of the absence of a mind is based
on a superficial black-box correlation. It is no more rational than
being biased by any other aspect of appearance, such as the color of
the skin, the shape of the eyes or even the presence or absence of a tail.

To put it in the starkest terms possible: We wouldn't know what device
was and was not relevantly brain-like if it were staring us in the face
-- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass
the Total Turing Test). That's the only thing our intuitions have to
go on, and our reason has nothing more to offer either.

To take one last pass at setting the relevant intuitions: We know what
it's like to DO (and be able to do) certain things. Similar
performance capacity is our basis for inferring that what it's like
for me is what it's like for you (or it). We do not know anything
about HOW we do any of those things, or about what would count as the
right way and the wrong way (functionally speaking). Inferring that
another entity has a mind is an intuitive judgment based on performance.
It's called the (total) turing test. Inferring HOW other entities
accomplish their performance is ordinary scientific inference. We're in
no rational position to prejudge this profound and substantive issue on
the basis of the appearance of a lump of grey jelly to our untutored but
superstitious minds.
               
>	[W]e DO have some idea about the functional basis for mind, namely
>	that it depends on the brain (at least more than on the pancreas, say).
>	This is not to contend that there might not be other bases, but for
>	now ALL the minds we know of are brain-based, and it's just not
>	dazzlingly clear whether this is an incidental fact or somewhat
>	more deeply entrenched.

The question isn't whether the fact is incidental, but what its
relevant functional basis is. In other words, what is it about the
brain that's relevant, and what is incidental? We need the causal basis
for the correlation, and that calls for a hefty piece of creative
scientific inference (probably in theoretical bio-engineering). The
pancreas is no problem, because it can't generate the brain's
performance capacities. But it is simply begging the question to say
that brain-likeness is an EXTRA relevant source of information in
turing-testing robots, when we have no idea what's relevantly brain-like.

People were sure (as sure as they'll ever be) that other people had
minds long before they ever discovered they had brains. I myself believed
the brain was just a figure of speech for the first dozen or so years of
my life. Perhaps there are people who don't learn or believe the news
throughout their entire lifetimes. Do you think these people KNOW any
less than we do about what does or doesn't have a mind? Besides, how
many people do you think could really pick out a brain from a pancreas
anyway? And even those who can have absolutely no idea what it is
about the brain that makes it conscious; and whether a cow's brain or
a horse-shoe crab's has it; or whether any other device, artificial or
natural, has it or lacks it, or why. In the end everyone must revert to
the fact that a brain is as a brain does.

>	Why is consciousness a red herring just because it adds a level
>	of uncertainty? 

Perhaps I should have said indeterminacy. If my arguments for
performance-indiscernibility (the turing test) as our only objective
basis for inferring mind are correct, then there is a level of
underdetermination here that is in no way comparable to that of, say,
the unobservable theoretical entities of physics (say, quarks, or, to
be more trendy, perhaps strings). Ordinary underdetermination goes
like this: How do I know that your theory's right about the existence
and presence of strings? Because WITH them the theory succeeds in
accounting for all the objective data (let's pretend), and without
them it does not. Strings are not "forced" by the data, and other
rival theories may be possible that work without them. But until these
rivals are put forward, normal science says strings are "real" (modulo
ordinary underdetermination).

Now try to run that through for consciousness: How do I know that your
theory's right about the existence and presence of consciousness (i.e.,
that your model has a mind)? "Because its performance is
turing-indistinguishable from that of creatures that have minds." Is
your theory dualistic? Does it give consciousness an independent,
nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit
the objective data just as well (indeed, turing-indistinguishably)
without consciousness? "Well..."

That's indeterminacy, or radical underdetermination, or what have you.
And that's why consciousness is a methodological red herring.

>	Even though any correlations will ultimately be grounded on one side
>	by introspection reports, it does not follow that we will never know,
>	with reasonable assurance, which aspects of the brain are necessary for
>	consciousness and which are incidental...Now at some level of difficulty
>	and abstraction, you can always engineer anything with anything... But
>	the "multi-realizability" argument has force only if its obvious
>	(which it ain't) that the structure of the brain at a fairly high
>	level (eg neuron networks, rather than molecules), high enough to be
>	duplicated by electronics, is what's important for consciousness.

We'll certainly learn more about the correlation between brain
function and consciousness, and even about the causal (functional)
basis of the correlation. But the correlation will really be between
function and performance capacity, and the rest will remain the intuitive
inference or leap of faith it always was. And since ascertaining what
is relevant about brain function and what is incidental cannot depend
simply on its BEING brain function, but must instead depend, as usual, on
the performance criterion, we're back where we started. (What do you
think is the basis for our confidence in introspective reports? And
what are you going to say about robots' introspective reports...?)

I don't know what you mean, by the way, about always being able to
"engineer anything with anything at some level of abstraction." Can
anyone engineer something to pass the robotic version of the Total
Turing Test right now? And what's that "level of abstraction" stuff?
Robots have to do their thing in the real world. And if my
groundedness arguments are valid, that ain't all done with symbols
(plus add-on peripheral modules).

Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771