[comp.ai] Minsky on Mind

harnad@mind.UUCP (Stevan Harnad) (01/22/87)

MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU wrote in mod.ai (AIList Digest V5 #11):

>	unless a person IS in the grip of some "theoretical
>	position" - that is, some system of ideas, however inconsistent, they
>	can't "know" what anything "means"

I agree, of course. I thought it was obvious that I was referring to a
theoretical position on the mind/body problem, not on the conventions
of language and folk physics that are needed in order to discourse
intelligibly at all. There is of course no atheoretical talk. As to
atheoretical "knowledge," that's another matter. I don't think a dog
shares any of my theories, but we both know when we feel a toothache
(though he can't identify or describe it, nor does he have a theory of
nerve impulses, etc.). But we both share the same (atheoretical)
experience, and that's C-1. Now it's a THEORY that comes in and says:
"You can't have that C-1 without C-2." I happen to have a rival theory
on that. But never mind, let's just talk about the atheoretical
experience I, my rival and the dog share...

>	My point was that
>	you can't think about, talk about, or remember anything that leaves no
>	temporary trace in some part of your mind.  In other words, I agree
>	that you can't have C-2 without C-1 - but you can't have, think, say,
>	or remember that you have C-1 without C-2!  So, assuming that I know
>	EXACTLY what he means, I understand PERFECTLY that that meaning is
>	vacuous.

Fine. But until you've accounted for the C-1, your interpretation of
your processes as C-2 (rather than P-2, where P is just an unconscious
physical process that does the very same thing, physically and objectively)
has not been supported. It's hanging by a skyhook, and the label "C"
of ANY order is unwarranted.

I'll try another pass at it: I'll attempt to show how ducking or denying
the primacy of the C-1 problem gets one into infinite regress or
question-begging: There's something it's like to have the
experience of feeling a toothache. The experience may be an illusion.
You may have no tooth-injury, you may even have no tooth. You may be
feeling referred pain from your elbow. You may be hysterical,
delirious, hallucinating. You may be having a flashback to a year ago,
a minute ago, 30 milliseconds ago, when the physical and neural causes
actually occurred. But if at T-1 in real time you are feeling that
pain (let's make T-1 a smeared interval of Delta-T-1, which satisfies
both our introspective phenomenology AND the theory that there can be no
punctate, absolutely instantaneous experience), where does C-2 come into it?

Recall that C-2 is an experience that takes C-1 as its object, in the
same way C-1 takes its own phenomenal contents as object. To be 
feeling-a-toothache (C-1) is to have a certain direct experience; we all
know what that's like. To introspect on, reflect on, remember, think about
or describe feeling-a-toothache (all instances of C-2) is to have
ANOTHER direct experience -- say, remembering-feeling-a-toothache, or
contemplating-feeling-a-toothache. The subtle point is that this
2nd-order experience always has TWO aspects: (1) It takes a 1st order
experience (real or imagined) as object, and is for that reason
2nd-order, and (2) it is ITSELF an experience, which is of course
1st-order (call that C-1'). The intuition is that there is something
it is like to be aware of feeling pain (C-1), and there's ALSO something
it's like to be aware of being-aware-of-feeling-pain. Because a C-1 is
the object of the latter experience, the experience is 2nd order (C-2); but
because it's still an EXPERIENCE -- i.e., there's something it's LIKE to
feel that way -- every C-2 is always also a C-1' (which can in turn become
the object of a C-3, which is then also a C-1'', etc.).
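
For those who find structure easier to see in code than in phenomenology,
the bookkeeping of that regress can be sketched as a recursive data
structure. This is only an illustration of the structure just described;
the Python names (Experience, order, etc.) are invented for the purpose
and carry no theoretical weight:

    # Purely illustrative sketch: every higher-order state takes another
    # state as its object, yet is itself an experience, so the first-order
    # question never goes away.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Experience:
        content: str                          # what the experience is "of"
        about: Optional["Experience"] = None  # a C-2 takes a C-1 as object

        def order(self) -> int:
            # C-1 has order 1; an experience about a C-n has order n+1
            return 1 if self.about is None else self.about.order() + 1

    toothache     = Experience("feeling-a-toothache")              # C-1
    remembering   = Experience("remembering", about=toothache)     # C-2, also a C-1'
    contemplating = Experience("contemplating", about=remembering) # C-3, also a C-1''

    for e in (toothache, remembering, contemplating):
        # whatever its order, each node is still itself an experience
        print(e.content, "order:", e.order(),
              "is an Experience:", isinstance(e, Experience))

The trivial point the code makes is exactly the point at issue: nothing in
the structure explains why any node should be an EXPERIENCE rather than an
unconscious token.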

I'm no phenomenologist, nor an advocate of doing phenomenology as we
just did above. I'm also painfully aware that the foregoing can hardly be
described as "atheoretical." It would seem that only direct experience
at the C-1-level can be called atheoretical; certainly formulating a
distinction between 1st and higher-order experience is a theoretical
enterprise, although I believe that the raw phenomenology bears me
out, if anyone has the patience to introspect it through. But the
point I'm making is simple:

It's EASY to tell a story in which certain physical processes play the
role of the contents of our experience -- toothaches, memories of
toothaches, responses to toothaches, etc. All this is fine, but
hopelessly 2nd-order. What it leaves out is why there should be any
EXPERIENCE for them to be contents OF! Why can't all these processes
just be unconscious processes -- doing the same objective job as our
conscious ones, but with no qualitative experience involved? This is
the question that Marvin keeps ignoring, restating instead his
conviction that it's taken care of (by some magical property of "memory
traces," as far as I can make out), and that my phenomenology is naive
in suggesting that there's still a problem, and that he hasn't even
addressed it in his proposal. But if you pull out the C-1
underpinnings, then all those processes that Marvin interprets as C-2
are hanging by a sky-hook. You no longer have conscious toothaches and
conscious memories of toothaches, you merely have tooth-damage, and
causal sequelae of tooth-damage, including symbolic code, storage,
retrieval, response, etc. But where's the EXPERIENCE? Why should I
believe any of that is CONSCIOUS? There's the C-2 interpretation, of
course, but that's all it is: an interpretation. I can interpret a
thermostat (and, with some effort, even a rock) that way. What
justifies the interpretation?

Without a viable C-1 story, there can be no justification. And my
conjecture is that there can be no viable C-1 story. So back to
methodological epiphenomenalism, and forget about C of any order.

[Admonition to the ambitious: If you want to try to tell a C-1 story,
don't get too fancy. All the relevant constraints are there if you can
just answer the following question: When the dog's tooth is injured,
and it does the various things it does to remedy this -- inflammation
reaction, release of white blood cells, avoidance of chewing on that
side, seeking soft foods, giving signs of distress to his owner, etc. etc.
-- why do the processes that give rise to all these sequelae ALSO need to
give rise to any pain (or any conscious experience at all) rather
than doing the very same tissue-healing and protective-behavioral job
completely unconsciously? Why is the dog not a turing-indistinguishable
automaton that behaves EXACTLY AS IF it felt pain, etc., but in reality
does not? That's another variant of the mind/body problem, and it's what
you're up against when you're trying to justify interpreting physical
processes as conscious ones. Anything short of a convincing answer to
this amounts to mere hand-waving on behalf of the conscious interpretation
of your proposed processes.]

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

harnad@mind.UUCP (Stevan Harnad) (01/23/87)

Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

>	Given that the dog >>is<< conscious,
>	the evolutionary or teleological role of the pain stimulus seems
>	straightforward.  It is a way for bodily tissues to get the attention
>	of the reasoning centers.

Unfortunately, this is no reply at all. It is completely steeped in
the anthropomorphic interpretation to begin with, whereas the burden
is to JUSTIFY that interpretation: Why do tissues need to get the
"attention" of reasoning centers? Why can't this happen by brute
causality, like everything else, simple or complicated?

Nor is the problem of explaining the evolutionary function of consciousness
any easier to solve than justifying a conscious interpretation of machine
processes. For every natural-selectional scenario -- every
nondualistic one, i.e., one that doesn't give consciousness
an independent, nonphysical causal force -- is faced with the problem
that the scenario is turing-indistinguishable from the exact same ecological
conditions, with the organisms only behaving AS IF they were
conscious, while in reality being insentient automata. The very same
survival/advantage story would apply to them (just as the very same
internal mechanistic story would apply to a conscious device and a
turing-indistinguishable as-if surrogate).

No, evolution won't help. (And "teleology" of course begs the
question.) Consciousness is just as much of an epiphenomenal
fellow-traveller in the Darwinian picture as in the cognitive one.
(And saying "it" was a chance mutation is again to beg the what/why
question.)

>	Why (or, more importantly, how) the dog is conscious in the first place,
>	and hence >>experiences<< the pain, is the problem you are pointing out.

That's right. And the two questions are intimately related. For when
one is attempting to justify a conscious interpretation of HOW a
device is working, one has to answer WHY the conscious interpretation
is justified, and why the device can't do exactly the same thing (objectively
speaking, i.e., behaviorally, functionally, physically) without the
conscious interpretation.

>	an analogy between the brain and a corporation,
>	...the natural tendency of everyone to view the CEO as the
>	center of corporate consciousness was evidence for emergent consciousness
>	in any sufficiently complex hierarchical system.

I'm afraid that this is mere analogy. Everyone knows that there's no
AT&T to stick a pin into, and to correspondingly feel pain. You can do
that to the CEO, but we already know (modulo the TTT) that he's
conscious. You can speak figuratively, and even functionally, of a
corporation as if it were conscious, but that still doesn't make it so.

>	my previous argument that Searle's Chinese Room
>	understands Chinese even though neither the occupant nor his printed
>	instructions do.

Your argument is of course the familiar "Systems Reply." Unfortunately, 
it is open to (likewise familiar) rebuttals -- rebuttals I consider
decisive, but that's another story. To telescope the intuitive sense
of the rebuttals: Do you believe rooms or corporations feel pain, as
we do?

>	I believe that consciousness is a quantitative
>	phenomenon, so the difference between my consciousness and that of
>	one of my neurons is simply one of degree.  I am not willing to ascribe
>	consciousness to the atoms in the neuron, though, so there is a bottom
>	end to the scale.

There are serious problems with the quantitative view of
consciousness. No doubt my alertness, my sensory capacity and my
knowledge admit of degrees. I may feel more pain or less pain, more or
less often, under more or fewer conditions. But THAT I feel pain, or
experience anything at all, seems an all-or-none matter, and that's
what's at issue in the mind/body problem.

It also seems arbitrary to be "willing" to ascribe consciousness to
neurons and not to atoms. Sure, neurons are alive. And they may even
be conscious. (So might atoms, for that matter.) But the issue here
is: what justifies interpreting something/someone as conscious? The
Total Turing Test has been proposed as our only criterion. What
criterion are you using with neurons? And even if single cells are
conscious -- do feel pain, etc. -- what evidence is there that this is
RELEVANT to their collective function in a superordinate organism?

Organs can be replaced by synthetic substances with the relevant
functional properties without disturbing the consciousness of the
superordinate organism. It's a matter of time before this can be done
with the nervous system. It can already be done with minor parts of
the nervous system. Why doesn't replacing conscious nerve cells with
synthetic molecules matter? (To reply that synthetic substances with the
same functional properties must be conscious under these conditions is
to beg the question.)

[If I sound like I'm calling an awful lot of gambits "question-begging,"
it's because the mind/body problem is devilishly subtle, and the
temptation to capitulate by slipping consciousness back into one's
premises is always there. I'm just trying to make these potential
pitfalls conscious... There have been postings in this discussion
to which I have given up on replying because they've fallen so deeply
into these pits.]

>	What fraction of a neuron (or of its functionality)
>	is required for consciousness is below the resolving power of my
>	instruments, but I suggest that memory (influenced by external
>	conditions) or learning is required.  I will even grant a bit of
>	consciousness to a flip-flop :-). 
>	The consciousness only exists in situ, however: a
>	bit of memory is only part of an entity's consciousness if it is used
>	to interpret the entity's environment.

What instruments are you using? I know only the TTT. You (like Minsky
and others) are placing a lot of faith in "memory" and "learning." But
we already have systems that remember and learn, and the whole
point of this discussion concerns whether and why this is sufficient to
justify interpreting them as conscious. To reply that it's again a matter
of degree is again to obfuscate. [The only "natural" threshold is the
TTT, and that's not just a cognitive increment in learning/memory, but
complete functional robotics. And of course even that is merely a
functional goal for the theorist and an intuitive sop for the amateur
(who is doing informal turing testing). The philosopher knows that
it's no solution to the other-minds problem.]

What you say about flip-flops of course again prejudges or begs the
question.

>	Fortunately, I don't have my heart set on creating conscious systems.
>	I will settle for creating intelligent ones, or even systems that are
>	just a little less unintelligent than the current crop.

If I'm right, this is the ONLY way to converge on a system that passes
the TTT (and therefore might be conscious). The modeling must be ambitious,
taking on increasingly life-size chunks of organisms' performance
capacity (a more concrete and specific concept than "intelligence").
But attempting to model conscious phenomenology, or interpreting toy
performance and its underlying function as if it were doing so, can
only retard and mask progress. Methodological Epiphenomenalism.
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

colonel@sunybcs.UUCP (01/26/87)

>               ... It is a way for bodily tissues to get the attention
> of the reasoning centers.  Instead of just setting some "damaged
> tooth" bit, the injured nerve grabs the brain by the lapels and says
> "I'm going to make life miserable for you until you solve my problem."

This metaphor seems to suggest that consciousness wars with itself.  I
would prefer to say that the body grabs the brain by the handles, like
a hedge clipper or a Geiger counter.  In other words, just treat the
mind as a tool, without any personality of its own.  After all, it's the
body that is real; the mind is only an abstraction.

By the way, it's well known that if the brain has a twist in it, it
needs only one handle.  Ask any topologist!
-- 
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@ubvms

mmt@dciem.UUCP (Martin Taylor) (01/27/87)

>      To telescope the intuitive sense
>of the rebuttals: Do you believe rooms or corporations feel pain, as
>we do?

That final comma is crucial.  Of course they do not feel pain as we do,
but they might feel pain, as we do.

On what grounds do you require proof that something has consciousness,
rather than proof that it has not?  Can there be grounds other than
prejudice (i.e. prior judgment that consciousness in non-humans is
overwhelmingly unlikely)?  As I understand the Total Turing Test,
the objective is to find whether something can be distinguished from
a human, but this again prejudges the issue.  I don't think one CAN use
the TTT to assess whether another entity is conscious.

As I have tried to say in a posting that may or may not get to mod.ai,
Occam's razor demands that we describe the world using the simplest
possible hypotheses, INCLUDING the boundary conditions, which involve
our prior conceptions.  It seems to me simpler to ascribe consciousness
to an entity that resembles me in many ways than not to ascribe
consciousness to that entity.  Humans have very many points of resemblance;
comatose humans fewer.  Silicon-based entities have few overt points
of resemblance, so their behaviour has to be convincingly like mine
before I will grant them a consciousness like mine.  I don't really
care whether their behaviour is like yours, if you don't have
consciousness, and as Steve Harnad has so often said, mine is the
only consciousness I can be sure of.

The problem splits in two ways: (1) Define consciousness so that it does
not involve a reference to me, or (2) Find a way of describing behaviour
that is simpler than ascribing consciousness to me alone.  Only if you
can fulfil one of these conditions can there be a sensible argument about
the consciousness of some entity other than ME.
-- 

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

rjf@ukc.UUCP (01/28/87)

In article <2093@sunybcs.UUCP> colonel@sunybcs.UUCP (Col. G. L. Sicherman) writes:
[...]
>After all, it's the body that is real; the mind is only an abstraction.

So abstractions ain't real, huh? Does that mean I'm only imagining all
this high level programming? 

Don't worry about it - maybe this is all just your imagination.

-- 
Robin Faichney	  ("My employers don't know anything about this.")

UUCP:  ...mcvax!ukc!rjf             Post: RJ Faichney,
                                          Computing Laboratory,
JANET: rjf@uk.ac.ukc                      The University,
                                          Canterbury,
Phone: 0227 66822 Ext 7681                Kent.
                                          CT2 7NF

harnad@mind.UUCP (01/30/87)

mmt@dciem.UUCP (Martin Taylor) of D.C.I.E.M., Toronto, Canada,
writes:

>	Of course [rooms and corporations] do not feel pain as we do,
>	but they might feel pain, as we do.

The solution is not in the punctuation, I'm afraid. Pain is just an
example standing in for whether the candidate experiences anything AT
ALL. It doesn't matter WHAT a candidate feels, but THAT it feels, for
it to be conscious.

>	On what grounds do you require proof that something has consciousness,
>	rather than proof that it has not?  Can there be grounds other than
>	prejudice (i.e. prior judgment that consciousness in non-humans is
>	overwhelmingly unlikely)?

First, none of this has anything to do with proof. We're trying to
make empirical inferences here, not mathematical deductions. Second,
even as empirical evidence, the Total Turing Test (TTT) is not evidential
in the usual way, because of the mind/body problem (private vs. public
events; objective vs. subjective inferences). Third, the natural null
hypothesis seems to be that an object is NOT conscious, pending
evidence to the contrary, just as the natural null hypothesis is that
an object is, say, not alive, radioactive or massless until shown
otherwise. -- Yes, the grounds for the null hypothesis are that the
absence of consciousness is more likely than its presence; the
alternative is animism. But no, the complement to the set of
probably-conscious entities is not "non-human," because animals are
(at least to me) just about as likely to be conscious as other humans
are (although one's intuitions get weaker down the phylogenetic scale);
the complement is "inanimate." All of these are quite natural and
readily defensible default assumptions rather than prejudices.

>	[i] Occam's razor demands that we describe the world using the simplest
>	possible hypotheses.
>	[ii] It seems to me simpler to ascribe consciousness to an entity that
>	resembles me in many ways than not to ascribe consciousness to that
>	entity.
>	[iii] I don't think one CAN use the TTT to assess whether another
>	entity is conscious.
>	[iv] Silicon-based entities have few overt points of resemblance,
>	so their behaviour has to be convincingly like mine before I will
>	grant them a consciousness like mine.

{i} Why do you think animism is simpler than its alternative?
{ii} Everything resembles everything else in an infinite number of
ways; the problem is sorting out which of the similarities is relevant.
{iii} The Total Turing Test (a variant of my own devising, not to be
confused with the classical turing test -- see prior chapters in these
discussions) is the only relevant criterion that has so far been
proposed and defended. Similarities of appearance are obvious
nonstarters, including the "appearance" of the nervous system to
untutored inspection. Similarities of "function," on the other hand,
are moot, pending the empirical outcome of the investigation of what
functions will successfully generate what performances (the TTT).
{iv} [iv] seems to be in contradiction with [iii].

>	The problem splits in two ways: (1) Define consciousness so that it does
>	not involve a reference to me, or (2) Find a way of describing behaviour
>	that is simpler than ascribing consciousness to me alone.  Only if you
>	can fulfil one of these conditions can there be a sensible argument
>	about the consciousness of some entity other than ME.

It never ceases to amaze me how many people think this problem is one
that is to be solved by "definition." To redefine consciousness as
something non-subjective is not to solve the problem but to beg the
question.

[The TTT, by the way, I proposed as logically the strongest (objective) evidence
for inferring consciousness in entities other than oneself; it also seems to be
the only methodologically defensible evidence; it's what all other
(objective) evidence must ultimately be validated against; moreover, it's
already what we use in contending with the other-minds problem intuitively
every day. Yet the TTT remains more fallible than conventional inferential
hypotheses (let alone proof) because it is really only a pragmatic conjecture
rather than a "solution." It's only good up to turing-indistinguishability,
which is good enough for the rest of objective empirical science, but not
good enough to handle the problem of subjectivity -- otherwise known as the
mind/body problem.]

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

mmt@dciem.UUCP (Martin Taylor) (02/03/87)

>> = Martin Taylor (me)  > = Stevan Harnad
>
>>	Of course [rooms and corporations] do not feel pain as we do,
>>	but they might feel pain, as we do.
>
>The solution is not in the punctuation, I'm afraid. Pain is just an
>example standing in for whether the candidate experiences anything AT
>ALL. It doesn't matter WHAT a candidate feels, but THAT it feels, for
>it to be conscious.
Understood.  Nevertheless, the punctuation IS important, for although it
is most unlikely they feel as we do, it is less unlikely that they feel.

>
>>	[i] Occam's razor demands that we describe the world using the simplest
>>	possible hypotheses.
>>	[ii] It seems to me simpler to ascribe consciousness to an entity that
>>	resembles me in many ways than not to ascribe consciousness to that
>>	entity.
>>	[iii] I don't think one CAN use the TTT to assess whether another
>>	entity is conscious.
>>	[iv] Silicon-based entities have few overt points of resemblance,
>>	so their behaviour has to be convincingly like mine before I will
>>	grant them a consciousness like mine.
>
>{i} Why do you think animism is simpler than its alternative?
Because of [ii].
>{ii} Everything resembles everything else in an infinite number of
>ways; the problem is sorting out which of the similarities is relevant.
Absolutely.  Watanabe's Theorem of the Ugly Duckling applies.  The
distinctions (and similarities) we deem important are no more or less
real than the infinity of ones that we ignore.  Nevertheless, we DO see
some things as more alike than other things, because we see some similarities
(and some differences) as more important than others.
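
The formal content of Watanabe's theorem can be checked by brute force.
The following is a small, purely illustrative Python sketch over a made-up
three-object universe; it counts, for each pair of objects, how many
extensional predicates -- i.e., subsets of the universe -- are true of
both. Every pair comes out equal, which is why similarity only becomes
non-trivial once some predicates are weighted over others:

    from itertools import chain, combinations

    universe = ["duckling_A", "duckling_B", "swan"]   # hypothetical objects

    def predicates(objs):
        # every subset of the universe counts as one extensional predicate
        return chain.from_iterable(combinations(objs, r)
                                   for r in range(len(objs) + 1))

    def shared(x, y):
        # number of predicates true of both x and y
        return sum(1 for p in predicates(universe) if x in p and y in p)

    for x, y in combinations(universe, 2):
        print(x, y, shared(x, y))   # every pair shares 2^(n-2) = 2 predicates

With n objects, any two of them lie together in exactly 2^(n-2) subsets,
no matter which two you pick.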

In the matter of consciousness, I KNOW (no counterargument possible) that
I am conscious, Ken Laws knows he is conscious, Steve Harnad knows he is
conscious.  I don't know this of Ken or Steve, but their output on a
computer terminal is enough like mine for me to presume by that similarity
that they are human.  By Occam's razor, in the absence of evidence to the
contrary, I am forced to believe that most humans work the way I do.  Therefore
it is simpler to presume that Ken and Steve experience consciousness than
that they work according to one set of natural laws, and I, alone of all
the world, conform to another.

>{iii} The Total Turing Test (a variant of my own devising, not to be
>confused with the classical turing test -- see prior chapters in these
>discussions) is the only relevant criterion that has so far been
>proposed and defended. Similarities of appearance are obvious
>nonstarters, including the "appearance" of the nervous system to
>untutored inspection. Similarities of "function," on the other hand,
>are moot, pending the empirical outcome of the investigation of what
>functions will successfully generate what performances (the TTT).
All the TTT does, unless I have it very wrong, is provide a large set of
similarities which, taken together, force the conclusion that the tested
entity is LIKE ME, in the sense of [i] and [ii].

>{iv} [iv] seems to be in contradiction with [iii].
Not at all.  What I meant was that the biological mechanisms of natural
life follow (by Occam's razor) the same rules in me as in dogs or fish,
and that I therefore need less information about their function than I
would for a silicon entity before I would treat one as conscious.

One of the paradoxes of AI has been that as soon as a mechanism is
described, the behaviour suddenly becomes "not intelligent."   The same
is true, with more force, for consciousness.  In my theory about another
entity that looks and behaves like me, Occam's razor says I should
presume consciousness as a component of their functioning.  If I have
been told the principles by which an entity functions, and those principles
are adequate to describe the behaviour I observe, Occam's razor (in its
original form "Entities should not needlessly be multiplied") says that
I should NOT introduce the additional concept of consciousness.  For the
time being, all silicon entities function by principles that are well
enough understood that the extra concept of consciousness is not required.
Maybe this will change.

>
>>	The problem splits in two ways: (1) Define consciousness so that it does
>>	not involve a reference to me, or (2) Find a way of describing behaviour
>>	that is simpler than ascribing consciousness to me alone.  Only if you
>>	can fulfil one of these conditions can there be a sensible argument
>>	about the consciousness of some entity other than ME.
>
>It never ceases to amaze me how many people think this problem is one
>that is to be solved by "definition." To redefine consciousness as
>something non-subjective is not to solve the problem but to beg the
>question.
>
I don't see how you can determine whether something is conscious without
defining what consciousness is.  Usually it is done by self-reference.
"I experience, therefore I am conscious."  Does he/she/it experience?
But never is it prescribed what experience means.  Hence I do maintain
that the first problem is that of definition.  But I never suggested that
the problem is solved by definition.  Definition merely makes the subject
less slippery, so that someone who claims an answer can't be refuted by
another who says "that wasn't what I meant at all."

The second part of my split attempts to avoid the conclusion from
similarity that beings like me function like me.  If a simpler description
of the world can be found, then I no longer should ascribe consciousness
to others, whether human or not.  Now, I believe that better descriptions
CAN be found for beings as different from me as fish or bacteria or
computers.  I do not therefore deny or affirm that they have experiences.
(In fact, despite Harnad, I rather like Ken Laws's proposition that
there is a graded quality of experience, rather than an all-or-none
choice).  What I do argue is that I have better grounds for not treating
these entities as conscious than I do for more human-like entities.

Harnad says that we are not looking for a mathematical proof, which is
true.  But most of his postings demand that we show the NEED for assuming
consciousness in an entity, which is empirically the same thing as
proving it to be conscious.
-- 

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

harnad@mind.UUCP (02/03/87)

mmt@dciem.UUCP (Martin Taylor) writes:

>	we DO see some things as more alike than other things, because
>	we see some similarities (and some differences) as more important
>	than others.

The scientific version of the other-minds problem -- the one we deal
with in the lab and at the theoretical bench, as opposed to the informal
version of the other-minds problem we practice with one another every
day -- requires us to investigate what causal devices have minds, and,
in particular, what functional properties of those causal devices are
responsible for their having minds. In other words (unless you know
the answer to the theoretical problems of cognitive science and
neuroscience a priori) it is an EMPIRICAL question what the relevant
underlying functional and structural similarities are. The only
defensible prior criterion of similarity we have cannot be functional
or structural, since we don't know anything about that yet; it can
only be the frail, fallible, underdetermined one we use already in
everyday life, namely, behavioral similarity.

Every other similarity is, in this state of ignorance, arbitrary,
a mere similarity of superficial appearance. (And that INCLUDES the
similarity of the nervous system, because we do not yet have the vaguest
idea what the relevant properties there are either.) Will this state of
affairs ever change? (Will we ever find similarities other than behavioral
ones on the basis of which we can infer consciousness?) I argue that it will
not change. For any other correlate of consciousness must be VALIDATED
against the behavioral criterion. Hence the relevant functional
similarities we eventually discover will always have to be grounded in
the behavioral ones. Their predictive power will always be derivative.
And finally, since the behavioral-indistinguishability criterion is itself
abundantly fallible -- incommensurably more so than ordinary scientific
inferences and their inductive risks  -- our whole objective structure
will be hanging on a skyhook, so to speak, always turing
indistinguishable from a state of affairs in which everything behaves
exactly the same way, but the similarities are all deceiving, and
consciousness is not present at all. The devices merely behave exactly
as if it were.

Throughout the response, by the way, Taylor freely interchanges the
formal scientific problem of modeling mind -- inferring its substrates,
and hence trying to judge what functional conditions are validly
inferred to be conscious (what the relevant similarities are) -- with
the informal problem of judging who else in our everyday world is
conscious. Similarities of superficial appearance may be good enough
when you're just trying to get by in the world, and you don't have the
burden of inferring causal substrate, but they won't do any good with
the hard cases you have to judge in the lab. And in the end, even
real-world judgments are grounded in behavioral similarity
(indistinguishability) rather than something else.

>	it is simpler to presume that Ken and Steve experience
>	consciousness than that they work according to one set of
>	natural laws, and I, alone of all the world, conform to another.

Here's an example of conflating the informal and the empirical
problems. Informally, we just want to make sure we're interacting with
thinking/feeling people, not insentient robots. In the lab, we have to
find out what the "natural laws" are that generate the former and not
the latter. (Your criterion for ascribing consciousness to Ken and me,
by the way, was a turing criterion...)

>	All the TTT does, unless I have it very wrong, is provide a large set of
>	similarities which, taken together, force the conclusion that the tested
>	entity is LIKE ME

The Total Turing Test simply requires that the performance capacity of
a candidate that I infer to have a mind be indistinguishable from the
performance capacity of a real person. That's behavioral similarity
only. When a device passes that test, we are entitled to infer that
its functional substrate is also relevantly similar to our own. But
that inference is secondary and derivative, depending for its
validation entirely on the behavioral similarities.

>	If a simpler description of the world can be found, then I no
>	longer should ascribe consciousness to others, whether human or not.

I can't imagine a description sufficiently simple to make solipsism
convincing. Hence even the informal other-minds problem is not settled
by "Occam's Razor." Parsimony is a constraint on empirical inference,
not on our everyday, intuitive and practical judgements, which are
often not only uneconomical, but irrational, and irresistible.

>	What I do argue is that I have better grounds for not treating
>	these [animals and machines] as conscious than I do for more
>	human-like entities.

That may be good enough for everyday practical and perhaps ethical
judgments. (I happen to think that it's extremely wrong to treat
animals inhumanely.) I agree that our intuitions about the minds of
animals are marginally weaker than about the minds of other people,
and that these intuitions get rapidly weaker still as we go down the
phylogenetic scale. I also haven't much more doubt that present-day
artificial devices lack minds than that stones lack minds. But none
of this helps in the lab, or in the principled attempt to say what
functions DO give rise to minds, and how.

>	Harnad says that we are not looking for a mathematical proof, which is
>	true. But most of his postings demand that we show the NEED for assuming
>	consciousness in an entity, which is empirically the same thing as
>	proving them to be conscious.

No. I argue for methodological epiphenomenalism for three reasons
only: (1) Wrestling with an insoluble problem is futile. (2) Gussying
up trivial performance models with conscious interpretations gives the
appearance of having accomplished more than one has; it is
self-deceptive and distracts from the real goal, which is a
performance goal. (3) Focusing on trying to capture subjective phenomenology
rather than objective performance leads to subjectively gratifying
analogy, metaphor and hermeneutics instead of to objectively stronger
performance models. Hence when I challenge a triumphant mentalist
interpretation of a process, function or performance and ask why it
wouldn't function exactly the same way without the consciousness, I am
simply trying to show up theoretical vacuity for what it is. I promise
to stop asking that question when someone designs a device that passes
the TTT, because then there's nothing objective left to do, and an
orgy of interpretation can no longer do any harm.



-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet