[mod.ai] Laws on Consciousness

harnad@seismo.CSS.GOV@mind.UUCP (01/28/87)

Ken Laws <Laws@SRI-STRIPE.ARPA> wrote:

>	I'm inclined to grant a limited amount of consciousness to corporations
>	and even to ant colonies.  To do so, though, requires rethinking the
>	nature of pain and pleasure (to something related to homeostasis).

Unfortunately, the problem can't be resolved by mere magnanimity. Nor
by simply reinterpreting experience as something else -- at least not
without a VERY persuasive argument, one that no one in the history of the
mind/body (M/B) problem has managed to come up with so far. That history
is one of hand-waving. Do you think "rethinking" pain as homeostasis does
the trick?

>	computer operating systems and adaptive communications networks are
>	close [to conscious]. The issue is partly one of complexity, partly
>	of structure, partly of function.

I'll get back below to the question of whether experiencing is an
all-or-none phenomenon or a matter of degree. For now, I just
wonder what kind and degree of structural/functional "complexity" you
believe adds up to EXPERIENCING pain as opposed to merely behaving as
if experiencing pain.

>	I am assuming that neurons and other "simple" systems are C-1 but
>	not C-2  -- and C-2 is the kind of consciousness that people are
>	really interested in.

Yes, but do you really think that hard questions like these can be
settled by assumption? The question is: What justifies the inference
that an organism or device is experiencing ANYTHING AT ALL (C-1), and
what justifies interpreting internal functions as conscious ones?
Assumption does not seem like a very strong justification for an
inference or interpretation. What is the basis for your assumption?

I have proposed the TTT as the only justifiable basis, and I've given
arguments in support of that proposal. The default assumptions in the
AI/Cog-Sci community seem to be that sufficiently "complex" function
and performance capacity, preferably with "memory" and "learning," can be
dubbed "conscious," especially with the help of the subsidiary
assumption that consciousness admits of degrees. The thrust of my
critique is that this position is rather weak and arbitrary, and open
to telling counter-examples (like Searle's). But, more important, it
is not an issue on which the cog-sci community even needs to take a
stand! For cog-sci's objective goal -- of giving a causal explanation
of organisms' and devices' functional properties -- can be achieved
without embellishing any of its functional constructs with a conscious
interpretation. This is what I've called "methodological
epiphenomenalism." Moreover, the TTT (as an asymptotic goal) even
captures the intuitions about "sufficient functional complexity and
performance capacity," in a nonarbitrary way.

It is the resolution of these issues by unsupportable assumption, circularity,
arbitrary fiat and obiter dicta that I think is not doing the field
any good. And this is not only because (1) it makes cog-sci look
silly to philosophers, but because, as I've repeatedly suggested, (2) the
unjustified embellishment of (otherwise trivial, toy-like) function
or performance as "conscious" can actually side-track cog-sci from its
objective, empirical goals, masking performance weaknesses by
anthropomorphically over-interpreting them. Finally (3), the
unrealizable goal of objectively capturing conscious phenomenology,
being illogical, threatens to derail cog-sci altogether, heading it in
the direction of hermeneutics (i.e., subjective interpretation of
mental states, i.e., C-2) rather than objective empirical explanation of
behavioral capacity. [If C-2 is "what people are really interested
in," then maybe they should turn to lit-crit instead of cog-sci.]

>	The mystery for me is why only >>one<< subsystem in my brain
>	seems to have that introspective property -- but
>	multiple personalities or split-brain subjects may be evidence that
>	this is not a necessary condition.

Again, we'd probably be better off tackling the mystery of what the
brain can DO in the world, rather than what subjective states it can
generate. But, for the record, there is hardly agreement in clinical
psychology and neuropsychology about whether split-brain subjects or
multiple-personality patients really have more than one "mind," rather
than merely somewhat dissociated functions -- some conscious, some not --
that are not fully integrated, either temporally or experientially.
Inferring that someone has TWO minds seems to be an even trickier
problem than the usual problem ("solved" by the TTT) of inferring that 
someone has ONE (a variant of the mind/body problem called the "other-minds"
problem). At least in the case of the latter we have our own, normal unitary
experience to generalize from...

>	[Regarding the question of whether consciousness admits of degrees:]
>	An airplane either can fly or it can't. Yet there are
>	simpler forms of flight used by other entities -- kites, frisbees,
>	paper airplanes, butterflies, dandelion seeds... My own opinion
>	is that insects and fish feel pain, but often do so in a generalized,
>	nonlocalized way that is similar to a feeling of illness in humans.

Flight is an objective, objectively definable function. Experience is
not. We can, for example, say that a massive body that stays aloft in
space for any non-zero period of time is "flying" to a degree. There
is no logical problem with this. But what does it mean to say that
something is conscious to a degree? Does the entity in question
EXPERIENCE anything AT ALL? If so, it is conscious. If not, not. What
has degree to do with it (apart from how much, or how intensely it
experiences, which is not the issue)? 

I too believe that lower animals feel pain. I won't conjecture what it
feels like to them; but once you concede that it feels like anything at
all, you have conceded that they are conscious. Where, then, does the
question of degree come into it?

The mind/body problem is the problem of subjectivity. When you ask
whether something is conscious, you're asking whether it has
subjective states at all, not which ones, how many, or how strong.
That is an all-or-none matter, and it concerns C-1. You can't speak of
C-2 at all until you have a principled handle on C-1.

>	I assume that lower forms experience lower forms of consciousness
>	along with lower levels of intelligence.  Such continua seem natural
>	to me. If you wish to say that only humans and TTT-equivalents are
>	conscious, you should bear the burden of establishing the existence
>	and nature of the discontinuity.

I happen to share all those assumptions about consciousness in lower
forms, except that I don't see any continuum of consciousness there at
all. They're either conscious or not. I too believe they are conscious,
but that's an all-or-none matter. What's on a continuum is what they're
conscious OF, how much, to what degree, perhaps even what it's "like" for
them (although the latter is more a qualitative than a quantitative
matter). But THAT it's like SOMETHING is what it is that I am
assenting to when I agree that they are conscious at all. That's C-1.
And it's the biggest discontinuity we're ever likely to know of.

(Note that I didn't say "ever likely to experience," because of course
we DON'T experience the discontinuity: We know what it is like to
experience something, and to experience more or fewer things, more or less
intensely. But we don't know what it's like NOT to experience
something. [Be careful of the scope of the "not" here: I know what
it's like to see not-red, but not what it's like to not-see red, or be
unconscious, etc.] To know what it's like NOT to experience
anything at all is to experience not-experiencing, which is
a contradiction in terms. This is what I've called, in another paper,
the problem of "uncomplemented" categories. It is normally solved by
analogy. But where the categories are uncomplementable in principle,
analogy fails in principle. I think this is what lies behind our
incoherent intuition that consciousness admits of degrees: experiencing
the conscious/unconscious discontinuity is logically impossible, hence,
a fortiori, experientially impossible.)

>	[About why neurons are conscious and atoms are not:]
>	When someone demonstrates that atoms can learn, I'll reconsider.

You're showing your assumptions here. What better evidence could there
be of the gratuitousness of mentalistic interpretation (in place of which
I'm recommending abstention or agnosticism on methodological grounds)
than that you're prepared to equate it with "learning"?

>	You are questioning my choice of discontinuity, but mine is easy
>	to defend (or give up) because I assume that the scale of
>	consciousness tapers off into meaninglessness. Asking whether
>	atoms are conscious is like asking whether aircraft bolts can fly.

So far, it's the continuum itself that seems meaningless (and the defense
a bit too easy-going). Asking questions about subjective phenomena
is not as easy as asking about objective ones, hopeful analogies
notwithstanding. The difficulty is called the mind/body problem.

>	I hope you're not insisting that no entity can be conscious without
>	passing the TTT. Even a rock could be conscious without our having
>	any justifiable means of deciding so.

Perhaps this is a good place to point out the frequent mistake of
mixing up "ontic" questions (about what's actually TRUE of the world)
and "epistemic" ones (about what we can KNOW about what's actually true of
the world, and how). I am not claiming that no entity can be conscious
without passing the TTT. I am not even claiming that every entity that
passes the TTT must be conscious. I am simply saying that IF there is
any defensible basis for inferring that an entity is conscious, it is
the TTT. The TTT is what we use with one another, when we daily
"solve" the informal "other-minds" problem. It is also cog-sci's
natural asymptotic goal in mind-modeling, and again the only one that
seems methodologically and logically defensible.

I believe that animals are conscious; I've even spoken of
species-specific variants of the TTT; but with these variants both our
intuitions and our ecological knowledge become weaker, and with them
the usefulness of the TTT in such cases. Our inability to devise or
administer an animal TTT doesn't make animals any less conscious. It just
makes it harder to know whether they are, and to justify our inferences.

(I'll leave the case of the stone as an exercise in applying the
ontic/epistemic distinction.)

>>SH:  "(To reply that synthetic substances with the same functional properties
>>	must be conscious under these conditions is to beg the question.)"
>KL: 	I presume that a synthetic replica of myself, or any number of such
>	replicas, would continue my consciousness.

I agree completely. The problem was to justify attributing consciousness
to neurons while denying it to, say, atoms. It's circular to say
neurons are conscious because they have certain functional properties
that atoms lack MERELY on the grounds that neurons are functional
parts of (obviously) conscious organisms. If synthetic components
would work just as well (as I agree they would), you need a better
justification for imputing consciousness to neurons than that they are
parts of conscious organisms. You also need a better argument for
imputing consciousness to their synthetic substitutes. The TTT is my
(epistemic) criterion for consciousness at the whole-organism level.
Its usefulness and applicability trail off drastically with lower and lower
organisms. I've criticized cog-sci's default criteria earlier in this
response. What criteria do you propose, and what is the supporting
justification, for imputing consciousness to, say, neurons?

>	Perhaps professional philosophers are able to strive for a totally
>	consistent world view.

The only thing at issue is logical consistency, not world view. And even
professional scientists have to strive for that.

>	Why is there Being instead of Nothingness?  Who cares?

These standard examples (along with the unheard sound of the tree
falling alone in the forest) are easily used to lampoon philosophical
inquiry. They tend to be based on naive misunderstandings of what
philosophers are actually doing -- which is usually as significant and
rigorous as any other area of logically constrained intellectual
inquiry (although I wouldn't vouch for all of it, in any area of
inquiry).

But in this case consider the irony of the actual state of affairs:
it is cog-sci that is hopefully opening up and taking an ambitious
position on problems that normally concern only philosophers,
such as the mind/body problem. NONphilosophers are claiming: "this is
conscious and that's not," and "this is why," and "this is what
consciousness is." So who's bringing it up, and who's the one that cares?

Moreover, I happen myself to be a nonphilosopher (although I have a
sizeable respect for that venerable discipline and its inevitable quota
of insightful exponents); yet I repeatedly find myself in the peculiar
role of having to point out the philosophically well-known howlers
that cog-sci keeps tumbling into in its self-initiated inquiry into
"Nothingness." More ironic still, in arguing for the TTT and methodological
epiphenomenalism, I am actually saying: "Why do you care? Worrying about
consciousness will get you nowhere, and there's objective empirical
work to do!"

>	If I had to build an aircraft, I would not begin by refuting
>	theological arguments about Man being given dominion over the
>	Earth rather than the Heavens. I would start from a premise that
>	flight was possible and would try to derive enabling conditions.

Building aircraft, and building devices that (attempt to) pass the TTT,
are objective, doable empirical tasks. Trying to model conscious
phenomenology, or to
justify interpreting processes as conscious, gets you as embroiled in
"theology" as trying to justify interpreting the Communal wafer as the
body of Christ. Now who's the pragmatist and who's the theologian?


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771