[mod.ai] Disclaimer of Consciousness

Laws@SRI-STRIPE.ARPA.UUCP (02/09/87)

  From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu  (Martin Taylor)
  In the matter of consciousness, I KNOW (no counterargument possible)
  that I am conscious, Ken Laws knows he is conscious, Steve Harnad
  knows he is conscious.

I'm not so sure that I'm conscious.  Oh, in the linguistic sense I
have the same property of consciousness that [we presume] everyone
has.  But I question the "I experience a toothache" touchstone for
consciousness that Steve has been using.  On the one hand, I'm not
sure I do experience the pain because I'm not sure what "I" is doing
the experiencing; on the other hand, I'm not sure that silicon systems
can't experience pain in essentially the same way.  Instead of claiming
that robots can be conscious, I am just as willing to claim that
consciousness is an illusion and that I am just as unconscious as
any robot.

It is difficult to put this argument into words because the
presumption of consciousness is built into the language itself, so
let's try to examine the linguistic assumptions.

First, the word "I".  The aspect of my existence that we are interested
in here is my mind, which is somehow dependent on my brain as a substrate.
My brain is a system of neural circuits.  The "I" is a property of the
system, and cannot be claimed by any neural subsystem (or homunculus),
although some subsystems may be more "central" to my identity than others.

Consciousness would also seem to be a property of the whole system.
But not so fast -- there is strong evidence that consciousness (in the
sense of experiencing and responding to stimuli) is primarily located
in the brain stem.  Large portions of the cortex can be cut away with
little effect on consciousness, but even slight damage to the upper
brain stem causes loss of consciousness.  [I am not recanting my position
that consciousness is quantitative across species.  Within something
as complex as a human (or a B-52), emergent system properties can be
very fragile and thus seem to be all or nothing.]  We must be careful
not to equate sensory consciousness with personality (or personal
behavioral characteristics, as in the TTT), self, or soul.

Well, I hear someone saying, that kind of consciousness hardly counts;
all birds and mammals (at least) can be comatose instead of awake --
that doesn't prove they >>experience<< pain when they are awake.  Ah,
but that leads to further difficulties.  The experience is real --
after all, behavior changes because of it.  We need to know if the
process of experience is just the setting of bits in memory, or if
there is some awareness that goes along with the changes in the neural
substrate.
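
To make the question concrete, here is a minimal sketch (my own
illustration; the names are invented, not anyone's theory) of a
system whose "experience" of pain is nothing but the setting of a
bit in memory that later alters behavior.  If that is all there is
to C-1, silicon already has it.

    # Hypothetical sketch, in Python: "experience" as nothing but
    # a state change that later alters behavior.
    class PainRecorder:
        def __init__(self):
            self.pain_bit = False      # the "memory" of the stimulus

        def stimulus(self, noxious):
            if noxious:
                self.pain_bit = True   # the setting of a bit -- is this
                                       # all that "experiencing" amounts to?

        def act(self):
            # Behavior changes because of the stored bit, just as
            # behavior changes because of the experience.
            return "withdraw" if self.pain_bit else "proceed"

    agent = PainRecorder()
    agent.stimulus(noxious=True)
    print(agent.act())                 # -> withdraw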

All right, then, how about self-awareness?  As the bits are changed,
some other part of the brain (or the brain as a whole) "watches"
and interprets the neural changes as a painful experience.  But either
that pushes us back to a conscious homunculus (and ultimately to a
nonphysical soul) or we must accept that computers can be self-aware
in that same sense.  No, self-awareness is Steve's C-2 consciousness.
What we have to get a grip on is C-1 consciousness, an awareness of
the pain itself.
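
For what it is worth, the C-2 sense in which one subsystem "watches"
another takes only a few lines to mimic (again a sketch of my own,
with invented names), which is why I say computers can be self-aware
in that same sense:

    # Hypothetical sketch: "self-awareness" as one subsystem
    # interpreting another subsystem's state changes (C-2).
    class Substrate:
        def __init__(self):
            self.bits = {"pain": False}

    class Monitor:
        # The "watcher" -- just another subsystem reading bits,
        # not a nonphysical homunculus.
        def interpret(self, substrate):
            return ("painful experience" if substrate.bits["pain"]
                    else "all clear")

    s = Substrate()
    s.bits["pain"] = True              # the bits are changed ...
    print(Monitor().interpret(s))      # ... and another part "watches"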

One way out is to assume that neurons themselves are aware of pain,
and that our overall awareness is some sum over the individual
discomforts.  But the summation requires that the neurons communicate
their pain, and we are back to the problem of how the rest of the
brain can sense and interpret that signal.  A similar dead end is
to suppose that toothache signals interfere with brain functioning and
that the brain interprets its own performance degradations as pain.
What is the "I" that has the awareness of pain?

How do we know that we experience pain?  (Or, following Descartes,
that we experience our own thought?)  We can formulate sentences
about the experience, but it seems doubtful that our speech centers
are the subsystems that actually experience the pain.  (That theory,
that all awareness is linguistic awareness, has been suggested.  I am
reminded of the saying that there is no idea so outrageous that it
has not been championed by some philosopher.)  Similarly we can
rule out the motor center, the logical centers, and just about any
other centers of the brain.  Either the pain is experienced by some
tiny neural subsystem, in which case "I" am not the conscious agent,
or it is experienced by the system as a whole, in which case analogous
states or processes in analogous systems should also be considered
conscious.

I propose that we bite the bullet and accept that our "experience"
or "awareness" of pain is an illusion, replicable in all relevant
respects by inorganic systems.  Terms such as pain, experience,
awareness, consciousness, and self are crude linguistic analogies,
based on false models, to the true patterns of neural events.
Pain is real, as are the other concepts, but our model of how
they arise and interrelate is hopelessly animistic.

					-- Ken Laws