[mod.ai] Quantitative Consciousness

Laws@SRI-STRIPE.ARPA.UUCP (01/26/87)

  Stevan Harnad:
  Everyone knows that there's no
  AT&T to stick a pin into, and to correspondingly feel pain. You can do
  that to the CEO, but we already know (modulo the TTT) that he's
  conscious. You can speak figuratively, and even functionally, of a
  corporation as if it were conscious, but that still doesn't make it so.
  [...]   Do you believe [...] corporations feel pain, as we do?

They sure act like it when someone puts arsenic in their capsules.
I'm inclined to grant a limited amount of consciousness to corporations
and even to ant colonies.  To do so, though, requires rethinking the
nature of pain and pleasure (as something related to homeostasis).
I don't know of any purely mechanical systems that approach consciousness,
but computer operating systems and adaptive communications networks are
close.  The issue is partly one of complexity, partly of structure,
partly of function.  I am assuming that neurons and other "simple"
systems are C-1 but not C-2  -- and C-2 is the kind of consciousness
that people are really interested in.  C-2 consciousness seems to
require that at least one subsystem be "wired" to reason about its
own existence, although I gather that this may be denied in the
theory of situated automata.  The mystery for me is why only >>one<<
subsystem in my brain seems to have that introspective property -- but
multiple personalities or split-brain subjects may be evidence that
this is not a necessary condition.


  There are serious problems with the quantitative view of
  consciousness. No doubt my alertness, my sensory capacity and my
  knowledge admit of degrees. I may feel more pain or less pain, more or
  less often, under more or fewer conditions. But THAT I feel pain, or
  experience anything at all, seems an all-or-none matter, and that's
  what's at issue in the mind/body problem.

An airplane either can fly or it can't.  (And there's no way half a
B-52 can fly, no matter how you choose your half.)  Yet there are
simpler forms of flight used by other entities -- kites, frisbees,
paper airplanes, butterflies, dandelion seeds, ...   My own opinion
is that insects and fish feel pain, but often do so in a generalized,
nonlocalized way that is similar to a feeling of illness in humans.  
Octopi seem to be conscious, but with a psychology like that of spiders
(i.e., if hungry, conserve energy and wait for food to come along).
I assume that lower forms experience lower forms of consciousness
along with lower levels of intelligence.  Such continua seem natural
to me.  If you wish to say that only humans and TTT-equivalents are
conscious, you should bear the burden of establishing the existence
and nature of the discontinuity.


  It also seems arbitrary to be "willing" to ascribe consciousness to
  neurons and not to atoms.

When someone demonstrates that atoms can learn, I'll reconsider.
(Incidentally, this raises the metaphysical question of whether God
can be conscious if He already knows everything.)  You are questioning
my choice of discontinuity, but mine is easy to defend (or give up)
because I assume that the scale of consciousness tapers off into
meaninglessness.  Asking whether atoms are conscious is like asking
whether aircraft bolts can fly.


  The issue here is: what justifies interpreting something/someone as
  conscious?  The Total Turing Test has been proposed as our only criterion.
  What criterion are you using with neurons?

Your TTT has been put forward as the only justifiable means of deciding
that an entity is conscious.  I can't force myself to believe that,
although you have already punched holes in arguments far more cogent
than I could have raised.  Still, I hope you're not insisting that
no entity can be conscious without passing the TTT.  Even a rock could
be conscious without our having any justifiable means of deciding so.


  And even if single cells are
  conscious -- do feel pain, etc. -- what evidence is there that this is
  RELEVANT to their collective function in a superordinate organism?

What evidence is there that it isn't?  Evolved and engineered systems
generally support the "form follows function" dictum.  Aircraft parts
have to be airworthy whether or not they can fly on their own.


  Why doesn't replacing conscious nerve cells with
  synthetic molecules matter? (To reply that synthetic substances with the
  same functional properties must be conscious under these conditions is
  to beg the question.)

I beg your pardon?  Or rather, I beg to beg your question.  I presume
that a synthetic replica of myself, or any number of such replicas,
would continue my consciousness.


  If I sound like I'm calling an awful lot of gambits "question-begging,"
  it's because the mind/body problem is devilishly subtle, and the
  temptation to capitulate by slipping consciousness back into one's
  premises is always there.

Perhaps professional philosophers are able to strive for a totally
consistent world view.  We armchair amateurs have to settle for
tackling one problem at a time.  A standard approach is to open
back doors and try to push the problem through; if no one pushes back,
the problem is [temporarily] solved.  (Another approach is to duck
out the back way ourselves, leaving the problem unsolved:  Why is
there Being instead of Nothingness?  Who cares?)  I'm glad you've
been guarding the back doors and I appreciate your valiant efforts
to clarify the issues.  I have to live with my gut feelings, though,
and they remain unconvinced that the TTT is of any use.  If I had to
build an aircraft, I would not begin by refuting theological arguments
about Man being given dominion over the Earth rather than the Heavens.
I would start from a premise that flight was possible and would
try to derive enabling conditions.  Perhaps the attempt would be
futile.  Perhaps I would invent only the automobile and the rocket,
and fail to combine them into an aircraft.  But I would still try.

					-- Ken Laws