[mod.ai] WHY of Pain

Laws@SRI-STRIPE.ARPA (Ken Laws) (01/23/87)

  From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
  -- why do the processes that give rise to all these sequelae ALSO need
  to give rise to any pain (or any conscious experience at all) rather
  than doing the very same tissue-healing and protective-behavioral job
  completely unconsciously?


I know what you mean, but ...  Given that the dog >>is<< conscious,
the evolutionary or teleological role of the pain stimulus seems
straightforward.  It is a way for bodily tissues to get the attention
of the reasoning centers.  Instead of just setting some "damaged
tooth" bit, the injured nerve grabs the brain by the lapels and says
"I'm going to make life miserable for you until you solve my problem."
Animals might have evolved to react in the same way without the
conscious pain (denying the "need to" in your "why" question), but
the current system does work adequately.
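
To make that contrast concrete, here is a toy sketch of my own (in C,
with invented names like tooth_damaged and pain_level -- not a model of
any real nervous system): a passive status bit sits there until the
planner happens to poll it, while the pain signal preempts ordinary
business and escalates until the problem is handled.

    #include <stdio.h>

    static int tooth_damaged = 0;      /* passive "damaged tooth" bit  */
    static int pain_level    = 0;      /* insistent, escalating signal */

    static void injure_tooth(void)
    {
        tooth_damaged = 1;             /* may never be looked at       */
        pain_level    = 1;             /* will not be ignored          */
    }

    static void plan_next_action(void)
    {
        if (pain_level > 0) {
            /* Pain preempts ordinary goals and grows until resolved. */
            printf("priority %d: deal with the tooth\n", pain_level);
            pain_level++;
        } else {
            printf("carry on with ordinary business\n");
            /* tooth_damaged is noticed only if something polls it.   */
        }
    }

    int main(void)
    {
        injure_tooth();
        for (int i = 0; i < 3; i++)
            plan_next_action();
        return 0;
    }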

Why (or, more importantly, how) the dog is conscious in the first place,
and hence >>experiences<< the pain, is the problem you are pointing out.


Some time ago I posted an analogy between the brain and a corporation,
claiming that the natural tendency of everyone to view the CEO as the
center of corporate consciousness was evidence for emergent consciousness
in any sufficiently complex hierarchical system.  I would like to
refute that argument now by pointing out that it only works if the
CEO and perhaps the other processing elements in the hierarchy are
themselves conscious.  I still claim that such systems (which I can't
define ...) will appear to have centers of consciousness (and may well
pass Harnad's Total Turing Test), and that the >>system<< may even
>>be conscious<< in some way that I can't fathom, but if the CEO is
not itself conscious, no amount of external consensus can make it so.

If it is true that a [minimal] system can be conscious without having
a conscious subsystem (i.e., without having a localized soul), we
must equate consciousness with some threshold level of functionality.
(This is similar to my previous argument that Searle's Chinese Room
understands Chinese even though neither the occupant nor his printed
instructions do.)  I believe that consciousness is a quantitative
phenomenon, so the difference between my consciousness and that of 
one of my neurons is simply one of degree.  I am not willing to ascribe
consciousness to the atoms in the neuron, though, so there is a bottom
end to the scale.  What fraction of a neuron (or of its functionality)
is required for consciousness is below the resolving power of my
instruments, but I suggest that memory (influenced by external conditions)
or learning is required.  I will even grant a bit of consciousness
to a flip-flop :-).  The consciousness only exists in situ, however: a
bit of memory is only part of an entity's consciousness if it is used
to interpret the entity's environment.
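
Another toy sketch of my own (names invented), showing the same stored
bit used two ways: in the first routine the bit is written but never
consulted, so it plays no part in interpreting the input; in the second
the remembered bit changes how the next input is read, which is the
sense in which I would grant it a role in situ.

    #include <stdio.h>

    /* Bit written but never read back: inert storage. */
    static int interpret_without_memory(int input)
    {
        static int stored = 0;
        stored = input;                /* recorded, never consulted      */
        return input > 0;              /* interpretation ignores history */
    }

    /* Bit fed back into interpretation: a crude learned threshold. */
    static int interpret_with_memory(int input)
    {
        static int saw_high_before = 0;
        int result = saw_high_before ? (input > 0) : (input > 5);
        if (input > 5)
            saw_high_before = 1;       /* memory shaped by the environment */
        return result;
    }

    int main(void)
    {
        int inputs[] = { 3, 7, 3 };
        for (int i = 0; i < 3; i++)
            printf("%d: without=%d with=%d\n", inputs[i],
                   interpret_without_memory(inputs[i]),
                   interpret_with_memory(inputs[i]));
        return 0;
    }

Note that the same input (3) is interpreted differently before and
after the "experience" of a high value, which is all the second
routine adds over the first.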

Fortunately, I don't have my heart set on creating conscious systems.
I will settle for creating intelligent ones, or even systems that are
just a little less unintelligent than the current crop.

					-- Ken Laws