[comp.ai] Experience

yamauchi@cs.rochester.edu (Brian Yamauchi) (03/05/89)

In article <330@esosun.UUCP> jackson@freyja.css.gov (Jerry Jackson) writes:
>
>I tried a few months back
>to convince people in this newsgroup that there was a difference between
>say: the *experience* of pain and the signal travelling through the
>nervous system.. or the *experience* of seeing blue and anything you could
>possibly tell a blind person about it.
>
>My conclusion: Most people who post to this newsgroup have no
>subjective experience.

Well, I agree with your premise, but not your conclusion :-).

The difference between a seeing man and a blind man is (surprise) that
the seeing man has functional visual sensors and the blind man does
not.  Thus the seeing man can associate the linguistic term "blue"
with his sensory experience of viewing light in the "blue" frequency
range.
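
In the most naive reading, that "association" could be as little as a
lookup from a measured dominant wavelength to a color word.  The toy
Python sketch below is illustrative only -- the function name and the
rough textbook boundary of about 450-495 nm (roughly 606-666 THz) for
"blue" are my own choices, not a claim about how vision or naming
actually works:

def color_label(wavelength_nm):
    """Toy 'association' of a color word with a sensor reading: map a
    measured dominant wavelength (in nanometers) onto a linguistic
    label, using rough textbook boundaries for the visible spectrum."""
    if 450 <= wavelength_nm < 495:
        return "blue"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 620 <= wavelength_nm <= 750:
        return "red"
    return "unlabeled"

print(color_label(470))    # -> "blue"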

The implication for AI seems to me to be that if we want systems which
can have "experiences" similar to our own, we need to equip them with
sensors which can perceive the physical world (or alternately, place
them in a very realistic simulated environment).

One could argue that the nature of human experience is also dependent
on the fact that human sensory inputs are processed as distributed
analog activations over a neural network (the brain).  Personally, I
am undecided about whether this is a critical point.

>Now, because I'm a glutton for punishment... Does anyone really think
>someone can be wrong about whether or not they are in pain?  Wouldn't
>it seem really odd to respond to the statement: "I just stubbed my toe
>and boy does it hurt!" with: "No it doesn't." 

I agree.  Of course, the same thing applies to robots.  Suppose a
humanoid robot walking next to you stubbed its toe and said "That
hurts!"  Would you respond with "No, you're just programmed to say
that when you damage yourself!"?

Or take a more near-term example: Suppose you are working with a
mobile robot and you want to make sure it doesn't smash itself into a
wall if its vision software screws up.  You might equip it with force
sensors on its body and a low-level behavior which causes it to move
back whenever these sensors are activated.  In addition, you could have
the higher-level behaviors monitor these sensors as well, so that they
know when the robot runs into something.
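
As a very rough sketch of how that two-level arrangement might look in
software -- every class and method name here is hypothetical, invented
for illustration rather than taken from any actual robot controller --
consider the following Python:

# Hypothetical two-level controller: a low-level reflex backs the
# robot away whenever a force sensor fires, while a higher-level
# "planner" watches the same sensors so that it knows a collision
# has happened.

class ForceSensor:
    """Stand-in for a bump/force sensor mounted on the robot's body."""
    def __init__(self):
        self.activated = False
    def read(self):
        return self.activated

class Motors:
    """Stand-in for the drive system."""
    def move_forward(self):
        print("moving forward")
    def move_backward(self):
        print("backing away")

class ReflexLayer:
    """Low-level behavior: back up whenever any force sensor is active."""
    def __init__(self, sensors):
        self.sensors = sensors
    def act(self, motors):
        if any(s.read() for s in self.sensors):
            motors.move_backward()
            return True          # the reflex took control this cycle
        return False

class PlannerLayer:
    """Higher-level behavior: monitors the same sensors, so it 'knows'
    when the robot has run into something (e.g. to update its map)."""
    def __init__(self, sensors):
        self.sensors = sensors
        self.collision_count = 0
    def observe(self):
        if any(s.read() for s in self.sensors):
            self.collision_count += 1    # register the collision
    def drive(self, motors):
        motors.move_forward()

def control_cycle(reflex, planner, motors):
    """One control cycle: the planner always sees the sensor state,
    but the reflex has priority over the motors."""
    planner.observe()
    if not reflex.act(motors):
        planner.drive(motors)

# Example: a sensor pressed against a wall triggers the reflex, and
# the planner registers the collision as well.
sensors = [ForceSensor()]
reflex, planner, motors = ReflexLayer(sensors), PlannerLayer(sensors), Motors()
sensors[0].activated = True
control_cycle(reflex, planner, motors)    # prints "backing away"
print(planner.collision_count)            # prints 1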

To an observer of this robot's behavior, it certainly appears that the
robot experiences something like a primitive form of pain.  I'm not
arguing that this is at all similar to the complex forms that humans
experience, but rather something close to what we assume the lower
animals experience -- an assumption due entirely to our observations of
their behavior and the way their nervous systems are set up.  After
all, none of us really knows whether animals can experience pain, but
we tend to assume that they can.

>--Jerry Jackson

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/06/89)

     THE ROBOT REPLY VS. THE ROBOTIC FUNCTIONALIST REPLY

yamauchi@cs.rochester.edu (Brian Yamauchi) of
U of Rochester, CS Dept, Rochester, NY wrote:

" The difference between a seeing man and a blind man is (surprise) that
" the seeing man has functional visual sensors and the blind man does
" not. Thus the seeing man can associate the linguistic term "blue"
" with his sensory experience of viewing light in the "blue" frequency
" range.

There are many kinds of blindness, of which the lack of a retina (which
is in any case not just a transducer) is only one. There are multiple
analogs of the retina higher and higher in the brain. Their functions
are not yet understood, but they seem to be doing feature extraction
and transformation, all on projections that are isomorphic with the
retinal surface.  Blindness can arise from loss or disconnection at a
variety of levels of brain function, including levels BEFORE and AFTER
the last known retinotopic map. Nor are the nonretinotopic regions
likely to be symbolic either. What's going on there is not yet
understood. But it's certainly not the marriage of a symbol-cruncher to
transducers.

The nature of the functional transition from so-called "primary,
sensory-projection" cortex to so-called "secondary, multimodal and
supramodal association" cortex is simply not known by anyone at this
time. But the progression seems to be from sensory input representation
to multimodal activity to motor output patterns. No one has yet accused
any part of the cortex of symbol-crunching (not even the "language"
areas, Wernicke's area and Broca's area, which are receptive and
productive, respectively).

As to "associating" the symbol "blue" with the pertinent sensory
category:  It may seem easy in the case of color names (actually, it
isn't), but when you move on to other concrete sensory categories,
and eventually to abstract ones, the story becomes quite complicated.
The problem of "connecting" the right symbol to the right input is the
categorization problem, which in turn is closely related to what I've
called "the symbol grounding problem." I have edited a whole book on
this ("Categorical Perception: The Groundwork of Cognition." Cambridge
University Press 1987) whose upshot is that it's not just a matter of
hooking up a symbol cruncher to sensors and effectors.

" The implication for AI seems to me to be that if we want systems which
" can have "experiences" similar to our own, we need to equip them with
" sensors which can perceive the physical world (or alternately, place
" them in a very realistic simulated environment).

You can give a symbol-cruncher sensors, but to make it PERCEIVE is
a slightly taller order...

" One could argue that the nature of human experience is also dependent
" on the fact that human sensory inputs are processed as distributed
" analog activations over a neural network (the brain). Personally, I
" am undecided about whether this is a critical point.

One safe way to remain undecided is not to worry about the problem of
how to map symbols onto the world at all, simply assuming that it's
easy, and that it will somehow meet a top-down symbol-cruncher
half-way. Reasons and evidence are accumulating, however, to show what a
simplistic pipe-dream that is, and how the very existence and
self-sufficiency of an autonomous symbol-crunching module or level in
the brain [or any other TTT-capable SYSTEM (sic)] may have to be
rethought.

One way of conceptualizing this is to contrast the standard "robot"
reply to Searle ("You will have to connect sensors and effectors to
your symbol-cruncher" -- which is easily parried by Searle, because
it's irrelevant to his point) with my own "robotic functionalist" reply
("To pass the LTT you must be able to pass the TTT, and to pass the TTT
you will have to draw upon nonsymbolic functions [e.g., transduction,
analog transformations, A/D] which are immune to the Chinese Room
Argument"). A functionally autonomous symbolic level may not exist in
TTT-capable devices, much less be sufficient to give rise to
LTT-capable performance. To put it another way, bottom-up may be the
only route to the TTT, and the only way to arrive at a grounded symbol
system.

Ref: Harnad, S. (1989) Minds, Machines and Searle. Journal of
Experimental and Theoretical Artificial Intelligence 1: 5-25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771