[comp.ai] Emotions

aarons@syma.sussex.ac.uk (Aaron Sloman) (02/01/90)

yamauchi@cs.rochester.edu (Brian Yamauchi) writes:

> Date: 28 Jan 90 20:06:49 GMT
> Reply-To: yamauchi@cs.rochester.edu (Brian Yamauchi)
> Organization: University of Rochester Computer Science Department

> One interesting idea is that emotions may be emergent phenomena of
> the behaviors necessary for a system to survive in the real world.

Yes. Monica Croucher and I argued to this effect in our paper
Aaron Sloman and Monica Croucher
`Why robots will have emotions', in
 Proceedings 7th International Joint Conference on Artificial
 Intelligence, Vancouver, 1981.

I elaborated the argument a little in

Aaron Sloman
`Motives, Mechanisms and Emotions', in
    Cognition and Emotion
    1(3), pp. 217-234,
    1987,
to be reprinted in M.A. Boden (ed.)
    The Philosophy of Artificial Intelligence
    "Oxford Readings in Philosophy" Series
    Oxford University Press, 1990.
    (Soon to be published)

The key idea, to be found earlier in the writings of H.A. Simon (see
his collection entitled Models of Thought), and no doubt in others before
him, is that the design requirements for resource-limited
intelligent agents with multiple oft-changing sources of motivation
in a complex and largely unpredictable world require mechanisms
which INCIDENTALLY are capable of generating emotional states.

FEELING emotions requires additional self-monitoring mechanisms.

Having or feeling a full range of characteristically HUMAN emotions
with all their normal qualities (including things like nausea, being
startled, etc.) would also require either similar physiology or
simulated physiological feedback loops.
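
A minimal sketch of this kind of design, in Python, purely for
illustration (the class names, urgencies and the interrupt rule are all
invented here, not taken from the papers above): an agent with limited
resources and several changing motives needs a cheap interrupt
mechanism, that mechanism incidentally produces disturbed states, and a
separate self-monitor is what turns having such a state into feeling it.

    class Motive:
        def __init__(self, name, urgency):
            self.name = name
            self.urgency = urgency          # how insistently it demands attention

    class Agent:
        def __init__(self):
            self.current_task = None
            self.pending = []               # multiple, oft-changing sources of motivation
            self.self_monitor_log = []      # extra machinery needed for FEELING the state

        def new_motive(self, motive):
            # Resource limits: a quick urgency comparison, not full deliberation.
            if self.current_task is None:
                self.current_task = motive
            elif motive.urgency > self.current_task.urgency:
                # The disturbance of ongoing processing is the "emotional" state;
                # it arises incidentally from the interrupt machinery.
                self.self_monitor_log.append("processing of %r disturbed by %r"
                                             % (self.current_task.name, motive.name))
                self.pending.append(self.current_task)
                self.current_task = motive
            else:
                self.pending.append(motive)

    agent = Agent()
    agent.new_motive(Motive("finish the proof", urgency=3))
    agent.new_motive(Motive("fire alarm", urgency=9))    # interrupt-driven override
    print(agent.self_monitor_log)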

But the excited anticipation of a deep mathematical discovery
and the sorrowful disappointment at subsequent failure could
occur in a pretty well disembodied intelligence, provided that
it had the right sort of cognitive architecture. (I regard all
this stuff about symbol grounding as largely a red herring. Causal
embedding in a physical environment is relevant only to a sub-set of
the huge space of possible designs for intelligent mechanisms.)

Aaron

ghh@clarity.Princeton.EDU (Gilbert Harman) (02/01/90)

Michael Scriven discusses why well designed androids would
have to have feelings and emotions in PRIMARY PHILOSOPHY
(McGraw Hill, 1966), pp. 181-197.  The robot
must be able to respond to emergencies, for example, and at
such times will

   be short with people for his time, and if they persist,
   he will be shorter, indeed rude.  And what will his
   reaction be to further interference when he has already
   made his preoccupation plain?  It will be irritation,
   annoyance, and eventually a justified anger, because all
   these are important and efficient gradations in the scale
   of motivational states in which decreasing courtesy is
   appropriate.  Whatever the mechanisms in the android,
   however different from the human being he may be in
   constitution, he must have inner states corresponding to
   these stages of disregard for the finer feelings of
   others that are justified by emergency plus
   thoughtlessness, and he must know that he has them and be
   able to recognize them even if they build up when he does
   not intend that they should.  What he does not have to
   have is an uncontrollable temper or an overirritable or
   overlethargic disposition.  For many purposes, he might
   not even need anger, the limit case on the scale of
   defensible reaction states.  But if the android is a
   close match to human beings in its powers, it will need
   to conserve the additional resources released by deep
   emotions, having them triggered only by especially
   threatening circumstances.  Within limits, emotions are
   efficient, and feelings are necessary. (pp. 194-5)
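
One possible (and deliberately crude) reading of Scriven's "gradations
in the scale of motivational states", sketched in Python with names
invented here rather than taken from the book:

    # Scriven's scale of reaction states, caricatured as a simple escalation.
    REACTION_SCALE = ["courteous", "short", "rude", "irritated", "annoyed", "angry"]

    class Android:
        def __init__(self):
            self.level = 0                    # index into REACTION_SCALE
            self.in_emergency = False

        def interfered_with(self):
            # Repeated interference during an emergency escalates the state,
            # whether or not the android intends it to.
            if self.in_emergency and self.level < len(REACTION_SCALE) - 1:
                self.level += 1

        def current_state(self):
            # Scriven's requirement: the android must be able to recognize
            # the state it is in.
            return REACTION_SCALE[self.level]

    android = Android()
    android.in_emergency = True
    android.interfered_with()
    android.interfered_with()
    print(android.current_state())            # "rude"

The point of the sketch is only that the scale is discrete, ordered, and
inspectable by the android itself; nothing hangs on the particular labels.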

--
		       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
	               221 Nassau Street, Princeton, NJ 08542
			      
		       ghh@princeton.edu
		       HARMAN@PUCC.BITNET

tm11+@andrew.cmu.edu (Thomas James Menner, Jr.) (02/02/90)

ghh@clarity.Princeton.EDU (Gilbert Harman) writes:
> Michael Scriven discusses why well designed androids would
> have to have feelings and emotions in PRIMARY PHILOSOPHY
> (McGraw Hill, 1966), pp. 181-197.  The robot
> must be able to respond to emergencies, for example, and at
> such times will
> 
>    be short with people for his time, and if they persist,
>    he will be shorter, indeed rude... [stuff deleted]
>    ... reaction states.  But if the android is a
>    close match to human beings in its powers, it will need
>    to conserve the additional resources released by deep
>    emotions, having them triggered only by especially
>    threatening circumstances.  Within limits, emotions are
>    efficient, and feelings are necessary. (pp. 194-5)

It is not at all clear from the example provided (of an android/robot
dealing with others [human or otherwise] in an emergency situation)
that emotions or feelings are necessary to that entity's survival.
Certainly in many cases emotions *would* be helpful to some entity's
survival, but I don't see how a case could be made for the necessity
of emotions.  As a result I would argue against the claim that robots
or androids *need* emotions, unless the desired goal is to have
human-like behavior on the part of the machines.

John Haugeland (U. of Pitt Philosophy Dept.) suggested in a talk here at
CMU several years ago that human-like intelligence/behavior (including
emotional behavior) is dependent upon what he called an "ego
function", i.e. the computer/robot would have to have some sense of
"I, myself" in order for it to approach human intelligence.  These
seemed dance perilously close to the "consciousness" debate, but it
was an interesting idea.  Unfortunately he said he was just throwing
it out as a suggestion and could offer nothing more concrete.  I
suspect it might be intertwined with phenomenological/existential
thought along the lines of Heidegger (haven't people like Dreyfus and
Winograd suggested similar ideas?).

**************************************************************************
Thomas Menner			||   ARPA: tm11@andrew.cmu.edu
Carnegie-Mellon University	|| BITNET: tm11%andrew.cmu.edu@cmccvb
Pittsburgh, PA			||   UUCP: psuvax1!andrew.cmu.edu!tm11
**************************************************************************
"When you're swimmin' in the creek/And an eel bites your cheek/
 That's a moray!!"   -- Fabulous Furry Freak Brothers

kp@uts.amdahl.com (Ken Presting) (02/02/90)

Brian Yamauchi wrote:
>> One interesting idea is that emotions may be emergent phenomena of
>> the behaviors necessary for a system to survive in the real world.

Gilbert Harman replies:
>Michael Scriven discusses why well designed androids would
>have to have feelings and emotions in PRIMARY PHILOSOPHY
>(McGraw Hill, 1966), pp. 181-197.  The robot
>must be able to respond to emergencies, for example, and at
>such times will
>
>   be short with people for his time, and if they persist,
>   he will be shorter, indeed rude.  And what will his
>   reaction be to further interference when he has already
>   made his preoccupation plain?  It will be irritation . . .
>   {certain emotions} are important and efficient gradations
>   in the scale of motivational states  . . .
>   . . .   Whatever the mechanisms in the android,
>   however different from the human being he may be in
>   constitution, he must have inner states corresponding to
>   these stages of disregard for the finer feelings of
>   others that are justified by emergency plus
>   thoughtlessness, and he must know that he has them and be
>   able to recognize them even if they build up when he does
>   not intend that they should.
>    . . .
>   Within limits, emotions are
>   efficient, and feelings are necessary. (pp. 194-5)

I think Brian's question has an interesting aspect which should be
more directly addressed.  Scriven's example shows how useful emotional
*behavior* can be, making a strong case for robot designers to
satisfy themselves that their products include emotional behavior
patterns.

But do robot designers need to explicitly include emotions in the design,
or would it be sufficient to set rationality as the design goal?  If the
robot had built-in knowledge of human responses to curt answers, harsh
tones, etc., it could very well use those techniques to *manipulate* its
hearers.  In such a case, it would have emotional behavior without
emotional states.  Outside the robot might be ranting, but inside all
would be cool calculation.  Perhaps this design is more difficult to
implement than a robot without the talents of a con man, but let's for the
moment consider only the theoretical necessities.

Of course, simply engaging in agitated behavior indicates that the robot
is in an agitated state of some sort.  Proprioceptors could present this
state back to the robot, or ordinary sight and hearing might do so.  Does
this mean that the robot might *feel* emotional but not have actual
emotions?  Is that incoherent?  Perhaps it is incorrect to suppose that
rational motivation excludes emotional motivation.
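
To make the contrast concrete, here is a toy sketch (Python, with the
table of human responses and the thresholds invented for illustration)
of a robot that selects an emotional *display* by cool calculation,
without any corresponding internal emotional state:

    # Built-in knowledge of how humans react to curt answers, harsh tones, etc.
    HUMAN_RESPONSES = {
        "calm explanation": "listener cooperates, slowly",
        "curt reply":       "listener backs off",
        "harsh tone":       "listener complies under pressure",
    }

    def choose_display(goal_urgency):
        # Purely instrumental choice of display: outside it may look like
        # ranting, inside there is only this lookup.
        if goal_urgency > 8:
            return "harsh tone"
        if goal_urgency > 5:
            return "curt reply"
        return "calm explanation"

    display = choose_display(goal_urgency=9)
    print(display, "->", HUMAN_RESPONSES[display])

Whether such a robot thereby *has* emotional states, or merely feigns
them, is exactly the question at issue.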

Aaron Sloman writes:
> . . . the design requirements for resource-limited
>intelligent agents with multiple oft-changing sources of motivation
>in a complex and largely unpredictable world require mechanisms
>which INCIDENTALLY are capable of generating emotional states.
>
>FEELING emotions requires additional self-monitoring mechanisms.

Aaron is suggesting a different approach, more in line with Brian's
question.  If a behavioral state (with the internal states causally
antecedent to it) *can* constitute an emotional state, then the presence
of emotions is guaranteed.  At least it will be in robots that act
emotional.  Dedicated proprioceptors may be required only for enhanced
performance in the regulation of emotive behavior.


Here's a suggestion of Aaron's that I especially like:

>But the excited anticipation of a deep mathematical discovery
>and the sorrowful disappointment at subsequent failure could
>occur in a pretty well disembodied intelligence, provided that
>it had the right sort of cognitive architecture.

Masochist that I am, I find myself attracted to the odd view that emotional
states are actually cognitive states.  (No offense to rational holders
of related views :-)  For example, happiness is the belief that one's
important preferences are being realized.  Sorrow is the reverse.

On this view, a largely isolated thinker can have plenty of emotional states,
and can experience them by virtue of the same means by which it is aware of
its own beliefs!  Pretty slick.  Don't ask me how we (or robots) are
aware of our beliefs, though.  I vacillate between Wittgenstein and
Descartes.
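
A toy rendering of that view in Python (the belief and preference
structures here are invented for illustration, not a serious proposal):

    def happy(beliefs, preferences, importance_threshold=0.5):
        # Happiness, on this view, is just the belief that one's important
        # preferences are being realized; sorrow is the same test negated.
        important = [p for p, weight in preferences.items()
                     if weight > importance_threshold]
        return all(beliefs.get(p, False) for p in important)

    preferences = {"the proof goes through": 0.9, "lunch arrives soon": 0.2}
    beliefs = {"the proof goes through": True}      # belief that it is being realized
    print("happy:", happy(beliefs, preferences))    # True

Nothing in the sketch needs a body, which is what makes the view fit a
largely disembodied thinker.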

usenet@nlm-mcs.arpa (usenet news poster) (02/03/90)

I propose that emotions are actually vestigial behavior from an earlier
evolutionary level of cognition.  I'm suggesting that our ancestors
(and perhaps most living animals) were not endowed with what we think
of as rational behavior, and thus relied on a more primitive,
genetically-programmed form of cognition that predisposed them to
exhibit certain behaviors in certain situations.  These genetic
behaviors were selected for according to their survival value.  An
emotion like love might have led to increased production and
survival of children, fear might have led to increased survival in
threatening situations, etc.  There are other possibilities as well;
emotions might also be necessary for children's survival until they
develop more advanced cognitive abilities.
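
A caricature of the genetically programmed layer being proposed, in
Python, with the situations and responses invented here purely to make
the shape of the claim visible:

    # Situation -> behavior, fixed by the genes; no reasoning step anywhere.
    INNATE_RESPONSES = {
        "offspring in danger": "protect",     # the "love"-like predisposition
        "predator detected":   "flee",        # the "fear"-like predisposition
        "rival approaches":    "threaten",
    }

    def react(situation):
        return INNATE_RESPONSES.get(situation, "do nothing")

Note that nothing in the table involves a reasoning step, which is the
feature Ken Presting picks up on in the follow-up below.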

kp@uts.amdahl.com (Ken Presting) (02/03/90)

In article <11271@nlm-mcs.arpa> pkarp@tech.NLM.NIH.GOV (Peter Karp) writes:
>
>I propose that emotions are actually vestigial behavior from an earlier
>evolutionary level of cognition.  I'm suggesting that our ancestors
>(and perhaps most living animals) were not endowed with what we think
>of as rational behavior, and thus relied on a more primitive,
>genetically-programmed form of cognition that predisposed them to
>exhibit certain behaviors in certain situations.

The way you describe the relation between situation and behavior makes
cognition almost unnecessary.  Primitive systems would use only sensation.

>These genetic
>behaviors were selected for according to their survival value.  An
>emotion like love might have led to increased production and
>survival of children, fear might have led to increased survival in
>threatening situations, etc.

So the primordial mental states would have been emotional rather than
rational.  It's hard to tell, looking at behavior from the outside, just
what the process is that causes the behavior.  As behavior increases in
complexity, it seems to me that emotion and cognition appear concurrently.

Looking at animals under the skin, as chemical systems, we can identify
physiological states that correspond well with so-called "drives".  Low
blood sugar and empty stomach --> hunger, for example.  The sex drive is
more complicated, but clearly has chemical components.  These drives are
surely the cause of active behavior (as opposed to resting).
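
A minimal sketch of that drive-as-physiological-state picture, in
Python, with the particular variables and thresholds invented here:

    def hunger_drive(blood_sugar_mg_dl, stomach_fill):
        # Low blood sugar plus an empty stomach --> hunger, as above.
        return blood_sugar_mg_dl < 70 and stomach_fill < 0.2

    print(hunger_drive(blood_sugar_mg_dl=65, stomach_fill=0.1))   # True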

Would you say that the subjective experience of having a drive (i.e. feeling
hungry) is what emotions are?  If not, how else would you distinguish
emotion from cognition?

norman@cogsci.ucsd.EDU (Donald A Norman-UCSD Cog Sci Dept) (02/04/90)

In article xxx, person YYY  writes:
^
^I propose that emotions are actually vestigial behavior from an earlier
^evolutionary level of cognition.  

Sorry, but the evidence is that emotions are a sign of a HIGHER level of
evolution.  Let me make sure that this old-fashioned view is quickly
eliminated.

Emotional expression plays an extremely important communicative role. Emotional
expressions signify intentions and current state, both of which are essential
for effective communication.

Emotional computation is done primarily through chemical mechanisms, which
provide an extremely effective, broad-band, parallel channel, with surprising
selectivity and speed (well, only surprising to those of us who think
electrical signals superior.  But electrical signals have to be channeled over
neurons, whereas chemicals can be dumped into the brain's ventricles and cover
large areas of the brain rapidly, where the chemical binding mechanisms assure
extreme selectivity.)
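
A cartoon of that contrast in Python (module names, chemicals and
receptors invented here): a chemical broadcast reaches everything at
once, and binding-style selectivity decides which modules respond.

    class Module:
        def __init__(self, name, receptors):
            self.name = name
            self.receptors = set(receptors)

        def on_broadcast(self, chemical):
            # Binding provides the selectivity: only matching receptors respond.
            if chemical in self.receptors:
                print(self.name, "responds to", chemical)

    modules = [Module("fight-or-flight", {"adrenaline"}),
               Module("digestion", {"insulin"}),
               Module("route-planning", set())]

    def broadcast(chemical):
        # One "dump into the ventricles" reaches every module in parallel...
        for m in modules:
            m.on_broadcast(chemical)          # ...but only some of them react.

    broadcast("adrenaline")                   # only fight-or-flight responds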

Carefully examine animal behavior: the higher species display more emotions
than the lower ones.  Compare the emotionality of a snake and fish with that of
a mammal.  Compare a monkey with a dog.   

And the only organism that seems to display humor is the human.

Don Norman                         	       INTERNET:  dnorman@ucsd.edu
Department of Cognitive Science D-015	       BITNET:    dnorman@ucsd
University of California, San Diego	       AppleLink: dnorman
La Jolla, California 92093 USA                 FAX:       (619) 534-1128