[comp.ai] Robot pain

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/17/89)

In article <1989Mar4.152943.10902@cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>I agree.  Of course, the same thing applies to robots.  Suppose a
>humanoid robot walking next to you stubbed his toe and said "That
>hurts!".  Would you respond with "No, you're just programmed to say
>that when you damage yourself!"

Sure.  Why not?

Maybe I happen to know that the robot *is* programmed to say that.

After all, all you've said about the robot is that it's humanoid and
that it says "that hurts" when it stubs its toe.

But suppose you tell me lots more about the robot.  Maybe it passes
the LTT and the TTT, and so on.  However, if it does pass the various
Turing Tests, then it follows that you can't tell me all that much
about its programming -- because right now we have essentially no
idea how to program anything that can pass those tests.

Maybe, when we know a lot more about such robots (assuming they're
possible at all) and about ourselves, it will be pretty clear that
that robot does feel pain.  But maybe it will instead be clear that
it's just programmed to behave as if it feels pain.  How do you know
it won't turn out that way?

bwk@mbunix.mitre.org (Barry W. Kort) (03/18/89)

In article <337@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

 > Maybe, when we know a lot more about such robots (assuming they're
 > possible at all) and about ourselves, it will be pretty clear that
 > that robot does feel pain.  But maybe it will instead be clear that
 > it's just programmed to behave as if it feels pain.  How do you know
 > it won't turn out that way?

Perhaps it would help if we define pain.  The robot has a mechanical
body, which it relies on for locomotion and interaction with the
surrounding environment.  Preservation of the functional integrity
of that mechanical body is a priority if the robot is to pursue
goals which depend on that body.  Therefore, the robot needs to
be informed of any danger or damage to its pieceparts.  Mechanical
stress sensors report such information, and the robot can act to
moderate or ameliorate conditions which endanger its long-term
corporeal well-being.  In humans, we call such information "pain".  
In robots we can choose to call it what we like, but functionally
it serves the same purpose.
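
The functional account above amounts to a simple control loop, which
can be sketched in a few lines.  This is only a toy illustration, and
every name in it (the sensor type, the threshold, the actions) is
hypothetical rather than anything from the thread:

```python
# Toy sketch of the "functional pain" loop described above:
# stress sensors report damage to the robot's piece-parts, and a
# signal past a threshold preempts the current goal in favour of
# ameliorative action.  All names here are made up for illustration.

from dataclasses import dataclass

@dataclass
class StressReading:
    part: str       # which piece-part this sensor monitors
    level: float    # 0.0 = nominal, 1.0 = imminent failure

PAIN_THRESHOLD = 0.5  # assumed cutoff for goal preemption

def select_action(current_goal: str, readings: list[StressReading]) -> str:
    """Pick the robot's next action.

    Damage signals past the threshold override the current goal --
    functionally, the role the post assigns to "pain"."""
    worst = max(readings, key=lambda r: r.level, default=None)
    if worst is not None and worst.level >= PAIN_THRESHOLD:
        return f"protect {worst.part}"
    return current_goal

# Example: a stubbed toe preempts walking.
readings = [StressReading("toe", 0.8), StressReading("knee", 0.1)]
print(select_action("walk to lab", readings))  # -> protect toe
```

Of course, whether such a preemption signal *is* pain, or merely plays
pain's functional role, is exactly what the rest of the thread disputes.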

--Barry Kort

jack@cs.glasgow.ac.uk (Jack Campin) (03/22/89)

bwk@mbunix (Barry Kort) wrote:

  Perhaps it would help if we define pain [...] the robot needs to be informed
  of any danger or damage to its pieceparts.  Mechanical stress sensors report
  such information, and the robot can act to moderate or ameliorate conditions
  which endanger its long-term corporeal well-being.  In humans, we call such
  information "pain".  In robots we can choose to call it what we like, but
  functionally it serves the same purpose.

This definition doesn't work.  If, while under spinal block anaesthesia, I see
a rat starting to chew my toe off, I'm getting information about potential
damage to my body, but that information does not constitute "pain".

-- 
Jack Campin  *  Computing Science Department, Glasgow University, 17 Lilybank
Gardens, Glasgow G12 8QQ, SCOTLAND.    041 339 8855 x6045 wk  041 556 1878 ho
INTERNET: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk    USENET: jack@glasgow.uucp
JANET: jack@uk.ac.glasgow.cs     PLINGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack