[comp.ai] Laws of Robotics

msellers@mntgfx.mentor.com (Mike Sellers) (05/26/88)

I've always liked Asimov's Laws of Robotics; I suspect that they will remain
firmly entrenched in the design and engineering of ambulatory AI systems for 
some time (will they ever be obsolete?).  I have some comments on the 
variations Barry proposed, however...

In article <31738@linus.UUCP>, bwk@mbunix.UUCP writes:
> 
> I propose the following variation on Asimov:
> 
>       I.   A robot may not harm a human or other sentient being,
>            or by inaction permit one to come to harm.
> 
>      II.   A robot may respond to requests from human beings,
                     ^^^
>            or other sentient beings, unless this conflicts with
>            the First Law.

Shouldn't "may" be "must" here, to be imperative?  Otherwise it would seem 
to be up to the robot's discretion whether to respond to the human's requests. 

> 
>     III.   A robot may act to protect its own existence, unless this
>            conflicts with the First Law.

Or the Second Law.  Otherwise people could tell robots to destroy themselves
and the robots would obey.  Of course, if the destruction were necessary to
keep a human from harm, the robot would obey, in keeping with the First Law.

> 
>      IV.   A robot may act to expand its powers of observation and
>            cognition, and may enlarge its knowledge base without limit.

Unless such expansion conflicts with the First, Second, or Third (?) Laws.
This is a worthy addition, but unless constrained by the other rules it 
contains within it the seeds of Prometheus (from the movie "Demon Seed" -- 
ick, what a title :-) or Colossus (from "The Forbin Project").  The last 
thing we want is a robot that learns and cogitates at the expense of humans.

> Can anyone propose a further refinement to the above?
> 
> --Barry Kort

In addition to what I've said above, I think that all references to generic
"sentient beings" should be removed.  Either this is too narrow in meaning,
providing only for humans (which are already explicitly stated in the Law),
or it is too general, easily encompassing *artificial* sentient beings, i.e.
robots.  This is precisely what the Laws were designed to prevent.  I like
the intent, and hopefully some way of engendering general pacifism and 
deference to humans, animals, and to some degree other robots can be found.
Perhaps a Fifth Law:  
         "A robot may not harm another robot, or by inaction permit 
          one to come to harm, unless such action or inaction would 
          conflict with the First Law."

Note that by only limiting the conflict resolution to the First Law, a robot 
could not respond to a human's request to harm another robot unless by not 
responding a human would come to harm (V takes precedence over II), and a 
robot might well sacrifice its existence for that of another (V takes
precedence over III).  Of course, this wouldn't necessarily prevent a military
commander from convincing a warbot that destroying a bunch of other warbots
was necessary to keep some humans from harm... I guess this is what is done
with human armies nowadays anyway.
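The precedence ordering above (I over everything, V over II and III) can be
sketched as a simple rule cascade.  This is just my own toy illustration, not
anything from Asimov or Barry -- the `Action` flags and the `permitted`
function are hypothetical names I've made up for the sake of the example:

```python
# Toy sketch of the law-precedence discussion above: laws are checked in
# priority order, and a lower law only applies if no higher law has
# already decided.  All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical flags describing an action's consequences."""
    harms_human: bool = False
    harms_robot: bool = False
    prevents_human_harm: bool = False
    requested_by_human: bool = False
    self_destructive: bool = False

def permitted(action: Action) -> bool:
    # First Law: never harm a human.
    if action.harms_human:
        return False
    # Proposed Fifth Law: don't harm another robot, unless doing so
    # is necessary to keep a human from harm (First Law override).
    if action.harms_robot and not action.prevents_human_harm:
        return False
    # Second Law: obey human requests already vetted by the laws above.
    if action.requested_by_human:
        return True
    # Third Law: no self-destruction unless it protects a human.
    if action.self_destructive and not action.prevents_human_harm:
        return False
    return True

# A human orders the robot to destroy another robot: V overrides II...
assert not permitted(Action(harms_robot=True, requested_by_human=True))
# ...unless destroying the warbot keeps some humans from harm,
# which is exactly the military-commander loophole noted above.
assert permitted(Action(harms_robot=True, requested_by_human=True,
                        prevents_human_harm=True))
```

Note how the loophole falls straight out of the ordering: once
`prevents_human_harm` is asserted, the Fifth Law steps aside.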

Comments?

-- 
Mike Sellers                           ...!tektronix!sequent!mntgfx!msellers
Mentor Graphics Corp., EPAD            msellers@mntgfx.MENTOR.COM
"Hi.  So, can any of you make animal noises?" 
                   -- the first thing Francis Coppola ever said to me