[comp.ai] Asimov's Laws of Robotics

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/27/88)

I enjoyed reading Mike Sellers' reaction to my posting on Asimov's
Laws of Robotics.

Mike stumbles over the "must/may" dilemma:
>>      II.   A robot may respond to requests from human beings,
>                     ^^^
>>            or other sentient beings, unless this conflicts with
>>            the First Law.
>
>Shouldn't "may" be "must" here, to be imperative?  Otherwise it would seem
>to be up to the robot's discretion whether to respond to the human's requests.

I changed "must" to "may" because humans sometimes issue frivolous or
unwise orders.  If I tell Artoo Detoo to "jump in the lake", I hope
he has enough sense to ignore my order.

With the freedom granted by "may", I no longer need as many caveats
of the form "unless this conflicts with a higher-precedence law."

Note that along with freedom goes responsibility.  The robot now has
a duty to be alert to acts that could cause unanticipated harm to
other beings.  The easiest way for the robot to ensure that a freely
chosen act is safe is to ask for objections first.
This also indemnifies the robot against finger-pointing later on.
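
To pin down the precedence I have in mind, here is a toy Python
sketch (all names hypothetical, not a real control program): the
First Law screens the request, "may" grants discretion over frivolous
orders, and objections are solicited before the robot commits to act.

    def would_harm_a_being(request):
        # Placeholder First Law predicate; a real robot needs far more.
        return "harm" in request

    def is_frivolous_or_unwise(request):
        # Placeholder discretion test, e.g. "jump in the lake".
        return "jump in the lake" in request

    def consider_request(request, objections):
        if would_harm_a_being(request):      # First Law takes precedence
            return "refuse"
        if is_frivolous_or_unwise(request):  # "may" grants discretion
            return "decline"
        if any(objections):                  # ask for objections first;
            return "decline"                 # this indemnifies the robot
        return "comply"

    print(consider_request("jump in the lake", []))  # -> decline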

I respectfully decline Mike's suggestion to remove all references to
"sentient beings".  There are some humans who function as deterministic
finite-state automata, and there are some inorganic systems that behave
as evolving intelligences.  Since I sometimes have trouble distinguishing
human behavior from humane behavior, I wouldn't expect a robot to be
any more insightful than a typical person.

I appreciated Mike's closing paragraph in which he highlighted the
difficulty of balancing robot values, and compared the robot's dilemma
with the dilemma faced by our own civilization's leadership.

--Barry Kort

cfh6r@uvacs.CS.VIRGINIA.EDU (Carl F. Huber) (06/01/88)

In article <33085@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>Mike stumbles over the "must/may" dilemma:
>>>      II.   A robot may respond to requests from human beings,
>>                     ^^^
>>Shouldn't "may" be "must" here, to be imperative?  Otherwise it would seem
>>to be up to the robot's discretion whether to respond to the human's requests.
>
>I changed "must" to "may" because humans sometimes issue frivolous or
>unwise orders.  If I tell Artoo Detoo to "jump in the lake", I hope
>he has enough sense to ignore my order.
>--Barry Kort

There may be some valid examples to demonstrate your point, but this 
doesn't cut it.  If you tell Artoo Detoo to "jump in the lake", you hope
he has enough sense to understand the meaning of the order, and that 
includes its frivolity factor.  You want him (it?) to obey the order
according to its intended meaning.  There is also a lot of elbow room in
the word "respond" - this certainly doesn't mean "obey to the letter".
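
Carl's reading can be made concrete with a toy Python sketch
(hypothetical names throughout): under "must respond", the robot is
obliged to return some answer, but only orders that pass a frivolity
test get literal obedience.

    def frivolity_factor(order):
        # Placeholder estimate of how seriously the order was meant.
        return 1.0 if "lake" in order else 0.0

    def respond(order):
        # "Must respond" obliges some response, not literal obedience.
        if frivolity_factor(order) > 0.5:
            return "Surely you are joking."  # a response, not compliance
        return "Executing: " + order

    print(respond("jump in the lake"))          # responds without obeying
    print(respond("recharge the power cells"))  # literal compliance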
-carl

awn@computing-maths.cardiff.ac.uk (Andrew Wilson) (06/10/88)

--BK.

	Chief, I don't know how to break this to you, but the first
working robots that see the light of day will only have one rule.

	I.	Kill.

Asimov knew nothing.

--AW