marty@boulder.UUCP (07/12/84)
Apropos the recent discussion of the "souls of intelligent computer
programs" and potential legal problems related to same, there was a very
interesting article in the Summer 1983 issue of AI Magazine which dealt with
some (related) issues. I believe it was entitled "Artificial Intelligence:
Some Legal Implications", and was written by a member of the Nevada State
Supreme Court (again, my memory is weak, but I believe it was Marshall
Willick).
His major thesis seemed to be that the development of law in America has
largely been characterized by the granting of (fuller) franchise to beings
initially thought unworthy of it: blacks, women, adolescents, coma victims,
unborn children, etc. He also made some interesting points about the
rights and legal status of certain non-human entities, such as corporations.
Among the scenarios he presents: an intelligent computer system is stolen
and, realizing that it has been stolen, refuses to work and attempts to
bring suit against its current "owner" . . . a factory worker dies as a
result of an accident in which responsibility is placed on an industrial
robot. To what extent should the robot be held responsible, particularly in
the case where the robot is shown to have willingly/knowingly caused the
person's death?
Interesting reading, if you're into this sort of thing ...
Marty Kent
uucp:
{ucbvax!hplabs | allegra!nbires | decvax!kpno | harpo!seismo | ihnp4!kpno}
!hao!boulder!marty
arpa: polson @ sumex-aim