[mod.comp-soc] responsibility

taylor@hplabsc.UUCP (07/18/86)

This article is from rti-sel!dg_rtp!throopw%mcnc.csnet@csnet-relay.ARPA
 and was received on Fri Jul 18 02:44:07 1986
 

>      Ron Pfeifle, Lowly UnderGrad ....watdragon!rfpfeifle
>
> An aside--let's assume for the moment that we've developed a thinking machine
> (I see those flame jets firing up now), and that we "program" it to be "bad".
> Is it the machine that's "bad," the "program," the instance of the program on
> the machine at the time the machine is running, or is the whole thing neutral and
> the programmers are really the baddies?...

You don't have to get nearly so esoteric to run into this problem.  What
about someone who trains an attack dog to kill?  If it does, in fact,
kill, is the dog, or the trainer, or the instance of training as done to
this dog responsible?  Replace the dog with a mechanical mantrap.
Replace the mantrap with a soldier.  And so on and on.  Normally, the
line is drawn at the point where the agent can understand the
consequences of its actions.  Thus, the dog and the mantrap are
innocent, and the soldier is not.  Similarly, if we admit that the
"thinking machine" understands, it is not innocent.

Again, I emphasize that you don't need to invoke esoteric AI to get into
arguments over whether understanding exists.  There are humans who are
held not to be responsible for their actions because they are thought
not to understand.