[mod.ai] moral responsibility

PHayes@SRI-KL.ARPA (cas) (11/03/86)

The idea of banning vision research (or any other research, for that matter) is
even sillier and more dangerous than Bill Trost points out.  The analogy
is not to banning automatic transmissions, but to banning THINKING about
automatic transmissions.  And banning thinking about anything is about as
dangerous as any course of action can be, no matter how high-minded or
sincerely morally concerned those who call for it may be.
To be fair to Weizenbaum, he does have a certain weird consistency.  He tells
me, for example, that in his view helicopters are intrinsically evil (as
the Vietnam War has shown).  One can see how the logic works: if an artifact
is (or can be) used to do more bad than good, then it is evil, and research
on evil things is immoral.
While this is probably not the place to start a debate in theoretical ethics,
I do think that this view, while superficially attractive, simply doesn't stand
up to a little thought, and can be used to label as wicked anything one
dislikes for any reason at all.  Weizenbaum has made a successful career by
systematically attacking AI research on the grounds that it is somehow
immoral, and finding a large and willing audience.  He doesn't make me squirm.
Pat Hayes
-------

rggoebel%watdragon.waterloo.edu@RELAY.CS.NET (Randy Goebel LPAIG) (11/05/86)

Patrick Hayes writes
> ...Weizenbaum has made a successful career by
> systematically attacking AI research on the grounds that it is somehow
> immoral, and finding a large and willing audience.

Weizenbaum does, indeed and unfortunately, attract a large, willing and
naive audience.  For some reason, there seems to be a large not-quite-
computer-literate population that wants to believe that AI is potentially
dangerous to ``real'' intelligence.  But it is not completely fair to
conclude that Weizenbaum believes AI to be immoral;  it is correct for
Patrick to qualify his conclusion as ``somehow'' immoral.  Weizenbaum
acknowledges the general concept of intelligence, with both human and artificial
kinds as manifestations.  He even prefers the methodology of the artificial
kind, especially when it relieves us from experiments on, say, the visual
cortex of cats.  

Weizenbaum does claim that certain aspects of AI are immoral but, as the
helicopter example illustrates, his judgment is not exclusive to AI.  As AI
encroaches on those things Weizenbaum values most (e.g., human
dignity, human life, human emotions), it is natural for him to speak about
the potential dangers that AI poses.  I suspect that, if Weizenbaum were
a nuclear physicist instead of a computer scientist, he would focus more
attention on the immorality of fission and fusion.

It is Weizenbaum's own principles of morality that determine these judgments.
He acknowledges that, and places his principles in the public forum every
time he speaks.