[mod.ai] AI and the Arms Race

LIN@XX.LCS.MIT.EDU (11/08/86)

[I posted a message from AILIST on ARMS-D, and got back this reply.]

Date: Saturday, 8 November 1986  12:55-EST
From: ihnp4!utzoo!henry at ucbvax.Berkeley.EDU
To:   Arms-Discussion
Re:   Professionals and Social Responsibility for the Arms Race

> ... This year, Dr. Weizenbaum of MIT was the chosen speaker...
> The important points of the second talk can be summarized as:
>    1) not all problems can be reduced to computation; for
>       example, how could you conceive of coding the human
>       emotion of loneliness?

I don't want to get into an argument about it, but it should be pointed
out that this is debatable.  Coding the emotion of loneliness is difficult
to conceive of, at least in part, because we don't have a precise definition
of what the "emotion of loneliness" is.  Define it in terms of observable
behavior, and that observable behavior can most certainly be coded.
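For instance, here is a deliberately crude sketch in C of what that might
look like.  The behavioral criteria and thresholds below are invented
purely for illustration; this is not offered as a serious psychological
model, only as an existence proof that a behavioral definition is a
computable one.

    #include <stdio.h>

    /* Toy behavioral "definition" of loneliness.  Every field and
       threshold here is made up for the sake of argument. */
    struct observed_behavior {
        int social_contacts_per_week;  /* conversations initiated or joined */
        int hours_alone_per_day;       /* waking hours spent in isolation */
        int seeks_company;             /* 1 if the subject tries to find company */
    };

    /* Once "lonely" is defined by observable behavior, the
       definition becomes an ordinary computable predicate. */
    int is_lonely(struct observed_behavior b)
    {
        return b.social_contacts_per_week < 3
            && b.hours_alone_per_day > 10
            && b.seeks_company;
    }

    int main(void)
    {
        struct observed_behavior subject = { 1, 14, 1 };
        printf("lonely by this definition? %s\n",
               is_lonely(subject) ? "yes" : "no");
        return 0;
    }

The point is not that this captures loneliness; it is that once the
definition is behavioral, "can it be coded" is no longer the hard part.
The hard part is agreeing on the definition.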

>   2) AI will never duplicate or replace human intelligence
>      since every organism is a function of its history.

This just says that we can't exactly duplicate (say) human intelligence
without duplicating the history as well.  The impossibility of exact
duplication implies nothing about our ability to duplicate the important
characteristics.  It's impossible to exactly duplicate Dr. Weizenbaum too, but
if he were to die, I presume MIT *would* replace him.  I think Dr. W. is
on very thin ice here.

>    5) technical education that neglects language, culture,
>       and history may need to be rethought.

Just to play devil's advocate, it would also be worthwhile to rethink
non-technical education that covers language, culture, and history while
completely neglecting the technological basis of our civilization.

>    8) every researcher should assess the possible end use of
>       their own research, and if they are not morally comfortable
>       with this end use, they should stop their research...
>       He specifically referred to research in machine vision, which he
>       felt would be used directly and immediately by the military for 
>       improving their killing machines...

I'm afraid this is muddy thinking again.  *All* technology has military
applications.  Mass-production of penicillin, a development of massive
humanitarian significance, came about because of massive military funding
in World War II, funding justified by the tremendous military significance
of effective antibiotics.  (WW2 was the first major war in which casualties
from disease were fewer than those from bullets and the like.)  It's hard
to conceive of a field of research which doesn't have some kind of military
application.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

shen5%watdcsu.waterloo.edu@RELAY.CS.NET ("B. Lindsay Patten") (11/18/86)

In article <LIN.12253338541.BABYL@XX.LCS.MIT.EDU> LIN@XX.LCS.MIT.EDU writes:
>[I posted a message from AILIST on ARMS-D, and got back this reply.]
>From: ihnp4!utzoo!henry at ucbvax.Berkeley.EDU
>Re:   Professionals and Social Responsibility for the Arms Race

[some valid objections to arguments made by Dr. Weizenbaum on problems with AI]

>>    8) every researcher should assess the possible end use of
>>       their own research, and if they are not morally comfortable
>>       with this end use, they should stop their research...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>       He specifically referred to research in machine vision, which he
>>       felt would be used directly and immediately by the military for 
>>       improving their killing machines...
>
>I'm afraid this is muddy thinking again.  *All* technology has military
>applications. 

[examples of good things that came out of military research]

>It's hard
>to conceive of a field of research which doesn't have some kind of military
>application.
>
>				Henry Spencer @ U of Toronto Zoology
>				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

This is by far the most common objection I've heard since Dr. Weizenbaum's
lecture, and one which I think sidesteps the point.  Read the first three lines
of point 8 above.  The real point Dr. Weizenbaum was trying to make (in my
opinion) was that we should weigh the good and bad applications of our work
and decide which outweighs the other.  The examples that he gave were just
areas in which he personally believed the bad applications outweighed the
good.  He was very explicit that he was just presenting HIS personal opinions
on the merits of these applications.  Basically, he said that if you feel
your work will do more harm than good, you should find another area to work in.

My objection to his talk is that he seemed to want to weigh entire applications
against one another.  It seems to me that we should instead weigh the relative
impact of our research on the applications we approve of against its impact on
those we object to.

Lindsay Patten
|Cognitive Engineering Group                                     (519) 746-1299|
|Pattern Analysis and Machine Intelligence Lab                   lindsay@watsup|
|University of Waterloo           {decvax|ihnp4}!watmath!watvlsi!watsup!lindsay|