HDavis.pa@XEROX.COM (Harley Davis) (07/20/88)
Date: Tue, 19 Jul 88 12:34 EDT
Sender: hdavis.pa@Xerox.COM
From: Harley Davis <HDavis.pa@Xerox.COM>
Subject: Does AI Kill?
In-reply-to: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>'s message of Mon, 18 Jul 88 00:20 EDT
To: AIList@AI.AI.MIT.EDU

I used to work as the artificial intelligence community of the Radar Systems Division at Raytheon Co., the primary contractor for the Aegis detection radar. (Yes, that's correct - I was the only one in the community.) Unless the use of AI in Aegis was kept extremely classified from everyone there, it did not use any of the techniques we would normally call AI, including rule-based expert systems. However, it probably used statistical/Bayesian techniques to interpret and correlate the data from the transponder, the direct signals, etc. to come up with a friend/foe analysis. This analysis is simplified by the fact that our own jets give off special transponder signals.

But I don't think this changes the nature of the question - if anything, it's even more horrifying that our military decision makers rely on programs ~even less~ sophisticated than the most primitive AI systems.

-- Harley Davis  HDavis.PA@Xerox.Com
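[The statistical/Bayesian friend/foe correlation Davis describes can be sketched as a naive-Bayes update over independent evidence sources. This is only an illustration of the general technique, not the actual Aegis algorithm; the priors, likelihoods, and evidence sources below are invented for the example.]

```python
# Hedged sketch: naive-Bayes friend/foe classification over
# independent evidence sources. All probabilities are invented
# for illustration; no real IFF parameters are represented.

def posterior_hostile(prior_hostile, evidence):
    """Combine independent evidence via Bayes' rule.

    evidence: list of (P(obs | hostile), P(obs | friendly)) pairs,
    one per observation (transponder reply, radar profile, ...).
    Returns P(hostile | all observations)."""
    p_h = prior_hostile
    p_f = 1.0 - prior_hostile
    for p_obs_hostile, p_obs_friendly in evidence:
        p_h *= p_obs_hostile
        p_f *= p_obs_friendly
    return p_h / (p_h + p_f)

# Two hypothetical observations:
#   no friendly transponder reply was received, and
#   the radar return is weakly consistent with a military profile.
evidence = [
    (0.9, 0.1),   # P(no IFF reply | hostile), P(no IFF reply | friendly)
    (0.6, 0.4),   # P(profile | hostile), P(profile | friendly)
]
p = posterior_hostile(0.05, evidence)
print(round(p, 3))
```

The point of the sketch is that the output is only as good as the assumed priors and likelihoods: with a low prior for "hostile," even strong transponder evidence leaves substantial uncertainty, which is exactly where a human decision maker is supposed to add judgment.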
Gavan@SAMSON.CADR.DIALNET.SYMBOLICS.COM (Gavan Duffy) (07/20/88)
Date: Tue, 19 Jul 88 20:50 EDT
From: Gavan Duffy <Gavan@SAMSON.CADR.DIALNET.SYMBOLICS.COM>
Subject: Re: does AI kill?
To: AIList%AI.AI.MIT.EDU@Riverside.SCRC.Symbolics.COM
In-Reply-To: The message of 15 Jul 88 06:29 PDT from att!occrsh!occrsh.ATT.COM!tas@bloom-beacon.mit.edu

> No, AI does not kill, but AI-people do.

Only if they have free will.
Hoffman.es@Xerox.COM (Rodney Hoffman) (07/22/88)
Sender: Hoffman.es@Xerox.COM
Date: Tue, 19 Jul 88 12:13 EDT
Subject: Re: does AI kill?
To: AIList@AI.AI.MIT.EDU
From: Rodney Hoffman <Hoffman.es@Xerox.COM>

The July 18 Los Angeles Times carries an op-ed piece by Peter D. Zimmerman, a physicist who is a senior associate at the Carnegie Endowment for International Peace and director of its Project on SDI Technology and Policy: MAN IN LOOP CAN ONLY BE AS FLAWLESS AS COMPUTERS.

  [In the Iranian Airbus shootdown,] the computers aboard ship use artificial intelligence programs to unscramble the torrent of information pouring from the phased array radars. These computers decided that the incoming Airbus was most probably a hostile aircraft, told the skipper, and he ordered his defenses to blast the bogey (target) out of the sky. The machine did what it was supposed to, given the programs in its memory. The captain simply accepted the machine's judgment, and acted on it....

  Despite the fact that the Aegis system has been exhaustively tested at the RCA lab in New Jersey and has been at sea for years, it still failed to make the right decision the first time an occasion to fire a live round arose. The consequences of a similar failure in a "Star Wars" situation could lead to the destruction of much of the civilized world. [Descriptions of reasonable scenarios ....]

  The advocates of strategic defense can argue, perhaps plausibly, that we have now learned our lesson. The computers must be more sophisticated, they will say. More simulations must be run and more cases studied so that the artificial intelligence guidelines are more precise.

  But the real lesson from the tragedy in the Persian Gulf is that computers, no matter how smart, are fallible. Sensors, no matter how good, will often transmit conflicting information. The danger is not that we will fail to prepare the machines to cope with expected situations. It is the absolute certainty that crucial events will be ones we have not anticipated.
  Congress thought we could prevent a strategic tragedy by insisting that all architectures for strategic defense have the man in the loop. We now know the bitter truth that the man will be captive to the computer, unable to exercise independent judgment because he will have no independent information; he will have to rely upon the recommendations of his computer adviser. It is another reason why strategic defense systems will increase instability, pushing the world closer to holocaust -- not further away.

- - - - -

I'm not at all sure that Aegis really uses much AI. But many lessons implicit in Zimmerman's piece are well-taken. Among them:

* The blind faith many people place in computer analysis is rarely justified. (This of course includes the hype the promoters use to sell systems to military buyers, to politicians, and to voters. Perhaps the question should be "Does hype kill?")

* Congress's "man in the loop" mandate is an unthinking palliative, not worth much, and it shouldn't lull people into thinking the problem is fixed.

* To have a hope of being effective, "people in the loop" need additional information, training, and options.

* Life-critical computer systems need stringent testing by disinterested parties (including operational testing whenever feasible).

* Many, perhaps most, real combat situations cannot be anticipated.

* The hazards inherent in Star Wars should rule out its development.

-- Rodney Hoffman
-------
MORRISET@URVAX.BITNET (07/26/88)
Date: Fri, 22 Jul 88 10:26 EDT
From: MORRISET%URVAX.BITNET@MITVMA.MIT.EDU
Subject: Re: Does AI kill?
To: ailist@ai.ai.mit.edu
X-Original-To: edu%"ailist@ai.ai.mit.edu", MORRISETT

Thinking on down the line... Suppose we eventually construct an artificially intelligent "entity." It thinks, "feels", etc... Suppose it kills someone because it "dislikes" them. Should the builder of the entity be held responsible?

Just a thought...

Greg Morrisett
BITNET%"MORRISET@URVAX"
GKMARH@IRISHMVS.BITNET (steven horst 219-289-9067) (07/26/88)
X-Delivery-Notice: SMTP MAIL FROM does not correspond to sender.
Date: Sat, 23 Jul 88 18:36 EDT
To: ailist@ai.ai.mit.edu
From: steven horst 219-289-9067 <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Does AI kill? (long)

I must say I was taken by surprise by the flurry of messages about the tragic destruction of a commercial plane in the Persian Gulf. But what made the strongest impression was the *defensive* tone of a number of the messages. The basic defensive themes seemed to be:

  (1) The tracking system wasn't AI technology,
  (2) Even if it WAS AI technology, IT didn't shoot down the plane,
  (3) Even if it WAS AI and DID shoot down the plane, we mustn't let people use that as a reason for shutting off our money.

Now all of these are serious and reasonable points, and merit some discussion. I think we ought to be careful, though, that we don't just rationalize away the very real questions of moral responsibility involved in designing systems that can affect (and indeed terminate) the lives of many, many people. These questions arise for AI systems and non-AI systems, for military systems and commercial expert systems.

Let's start simple. We've all been annoyed by design flaws in comparatively simple and extensively tested commercial software, and those who have done programming for other users know how hard it is to idiotproof programs much simpler than those needed by the military and by large private corporations. If we look at expert systems, we are faced with additional difficulties: if the history of AI has shown anything, it has shown that "reducing" human reasoning to a set of rules, even within a very circumscribed domain, is much harder than people in AI 30 years ago imagined.

But of course most programs don't have life-and-death consequences. If WORD has bugs, Microsoft loses money, but nobody dies. If SAM can't handle some questions about stories, the Yale group gets a grant to work on PAM.
But much of the money that supports AI research comes from DOD, and the obvious implication is that what we design may be used in ways that result in dire consequences. And it really won't do to say, "Well, that isn't OUR fault.... after all, LOTS of things can be used to hurt people. But if somebody gets hit by a car, it isn't the fault of the guy on the assembly line." First of all, sometimes it IS the car company's fault (as was argued against Audi). But more to the point, the moral responsibility we undertake in supplying a product increases with the seriousness of the consequences of error and with the uncertainty of proper performance. (Of course even the "proper" performance of weapons systems involves the designer in some moral responsibility.) And the track record of very large programs designed by large teams - often with no one on the team knowing the whole system inside and out - is quite spotty, especially when the system cannot be tested under realistic conditions.

My aim here is to suggest that lots of people in AI (and other computer-related fields) are working on projects that can affect lots of people somewhere down the road, and that there are some very real questions about whether a given project is RIGHT - questions which we have a right and even an obligation to ask of ourselves, and not to leave for the people supplying the funding. Programs that can result in the death or injury of human beings are not morally neutral. Nor are programs that affect privacy or the distribution of power or wealth. We won't all agree on what is good, what is necessary evil, and what is unpardonable, but that doesn't mean we shouldn't take very serious account of how our projects are INTENDED to be used, how they might be used in ways we don't intend, how flaws we overlook may result in tragic consequences, and how a user who lacks our knowledge or uses our product in a context it was not designed to deal with can cause grave harm.
Doctors, lawyers, military professionals and many other professionals whose decisions affect other people's lives have ethical codes. They don't always live up to them, but there is some sense of taking ethical questions seriously AS A PROFESSION. It is good to see groups emerging like Computer Professionals for Social Responsibility. Perhaps it is time for those of us who work in AI or in computer-related fields to take a serious interest, AS A PROFESSION, in ethical questions.

--Steve Horst
BITNET address....gkmarh@irishmvs
SURFACE MAIL......Department of Philosophy
                  Notre Dame, IN 46556
jlc@goanna.OZ.AU (Jacob L. Cybulski) (08/09/88)
To: munnari!comp-ai-digest@uunet.UU.NET
Path: goanna!jlc
From: Jacob L. Cybulski <munnari!goanna.oz.au!jlc@uunet.UU.NET>
Newsgroups: comp.ai.digest
Subject: Re: does AI kill?
Date: Wed, 27 Jul 88 07:52 EDT
References: <19880721201606.2.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Organization: Comp Sci, RMIT, Melbourne, Australia
Lines: 10

The Iranian airbus disaster teaches us one thing about "AI techniques": most AI companies forget that the end product of AI research is just a piece of computer software that needs to be treated like one, i.e. it needs to go through a standard software life-cycle, and proper software engineering principles still apply to it no matter how much intelligence is buried in its intestines. Not to mention the need to train the system users.

Jacob