[comp.ai] does AI kill?

morgan@uxe.cso.uiuc.edu (07/13/88)

I think these questions are frivolous. First of all, there is nothing in
the article that says AI was involved. Second, even if it was, the
responsibility for using the information and firing the missile is the
captain's. The worst you could say is that some humans may have oversold
the captain and maybe the whole navy on the reliability of the information
the system provides. That might turn out historically to be related to
the penchant of some people in AI for grandiose exaggeration. But that's
a fact about human scientists. 

And if you follow the reasoning behind these questions consistently,
you can find plenty of historical evidence that would justify substituting
'engineering' for 'AI' in the three questions at the end.
I take that to suggest that the reasoning is faulty.

Clearly the responsibility for the killing of the people in the Iranian
airliner falls on human Americans, not on some computer.

At the same time, one might plausibly interpret the Post article as
a good argument against any scheme that removes human judgment from the
decision process, like Reagan's lunatic fantasies of SDI.

tas@occrsh.ATT.COM (07/15/88)

>no AI does not kill, but AI-people do. The very people that can
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

	Good point!

	Here is a question.  Why blame the system when the human in
	the loop makes the final decision?  I could understand if the Aegis
	system had interpreted the incoming plane as hostile AND fired the
	missiles, but it did not.  

	If the captain relied solely on the information given to him by the
	Aegis system, then why have the human in the loop?  The idea, as
	I always thought, was for the human to be able to add in unforeseen
	factors not accounted for in the programming of the Aegis system.

	Let's face it, I am sure ultimately it will be easier to place the
	blame on a computer program (and thus on the supplier) than on a
	single individual.  Isn't that kind of the way things work, or am
	I being cynical?

Tom

smythe@iuvax.cs.indiana.edu (07/16/88)

/* Written 10:58 am  Jul 15, 1988 by kurt@fluke in iuvax:comp.ai */

-  [star wars stuff deleted]

-I will believe in Star Wars only once they can demonstrate that AEGIS works
-under realistic battlefield conditions.  The history of these systems is
-really bad.  Remember the Sheffield, smoked in the Falklands War because
-their Defense computer identified an incoming Exocet missile as friendly
-because France is a NATO ally?  What was the name of our other AEGIS cruiser
-that took a missile in the gulf, because they didn't have their guns turned on
-because their own copter pilots didn't like the way the guns tracked them in
-and out.
-/* End of text from iuvax:comp.ai */

Let's try to get the facts straight.  In the Falklands conflict the
British lost two destroyers.  One because they never saw the missile
coming until it was too late.  It is very hard to shoot down an
Exocet.  In the other case, the problem was that the air defense
system was using two separate ships, one to do fire control
calculations and the other to actually fire the missile.  The ship that
was lost had the fire control computer.  It would not send the command
to shoot down the missile because there was another British ship in the
way.  The ship in the way was actually the one that was to fire the
surface-to-air missile.  Screwy.  I don't know which event involved the
Sheffield, but there was no misidentification in either case.

The USS Stark, the ship hit by the Iraqi-fired Exocet, is not an AEGIS
cruiser at all but a missile frigate, a smaller ship without the
sophisticated weapons systems found on the cruiser.  The captain did
not activate the close-in defense system because he did not think the
Iraqi jet was a threat.  Because of this some of his men died.  This
incident is now used as a training exercise for ship commanders and
crew.  In both the Stark's case and the Vincennes' case the captains
made mistakes and people died.  In both cases the captains (or
officers in charge, in the case of the Stark) had deactivated the
automatic systems.  On the Stark, leaving them active might have saved
lives.  On the Vincennes, the tragedy would only have occurred sooner.

I don't really think that AI technology is ready for either of these
systems.  Both decisions involved weighing the risk of losing human
lives on the basis of conflicting and incorrect information, something
that AI systems do not yet do well.  It is clear that even humans will make
mistakes in these cases, and will likely continue to do so until the
quality and reliability of their information improves.

Erich Smythe
Indiana University
smythe@iuvax.cs.indiana.edu
iuvax!smythe

stewart@sunybcs.uucp (Norman R. Stewart) (07/16/88)

Is anybody saying that firing the missile was the wrong decision under
the circumstances?   The ship was, after all, under attack by Iranian
forces at the time, and the plane was flying in an unusual manner
for a civilian aircraft (though not for a military one).  Is there
any basis for claiming the Captain would have (or should have) made
a different decision had the computer not even been there?

While I'm at it, the Iranians began attacking defenseless commercial
ships in international waters, killing innocent crew members, and
destroying non-military targets (and dumping how much crude oil into
the water?).  Criticizing the American Navy for coming to defend these
ships is like saying that if I see someone getting raped or mugged
I should ignore it if it is not happening in my own yard.

The Iranians created the situation, let them live with it.



Norman R. Stewart Jr.             *  How much more suffering is
C.S. Grad - SUNYAB                *  caused by the thought of death
internet: stewart@cs.buffalo.edu  *  than by death itself!
bitnet:   stewart@sunybcs.bitnet  *                       Will Durant

spf@whuts.UUCP (Steve Frysinger of Blue Feather Farm) (07/18/88)

> Is anybody saying that firing the missile was the wrong decision under
> the circumstances?   The ship was, after all, under attack by Iranian
> forces at the time, and the plane was flying in an unusual manner
> for a civilian aircraft (though not for a military one).  Is there
> any basis for claiming the Captain would have (or should have) made
> a different decision had the computer not even been there?

I think each of us will have to answer this for ourselves, as the
Captain of the cruiser will for the rest of his life.  Perhaps one
way to approach it is to consider an alternate scenario.

Suppose the Captain had not fired on the aircraft.  And suppose the
jet was then intentionally crashed into the ship (behavior seen in
WWII, and made plausible by other Iranian suicide missions and the
fact that Iranian forces were already engaging the ship).  Would
we now be criticizing the Captain for the death of his men by NOT
firing?

As I said, we each have to deal with this for ourselves.

Steve

smoliar@vaxa.isi.edu (Stephen Smoliar) (07/19/88)

In article <143100002@occrsh.ATT.COM> tas@occrsh.ATT.COM writes:
>
>	Let's face it, I am sure ultimately it will be easier to place the
>	blame on a computer program (and thus on the supplier) than on a
>	single individual.  Isn't that kind of the way things work, or am
>	I being cynical?
>
If you want to consider "the way things work," then I suppose we have to
go back to the whole issue of blame which is developed in what we choose
to call our civilization.  We all-too-humans are not happy with complicated
answers, particularly when they are trying to explain something bad.  We
like our answers to be simple, and we like any evil to be explained in
terms of some single cause which can usually be attributed to a single
individual.  This excuse for rational thought probably reached its nadir
of absurdity with the formulation of the doctrine of original sin and the
principal assignment of blame to the first woman.  Earlier societies realized
that it was easier to lay all blame on some dispensable animal (hence, the
term scapegoat) than to pick on any human . . . particularly when any one
man or woman might just as likely be the subject of blame as any other.
Artificial intelligence has now given us a new scapegoat.  We, as a society,
can spare ourselves all the detailed and intricate thought which goes into
understanding how a plane full of innocent people can be brought to a fiery
ruin by dismissing the whole affair as a computer error.  J. Presper Eckert,
who gave the world both ENIAC and UNIVAC, used to say that man was capable
of going to extraordinary lengths just to avoid thinking.  When it comes to
thinking about disastrous mistakes, the Aegis disaster has demonstrated, if
nothing else, just how right Eckert was.

msch@ztivax.UUCP (M. Schneider-Hufschmidt) (07/19/88)

Could someone please repost the original article?

ken@aiva.ed.ac.uk (Ken Johnson) (07/19/88)

In article <12400014@iuvax> smythe@iuvax.cs.indiana.edu writes:

> I don't know which event involved the
> Sheffield, but there was no misidentification in either case.

I had also heard the story that the Exocet, having been sold to the
Argentinians by the Europeans, was not identified as a hostile missile. 
Subsequently the computers were `reprogrammed' (media talk for giving
them a wee bit of new data).  Presumably if you sell arms to your enemies
this is what you must expect. 

-- 
------------------------------------------------------------------------------
From Ken Johnson, AI Applications Institute, The University, EDINBURGH
Phone 031-225 4464 ext 212
Email k.johnson@ed.ac.uk