[comp.ai] Did AI kill?

tws@beach.cis.ufl.edu (Thomas Sarver) (07/15/88)

In article <2091@ssc-vax.UUCP> ted@ssc-vax.UUCP (Ted Jardine) writes:
>
>First, to claim that the Aegis Battle Management system has an AI component
>is patently ridiculous.  I'm not suggesting that this is Ken's claim, but it
>does appear to be the claim of the Washington Post article.
>
>It's the pitfall that permits us to invest some twenty years of time and some
>multiple thousands of dollars (hundreds of thousands?) into the training and
>education of a person with a Doctorate in a scientific or engineering discipline
>but not to permit a similar investment into the creation of the knowledge base
>for an AI system.
>
>TJ {With Amazing Grace} The Piper
>aka Ted Jardine  CFI-ASME/I
>Usenet:		...uw-beaver!ssc-vax!ted
>Internet:	ted@boeing.com
-- 

The point that everyone is missing is that there is a federal regulation that
makes certain that no computer has complete decision control over any
military component.  As the article says, the computer RECOMMENDED that the
blip was an enemy target.  The operator was at fault for not verifying the
computer's recommendation.

I was a bit surprised that Ted Jardine from Boeing didn't bring this up in
his comment.

As for the other stuff about investing in an AI program: I think there need
to be sound, informed guidelines for determining whether a program can enter
a particular duty.

1) People aren't given immediate access to decision-making procedures;
   neither should a computer be.
2) However, there are certain assumptions one can make about a person that
   one can't make about a computer.
3) The most important module of an AI program is the one that says "I DON'T
   KNOW, you take over."
4) The second most important is the one that says, "I think it's blah blah
   WITH CERTAINTY X."  (A sketch of points 3 and 4 follows below.)
5) Just as there are military procedures for relieving humans of their
   decision-making status, there should be some way to do the same for the
   computer.
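
To make points 3 and 4 concrete, here is a small illustrative sketch in
Python.  Everything in it is hypothetical: the threshold, the record fields,
and the function names are all invented, and it describes no actual military
system.

# Hypothetical sketch of points 3 and 4: a program that reports its
# certainty and abstains ("I DON'T KNOW, you take over") below a threshold.
from dataclasses import dataclass
from typing import Optional

ABSTAIN_THRESHOLD = 0.85   # invented value; below it the program must defer

@dataclass
class Recommendation:
    label: Optional[str]   # e.g. "hostile" or "commercial"; None = "I DON'T KNOW"
    certainty: float       # the "WITH CERTAINTY X" of point 4

def recommend(best_guess: str, certainty: float) -> Recommendation:
    """Produce a recommendation, never a decision: the operator always acts."""
    if certainty < ABSTAIN_THRESHOLD:
        # Point 3: the most important module defers to the human.
        return Recommendation(label=None, certainty=certainty)
    # Point 4: a hedged answer, still subject to operator review.
    return Recommendation(label=best_guess, certainty=certainty)

if __name__ == "__main__":
    r = recommend("hostile", 0.62)
    if r.label is None:
        print("Program abstains (certainty %.2f); operator decides." % r.certainty)
    else:
        print("Recommends %r with certainty %.2f." % (r.label, r.certainty))

Either way, the operator remains the deciding authority; the program only
ever hands up a labeled guess or an explicit abstention.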

Summary: No, AI did not kill.  The operator didn't look any further than the
screen.


+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
But hey, it's the best country in the world!
Thomas W. Sarver

"The complexity of a system is proportional to the factorial of its atoms.  One
can only hope to minimize the complexity of the micro-system in which one
finds oneself."
	-TWS

Addendum: "... or migrate to a less complex micro-system."

sn@otter.hple.hp.com (Srinivas Nedunuri) (07/19/88)

Thomas Sarver writes:
>The point that everyone is missing is that there is a federal regulation that
>makes certain that no computer has complete decision control over any
>military component.  As the article says, the computer RECOMMENDED that the
>							~~~~~~~~~~~
>blip was an enemy target.  The operator was at fault for not verifying the
>computer's recommendation.

	Perhaps this is also undesirable, given the current state of AI
technology. Even a recommendation amounts to the program having taken some
decision. It seems to me that the proper place for AI (if AI was used) is
in filtering the mass of information that would normally overwhelm a human.
In fact, not only filtering but also collecting this information and
presenting it in a more amenable form, based on simple, robust,
won't-usually-fail heuristics.  In this way it is clear that AI offers an
advantage: a
human simply could not take in all the information in its original form and
come to a sensible decision in a reasonable time.
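
	As a purely illustrative sketch of that filtering role (in Python;
the track fields and thresholds are invented, and none of this describes the
actual Aegis software), consider condensing a mass of raw tracks into the
short list a human can actually read:

# Illustrative only: reduce many raw radar tracks to the few a human
# should look at first, using simple, hard-to-fool heuristics.
def summarize_tracks(tracks):
    """Keep tracks that are close in or closing fast; nearest first.

    Each track is a dict like {"id": 2, "range_nm": 11.5, "closing_kts": 430.0}.
    """
    urgent = [t for t in tracks
              if t["range_nm"] < 20.0 or t["closing_kts"] > 300.0]
    urgent.sort(key=lambda t: t["range_nm"])   # worst case first
    return urgent   # interpretation is left entirely to the operator

tracks = [
    {"id": 1, "range_nm": 45.0, "closing_kts": 120.0},
    {"id": 2, "range_nm": 11.5, "closing_kts": 430.0},
    {"id": 3, "range_nm": 60.0, "closing_kts": -50.0},   # opening, not closing
]
for t in summarize_tracks(tracks):
    print("Track %d: %.1f nm, closing at %.0f kts"
          % (t["id"], t["range_nm"], t["closing_kts"]))

The program ranks and condenses; it never labels anything friend or foe.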
	We don't know yet what actually happened on the Vincennes, but the
computer's recommendation could well have swayed the Captain's decision,
psychologically.