[comp.ai.digest] AI and the Vincennes incident

JMC@SAIL.STANFORD.EDU (John McCarthy) (08/25/88)

Date: Fri, 19 Aug 88 17:49 EDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: AI and the Vincennes incident 
To: ailist@AI.AI.MIT.EDU

I agree with those who have said that AI was not involved in the
incident.  The question I want to discuss is the opposite of those
previously raised.  Namely, what would have been required so that
AI could have prevented the tragedy?

We begin with the apparent fact that no one thought about the Aegis
missile control system being used in a situation in which discrimination
between civilian traffic and attacking airplanes would be required.
"No one" includes both the Navy and the critics.  There was a lot
of criticism of Aegis over a period of years before 1988.  All the
criticism that I know about concerned whether it could stop multiple
missile attacks as it was designed to do.  None of it concerned the
possibility of its being used in the situation that arose.  Not even
after it was known that the Vincennes was deployed in the Persian
Gulf was the issue of shooting down airliners (or news helicopters) raised.

It would have been better if the issue had been raised, but it appears
that we Earthmen, regardless of political position, aren't smart
enough to have done so.  Now that a tragedy has occurred, changes will
be made in operating procedures and probably also in equipment and
software.  However, it seems reasonably likely that in the future
additional unanticipated requirements will lead to tragedy.

Maybe an institutional change would bring about improvement, e.g.
more brainstorming sessions about scenarios that might occur.  The
very intensity of the debate about whether the Aegis could stop
missiles might have ensured that any brainstorming that occurred
would have concerned that issue.

Well, if we Earthmen aren't smart enough to anticipate trouble,
let's ask whether we Earthmen are smart enough, and have the AI or
other computer technology, to design AI systems that might help with
unanticipated requirements.  My conclusion is that we probably don't
have the technology yet.

Remember that I'm not talking about explicitly dealing with the
problem of not shooting down civilian airliners.  Now that the
problem is identified, plenty can be done about that.

Here's the scenario.

Optimum level of AI.

Captain Rogers:  Aegis, we're being sent to the Persian Gulf
to protect our ships from potential attack.

Aegis (which has been reading the A.P. wire, Aviation Week, and
the Official Airline Guide on-line edition):  Captain, there may
arise a problem of distinguishing attackers from civilian planes.
It would be very embarrassing to shoot down a civilian plane.  Maybe
we need some new programs and procedures.

I think everyone knowledgeable will agree that this dialog is beyond
the present state of AI technology.  We'd better back off and
ask what minimum level of AI technology might have been helpful.

Consider an expert system on naval deployment, perhaps not part
of Aegis itself.

Admiral: We're deploying an Aegis cruiser to the Persian Gulf.

System: What kinds of airplanes are likely to be present within
radar range?

Admiral: Iranian military planes, Iraqi military planes, Kuwaiti
planes, American military planes, planes and helicopters hired
by oil companies, civilian airliners.

System: What is the relative importance of these kinds of airplanes
as threats?

It seems conceivable that such an expert system could have been
built and that interaction with it might have made someone think
about the problem.
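
Here is a minimal sketch, in Python, of the kind of question-asking
program the dialog above imagines.  Everything in it is hypothetical:
the prompts, the keyword matching, and the single warning rule are
illustrations of the interaction, not any real Navy or Aegis software.

    # deployment_review.py -- hypothetical sketch of a question-asking
    # expert system for deployment planning.  The prompts, the keyword
    # matching, and the warning rule below are illustrative assumptions
    # only; they describe no real Navy or Aegis software.

    def ask(prompt):
        """Put a question to the planner and return the reply, lowercased."""
        return input(prompt + "\n> ").strip().lower()

    def review_deployment():
        ask("Where is the cruiser being deployed?")
        reply = ask("What kinds of airplanes are likely to be present "
                    "within radar range?  (comma-separated list)")
        kinds = [k.strip() for k in reply.split(",") if k.strip()]

        # The one rule that matters for this scenario: if the expected
        # traffic mixes civilian and military airplanes, raise the
        # discrimination problem explicitly and ask about each kind.
        civilian = [k for k in kinds
                    if "civilian" in k or "airliner" in k or "hired" in k]
        military = [k for k in kinds if "military" in k]

        if civilian and military:
            print("NOTE: the expected traffic mixes civilian and military")
            print("airplanes.  Procedures for discriminating civilian")
            print("traffic from attackers may be needed.")
            for kind in kinds:
                ask("What is the importance of %r as a threat? (high/low)"
                    % kind)

    if __name__ == "__main__":
        review_deployment()

The design point is not that the program knows anything about the
Persian Gulf; it is that merely having to answer its questions might
have made someone think about the problem, which is all that is
claimed above for such a system.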