ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (11/08/86)
Arms-Discussion Digest Saturday, November 8, 1986 1:58PM
Volume 7, Issue 52
Today's Topics:
Military Institutions and AI
Yet more on SDI (Star Wars flawed #3-of-10)
Mid-course Interceptions
24 hour waiting period?
----------------------------------------------------------------------
Date: Sat, 8 Nov 1986 13:26 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Military Institutions and AI
From: toma at Sun.COM (Tom Athanasiou)
Does anyone know of institutional forces within the
military that predispose it toward favorable receptions of
technologies that don't really work?
Yes. Promotions are often based on the budget one controls or on the
visibility of the program. The peacetime military has little
incentive to develop weapons that actually work. A
program goes on, and develops bugs. What happens then? What program
manager is going to say "This won't work" when his neck is on the
line? What is the incentive for him to do so? If early in the
program, he says "It's too early to tell what will happen." If late
in the program, he says "Look at all we have spent on the program --
all of that will be wasted if we stop."
There's been a lot
of talk about SDI, but I'm interested in AI per se. The
level of hype in the commercial AI world has dropped a
lot faster than in the military AI world. Why?
In the commercial world, there is a bottom line -- whether something
does its job. In the peacetime military, there is no comparable
bottom line.
Does anyone know of anyone that would be helpful to talk
to on this issue? Of anything that would be good to read?
Fallows' "National Defense" is a good place to start, though you
should not take his word as gospel. Stubbing's "The Defense Game" is
also pretty good.
For people to talk to, let me know what you are interested in.
------------------------------
Date: Monday, 3 November 1986 08:04-EST
From: Jane Hesketh <jane%aiva.edinburgh.ac.uk at Cs.Ucl.AC.UK>
To: ARMS-D
Re: Yet more on SDI (Star Wars flawed #3-of-10)
Fully automatic decision-making systems
Henry Thompson
It has become clear in recent years that computer technology
is as crucial to strategic weapons systems as physics is.
Leaving aside the terrifying possibility of one of the
nuclear powers moving to a fully automatic launch on warning
or launch under attack policy, the clearest example of this
is the suggested role of computer systems in the SDI Battle
Management System (BMS). In the Fletcher report to the US
Department of Defense, it was stated that response times
would be so rapid as to preclude significant human
participation:
"The battle management system must provide for a
high degree of automation to support the accomplish-
ment of the weapons release function."
This is in fact an understatement - for effective
boost-phase response the decision time is certainly less
than two minutes, probably less than one, even without
fast-burn boosters - and it is clear that the authors of the
Eastport report expect at least the boost-phase BMS to
operate without any human participation. What this means
then is that at least during crisis periods when the system
was fully enabled, the decision that a hostile attack was
underway and that an active response should be initiated,
together with the orchestration of at least the early stages
of that response, would be inaccessible to human
intervention.
This moves us immediately into the domain of Artificial
Intelligence, but at a level so far beyond current
experience as to be difficult to imagine. In an effort to
understand just what the deployment and empowerment of a
fully automatic SDI BMS would mean, it is worth looking for
a moment at where we are now in this area. The answer is
emphatically nowhere. No existing or announced AI system
has been empowered to act independently of human
participation. All the existing expert systems of which so
much is said function in an advisory, not an executive,
capacity. The most complex and sophisticated fully
automatic systems which involve computers in use today are
barely worthy of the description `decision making', for
example traffic signal controllers, cash dispensers and
autopilots. None of the systems which have been suggested as
possible models for the SDI BMS - the control systems for the
Apollo moon flights, the phone network, the various space
probes - belongs in this list, because none of them
exhibit(ed) any significant autonomous decision-making.
What then is an automatic decision-making system? What
indeed is a decision? Consider a room thermostat,
responsible for controlling the operation of a heating
system in response to variations in temperature. We can
diagram its essential properties as follows:
     _____________________________________
                       heating
                   off           on
     _____________________________________
     temp   high   nothing       turn off
            low    turn on       nothing
     _____________________________________
That is, a thermostat must discriminate among four possible
sorts of situation, and act accordingly to turn the heating
on or off or leave it alone. These notions of
discrimination and action are constitutive of a decision-
making system. What then is meant by an automatic
decision-making system? Simply one that, like a thermostat,
involves no human participation, either to guide the
discrimination or to approve the action. Thermostats,
foghorns, burglar alarms and traffic signal controllers are
all examples. There are of course degrees of automation.
Systems like those for zero-visibility aircraft landing and
load-balancing in the national power grids are subject to
human supervision, but of a fairly minimal and post-hoc
nature. There are also degrees of involvement of computers,
but above a certain level of complexity electro-mechanical
ingenuity fails and only computational approaches are
possible. By now even the more sophisticated thermostats have
microprocessors in them.
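
As a minimal sketch, the four-way table above could be rendered
directly as a few lines of program code (illustrative Python only;
the names are mine, not anything from the SDI programme):

    # Illustrative sketch of the thermostat's decision table above.
    # Rows: temperature reading; columns: current heating state;
    # cells: the action taken, with no human in the loop.
    def thermostat_action(temp, heating_on):
        if temp == "high" and heating_on:
            return "turn off"
        if temp == "low" and not heating_on:
            return "turn on"
        return "do nothing"    # the other two cells of the table

    assert thermostat_action("high", True) == "turn off"
    assert thermostat_action("low", False) == "turn on"
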
There is a crucial step in the deployment of automatic
decision-making systems, which has not so far received the
attention it deserves. This is the step of empowerment,
that is, the point at which control passes to the system.
Before we empower human beings such as teachers, pilots,
policemen or judges to make decisions we (that is, society
acting through government) typically subject them to a more
or less strict regime of training and evaluation. We will
clearly soon reach, if we have not already, the stage with
artifacts, that is with automatic decision-making systems,
where explicit controls on empowerment will be required.
What tests might one wish to perform on an artifactual
candidate for empowerment? How would one go about
determining its fitness for its appointed task? Many of
today's criticisms of the SDI programme can be seen as
reasons why there is no possibility of a sufficiently
convincing evaluation of a candidate SDI BMS to make
empowerment a responsible action.
The question of empowerment is a new one - it has not
arisen before because the necessary combination of
sophisticated technology and human impact has only recently
emerged. We cannot allow SDI to be exempted from a general
requirement for a sensible empowerment process for automatic
decision-making systems, yet it is clear it could not `pass'
such a review.
Consequences of failure - active versus passive defense
The empowerment issue for SDI gains tremendous significance
when the nature of the proposed system is examined
carefully. In a crucial way the briefly mooted name of
Peace Shield was misleading. SDI is not a passive defense,
like a shield or an umbrella. It is an active defense, like
a fly-swatter or an anti-aircraft gun. To be effective it
must be wielded, it cannot just sit there. This means it is
liable to two sorts of failures. Like a passive defense, it
might, as it were, leak. The overwhelming evidence, tacitly
acknowledged even by the SDIO, is that any SDI we can hope
to build will fail this way. But unlike a passive defense,
it might also be wielded in error. All our experience to
date of supervised computer decision-making systems suggests
this will in fact happen. The literature is full of
examples of computer errors, either in specification or
implementation, provoking false alarms, with disaster
averted only by human intervention.
But ex hypothesi in the case of SDI no such intervention
would be possible. If at some point, perhaps in the midst
of an international crisis, the system was fully empowered,
then if owing to some flaw in specification or
implementation it incorrectly determined that a missile
attack was underway and began its response, there would be
no possibility of human intervention. A vast array of
weapons would be unleashed, some harmlessly at non-existent
missiles - perhaps a forest fire or a meteor shower - but some
at Soviet satellites. What chances then that anyone would
survive the doubtlessly automatic response to that very real
assault, to examine the program and correct the flaw?
In a fully automatic system there is no escaping the
fundamental law of signal detection - there is a trade-off
between type 1 and type 2 errors, between false alarms and
misses. You cannot eliminate one without guaranteeing that
the other will occur. A safe SDI would have to be so
constrained as to be useless - it would almost surely not
work when it was needed. A useful SDI would have to be
sufficiently unconstrained as to be unsafe - it would be too
likely to `work' when it was not needed.
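
A minimal sketch of that trade-off, with made-up numbers (nothing
here is drawn from the SDIO literature): a detector that compares a
noisy reading against a fixed threshold. Pushing the threshold down
reduces misses but raises the false-alarm rate; pushing it up does
the reverse; no threshold eliminates both.

    # Illustrative only: the miss / false-alarm trade-off for a simple
    # threshold detector.  The Gaussian means and the example
    # thresholds are made-up numbers chosen for the demonstration.
    import random

    def error_rates(threshold, trials=100000):
        misses = false_alarms = 0
        for _ in range(trials):
            quiet  = random.gauss(0.0, 1.0)   # reading when nothing is there
            attack = random.gauss(2.0, 1.0)   # reading during a real attack
            if quiet > threshold:
                false_alarms += 1             # type 1 error: false alarm
            if attack <= threshold:
                misses += 1                   # type 2 error: miss
        return misses / trials, false_alarms / trials

    # error_rates(-1.0) -> almost no misses, but many false alarms
    # error_rates( 3.0) -> almost no false alarms, but many misses
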
Glossary
BMD    Ballistic Missile Defense
BMS    Battle Management System
C3I    Command, Control, Communications and Intelligence
LOW    Launch on Warning
LUA    Launch under Attack
MOU    Memorandum of Understanding (between US and UK on SDI)
SDI    Strategic Defense Initiative
SDIO   Strategic Defense Initiative Organisation (in the US Department of Defense)
SDIPO  Strategic Defense Initiative Programme Office (in the UK Ministry of Defence)
Information about the author
Henry Thompson is a lecturer in the Department of Artificial
Intelligence and the Centre for Cognitive Science at the
University of Edinburgh. Before coming to Britain in 1980
he was a member of the Natural Language group at the Xerox
Palo Alto Research Center. He has been doing research in
the areas of knowledge representation and computational
linguistics for more than 10 years, and has a long-standing
interest in the philosophical foundations of Artificial
Intelligence. He is currently co-director of the Edinburgh
part of an Alvey Large Scale Demonstrator project whose goal
is an interactive, incremental speech input system.
------------------------------
Date: Sat, 8 Nov 1986 13:47 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Mid-course Interceptions
From: crummer at aerospace.ARPA
>Another concept, discussed in candidate architectures, is one in which
>you do very effective mid-course discrimination without doing
>boost-phase intercept, perhaps using interactive discrimination with
>neutral particle beams.
With the miserable signal/noise ratio that a sensor would have looking
at a cool target against a star field, how would these particle beams
be pointed? I'm sure the beam can't be fired and steered at the rates
necessary to "paint the sky" and find the objects even if interactive
discrimination would work. Herb, do you have any more information on
the mid-course fantasy?
The beams would probably be pointed on the basis of some space-based
radar or IR sensor looking at the threat cloud; information on objects
would be passed to NPBs for discrimination. Painting the sky is
absurd, and SDIO knows that.
------------------------------
Date: Sat, 8 Nov 1986 13:57 EST
From: LIN@XX.LCS.MIT.EDU
Subject: 24 hour waiting period?
From: Paul F. Dietz <DIETZ%slb-test.csnet at RELAY.CS.NET>
Someone has advanced the position that the US should wait 24 hours
after a nuclear attack before retaliating. A response was that
this would eliminate US bomber forces. The proponent claimed US
bombers could stay up this long, pointing to the fact that NEACP (sp?)
can stay up three days.
Well, I looked that up; the command plane can stay up for *at most*
three days, given sufficient in-air refueling. The three day time
limit comes from the engines running out of lubricant. After a
large nuclear strike there would likely be little in-air refueling
capability left.
I spoke of 24 hours, not of 72. Besides, the 3 day limit is in no way
fundamental. If we wanted to equip bombers with oil replenishment
systems, we could. I did not say that the entire bomber force could
survive, just that some of it could.
This business about whether or not all the bombers could survive,
and about the proposed scenario in which the Soviets attack us
and we are left with only submarine bases, is starting to miss the
point of waiting.
There is certainly a down side to waiting. I don't think that it
is as great as critics have said (for example, some bombers not in the
air WOULD survive at auxiliary airfields, etc.), but I willingly concede
a down side in terms of reduced effectiveness and increased costs.
The issue is NOT whether or not there is a down side. The issue is
that by waiting (and incurring the costs of waiting), you may
*increase* the probability that you won't react mistakenly to an
attack report.
My original query was of the form "Describe a scenario in which,
by waiting, the U.S. response would be significantly impeded." I
haven't received one yet, but for the sake of argument let's assume
there is one. The relevant questions are then "What is the
likelihood of that scenario?" and "How does it compare to the
likelihood of receiving a false attack report?" Both are dangers
that we must consider.
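
One rough way to frame that comparison, with placeholder
probabilities and costs rather than estimates of anything:

    # Placeholder sketch of the comparison; every number would have to
    # come from real analysis, and none here are estimates.
    def waiting_preferred(p_false_report, cost_mistaken_retaliation,
                          p_impeding_scenario, cost_impeded_response):
        # Waiting trades the danger of acting on a false attack report
        # for the danger of the (so far unspecified) scenario in which
        # waiting significantly impedes the response.
        cost_of_not_waiting = p_false_report * cost_mistaken_retaliation
        cost_of_waiting = p_impeding_scenario * cost_impeded_response
        return cost_of_waiting < cost_of_not_waiting
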
------------------------------
End of Arms-Discussion Digest
*****************************