[mod.politics.arms-d] Arms-Discussion Digest V7 #6

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (08/26/86)

Arms-Discussion Digest                 Monday, August 25, 1986 10:21PM
Volume 7, Issue 6

Today's Topics:

                            administrivia
                      Dyson's "Weapons and Hope"
        Duel Phenomenology:  Computers + Computers = Computers

----------------------------------------------------------------------

Date: Mon, 25 Aug 1986  17:21 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: administrivia


Message failed for the following:
Pancari@SRI-KL.ARPA: 550 No such local mailbox as "Pancari", recipient rejected

Someone at SRI please delete this guy.  Thanks.

------------------------------


Date: Mon, 25 Aug 86 13:50:50 pdt
From: Steve Walton <ametek!walton@csvax.caltech.edu>
Subject: Dyson's "Weapons and Hope"

I just started reading this book, and I am already quite impressed
with it.  Dyson's declared purpose in writing the book is to explain
the point of view of Helen Caldicott to the generals in Washington and
vice versa.  To this end, he attempts to find common ground between
the two groups' arguments--not to resolve them, but only to get them
to debate using the same terms to mean the same thing.  To some
extent, he is guilty of automatically splitting the difference, but
for the most part he seems to succeed in his purpose.
	One example: he has read both Jonathan Schell's "The Fate of
the Earth" and Government reports on civil defense.  Both argue from
impeccable sources and come to opposite conclusions with regard to the
ultimate fate of post-nuclear-war humanity.  Dyson comes to the
following conclusions: (1) The generals and the peace activists are
talking about two different timescales (this is fundamental); Dyson
does not believe that a single nuclear exchange would wipe out
humanity, but he believes that the first one would lead to a kind of
collective insanity resulting in a series of 10 or so nuclear wars
which *would*. (2) There is nothing wrong with civil defense, but
since the number of lives lost in a nuclear exchange is so
uncertain, planners shouldn't be allowed to assign a "number of
lives saved" value to a given civil defense action.  He comments in
passing that if the children of Hiroshima had been taught to dive
under their desks, many of them would have avoided serious burns.
	He defines in the beginning of the book 13 questions which he
thinks should form the basis for the debate.  Number 13 is, "Do we
want to live in a nuclear-free world with its attendant dangers?"
The following is not quite a direct quote, but is close (I don't have
the book handy):

	By "nuclear free world" I do not mean one in which all nuclear
	weapons have magically vanished and from which all knowledge
	of how to make them has been erased by a supernatural power.
	I mean one in which, after long and arduous negotiations
	supported by an aroused public opinion, the nuclear powers
	have agreed to destroy their nuclear weapons;  delivery
	systems and declared stockpiles of warheads have been
	destroyed under supervision;  in which a large but imperfect
	verification system is in place;  in which the possible
	existence of a few hundred hidden warheads cannot be excluded;
	and in which the knowledge of how to assemble nuclear weapons
	is widely disseminated.  It is not clear that such a world
	would be more risk-free than the one we live in now.  I
	believe that such a world has important political and social
	advantages, however, which make it a worthwhile goal.

This line of reasoning leads him to support SDI (as I heard him say in
a talk at Caltech in 1984) as defense against those "few hundred"
hidden warheads.  He estimates that defenses built with existing
technology would be a factor of 1,000 shy of perfect; a crash research
program might improve them by a factor of 30, and if that is combined
with reduction in the sizes of arsenals by a factor of 30, then we do
have essentially perfect BMD.  Furthermore, a factor of 30 reduction
is a good figure for the US and USSR to set as a goal because it would
reduce the size of their arsenals to roughly the same size as the
British, French, and Chinese nuclear forces.  Reduction to zero then
becomes a multilateral rather than a bilateral problem, and hence much
more difficult.
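	To make the arithmetic explicit: the two factors of 30
multiply to roughly the needed factor of 1,000.  A minimal Python
sketch of this accounting (only the 1,000 and the two 30s come from
the summary above; the rest of the framing is mine):

    shortfall = 1000     # existing defenses: a factor of 1,000 shy of perfect
    research_gain = 30   # hoped-for improvement from a crash research program
    arsenal_cut = 30     # agreed bilateral reduction in arsenal sizes

    remaining_gap = shortfall / (research_gain * arsenal_cut)
    print(remaining_gap)   # ~1.1, i.e. essentially perfect BMD in this accounting
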
	All of this makes a hell of a lot of sense to me, and I think
both the "hawks" and "doves" out there should read his book if they
haven't already.  If we can all come up with an agenda based on
Dyson's ideas and our own, the people on this net probably have enough
clout to bring it before the public.
	A postscript:  One of Dyson's other questions deals with
whether you should work within the system or outside it.  His answer
again is based on timescales.  In the short run, you clearly have more
influence within the system.  But revolutionary change in the system's
assumptions (such as the abandonment of nuclear weapons) can only come
about by outside pressure.

Stephen Walton, Ametek Computer Research Division
ARPA:	ametek!walton@csvax.caltech.edu
BITNET:	walton@caltech
UUCP:	...!ucbvax!sun!megatest!ametek!walton

------------------------------

Date: Mon, 25 Aug 86 17:36:15 PDT
From: Clifford Johnson <GA.CJJ@Forsythe.Stanford.Edu>
Subject:  Duel Phenomenology:  Computers + Computers = Computers

The digest being quiet, I thought I'd send this response to a
question I left unanswered as to why I think "dual phenomenology"
is about as dual and sane as the two notes of a "cuckoo."
Figure 1 is a launch under attack timeline, and Figure 2 is a
launch under attack decision tree produced by RAND but
commissioned by the DOD.  Comments/criticisms welcome.

          "LAUNCH UNDER ATTACK" IS LAUNCH ON WARNING, AND
            DUAL PHENOMENOLOGY IS NEITHER DUAL NOR SAFE

The Report of the President's Commission on Strategic Forces,
which was headed by Lt.-General Brent Scowcroft (April 1983),
appended a doomsday dictionary, including:

> LAUNCH ON WARNING -
> This phrase is now usually, but not universally, used to mean launch
> of missiles after one side received electrical signals from radars,
> infra-red satellites, or other sensors that enemy missiles are on
> the way, but before there have been nuclear detonations on its
> territory.  "Launch under attack" is sometimes used interchangeably
> with "launch on warning" and sometimes used to designate a launch
> after more confirmation has been received, such as indications that
> detonations have been received.
> LAUNCH UNDER ATTACK -
> See "Launch on Warning."

Note how "launch under attack" is NOT synonymous with "launch after
first detonation." As shown in Figure 1 below (from Automated War
Gaming As A Technique For Exploring Strategic Command And Control
Issues, RAND N-2044-NA, 1983), "Launch Under Attack" is a time
period beginning immediately after Soviet launch, and therefore
embracing launch on warning.  Yet, year after year, time is wasted
in vital congressional hearings by the DOD's disinformative
almost-denials that launch under attack includes launch on
warning.[1] The reality is that the DOD's wrongly-named launch under
attack capability is correctly called a LOWC (launch on warning
capability), for two very good reasons: unless launch is effected
before incoming missiles explode, the missiles can thereafter be
"pinned down" in their silos by explosions in their flight
corridors;[2] and, more urgently, launch orders must be issued by
the NCA (National Command Authority) before the NCA is destroyed.

Likewise, as pointed out in the paper by Robert C. Aldridge
(Background Paper on the Probability of a United States Launch on
Warning Policy), "dual phenomenology" generates an ATTACK condition
when two purportedly independent and consistent WARNING signals are
received.  Dual phenomenology was well-explained by former Under
Secretary of Defense for Research and Engineering William Perry
(Next Steps in the Creation of an Accidental Nuclear War Prevention
Center, Center for International Security and Arms Control,
Stanford, Oct. 1983):

> The nuclear posture of the two superpowers today is like two people
> standing about six feet apart, each of whom has a loaded gun at the
> other's head.  Each has his revolver cocked and his finger quivering
> on the trigger.  To add to the problem, each of them is shouting
> insults at the other...  the most realistic risk posed by nuclear
> weapons is the risk of a nuclear war by accident or by
> miscalculation...  In the summer of 1979, I was awakened by a call
> from a duty officer at the North American Air Defense Command
> (NORAD) who told me that the NORAD computers were indicating that
> 200 missiles were on their way from the Soviet Union to the United
> States.  That incident occurred about four years ago, but I remember
> it as vividly as if it had happened this morning...  if this event
> had occurred at a time of political tension, if the human
> intervening had not been as thoughtful as the officer on duty that
> night, and if the data had been more ambiguous, it could have led to
> a missile alert.  [This phraseology is odd, since missile crews
> WERE alerted.]  In short, a coincidence of a number of unlikely
> events could lead to a missile alert...  If [the President] believed
> that our national security required him to launch our forces based
> only on a computer alert -- "launch-on-warning" -- a false alert
> could lead to a nuclear war being triggered accidentally...
> Unfortunately, many of the actions that would reduce our likelihood
> of false alarm increase the likelihood that we will fail to launch
> when we should, and vice-versa.  For example, our alerting system
> today considers that an attack is underway only if two independent
> sensors have confirmed an attack.  Therefore, if the probability of
> one of them being in error [giving a false alarm] is one in a
> thousand and the probability of the other being in error also is one
> in a thousand, then the probability that they will both give a false
> alarm at the same time -- assuming they are truly independent -- is
> one in a million...  it is sometimes argued that we can improve
> timeliness of response by getting faster computers and by eliminating
> the human evaluation phase of calling an alert.

The above argument ignores the facts that: such "one in a million"
per-assessment probabilities are faced many times a day, every day,
and so understate the accumulated annual probability of a joint
false alarm; the volatile
complication of Soviet countersystems greatly amplifies the
technological risk; and errors from two types of sensor (infra-red
and radar) are certainly not independent, for innocent flying
objects, such as meteors, could register on both.  Nor is the safety
problem fundamentally alleviated by implementation of statistical
recognition and correlation algorithms, which have the effect of
removing whatever phenomenological duality the system might possess;
instead of infra-red and radar sensors evaluated by humans, the
system thereby becomes computers and computers validated by
computers.
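
To make the independence point concrete, here is a minimal Python
simulation sketch (my own illustration; every rate and count in it
is an assumption, not a figure from Perry or the DOD) of how a
common-cause stimulus, plus the sheer number of assessments made in
a year, erodes a nominal "one in a million" figure:

    P_SENSOR = 1e-3      # assumed per-assessment false-alarm rate of one sensor
    P_COMMON = 1e-4      # assumed chance of a common-cause stimulus (say, a
                         # bright meteor) registering on BOTH infra-red and radar
    ASSESSMENTS_PER_YEAR = 100_000   # assumed number of dual-sensor checks a year

    p_independent = P_SENSOR * P_SENSOR       # the quoted "one in a million"
    p_correlated = p_independent + P_COMMON   # the common cause dominates

    def at_least_one(p, n=ASSESSMENTS_PER_YEAR):
        """Probability of at least one joint false alarm over a year."""
        return 1.0 - (1.0 - p) ** n

    print(f"independent, per assessment: {p_independent:.0e}")
    print(f"independent, over a year:    {at_least_one(p_independent):.2f}")
    print(f"correlated,  over a year:    {at_least_one(p_correlated):.2f}")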

Already, technology provides programs that correlate attack reports
from several sensors, using such tools as a "chi-square statistic to
test equality of vector means hypothesis," which, besides wrongly
assuming independence, optimistically "assumes normal distribution
error model." Worse, the programs are primarily driven by "maximum
likelihood" techniques which presumptively induce the most likely -
or least unlikely - set of ballistic missile tracks from whatever
phenomenon registers.[3] The unavoidable reality is that for prompt
tactical warning, computers must correlate much diverse information
and display conclusions in a simple fashion; and, to be at all
credible, the system must be capable of generating warning
conditions even when some sensors are not functioning.  In General
Herres' own words: "And, you might lose some information, but the
fact of its loss is warning in itself.  You get that combined with
information you are receiving from other nodes to tell you a lot of
what you need to know." (Armed Forces Journal, Jan 1986, at 59.)
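
For readers unfamiliar with the jargon, a "chi-square statistic to
test equality of vector means" is essentially a gate on the
covariance-weighted squared distance between two sensors' estimates
of the same object.  The Python sketch below is my reconstruction of
the standard textbook test, not code from the RAND report; it shows
how the Gaussian and independence assumptions are wired directly
into the fusion decision:

    import numpy as np
    from scipy.stats import chi2

    def reports_fuse(x_ir, x_radar, cov_ir, cov_radar, alpha=1e-3):
        """Textbook chi-square gate: do two reports share one mean (one
        object)?  It assumes Gaussian errors and independence between the
        sensors -- exactly the assumptions criticized above."""
        d = np.asarray(x_ir, float) - np.asarray(x_radar, float)
        s = np.asarray(cov_ir, float) + np.asarray(cov_radar, float)
        stat = float(d @ np.linalg.solve(s, d))          # weighted squared distance
        return stat <= chi2.ppf(1.0 - alpha, df=d.size)  # True -> fuse as one track

    # Purely illustrative 2-D position reports (km), ~1 km standard error each.
    print(reports_fuse([100.0, 200.0], [101.5, 198.9], np.eye(2), np.eye(2)))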

This neatening-up process, called data or sensor fusion, is itself a
dangerous potential source of error, and is depicted in Figure 2
below (from same source as Figure 1).  Note that the depicted "high
alert" LOWC (which could itself have been created by a satellite
warning) would operate on satellite or radar warning ONLY in the
event of one type of sensor being destroyed.  What occurs is a sort
of voting process where satellites and radars are polled, and where
it is expected that not all sensors will be operational by virtue of
the enemy's attack.  The "yes/no" structure is consequently modified
into a probabilistic certain-versus-likely set of votes, which is
then logically summed, as explained in Scenario Agent: A Rule-Based
Model Of Political Behavior For Use In Strategic Analysis, RAND
N-1781-DNA (1982),[4]  under the heading "QUALITATIVE CERTAINTY":

> It has been argued that rules with conjunctive phrases (connected by
> "and") in their left hand (condition) side contain certainty value
> equal to the minimum of the certainty values associated with the
> phrases.  For example, if the rule is
>                        if x and y, then z
> and x is "certain" and y is "likely," then z is "likely."
> However, rules with disjunctive phrases (connected by "or") in
> the left hand side contain information of value equal to
> the maximum of the certainty value associated with the phrases.
> Thus, if x is "certain" and y is "likely," we would infer from
>                        if x or y, then z
> that z is certain.

Replacing x by the phrase "satellite warning," and y by the
phrase "radar warning," the above logical computation could read
    if satellite warning or radar warning, then attack warning
In this context "certain" could correspond to a positive warning
from a satellite sensor, and "likely" could correspond to "loss of
signal" from a radar.  By such reasoning, dual phenomenology is
logically less than dual.
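
The quoted calculus fits in a few lines of code.  The Python sketch
below is my own illustration of the min/max rule exactly as quoted
(not code from Scenario Agent, and the rank ordering is an
assumption); it makes plain why an "or" rule lets a mere loss of
signal ride a positive satellite report up to a "certain" attack
warning:

    RANK = {"unlikely": 1, "likely": 2, "certain": 3}   # assumed ordering

    def and_rule(*vals):
        """'if x and y, then z': z gets the MINIMUM certainty of the inputs."""
        return min(vals, key=RANK.get)

    def or_rule(*vals):
        """'if x or y, then z': z gets the MAXIMUM certainty of the inputs."""
        return max(vals, key=RANK.get)

    satellite = "certain"   # positive infra-red warning from a satellite
    radar = "likely"        # mere loss of signal from a radar

    # "if satellite warning or radar warning, then attack warning"
    print(or_rule(satellite, radar))   # -> "certain", though only one
                                       #    phenomenology reported anything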

Although the DOD is careful to affirm that only an UNAMBIGUOUS
warning can generate an attack condition, this assurance loses
meaning once it is realised that the DOD's warning assessment
computer programs automatically resolve ambiguities.  Indeed,
"disambiguation" is a program goal defined as "Selecting from
alternative interpretations the one most appropriate in the given
context...  One approach is to use general heuristics for
transforming incorrect expressions into well-formed ones."[5]
The disambiguation concept is directly applied to sensor fusion in
Future Military Applications For Knowledge Engineering,
RAND N-2102-1-AF (1985), at 28-31:

> As new technology is introduced to the battlefield, the critical
> response times required for decisions are growing shorter at all
> levels of the command hierarchy.  Communications devices and control
> hardware are constantly being improved to meet this demand, but the
> human elements of the system are unable to change their timing
> characteristics...  The confluence of increasing data flow rates
> from advanced sensors, the growing need for speed, and the necessity
> for flexibility calls for increasing levels of automation of this
> facility.  Computers must take on tasks which were originally done
> by humans...  Among the potential applications of knowledge
> engineering ... is the use of expert systems to assist
> decisionmakers...  Sensor fusion involves correlating, merging, and
> interpreting the inputs from distributed sensors in the field.
> Network management systems would be electronic agents, charged with
> the responsibility of managing the C3 resources in response to
> varying load, shifting priorities, and possible hostile
> interference.  A wide variety of sensors may be used in the
> identification and disambiguation...  Combination of sensors may be
> employed to yield information not available from any single
> source...  Intelligent systems could process the raw data, using
> knowledge about how the information is best utilized, and present
> refined conclusions to the human components of the system...  The
> fusion application is sensitive to countermeasures based on
> deception and would profit from further theoretical work on
> decisionmaking in uncertain, and in fact, hostile circumstances.
> The penalty for errors by a sensor fusion system increases with the
> level of conclusion drawn...  Of particular concern is that the
> compromise of such software would allow the enemy to design
> countermeasures tailored for its specific fusion heuristics.  Thus
> it will be critical that human operators remain "in the loop," able
> to monitor and override system components as desired.

Sensor fusion systems thus comprise statistical algorithms,
implemented in software, that can be compromised.  This gives rise
to a threat of malicious, as well as accidental, false alerts, which
can never be eradicated, as reported in the OTA/LUA report:

> (T)o launch silo-based ... missiles before attacking Soviet
> (missiles) could destroy them ... is called launch under attack
> (LUA)...  The United States now preserves the capability to LUA as a
> matter of stated doctrine...  To guarantee the LUA capability
> against some contingencies it might be necessary to adopt
> unpalatable procedures regarding, for instance, delegation of launch
> authority.  No matter how much money and ingenuity were devoted to
> designing safeguards for the U.S. capability to launch under
> attack... it would probably never be possible to eradicate a
> lingering fear that the Soviets might find a way to sidestep them.
> Finally, despite all safeguards, there would always remain the
> possibility of error, either that missiles were launched when there
> was no attack, or that they failed to launch when the attack was
> genuine...  There are two risks of error in a basing system of
> reliance upon LUA: the risk that launch would take place when there
> was no attack, and the risk that launch would fail to take place
> when there was an attack.  Insofar as technology is concerned in the
> assessment of these risks, one can in principle make arbitrarily
> small the probability that electronic systems by themselves make
> either kind of error, though beyond a point efforts to decrease the
> chance of one error could increase the chance of the other...  The
> risk of error for an LUA system would seem highest when the human
> being's ability to make highly structured errors combines with the
> machine's limited ability to correct them.  Mistakenly initiating a
> "simulated" attack by, e.g., loading the wrong tape into a computer,
> would be an error of this type.  It is obviously not possible to set
> and enforce a bound on the probability that such an error would
> occur in a LUA system.[6]

The launch on warning character of the launch under attack label is
implicit in Figure 2, which, as noted above, treats the algorithm
for generating a computer declaration of attack from sensor inputs.
Even though entitled "A Simplified Logic Tree For Red's Assessment
Of Blue's Ability To Launch Under Attack," this Figure proves the
impossibility of verifying a seemingly sound and necessarily
computerized attack warning.  To manually check out such
tree-inference structures in a minute is out of the question, and
note that the report states this is a "somewhat oversimplified"
diagram.  Note further that on the June 1980 occasion when a single
bad computer chip generated false attack warnings, despite the clear
flag provided by registers clocking streams of the digit '2,' it
took a recurrence of the problem and some two weeks of intensive
investigation to discover the cause:

> One bit of evidence was that the displays showed anomalies in that
> there were a lot of '2's -- 20, 200, 220, 2000, 2200, etc.  It
> was a very anomalous indication since it was dominated by a series
> of 2's.  Following the event, my computer analysts worked 40
> straight hours, but could not determine the cause of the fault.
> (General Hartinger, NORAD's C-in-C, Senate Armed Services Cmt., DOD
> Appropriations FY 1982 hearings, p.4222, and Failures Of The NORAD
> Attack Warning System, House Government Operations Subcommittee
> hearing, May 1981, at 117.)

However programmed, it is clear that, in order to ensure any chance
of working in a real attack situation, sophisticated logical
computations must be factored into the sensor fusion task.  The
consequences of performing "or" operations rather than "and"
operations, or of confusing a "not not" with a "not not not," and so
forth, in a computer system performing millions of such operations
per second, are surely appreciated by anyone schooled in the
application of linguistics to the resolution of human conflict.

NOTES:

[1]
For example:

Gen. DAVIS: But certainly launch on warning could be very
destabilizing. Launch under attack, there are many interpretations
of that, but one interpretation of that is, of course, nuclear weapons
are going off in our country, and certainly a reason to retaliate,
which is launch under attack...
Sen. NUNN: Well, does that mean that you have modified your
position and that launch under attack is now acceptable?
Gen. DAVIS: Let me characterize it as a prompt response.
Now, launch under attack, certainly it must mean you are being
attacked.
Sen. NUNN: Your definition of launch under attack is, as you
have just defined it, is that we would have already sustained damage,
that the bombs would have already gone off?
Gen. DAVIS: One or many, yes.

[2]
The report MX Missile Basing, OTA, 1984, Ch.4 "Launch Under Attack,"
hereafter the OTA/LUA Report (the definitive public report on LOWCs)
makes this clear (at p.150 - see also p.156):

Speaking roughly, receipt of a launch message or Emergency Action
Message by the missile force as late as a few minutes before Soviet
(missiles) arrive would be sufficient to guarantee safe escape of
the missiles.  This brief time would be accounted for by the time
taken for the EAM to be transmitted to the missile fields, decoded,
and authenticated; the time taken to initiate the launch sequence;
the time from first missile takeoff to last; and the time needed for
the last missile to make a safe escape from the lethal effects of
the incoming (missiles).  Thus, the time available for ICBM attack
assessment and decisionmaking would be the half-hour flight time
minus this small time period.  (But) Soviet submarine-launched
(missiles) targeted at command posts and communications nodes could
arrive earlier than the ICBMs.
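
Written out as arithmetic, the timing budget described above is a
simple subtraction.  In the Python sketch below, only the roughly
half-hour ICBM flight time comes from the passage; the minute values
are placeholders assumed purely for illustration:

    ICBM_FLIGHT = 30.0      # minutes, Soviet ICBM launch to U.S. silo fields

    eam_handling = 2.0      # assumed: transmit, decode, authenticate the EAM
    launch_sequence = 2.0   # assumed: first missile takeoff to last
    safe_escape = 1.0       # assumed: last missile clears the lethal volume

    decision_window = ICBM_FLIGHT - (eam_handling + launch_sequence + safe_escape)
    print(f"left for attack assessment and decision: {decision_window} minutes")
    # SLBMs aimed at command posts could arrive well inside this window --
    # the report's caveat in its final sentence.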

[3]
See Command, Control, Communications, and Intelligence, supra, at
p.220, and Launch Under Attack To Redress Minuteman Vulnerability?
by Richard Garwin in International Security, Winter 1979, at 134.

[4]
See also Inferno: A Cautious Approach To Uncertain Inference, RAND
N-1898, 1982.

[5]
Machine-Aided Heuristic Programming: A Paradigm For Knowledge
Engineering, RAND N-1007-NSF (1979), at vi,34.

[6]
The false alerts (=threat assessment conferences held) over
the past years total:
1977:   43
1978:   70
1979:   78
1980:  149
1981:  186
1982:  218
1983:  255
1984:  153
(Source: NORAD public affairs office; data after 1984 kept secret.)


------------------------------

End of Arms-Discussion Digest
*****************************