hogg@utcsri.UUCP (John Hogg) (06/14/85)
A few days ago I said that I'd post the papers submitted to the Liberal
Task Force on Peace, Security and World Disarmament that were presented by
members of our department. Below is the SECOND of three. (I have received
permission from the author to post this; I haven't talked to the others
yet.) He attacks SDI on the basis that it will *increase* the threat of
nuclear destruction, not *decrease* it; due to time limitations, he barely
scratches the surface, and I may bring up more points later. In the other
two papers, Kelly Gotlieb shows that the economic benefit to Canada of SDI
will be negative, while Ric Hehner suggests that, from a computer
scientist's point of view, the system has no hope of working correctly.
If you agree, I suggest that you write to the Prime Minister, with a copy
to your MP.
SUBMISSION TO THE LIBERAL TASK FORCE ON
PEACE, SECURITY AND WORLD DISARMAMENT
Andrew P. Gullen
1985 May 30
First, I would like to clarify part of our position which has
been misunderstood by some proponents of the Strategic Defense
Initiative. Much has been made of the predicted impossibility of
things which have turned out to be possible after all. Passing
over the fact that most such predictions are correct, we would
like to emphasize that we are not predicting that ballistic mis-
siles cannot be knocked out. Indeed, some techniques for this
have been successfully demonstrated, and undoubtedly others will
be. What we would claim, however, is that it is not practicable
to build a system which can deal with 5000-plus missiles, and
stop enough of them to be worth the financial cost and the in-
creased risk of nuclear war. The worst of all possible outcomes
would be a system which is only partially effective - and this is
what we predict.
While exact estimates of the cost of the SDI system vary, all
agree that it would be very expensive, both to build and to main-
tain. While virtually any price would be worth paying for freedom
from the threat of nuclear war, we will argue that not only would
the system fail to provide such protection, but that it would in-
crease the risk of the war it was supposed to protect us from.
Complex systems, especially computer systems that must in-
teract with the real world, can fail in more than one way. The fami-
liar way is failure to perform when action is needed - probably
everyone here has had a bank machine fail to operate. But anoth-
er, opposite class of failures exists: a system can take action
when none is called for. Most of us are less familiar with this
class of failure, but everyday examples are available. Consid-
er the controversy over the safety of air bags in automobiles.
The question was not whether air bags saved lives in collisions,
for the evidence clearly showed that they did. Instead, the con-
cern was whether the bags could be prevented from deploying dur-
ing normal driving, as this would itself cause an accident.
Exactly the same concerns apply to the strategic defense sys-
tem, and in this case the second class of failure is the more
serious of the two, for it risks precipitating a course of events
which would likely otherwise not happen - an accidental nuclear
war.
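The asymmetry between the two classes of failure can be put in rough
numbers. What follows is a minimal back-of-the-envelope sketch in
Python; the event count and the per-event error probability are purely
illustrative assumptions, not measurements of any real system.

    # Sketch: how a small per-event chance of acting when no action is
    # called for compounds over many ambiguous events.  All numbers
    # here are assumed for illustration only.

    def prob_spurious_action(p_per_event, events):
        """Probability of at least one spurious activation, assuming
        independent events each with false-activation probability p."""
        return 1.0 - (1.0 - p_per_event) ** events

    # Assume the system must classify 1000 ambiguous sensor events per
    # year and wrongly initiates action on one event in a million.
    print(prob_spurious_action(1e-6, 1000 * 10))   # roughly 1% per decade

Even very small per-event error rates matter when a single wrong
activation is catastrophic; that is the sense in which the second
class of failure dominates here.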
The actions undertaken by a ballistic missile defense system
would not be innocuous and purely defensive. Some proposals have
involved: nuclear explosions in space over the Soviet Union, with
the unfortunate side-effect of electromagnetic pulse; massive
rocket launches from submarines off the Soviet coast; huge swarms
of small interceptor vehicles being launched over the Soviet Un-
ion; and massive microwave irradiation of large parts of the So-
viet Union. All approaches to attacking the missiles in boost
phase will require offensive action in Soviet airspace, because
that is where the missiles will be.
Furthermore, the ballistic missile defense system would be
tied into the larger American system, and its false alerts would
add to the nearly one-per-day rate of false alerts that system
suffers at present. Even worse, the activity and false alerts of
the American system cannot be hidden from the Soviets, who must
take precautionary steps of their own, thus reinforcing the Amer-
ican alert. As Paul Bracken writes in "Command and Control of Nu-
clear Forces",
"... a threatening Soviet military action or alert can be
detected almost immediately by American warning and intel-
ligence systems and conveyed to force commanders. The
detected action may not have clear meaning, but because of
its possible consequences protective measures must be taken
against it. The action-reaction process does not necessari-
ly stop after two moves, however. It can proceed to many
moves and can, and often does, extend from sea-based forces
to air- and land-based forces because of the effect of
tight coupling. In certain political and military situa-
tions, this action-reaction process can be described as a
cat-and-mouse game of maneuvering for geographical and tac-
tical position.... The possibility exists that each side's
warning and intelligence systems could interact with the
other's in unusual or complicated ways that are unantici-
pated, to produce a mutually reinforcing alert. Unfor-
tunately, this is not a new phenomenon; it is precisely
what happened in Europe in 1914. What *is* new is the tech-
nology, and the speed with which it could happen."
The ballistic missile defense system forces a drastic reduction
in the decision times available to the two sides. All parties
agree that effective ballistic missile defense rests on effective
interception of missiles during boost phase, when the rocket
boosters may be attacked and before the number of targets expands
due to MIRVing and release of decoys. Unfortunately, boost phase
lasts at most 300 seconds, and for proposed missiles such as the
U.S. Midgetman will only last 40-50 seconds. We have already
seen reductions in decision time: from several hours for manned
bomber attack, to 30 minutes for ICBM attack, and to 5-10 minutes
given forward-based weapons such as submarine-launched missiles
and the Pershing. With SDI, we are looking at decision times
measured in seconds. This is far too fast for consideration, too
fast for consultation, and in fact too fast for human judgement.
Computer decision making has been proposed as a replacement, but
this is unsatisfactory, as Professor Hehner will show.
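To illustrate why the timeline collapses to seconds, here is a small
arithmetic sketch in Python. Only the 40-50 second boost phase of a
fast-burn booster comes from the text above; every other figure is an
assumption chosen simply to make the addition concrete.

    # Illustrative timing budget for a boost-phase intercept, in seconds.
    # Only the boost-phase duration is taken from the text; the other
    # figures are assumptions for the sake of the arithmetic.

    boost_phase   = 50   # assumed fast-burn booster burn time
    detection     = 10   # assumed time to detect and confirm the launches
    tracking      = 15   # assumed time to form tracks and assign weapons
    weapon_flyout = 15   # assumed time for the interceptor or beam to act

    decision_time = boost_phase - (detection + tracking + weapon_flyout)
    print(decision_time)   # 10 seconds left for any decision, under these assumptions

However the individual figures are shifted, the residue left for
deliberation is seconds at best, which is the point made above.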
The Soviets must react quickly to the American alert. If the
possibility exists that the American SDI system would be able to
stop a ragged, greatly weakened Soviet retaliatory attack, and if
the Americans have deployed accurate, silo-killing missiles such
as the Trident II (D-5), the Pershing II and the MX, the Soviets must
take into account that any activation of the American system may
presage a pre-emptive strike. They are then forced to take their
forces to a high-alert, launch-on-warning status, for once an at-
tack strikes, they may be powerless to retaliate.
This speedup in events is intrinsic to any forward-based
force, whether defensive, offensive or both.
Yet more problems: while missiles are many and unpredictable,
satellites are few, fragile and move in predictable orbits. They
are thus easy targets for the SDI weapons, which would serve as
effective and very fast anti-satellite weapons. With their sens-
ing, communications and ballistic missile defense satellites
under the threat of sudden destruction in a crisis, the forces of
both sides would be under more pressure to take precautionary
moves, even to the extent of pre-emption.
The SDI system would not be the first such strategic blunder.
The MIRVing of missiles in the 1970's made it possible for an
enemy to knock out a number of one's own warheads with just one
of theirs. In fact, with the MIRV ratios in use, an enemy can as-
sign two or three warheads to each silo, raising the kill proba-
bility, while reserving most of his force for ensuing blackmail
(at least this is what goes on in the minds of strategic
planners). Thus MIRVing has led directly to today's first-strike
worries; with rough parity in single-warhead missiles, one cannot
reliably knock out one's opponent's forces and have anything left
over, whereas a MIRVed force can.
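The arithmetic behind that claim can be sketched as follows; the
single-shot kill probability and the warheads-per-missile ratio are
assumed values for illustration, not the performance of any actual
deployed missile.

    # Sketch of the first-strike arithmetic described above.  p_kill
    # and the MIRV ratio are assumed values for illustration only.

    def silo_kill_probability(p_kill, warheads_assigned):
        """Probability a silo is destroyed when several warheads are
        aimed at it, assuming independent shots."""
        return 1.0 - (1.0 - p_kill) ** warheads_assigned

    for n in (1, 2, 3):
        print(n, round(silo_kill_probability(0.7, n), 3))
    # 1 0.7, 2 0.91, 3 0.973 -- and with, say, 10 warheads per missile,
    # assigning 2-3 to each enemy silo still leaves most of the force
    # in reserve.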
In conclusion: the SDI system, by wrongly taking dangerous
actions, by reducing decision times and by forcing certain
courses of action, will increase the danger of nuclear war
without sufficiently protecting us from it.
--
John Hogg
Computer Systems Research Institute, UofT
{allegra,cornell,decvax,ihnp4,linus,utzoo}!utcsri!hogg

henry@utzoo.UUCP (Henry Spencer) (06/19/85)
While I agree with many of the points raised in the paper John posted, I
must dispute one of them.

> Yet more problems: while missiles are many and unpredictable,
> satellites are few, fragile and move in predictable orbits.

Given a tiny fraction of the money that some people claim SDI "would have
to cost", we can build space-launch systems that will vastly reduce the
cost of Earth-to-orbit transport.  We can also, again with off-the-shelf
technology and relatively modest capital investments, import lunar or
asteroidal materials at lower cost than launching them from Earth.  Given
these developments, satellites can be numerous, and armored or
maneuverable.  (And a major SDI system can be far cheaper than many of its
opponents claim it "must be", since launch costs usually dominate such
estimates.)

I would also observe that the dangerously-provocative nature of the
actions needed to ready some types of SDI systems for action is an
argument against those specific types of system, not against all SDI
systems.  Including this under "why SDI is a bad thing" is misleading
advertising, to say the least.  [This does not invalidate the more general
point that chain-reaction readiness increases are dangerous.]
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
hogg@utcsri.UUCP (John Hogg) (06/19/85)
Henry Spencer disputed the claim made in Andrew Gullen's SDI paper that

> while missiles are many and unpredictable,
> satellites are few, fragile and move in predictable orbits.

He argues that much better space-launch systems will be built as a result
of SDI, and therefore, "...satellites can be numerous, and armoured or
maneuverable."

Satellites will still be undefendable, however.  No foreseeable technology
will allow them to maneuver constantly; this means that they can be sniped
at for as long as the "other side" pleases.  Missiles have to be protected
for a small number of minutes, or even seconds.  Satellites will be
comparatively few, regardless of advances in space technology.  This, by
the way, is the single good point that I can see to SDI: it will force the
US to spend more on space research, however inefficiently.

Another claim of Henry's is that "the dangerously-provocative nature of
the actions needed to ready some types of SDI systems for action is an
argument against those specific types of system, not against all SDI
systems."  True; however, SDI proponents are seriously proposing such
idiotic concepts as pop-up X-ray lasers, and until they come up with a
less chameleon-like description of their program, I'll attack whatever
they put forward.

Regardless of how effective SDI actually is, it will be "dangerously
provocative" to the extent that the Soviets will (by standard military
practice) be forced to assume that it will live up to its billing: it will
be able to knock out *most* of a full-scale strike, and *all* of a
retaliatory strike.  Thus, their threat will be totally neutralized if
they wait out a first strike, and they will be forced into
launch-on-warning.  The US warning system is clearly imperfect, to put it
politely; the USSR trusts theirs so little that, due to their current
structure, they basically *can't* launch on warning.  (Reference on
request.)  Would you like them to try for that capability?  There are
geese in the Soviet Union, too...
--
John Hogg
Computer Systems Research Institute, UofT
{allegra,cornell,decvax,ihnp4,linus,utzoo}!utcsri!hogg
jchapman@watcgl.UUCP (john chapman) (06/24/85)
. . .

> I would also observe that the dangerously-provocative nature of the
> actions needed to ready some types of SDI systems for action is an
> argument against those specific types of system, not against all SDI
> systems.  Including this under "why SDI is a bad thing" is misleading
> advertising, to say the least.  [This does not invalidate the more
> general point that chain-reaction readiness increases are dangerous.]
> --
> Henry Spencer @ U of Toronto Zoology
> {allegra,ihnp4,linus,decvax}!utzoo!henry

It seems to me that the destabilising component is one of generating a
situation where one side believes the other may be able to launch a
strike with relative impunity.  Any SDI will create this problem unless
both sides could simultaneously deploy equally effective systems *and*
believe that "their" system is as good as the other's.  Neither of these
conditions seems very likely.
jimomura@lsuc.UUCP (Jim Omura) (06/25/85)
This response isn't really about SDI. I'm starting to get worried
about the negative attitude to research which might be labelled SDI
related. At this time, I'm trying to remain fairly open minded to SDI
arguments on both sides. Generally I'm not in favour of taking part,
for a number of reasons that have been batted around here and in the
press (nothing original--sorry), but I'm concerned that many people are
going to do a remake of the 'Commie Scare' of the McCarthy era (and the
'Yellow Terror' of pre-war North America), but against high tech
research generally. It seems to me that a lot of legitimate research
may suffer because people are going to make their decisions more on the
basis of 'is this SDI or isn't it' than 'is this a good project or
isn't it?'
Is there any such thing as research which *can't* contribute to
SDI? How close are the projects of those of you who are arguing most
vigorously against it? If it has *anything* to do with computers, then
I submit that it will probably be beneficial to SDI (sure, I'm talking
about indirect benefit, but it's a lot closer than doing research on
growing better carrots).
Is this negativism I feel around here real or am I worried about
nothing?
Jim O.
--
James Omura, Barrister & Solicitor, Toronto
ihnp4!utzoo!lsuc!jimomura