[can.general] Star Wars analysis

havens@ubc-vision.CDN (Bill Havens) (04/03/85)

In reply to henry@utzoo:
Be careful in describing other people's analyses as hysteria.  It is quite
useful to concentrate on establishing the facts from which informed debate
may proceed.  However, you have gone beyond that goal in a personal way.
The flurry of activity on this network is not a debating exercise.  We
are concerned about a very serious change in Canadian Science and an
even more serious escalation of weapons for mass destruction.

My abstract about the dangers of relying on computers to achieve reliable
automatic strategic defense was not hysterical. It was factual.  The
information that I presented was taken from a number of recent papers
published by Alan Borning, Severo Ornstein and others in such journals
as "The Bulletin of the Atomic Scientists" and soon in the CACM.  I quoted
their findings and analyses correctly.

In particular, you dismiss their assertions that we are inexorably moving
towards "launch on warning" and the removal of human (read Presidential)
decision making about starting nuclear war.  Your reason for dismissing
these frightening facts is that you have heard no suggestion that the
US should change its public policy of Presidential command.
To the contrary, I specifically outlined testimony to Congress where these 
facts were admitted by SDI representatives.

To repeat -
the Star Wars technology is only an effective deterrent if it
can be used during the "boost phase" of the rising Russian missiles.
The 60-second decision time required NECESSITATES automatic decision making
and automatic "nuclear war fighting".  It's not a matter of current US policy.
It is dictated directly by the technology.

In the second part of your response to my argument, you miss the point about
Nuclear Power Systems.  A technology is "safe" only if we are willing to accept
the consequences of infrequent but inevitable system failures.  In an
accidental nuclear power plant "meltdown", society survives.  In an
accidental nuclear holocaust, society and possibly the planet both die!

Bill Havens.....

henry@utzoo.UUCP (Henry Spencer) (04/03/85)

> In particular, you dismiss their assertions that we are inexorably moving
> towards "launch on warning" and the removal of human (read Presidential)
> decision making about starting nuclear war.

My mistake; I am aware of the suggestions that "launch on warning" will
be increasingly attractive due to steadily-reduced decision times.  What
I'd like to know is, what does "launch on warning" of offensive weapons
have to do with automatic initiation of defensive weapons?

> the Star Wars technology is only an effective deterrent if it
> can be used during the "boost phase" of the rising Russian missiles.

The Star Wars technology is a *defence*, not a *deterrent*.  These are
two very different animals.  A defence protects against an actual attack;
a deterrent attempts to avert attacks by frightening the opposition.
(By the way, some of the suggested SDI methods do not rely on boost-phase
interception, although it is the most attractive time to do it.)

> The 60-second decision time required NECESSITATES automatic decision making
> and automatic "nuclear war fighting".

I'm willing to go along with the first half, but not the second.  I see
no reason why initiation of defensive systems, i.e. SDI, need have anything
to do with initiation of offensive systems, i.e. nuclear weapons.  They
are two quite separate issues.  In fact, it has been suggested that SDI
would substantially increase the available decision time for launching
offensive weapons, since it would interfere severely with any attempt
to quickly destroy offensive systems.

I agree with your point that a technology is "safe" only if we can live
with occasional failures, although your example of nuclear power was
singularly ill-chosen, since accidental failures are generally less
dangerous there than almost anywhere else.  (Nuclear power plants are
better protected against accidents than almost any other technology,
including many that handle dangerous chemicals or explosive fuels
in large quantities.)  Clearly, however, an accidental failure in a
*defence* system is far less dangerous than an accidental failure in
a *deterrent* system.  SDI systems do not launch nuclear missiles;
they shoot down missiles that have already been launched by someone
else.  I repeat, the worst consequence of accidental initiation of an
SDI system is shooting down a manned space launch.  This would be
regrettable, but surely we can live with the risk.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

lesperan@utai.UUCP (Yves Lesperance) (04/04/85)

In <5407@utzoo.UUCP>, Henry Spencer says:
>My mistake; I am aware of the suggestions that "launch on warning" will
>be increasingly attractive due to steadily-reduced decision times.  What
>I'd like to know is, what does "launch on warning" of offensive weapons
>have to do with automatic initiation of defensive weapons?
> ... I see no reason why initiation of defensive systems, i.e. SDI, need
>have anything to do with initiation of offensive systems, i.e. nuclear
>weapons. ...  SDI systems do not launch nuclear missiles;
>they shoot down missiles that have already been launched by someone
>else.  I repeat, the worst consequence of accidental initiation of an
>SDI system is shooting down a manned space launch.  This would be
>regrettable, but surely we can live with the risk.

This ignores the fact that offensive and defensive systems are likely
to be linked in their operation.  Moreover the opponent's perception of
threat is based on the characteristics of the combined system.

Consider the following scenario: assume that both superpowers have
SDI-type systems together with all the surveillance hardware that is
necessary to make them work. The US system goes on alert as a result of
some situation that was not anticipated in the system design; these
false alarms are said to happen on a daily basis in current systems.
The Soviet system gets reports of the US system going on alert and reacts
by doing the same. This leads the US system to go on a higher level of alert,
and so on.  At some point the US system, which must make a decision while
the incoming missiles are still in boost phase, that is, in about one minute,
decides to fire at the perceived missiles and maybe also at the Soviet
surveillance satellites.  All this may happen with next to no human 
involvement due to the short decision time.

Now the Soviets know that the SDI system is poor protection against a first
strike, but could work well against a second strike. They take the
US action as the beginning of an attack (with good evidence), and so they
fire their missiles.
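[Editor's note: the feedback loop in the scenario above can be sketched as a
toy simulation.  Everything here is hypothetical -- the alert levels, the
firing threshold, and the "react one step above the other side" rule are
illustrative assumptions, not a model of any real command system.]

```python
# Toy model of the mutual-alert escalation loop described above.
# All numbers and rules are hypothetical and purely illustrative.

def escalate(us_alert: int, soviet_alert: int,
             fire_threshold: int = 5) -> list[tuple[int, int]]:
    """Each side raises its alert level one step above the other's
    last observed level, until one side crosses the firing threshold."""
    history = [(us_alert, soviet_alert)]
    while us_alert < fire_threshold and soviet_alert < fire_threshold:
        soviet_alert = us_alert + 1   # Soviet system reacts to US alert
        us_alert = soviet_alert + 1   # US system reacts in turn
        history.append((us_alert, soviet_alert))
    return history

# A single false alarm (US alert = 1) ratchets both sides upward,
# with no human decision point anywhere in the loop.
steps = escalate(us_alert=1, soviet_alert=0)
print(steps)                       # each entry is (US level, Soviet level)
print(len(steps) - 1, "rounds to reach the firing threshold")
```

The point of the sketch is only that a positive feedback loop between two
automatic systems terminates at the firing threshold, however high that
threshold is set, unless something outside the loop intervenes.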

I view this kind of scenario as very plausible.  The recent changes in the
nuclear scene, that is, cruise missiles, short- and medium-range missiles,
and stealth technology, have all decreased decision times and increased
uncertainty; they have done nothing to increase our security.  SDI will
extend this mad trend.  Because of these changes, our destiny will be put
in the hands of ``expert'' computer systems. The Strategic Computing
Initiative document makes clear that this is why they want these AI defence
systems, although it is less candid as to where the technology will be used.
But nobody knows how to program common sense, either now or for the foreseeable
future.  So the ``expert'' systems will follow their rigid rules to their
very end, as well as ours.

Yves Lesperance

utcsri!utai!lesperan

chris@aquila.UUCP (chris) (04/04/85)

<< in reference to Yves Lesperance's rebuttal >>
In my own opinion, and agreeing with Henry's, accidental SDI initiation
is not threatening by itself.  However, Yves has raised an important concern:
that if SDI is linked to offensive systems, then the chance of accidental
war may be increased.

However, the issue appears complicated by one other factor: namely, the
development of counter-force technology (so-called because it acts against
other forces of attack, as compared to MAD's counter-value technology
that acts against cities and people). The genie is out of the bottle;
who can push it back in? Yves mentions stealth bombers, cruise missiles,
and new mid-range ballistic missiles; these are all weapons that make
a preemptive first strike possible, and bring about the dangers of
'launch-on-warning'. Even submarines are not invulnerable; it is anticipated
that satellite-based submarine detection may be possible within 10 years,
allowing nuclear depth charges to knock out all SLBMs at the start of a
first strike, before they can be used.

This is an exceedingly frightening scenario, but it is the world we live
in. Military thinking must take into account worst-case scenarios
when designing effective responses; this often leads to overkill situations.
(Remember the dreadnought arms race prior to 1914?) We can research
social interactions till the cows come home, but we will never be able
to avoid human conflict.

SDI offers the hope that defensive systems will deter the new first-strike
systems by threatening them en route; deter a first strike by protecting
silos for a possible second strike (thus removing the impetus for a preemptive
strike); and finally (just maybe) save a few million people in case the
unthinkable happens. In this way, SDI re-balances the MAD strategy, by
countering the new technologies that destabilize MAD. The status quo
can be preserved.

Beyond that, it is very nice to dream. A truly effective defence system
may in fact lead to the end of MAD, and allow a true offensive build-down.
Even if Mutual Assured Survival (MAS) never does replace MAD, however,
the SDI is necessary to keep MAD working.

	Chris Retterath		(..utzoo!dciem!aquila!chris)

henry@utzoo.UUCP (Henry Spencer) (04/05/85)

> ....  At some point the US system, which must make a decision [fast]
> decides to fire at the perceived missiles and *maybe* also at the Soviet
> surveillance satellites.  [emphasis added]

Firing on surveillance satellites is an insane thing to do, because (as
you point out) it's a very threatening move.  I would be very surprised
to see missile defence and antisatellite attack combined under the same
automatic-response system; it's too dangerous.  Furthermore, there is
no terribly good reason for it: the missile-warning satellites tend
to be in geostationary orbit, much too high for most proposed SDI
systems to attack them effectively, and hence it isn't even the same
hardware doing the job.

You have constructed a frightening scenario, all right, but it's based
on the same assumption I was attacking in the message you cited:  that
a system which *must* have super-fast and hence automatic response will
also command other, much more dangerous, systems into action.  This is
definitely a possibility which needs to be guarded against, but there
are enough false alarms in existing systems (as you point out) that it
is *most* unlikely that anything which didn't *absolutely* *have* to
have lightning response would be placed under fully automatic control.
There is no need for 60-second decisions about attacking satellites.

> ...  Because of [shortening decision times], our destiny will be put
> in the hands of ``expert'' computer systems. The Strategic Computing
> Initiative document makes clear that this is why they want these AI defence
> systems ...
> But nobody knows how to program common-sense either now or for the forseeable
> future.  So the ``expert'' systems will follow their rigid rules to their
> very end, as well as ours.

All the more reason to support a system that lengthens decision times
on the really bad weapons, by reducing the fear of sudden obliteration
that motivates worrisome ideas like "launch on warning".
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry