ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (10/30/86)
Arms-Discussion Digest Wednesday, October 29, 1986 9:54PM
Volume 7, Issue 42
Today's Topics:
Boost-phase Star Wars
Strategy of Nuclear War and SDI
SDI assumptions (4 msgs)
"would we kill another person if we had to look him in the eye?"
----------------------------------------------------------------------
Date: Wed, 29 Oct 86 12:23:07 PST
From: Clifford Johnson <GA.CJJ@forsythe.stanford.edu>
Subject: Boost-phase Star Wars
> In the Stanford debate on SDI Pete Worden said that the boost-phase
> problem may simply be ignored: we give up trying to hit anything
> during boost phase. That leaves the post-boost (including
> deployment), mid-course, and reentry (terminal) phases.
> Most people I know who are knowledgeable on the subject agree that
> boost phase is about our only chance at the deal. The whole "SDI"
> pseudo-system is crumbling before our very eyes!
But who will tell Reagan it won't work? He's still refusing massive
arms reductions just so he can put Star Wars up quickly. Could it be
that boost-phase intercept is deemed OK if switched on in conjunction
with a first strike, because then an accident wouldn't matter?
------------------------------
Date: Wed, 29 Oct 1986 18:53 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Strategy of Nuclear War and SDI
From: Phil R. Moyer <prm at j.cc.purdue.edu>
The point of SDI is to make a nuclear war prohibitively expensive and
uncertain for the Soviets to wage.
If that is indeed the goal, then there are other ways to do it:
faster, with less technical risk, and more cheaply. A comprehensive
test ban and a missile flight test ban are two things that come to mind.
SDI was never designed to eliminate nuclear war.
The President disagrees with you on this one.
------------------------------
Date: Monday, 27 October 1986 00:04-EST
From: prairie!dan at rsch.wisc.edu
To: arms-d
Re: SDI assumptions
I don't think any software is ever completely tested, for the obvious
reasons, but a good (and smaller scale) comparison can be made to the
STS software. There were some serious bugs, one of which (the system
shut down if an attempt was made to do a main engine restart during an
abort and landing in Spain) was found during simulator training (they
ran the real software in the simulator). No one ever said, though,
that they should never build shuttle software just because it had to
be designed to deal with circumstances that would arise only in the
worst cases.
I believe the shuttle comparison is apt, precisely because I believe
that the ultimate use of the SDI technologies is in human controlled
systems, where human simulation training will go hand in hand with
exercising the software.
-- Dan
------------------------------
Date: Monday, 27 October 1986 09:50-EST
From: LIN
To: arms-d
cc: lin
Re: SDI assumptions
From: prairie!dan at rsch.wisc.edu
How did you get my posting before it got into mod.risks?
I saw it in RISKS posted to the arpanet.
I don't think any software is ever completely tested, for the obvious
reasons, but a good (and smaller scale) comparison can be made to the
STS software. There were some serious bugs, one of which (the system
shut down if an attempt was made to do a main engine restart during an
abort and landing in Spain) was found during simulator training (they
ran the real software in the simulator). No one ever said, though,
that they should never build shuttle software just because it had to
be designed to deal with circumstances that would arise only in the
worst cases.
The question is not the feasibility of the job per se, but the
feasibility of the job considering the demands that will be put on it.
Assume that SDI software could be built to be as "good" as STS
software. Would that be "good enough" for SDI? Maybe yes, maybe no.
It depends on what the purpose is.
------------------------------
Date: Monday, 27 October 1986 10:34-EST
From: prairie!dan at rsch.wisc.edu
To: arms-d
Re: SDI assumptions
How good is good enough? "Damn Good", I guess. You never trust
a system completely; the question is, is it good enough that, given
proper human supervision, it:
a) Doesn't kill someone it shouldn't.
b) Is effective enough to justify building it.
I am not convinced about SDI for the simple reason that not enough
evidence exists one way or the other about whether useful systems can
be built to do what its proponents want. That's what research is for.
When you do research, your peers presumably judge your results on a
(relatively) objective basis. Until then, comments of the "I can't
figure out how to do it, and no one has ever done it, so it's not
possible" variety are neither useful nor professional. I believe
that many computer scientists are using two standards for judging
prospective research: one for politically neutral stuff, and one
for military projects.
It's perfectly legitimate for a computer professional to say,
"No system is going to be perfect, so don't expect leakproof
protection" (that's common sense), or "We don't know how to do
something of this magnitude; there are many unanswered questions,
so don't believe it if they tell you they know it works" (that's
the truth). But to say, "It cannot be done, ever, under any circumstances"?
People don't even say that about machine intelligence (at least
publicly), and intelligence and consciousness are two mysteries
far deeper than how to build real good, real huge systems.
------------------------------
Date: Wed, 29 Oct 1986 19:01 EST
From: LIN@XX.LCS.MIT.EDU
Subject: SDI assumptions
From: prairie!dan at rsch.wisc.edu
It's perfectly legitimate for a computer professional to say,
"No system is going to be perfect, so don't expect leakproof
protection" (that's common sense), or "We don't know how to do
something of this magnitude; there are many unanswered questions,
so don't believe it if they tell you they know it works" (that's
the truth). But to say, "It cannot be done, ever, under any
circumstances"?
The argument is that you can never know about the things you didn't
anticipate, and that it is impossible to anticipate everything that
might be relevant. Is that a controversial statement?
People don't even say that about machine intelligence (at least
publicly), and intelligence and consciousness are two mysteries
far deeper than how to build real good, real huge systems.
But these go far beyond technical considerations. Besides, that
simply isn't true. Joe Weizenbaum stands out as a prime example.
------------------------------
Date: Wed, 29 Oct 86 18:02:17 est
From: yetti!geac!charles@seismo.CSS.GOV (Charles Cohen)
Re: "would we kill another person if we had to look him in the eye . . ."
One of the first acts in the Bible is a murder (of Abel, by Cain). And
there are lots of wars -- no guns, no bombs, no planes, no artillery,
just swords, spears, rocks (David/Goliath), tent-pegs (Jael or Judith,
I forget which), and whatever else was handy. Violence -- individual
and organized -- long pre-dates modern techniques of war.
Charles Cohen @ Geac
------------------------------
End of Arms-Discussion Digest
*****************************