[mod.politics.arms-d] Arms-Discussion Digest V7 #73

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (11/28/86)

Arms-Discussion Digest               Friday, November 28, 1986 12:12AM
Volume 7, Issue 73

Today's Topics:

                            Administrivia
                           Re: Selling SDI
                                irony
                                Ethics
                         Antimatter rockets?
                Limits on what we can do with software
                  scary thought on SDI (from RISKS)
                          Launch on warning

----------------------------------------------------------------------

Date: Wed, 26 Nov 1986  22:14 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Administrivia

==>> Someone please check this one out!

    Date: Wednesday, 26 November 1986  22:09-EST
    From: MAILER-DAEMON at harvard.HARVARD.EDU (Mail Delivery Subsystem)
    To:   <ARMS-D-Request>
    Re:   Returned mail: Service unavailable

       ----- Transcript of session follows -----
    >>> RCPT To:<arms-d-incoming@endor>
    <<< 554 arms-d-incoming@das... UnknownLocalHost
    554 <arms-d-incoming@HARVARD.HARVARD.EDU>... Service unavailable

------------------------------

Date: Wed, 26 Nov 86 23:59:33 PST
From: weemba@brahms.berkeley.edu (Wimpy Math Grad Student)
Subject: Re: Selling SDI

The comment about the "Star Wars" commercial:

>>Don't you remember Reagan's prime-time soft-shoe about the "Peace Shield"?

received two factually correct replies:

>...  It wasn't an Administration-commissioned piece, though they certainly
>did not go out of their way to disavow it.	...	    [Herb Lin]

>...  Don't blame Reagan for Graham's antics.  I suspect that SDIO quietly
>wishes Graham would shut up.			...	    [Henry Spencer]

But these replies miss the point.

What is a congressman supposed to do when his voting populace believes
all this "Peace Shield" nonsense?  Try to engage in intelligent discussion
with the masses?  Try to explain patiently that he is not soft on commies?

If Reagan is unwilling to come out and announce that SDI is not meant to be
thought of as this "Peace Shield", then he is feeding off their propaganda.

ucbvax!brahms!weemba	Matthew	P Wiener/UCB Math Dept/Berkeley CA 94720

------------------------------

Date: Thursday, 27 November 1986  01:18-EST
From: (Anonymous)
To:   arms-d-request
Re:   irony 

[I received this contribution from a reader who wishes to remain
anonymous.]

  I find it ironic, in light of the recent comments that expert opinion
  is often based on classified knowledge, that my comments on [*******]
  are exactly in the same boat.  I don't know if it's obvious yet from
  my submissions, but I consult regularly for [***].  No, I do not have
  special knowledge about [*** ***], but I find I censor myself when I
  want to give plausible scenarios regarding [*** *** *** ***].  It's
  frustrating.  They seem so obvious.

  If you want to include this [**time period**] from now, you may do so
  if you censor all my bracketed expressions.

------------------------------

Date: Thu, 27 Nov 86 11:45:23 EST
From: campbell%maynard.UUCP@harvard.HARVARD.EDU
Subject: Ethics
Reply-To: campbell%maynard.UUCP@harvard.HARVARD.EDU (Larry Campbell)

>From: anderson at unix.macc.wisc.edu (Jess Anderson)
>
>In article <2862@burdvax.UUCP>, blenko@burdvax.UUCP (Tom Blenko) writes:
>| Why doesn't Weizenbaum do some research and talk about it?  Why is
>| Waterloo inviting him to talk on anything other than his research
>| results? No reply necessary, but doesn't the fact that technically-
>| oriented audiences are willing to spend their time listening to this
>| sort of amateur preaching itself suggest what their limitations are
>| with regard to difficult ethical questions?
>Even as a preacher, Weizenbaum is hardly an amateur! Do be fair. On
>your last point, I would claim the evidence shows just the opposite
>of what you claim, namely that the fact that technically-oriented
>audiences are willing to spend their time listening to intelligent
>opinions shows that they are more qualified than some people think
>to consider difficult ethical questions.

What is all this bunk about people being "qualified" to make ethical
decisions?  *Everyone* is qualified to make ethical and moral
decisions, just as, in a democracy, everyone is qualified to make
political decisions.  Moreover, all decisions are fundamentally moral
decisions.  You may argue about how an individual makes decisions --
some people possess carefully thought out ethical systems, while
others just do whatever the Pope says -- but to argue that some are
more qualified than others to make moral decisions is to argue that
the "less qualified" people have no moral responsibility.  This is a
terribly dangerous and repugnant idea.

This, in fact, is one of the root causes of most twentieth century
evil.  Millions of people -- in Nazi Germany, the Soviet Union, and
yes here in the "land of the free" (intense irony) -- have been all
too willing to delegate their moral judgement to others.  The result
has been two world wars, a holocaust, and a deadly and corrosive arms
race.  It's all too easy to say, "Hey, these are decisions I'm not
qualified to make.  I'll just do whatever {Hitler, Stalin, Reagan,
Weinberger} say, since they obviously understand it all better than I
do."  The results speak for themselves.  -- Larry Campbell MCI:
LCAMPBELL The Boston Software Works, Inc.  UUCP:
{alliant,wjh12}!maynard!campbell 120 Fulton Street, Boston MA 02109
ARPA: campbell%maynard.uucp@harvisr.harvard.edu (617) 367-6846
DOMAINIZED ADDRESS (for the adventurous): campbell@maynard.BSW.COM

------------------------------

Date: Thu, 27 Nov 86 09:40:15 PST
From: weemba@brahms.berkeley.edu (Wimpy Math Grad Student)
Subject: Antimatter rockets?

>								   If you
>look at some of the work Robert Forward has done in recent years, it becomes
>clear that we are probably much closer to antimatter rockets than most people
>think.  The USAF has study contracts out already on antimatter for in-space
>propulsion;					[Henry Spencer]

Is there anything to this work of Forward?  He's considered something of a
crackpot in the physics community--if I remember correctly I saw a passing
reference to this in a recent "Discover" magazine.

ucbvax!brahms!weemba	Matthew	P Wiener/UCB Math Dept/Berkeley CA 94720

------------------------------

Date: Wed, 26 Nov 86 11:59:49 pst
From: Dave Benson <benson%wsu.csnet@RELAY.CS.NET>
Subject: "Limits on what we can do with software"

Report on speech by Professor David Parnas at the 4th Pacific
Northwest Software Quality Conference, Portland, Oregon, 10 Nov 86.

(This conference is held annually, usually in Portland.  I recommend
it to those interested in the state of the art of software quality
assurance.  To receive announcements of future conferences, write to
	Lawrence & Craig
	320 SW Stark, Rm 411
	Portland, OR 97204
)

While I took some notes, I do not claim that this report is in any
sense complete.  I hope it is reasonably accurate.

	My compliments to Lawrence & Craig for providing gratis
xerographic copies of Professor Parnas' transparencies.

	The title on the transparency copy is "When can Software be
trustworthy".  Professor Parnas began by asking the question, How can
we measure the quality of software?  He stated that we must assume the
existence of a specification -- but that we do not always have one.
Correctness is a claim that the software always meets the
specification.  But this is uninteresting in the sense that no large
software is ever correct.  More interesting is reliability: What is
the probability of correct behavior?  But reliability is not the whole
story -- trustworthiness is defined as the likelihood that the
probability of a catastrophic flaw is acceptably low.  Trustworthiness
is the interesting measure for software which might cause a
catastrophe: reactor control, piloting an airplane, ...

	Correctness is unneeded, nice to reach for, but hard to get.
To a perfectionist, all things are equally important.  Reliability is
an adequate measure when all errors are considered equivalent, when
there are no unacceptable failures, when the operating conditions are
predictable, [dlp used a frequentist notion of probability here, I
believe -- dbb] and when the concern is inconvenience.
Trustworthiness is the appropriate measure when one can identify
unacceptable failures, when trust is vital to meeting the
requirements, and when there may be antagonists -- in the guise of
users or opponents.

	We often accept systems that are unreliable.  We do not use
systems that we cannot trust.

	What are the limits of Software Engineers?  Parnas noted that
software limits are more ephemeral than hardware limits, that there
are no physical limits.  Computability limits (undecidability,
computational complexity) are rarely relevant to the software
engineer.  The hardware limits are well understood but expanding.  The
difficult issue is human limits, the limitations on software engineers.

	Why is Software so hard?  We validate our engineering products
using mathematics and testing.  Both require continuity, or a small
number of states, or repetitive (i.e., regular) structures as in a
memory chip which repeats the same structure over and over.  These
allow for compact representation of the mathematical functions.
	In engineering school, Parnas learned that technicians (now
	called hackers) just experimented with the product's state space.
	Professional engineers with professional responsibility attempt
	to find regularities in the product state space.

If the size and irregularity of the state space is large, neither
mathematics nor testing will help.  The size of software is no measure
of the difficulty of the problem -- only reducing the number of states
or introducing regularity will help.  Software has little exploitable
regularity.  [I agree with the essence of this argument, although not
the details.  Continuity has little to do with the success of
traditional engineering mathematics.  Having few irregularities in the
state space does.  Program size is, in my opinion, highly correlated
with the irregularity of the state space that the software traverses.
-- dbb]

	Brooks' Law of Prototyping -- Plan to throw one away, you will
have to.  According to Parnas, Brooks says this because programs that
work on small scale models in unreal conditions do not work in large
scale situations with real conditions.  A prototype is a full scale
working model -- one you can use.  The Modified Law of Prototyping:
Design for change and count on changing as you gain experience.
Parnas' Corollary to either version: If the first real use has to be
successful, you can't do it.

	Why is mathematical verification rarely practical?  Lack of
specifications, lack of continuity, inability to simplify expressions.
The exceptions that prove the rule: Small or highly regular state
spaces, simple mathematical definition of function.

	When can we evaluate quality by testing Software?
Correctness: "Testing can show the presence of bugs but not their
absence" -- E. W. Dijkstra.  But this is theoretically false, as
computers are just finite state machines and could in principle be
exhaustively tested; so it is really true -- since saying "in theory"
means it really isn't so.

	Reliability: We must know the operating conditions.  Testing
time inversely proportional to acceptable failure rate.  Testing time
independent of size of state space [I didn't follow this -- dbb].
Testing times are practical.
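
(To make this concrete -- an illustration of my own, not from the
talk: if operational testing is modeled as a sequence of independent
random trials, the number of failure-free tests needed to claim a
failure rate below p with confidence C is about ln(1-C)/ln(1-p),
which grows as p shrinks but does not depend on the size of the state
space.  A small Python sketch; the function name is purely
illustrative:

	import math

	def tests_needed(failure_rate, confidence):
	    # Failure-free random tests required so that, were the true
	    # per-trial failure probability at least failure_rate, a
	    # failure would have appeared with probability >= confidence.
	    # Assumes independent, operationally representative trials.
	    return math.ceil(math.log(1 - confidence) / math.log(1 - failure_rate))

	print(tests_needed(1e-3, 0.99))   # ~4600 tests
	print(tests_needed(1e-6, 0.99))   # ~4.6 million tests

Modest failure rates need modest testing time, whatever the state
space -- hence "testing times are practical.")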

	Trustworthiness: The testing time is inversely proportional
to the acceptable number of catastrophic states.  The testing time is
proportional to the total number of states.  The testing time is
proportional to the log of the confidence probability.  The testing
times are rarely practical.
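
(Again my own illustration, not from the talk: in a random-sampling
model with N total states of which at most k may be catastrophic, the
number of clean tests needed for confidence C is roughly
(N/k) * ln(1/(1-C)) -- proportional to N, inversely proportional to
k, and proportional to the log of the confidence term.  Function name
again illustrative:

	import math

	def trust_tests_needed(total_states, max_bad_states, confidence):
	    # Clean random tests needed before claiming, with the given
	    # confidence, that fewer than max_bad_states of the
	    # total_states are catastrophic.
	    p = max_bad_states / total_states    # worst-case bad fraction
	    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

	print(trust_tests_needed(10**9, 10, 0.99))   # ~4.6 * 10**8 tests

Even a billion states -- tiny by software standards -- already puts
the testing time out of reach.)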

	Can "new" testing methods help?  Path testing makes an
unfounded distinction between data and control portions of state -- in
software there is really no distinction.  Mutation testing assumes
that we can measure "closeness" -- but this tends to only find "gross"
errors.  Component testing test components, not systems.  The
statistical limitations remain.  In the absense of exploitable
regularities, we must use statistical testing.

	Parnas mentioned an example of software in which 10% of the
code has never been executed, and this program controls the MX
missile.

	Can simulation replace testing?  Simulation is testing, not a
substitute for testing.  The statistical limitations remain, and,
worse, oversights in the program are matched by oversights in the
model.  Simulation often becomes an independent project.  For all
these reasons the inevitable differences between the model and the
real world can be exploited by antagonists.  In summary, simulation is
a useful tool for inexpensive early testing, but "the behavior of any
large dynamic network cannot be fully predicted analytically or by
simulations." (Eastport report)

	Is the nature of the failure important?  An active failure is
something the system might do which the specifications forbid.  We can,
with discipline, full information and testing, prevent specific active
catastrophes.  A passive failure is when the system fails to perform.
These are the hard problems.

	Can we solve the problems by putting more into hardware?  The
problem is the size and irregularity of the state space.  The problem
is independent of the way the mechanism is represented.  Efficient
algorithms are not necessarily complex.  Most complexity comes from
the problem state space.  The software/hardware tradeoff issues are
orthogonal to the questions of Limits.

	When can decentralization and redundancy help?  Distribution
can reduce communication costs.  Some distributed systems provide
redundancy.  Distribution adds problems -- we have new problems as
well as all the old ones.

	Can we apply hardware fault tolerance techniques to software?
For hardware, these techniques work because: component failures [can
be] assumed to be independent; we do not expect to tolerate major
design flaws; the alphabet is small, {0,1}.  For software: experiments
and experience show that failures of programmers working independently
are not independent.  In contrast to hardware, we are only concerned
with design flaws.  The set of possible results is large.
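
(A numerical aside of my own on why that independence assumption
matters: with three versions voted two-out-of-three, independent
failure probabilities of 1% per version give a voted failure rate
near 0.03%; add even a small probability that all three stumble on
the same hard input and the common-mode term dominates.  A sketch,
names illustrative:

	def majority_vote_failure(p_each, p_common=0.0):
	    # Probability that a 2-out-of-3 vote fails: a common-mode
	    # failure (all versions wrong on the same input), or else at
	    # least two of three independent failures.
	    independent_part = 3 * p_each**2 * (1 - p_each) + p_each**3
	    return p_common + (1 - p_common) * independent_part

	print(majority_vote_failure(0.01))          # ~0.0003 if truly independent
	print(majority_vote_failure(0.01, 0.005))   # ~0.0053: correlation dominates

This is the pattern the experiments with independently written
programs reportedly show.)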

	What about Artificial Intelligence?  AI-1 is defined as
solving hard problems.  Study the problem, not the problem solver. No
magic techniques, just good solid program design.  AI-2 is defined as
heuristic or rule based programming/expert systems.  One studies the
problem solver, not the problem, uses ad hoc (cut-and-try)
programming.  This provides little basis for confidence.

	What about Automatic Programming?  Since 1948 this has been a
euphemism for programming in a new language. [Hooray for Autocoder on
the 650 -- dbb]

	What about new programming languages?  No magic, they help if
they are simple and well understood. [I vaguely recall a dig at Ada
here -- dbb] New programming languages provide no breakthroughs.  The
fault lies not in our tools but in ourselves and in the nature of our
product.

	What about new software engineering techniques?  (1) Precise
requirements documents; abstraction/information hiding; formal
specifications.  The use of these techniques requires previous
experience with similar systems.  (2) Cooperating Sequential
Processes: Requires detailed information for real-time scheduling.
(3) Structured programming: reduces, but does not eliminate errors.
All of these help, but not enough to get the software right the first
time.

	Is there a meaningful concept of tolerance for software?  The
engineering notion of tolerance depends on an assumption of
continuity.  Statistical measures of program quality are limited in
their application to situations where individual failures are not
important.

	The Requirements of a Strategic Defense System: In March of
1983, U.S. President Reagan said, "I call upon the scientific
community, who gave us nuclear weapons, to turn their great talents to
the cause of mankind and world peace; to give us the means of
rendering these nuclear weapons impotent and obsolete."  To satisfy
this request, the software must perform the following functions:

	o  Rapid and reliable warning of attack, detecting an attack and
	   defining its location, determining likely target areas for
	   confident initiation of the battle, determining track data.

	o  Efficient intercept of the booster and postboost vehicle,
	   allocation of individual targets to weapons, weapons control,
	   birth-to-death tracking of targets.

	o  Efficient discrimination, in the postboost and midcourse phase,
	   of decoys from warheads, continued tracking using data from
	   earlier phases.

	o  Interception in midcourse and terminal phase, allocation of
	   targets to other sensors and weapons.

	o  Kill assessment, evaluation of the effectiveness of previous phases,
	   to guide future resource allocations.

It must be constructed in such a way that all the parties are
confident that it will perform these functions when called upon to do
so.

	Why is it important that the software can never be trusted?
"We" will make decisions, such as future weapons purchases, as if SDI
was not there.  "They" will make decisions as if it might work.

	How good must SDI be?  It need not be perfect.  It must be
trustworthy.  We need to know its limits.  We need confidence that it
will not fail catastrophically. --But no significantly-sized software
has worked the first time.

	What makes the SDI software much more difficult than other
projects?  Lack of reliable information on target and decoy
characteristics.  Distributed computing with unreliable nodes and
unreliable channels.  Distributed computing with hard real-time
deadlines.  Physical distribution of redundant real-time data.
Hardware failures will not be statistically independent.  Redundancy
is unusually expensive.  Information essential for real-time
scheduling will not be reliable.  Very limited opportunities for
realistic testing.  No opportunity to repair software during use.

	[Somewhere in this, Parnas mentioned the Vietnam War
example of the fire control system which couldn't be debugged, so
contractor personnel were brought to Vietnam -- dbb]

Expected to be the largest real-time system ever attempted; frequent
changes are anticipated.

	The 90% distraction.  The SDIO says: If each stage is 90%
effective, less than 1% of the missiles get through.  Isn't that
better than 100% getting through?  What's wrong with this reasoning?
1. The 90% figure is taken out of thin air.  2. The mathematics
assumes statistical independence of the stages.  3. The argument
assumes that the USSR will not react by building more missiles.
4. Statistical measures are used for describing random processes.
Battles of skill are not accurately described that way.
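
(The arithmetic being appealed to, spelled out as an illustration:
with k statistically independent layers, each stopping 90% of what
reaches it, the leakage is 0.1^k -- 0.1% for three layers, 0.01% for
four.  Relax either the 90% figure or the independence assumption and
the number moves quickly; names in the sketch are illustrative:

	def leakage(per_layer_effectiveness, layers):
	    # Fraction of attacking missiles that get through, assuming
	    # each layer independently stops the stated fraction of
	    # whatever reaches it.
	    return (1 - per_layer_effectiveness) ** layers

	print(leakage(0.90, 4))   # 0.0001 -> 0.01%, if the assumptions hold
	print(leakage(0.70, 4))   # 0.0081 -> 0.81%
	print(leakage(0.50, 4))   # 0.0625 -> 6.25%

And none of this touches objections 3 and 4.)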

	More published counterarguments:

	o  "Redundancies, tiers, and workarounds could allow the system
	   to be fault tolerant."

	   The probability of an error would have to be lower than in the
	   normal code.  The errors would have to be independent.

	o  "System can be tested during peacetime by tracking satellite
	   launches and tests."

	   Such tests do not test the system under attack conditions. The
	   most critical functions would not be tested.

	o  "System can be tested in pieces."

	   Many subtle software errors not revealed until integration testing.

	o  "There could be 100,000 errors and it would still work."

	   Sure, if they are the right errors.

	o  "Earlier efforts such as SAGE and SAFEGUARD worked."

	   These were never used.  Builders had grave doubts about them.
	   These systems are not comparable in difficulty with SDI.

	o  "Three independent programs could be coordinated for reliablity."
	   Disproven is a recent experiment.  Hardware analaogies do not
	   apply here.  Most likely leads to paralysis with 3 different
	   answers.

	The "loose coordination" distraction.  Many SDI supporters
admit tightly coordinated battle stations won't work.  They propose
reducing the communications between the battle stations.  They claim
that this reduces an impossible large system to a set of possible
ones.  They argue that independence will allow results from testing
one battle station to be used to infer results for the whole set.  The
assumptions behind this argument: Battle stations need no data from
other satellites.  The "Sargent York" (DIVAD) is no problem.  The only
interaction between the stations is by explicit communication.  A
collection of communicating systems is not a system.

	The Truth About the Fletcher Report.  Did they call for a
highly centralized system?  No, they explicitly decided against systems
with critical nodes.  They rejected centralized control and
hierarchical structure.  They spoke at length about independence and
degraded states.  They did call for distributed data to all nodes so
that each node could function on its own, because some data from other
stations would be needed.  But they knew that this was an ideal, an
overdesign, and said so.

	All of these assumptions are false!

	Satellites close enough to destroy a missile did not see the
boost or separation.  Data from other stations is needed for accurate
tracking and discrimination in a noisy environment with sophisticated
countermeasures.

	Each battle station has to perform all of the functions of the
whole system.  Each is a software system as complex as DIVAD and all
the original arguments apply to it.  It is unlikely to work; impossible
to trust.

	Battle stations interact through weapons and sensors and
through their shared targets.

	A collection of communicating programs is mathematically
equivalent to a single program and subject to all of the problems
originally noted.

	THERE IS STILL NO WAY TO KNOW THE EFFECTIVENESS OF THE SYSTEM.
	THERE IS STILL NO REASON TO TRUST THE SYSTEM.
	POTENTIAL EFFECTIVENESS AGAINST SOPHISTICATED ATTACK IS REDUCED.

	What Can We Do?  1. Ask the [aforementioned] questions.  2. If
you don't like the answers, insist that the problem be changed -- Is
there a non-software solution?  Can we solve the real problem some
other way?  Can we redefine the problem state space?  Reduce the
number of states, introduce exploitable regularity, use hardware to
limit it?  Can we insist on more data before beginning?

	There are lots of things we can do with software, both
civilian and military.  Please don't waste time trying to do things
that can't be done.

[dbb == David B. Benson.]

------------------------------

Date: Thu, 27 Nov 1986  18:53 EST
From: LIN@XX.LCS.MIT.EDU
Subject: scary thought on SDI (from RISKS)


> 8) Military systems such as the SDI control software would appear to belong
>    to the "disaster-level" classification... will they be subject to this
>    level of verification and legal responsibility, or will they be exempted
>    under national-security laws?  [Of course, if an SDI system fails,
>    I don't suppose that filing lawsuits against the programmer(s) is going
>    to be at the top of anybody's priority list...]

From: Bard Bloom <bard at THEORY.LCS.MIT.EDU>

That's a terrifying thought: don't verify Star Wars, it's too secret to have
the code so exposed!  

------------------------------

Date: Wednesday, 26 November 1986  13:01-EST
From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>
To:   LIN
Re:   Launch on warning 

REPLY TO 11/25/86 18:35 FROM LIN@XX.LCS.MIT.EDU: Launch on warning

>       We have the subs because LOW is (far) less than perfect, but we
>       nevertheless have a LOW policy because the Air Force wants MX and
>       competes with the Navy.
>
>   Please specify who you mean by "we" as in "we" have a de facto
>   policy of LOW.

In the above, we = the United States; but the de facto control is
exercised by the Air Force.

>       Re your first point, the Air Force indeed
>       cites LOW (as of about 1985) as the primary reason their big
>       missiles are survivable and relatively cheap.
>
>   True.  But the Air Force does not set national policy.

The USAF *runs* national policy!  Ergo, the USAF de facto sets it.
They gave us an immobile mobile missile.  They're giving us
launch on warning, the real thing.  And historically the USAF has
controlled such entities as the Nuclear Targeting Policy
subcommittee which subtasked the selection of the LOW target sets.

>       I say that the pre-decision to *attempt*
>       LOW would by any definition amount to having a LOW policy.
>
>   You are using the term "LOW policy" in a very slippery way.  It
>   can mean two things.  It can mean (1) "policy on the possibility
>   of an LOW" or (2) "policy that we will in fact execute an LOW".
>   When you use the phrase "LOW policy", I sometimes hear you
>   saying #2, but when pressed you revert to #1.

In my lexicon, (1)=all LOWCs; (2)=empty set.

It's crucial to realize that there's *never* more than a possibility
of performing a LOW.  Experts acknowledge it simply can't be a
guaranteed response.  You see, I'm not inconsistent, it's just that
there are more variables than meet the eye.  The crucial point is
whether it's predecided that a LOW will be *attempted*.  Whether the
attempt is successful is a later fact.  The *possibility* factor
is due to (a) no predecision to attempt LOW, and (b) technological
limitations.

>       Surely
>       you don't say "we don't have a policy if there's a chance the attempt
>       to do a LOW would fail" (which is always the case)?
>
>   In the interests of clarification, I would rephrase it to say
>   that "But being set to do something does NOT mean that you will
>   do it, or that you would even TRY to do it.  That's what makes
>   it NOT policy."

OK, do we agree that any set of procedures that is preset to *attempt*
LOW in some circumstances is fairly called having a policy of
launch on warning?

>   Unreliable sensors is the only basis for your case against LOW,
>   at least as you have presented here in the Digest.

True, that's the thrust of my case.  But the environmental
catastrophe is part of the damages aspect of the allegations.

>   Indeed, your problem is that under current law, it is not, and you want to
>   change the law (by judicial review) so that it is.

Only one of my 8 causes of action is first-use oriented.  It's not
so much that the Atomic Energy Acts need "changing" as that they need
to be made more explicit re first-use procedures.  That is, it still
would be the Pres. who decides, operationally, that nukes should be
first-used, but only after a special OK from Congress.

The other causes of action merely seek to apply old law to the new
peril.

>   2) I want to make it clear that I am NOT advocating a
>   declaratory policy of LOW.  I do want to retain the LOW option.

Would you support a peacetime 30-minute timelock on MX/Minuteman?

------------------------------

End of Arms-Discussion Digest
*****************************