[mod.risks] RISKS-3.83 DIGEST

RISKS@CSL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (10/22/86)

RISKS-LIST: RISKS-FORUM Digest,  Tuesday, 21 October 1986  Volume 3 : Issue 83

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Risks from Expert Articles (David Parnas, Herb Lin, Andy Freeman)
  Loss of Nuclear Submarine Scorpion (Donald W. Coley)
  Staffing Nuclear Submarines (Martin Minow)
  An SDI Debate from the Past (Ken Dymond)
  System effectiveness is non-linear (Dave Benson)
  Stealth vs Air Traffic Control (Schuster via Herb Lin)
  Missing engines & volcano alarms (Martin Ewing)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
(Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM)
  (Back issues Vol i Issue j available in CSL.SRI.COM:<RISKS>RISKS-i.j.
  Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

Date: Tue, 21 Oct 86 09:39:51 EDT
From:  parnas%qucis.BITNET@WISCVM.WISC.EDU
To:  RISKS@CSL.SRI.COM
Subject: Re: Risks from Expert Articles (RISKS-3.82)

   Andy Freeman criticizes the following by Michael L.  Scott, "Computers have
no such abilities.  They can only deal with situations they were programmed
in advance to expect."  He writes, "Dr.  Scott obviously doesn't write very
interesting programs.  :-) Operating systems, compilers, editors, mailers,
etc. all receive input that their designers/authors didn't know about
exactly."

   Scott's statement is not refuted by Freeman's.  Scott said that the
computer had to have been programmed, in advance, to deal with a situation.
Freeman said that sometimes the programmer did not expect what happened.
Scott made a statement about the computer.  Freeman's statement was about
the programmer.  Except for the anthropomorphic terms in which it is
couched, Scott's statement is obviously correct.

   It appears to me that Freeman considers a program interesting only if we
don't know what the program is supposed to do or what it does.  My
engineering education taught me that the first job of an engineer is to find
out what problem he is supposed to solve.  Then he must design a system
whose limits are well understood.  In Freeman's terminology, it is the job
of the software engineer to rid the world of interesting programs.

   Reliable compilers, editors, etc., (of which there are few) are all
designed on the basis of a definition of the class of inputs that they are
to process.  We cannot identify the actual individual inputs, but we must be
able to define the class of possible inputs if we are to talk about
trustworthiness or reliability.  In fact, to talk about reliability we need
to know, not just the set of possible inputs, but the statistical
distribution of those inputs.
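
To make the last point concrete, a back-of-the-envelope sketch in C
(hypothetical numbers, not drawn from any measured system): expected
reliability over a known input distribution is the input-weighted
probability that each class of inputs is handled correctly.

    #include <stdio.h>

    int main(void)
    {
        /* fraction of operational inputs falling in each class
           (common, rare, pathological) -- hypothetical numbers */
        double p[] = { 0.90, 0.09, 0.01 };
        /* probability the program handles that class correctly */
        double r[] = { 0.9999, 0.99, 0.50 };
        double reliability = 0.0;
        int i;

        for (i = 0; i < 3; i++)
            reliability += p[i] * r[i];   /* weight by input frequency */

        printf("expected reliability: %.4f\n", reliability);
        return 0;
    }

Change the assumed distribution and the same program yields a different
figure -- which is the point: a reliability number is meaningless without
the distribution behind it.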

Dave Parnas

------------------------------

Date: Tue, 21 Oct 1986  09:16 EDT
From: LIN@XX.LCS.MIT.EDU
To:   Andy Freeman <ANDY@SUSHI.STANFORD.EDU>
Cc:   RISKS@CSL.SRI.COM
Subject: Risks from Expert Articles

    From: Andy Freeman <ANDY at Sushi.Stanford.EDU>

    Operating systems, compilers, editors, mailers, etc. all receive input
    that their designers/authors didn't know about exactly.  

When was the last time you used a mailer, operating system, compiler,
etc. that you trusted to work *exactly* as documented on all kinds of
input?  (If you have, pls share it with the rest of us!)

    It can be argued that SDI isn't understood well enough for humans to make
    the correct decisions (assuming super-speed people), let alone for them to
    be programmed.  That's a different argument, and Dr. Scott is (presumably)
    unqualified to give an expert opinion.  His expertise does apply to the
    "can the SDI decision be programmed correctly?" question, which he
    spends just one paragraph on.

You are essentially assuming away the essence of the problem by
asserting that the specs for the programs involved are not part of the
programming problem.  You can certainly SAY that, but that's too
narrow a definition in my view.

------------------------------

Date: Tue 21 Oct 86 14:40:48-PDT
From: Andy Freeman <ANDY@Sushi.Stanford.EDU>
Subject: Re: Risks from Expert Articles
To: LIN@XX.LCS.MIT.EDU
cc: RISKS@CSL.SRI.COM

Herb Lin writes:

    When was the last time you used a mailer, operating system, compiler,
    etc. that you trusted to work *exactly* as documented on all kinds of
    input?  (If you have, pls share it with the rest of us!)

The programs I use profit me, that is, their benefits to me exceed
their costs.  The latter includes their failures (as well as mine).  A
similar metric applies to weapons in general, including SDI.  (Machine
guns jam too, but I'd rather have one than a sword in most battle
conditions.  The latter are, for the most part, obsolete, but there aren't
perfect defenses against them.)

Lin continued with:

    You are essentially assuming away the essence of the problem by
    asserting that the specs for the programs involved are not part of the
    programming problem.  You can certainly SAY that, but that's too
    narrow a definition in my view.

Sorry, I was unclear.  Specification and implementation are related,
but they aren't the same.  There are specs that can't be implemented
acceptably (as opposed to perfectly).  Some specs can't be implemented
acceptably in some technologies, but can in others.  (This can be
context dependent.)  Dr. Scott's expertise applies to the question of
whether a given spec can be programmed acceptably, not whether there
is a spec that can be implemented acceptably.  Much of the spec,
including the interesting parts of the definition of "acceptable", is
outside CS, and (presumably) Dr. Scott's expertise.

Another danger (apart from simplification to incorrectness) of expert
opinion articles is unwarranted claims of expertise.  Dr. Scott
(presumably) has no expertise in directed-energy weapons, yet he claims
that they can be used against cities and missiles in silos.  Both
proponents and opponents of SDI usually agree that it doesn't deal
with cruise missiles.  If you can kill missiles in silos and attack
cities, cruise missiles are easy.

-andy

------------------------------

Date: Tue, 21 Oct 86 12:38 EDT
From: Donald W. Coley <coley@SCRC-VALLECITO.ARPA>
Subject: Loss of Nuclear Submarine Scorpion
To: RISKS@CSL.SRI.COM

This is in response to John Allred's comments about the loss of both the
Thresher and the Scorpion (RISKS-3.82).

    Date:     Mon, 20 Oct 86 13:31:40 EDT
    From:     John Allred <jallred@labs-b.bbn.com>
    Subject:  Loss of the USS Thresher

    Thresher, according to the information I received while serving
    on submarines, was lost due to a catastrophic failure of a main
    sea water valve and/or pipe, causing the flooding of a major
    compartment.  The cause of the sinking was reported by the mother
    ship during the boat's sea trials.

Just to confirm what John stated, fracture of a hull-penetration fitting, at
the weld between the flange and the pipe, quickly flooded the engineering
spaces.  The sinking had nothing to do with the reactor.

    Scorpion, on the other hand, had no observer present.  No reason
    of loss has been given to the public.

Scorpion was in very high speed transit, westbound in one of the submarine
transit lanes, when she struck a previously uncharted undersea mountain.
The speed of the collision was "in excess of forty miles per hour" (probably
closer to sixty).  It was the very high speed that had rendered her
(acoustically) blind, unable to see the obstacle in her path.  True, no
observer was present, but a lot of people did get to hear the result.  The
"days spent searching for the lost sub" were just to avoid revealing how
accurate our tracking capabilities were.  All the Navy brass knew, within
the hour, exactly what had happened and exactly where.

------------------------------

Date: 21-Oct-1986 1457
From: minow%regent.DEC@decwrl.DEC.COM  (Martin Minow, DECtalk Engineering ML3-1/U47 223-9922)
To: risks@csl.sri.com
Subject: Staffing Nuclear Submarines

Disclaimer: a few months ago, my knuckles were rapped when I incorrectly
cited a study on airline safety.  Please be warned that I know absolutely
nothing about nuclear submarines and am using the ongoing discussion about
automatic controls for nuclear reactors (on submarines) only as a starting
place for a wider discussion.

From the discussion on Risks it seems that, while automatic controls may do
a satisfactory job of running the reactor in normal circumstances, people
will still be needed to run the reactor when the automatic controls
malfunction.

Adding automatic controls adds weight (and probably noise), making the
ship less effective. 

Adding automatic controls to a nuclear submarine's reactor frees personnel
for other tasks.  But, there isn't much else for them to do (they can hardly
chip rust on the deck), so they'll get bored and lose their "combat
readiness."

Relying on totally manual control keeps the crew alert and aware of the
action of the reactor.  It also keeps them busy. 

In other words -- and I think this is directly relevant to Risks -- there
are times when external factors make it unwise to automate a task, even
when it can easily be done. 

Martin

------------------------------

Date: 21 Oct 86 11:03:00 EDT
From: "DYMOND, KEN" <dymond@nbs-vms.ARPA>
Subject: An SDI Debate from the Past
To: "risks" <risks@csl.sri.com>

While looking something up in Martin Shooman's book on software 
engineering yesterday, I came across the following footnote (p.495):

    Alan Kaplan, the editor of Modern Data magazine, posed the question,
    "Is the ABM system capable of being practically implemented or is
    it beyond our current state-of-the-art ?"  The replies to this
    question were printed in the January and April 1970 issues of the
    magazine.  John S. Foster, director of the Office of Defense
    Research and Engineering, led the proponents, and Daniel D.
    McCracken, chairman of Computer Professionals against ABM, led
    the opposition.

It's startling that the very question that so interests us today was
put 15 or so years ago; to make it the exact question, all you have
to do is change the 3 letters of the acronym.  And this was 3 (?)
generations ago in computer hardware terms (LSI, VLSI, VHSIC ?) and
some indeterminate time in terms of software engineering (I can't
think of anything so clear-cut as circuit size to mark progress in
software).  International politics, however, seems not to have
changed much at all.

I'll try to track down those articles (Modern Data no longer exists,
having become Mini-Micro Systems in 1976), but in the meantime can anyone 
shed light on this debate from the dim past ?

(BTW, Shooman comments "Technical and political considerations were finally
separated, and diplomatic success caused an abrupt termination of the
project." p. 498)

------------------------------

Date: Mon, 20 Oct 86 16:01:06 pdt
From: Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
To: risks%csl.sri.com@RELAY.CS.NET
Subject: System effectiveness is non-linear

I agree with Anon that overall system effectiveness is non-linear:

  >If 1000 missiles strains the system to the point that it can only
  >stop 800, why would anyone think it could stop more when the number of
  >missiles and decoys is doubled, straining the system's ability to
  >identify, track, and destroy missiles at least twice as much?

The more reasonable (and conservative) assumption is that the SDI system
would stop ZERO missiles when faced with, say, 2000 targets.  Case in
point is revision n of the US Navy Aegis system -- it seems that, having
been designed to track a maximum of 17 targets, its software crashed when
presented with 18.

Any engineered artifact has design limits.  When stressed beyond those
limits, it fails.  We understand this for civil engineering artifacts,
such as bridges.  Clearly this is not well understood for software
engineering artifacts.
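
To make the failure mode concrete, a minimal sketch in C (hypothetical
code, not the Aegis software): the table has room for 17 tracks, and the
18th target does not degrade performance gracefully -- it stops the
program cold.

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_TRACKS 17                /* hard design limit */

    static double track_range[MAX_TRACKS];

    /* Record a target; beyond the limit the system fails totally. */
    static void add_target(int n, double range)
    {
        if (n >= MAX_TRACKS) {
            fprintf(stderr, "track table full -- system failure\n");
            exit(EXIT_FAILURE);          /* stops ZERO further targets */
        }
        track_range[n] = range;
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 18; i++)         /* the 18th target is fatal */
            add_target(i, 100.0 * i);
        printf("all targets tracked\n");
        return 0;
    }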

------------------------------

Date: Tuesday, 21 October 1986  10:39-EDT
From: Schuster.Pasa at Xerox.COM
To:   Arms-Discussion
Re:   Stealth vs Air Traffic Control
[Remailed to RISKS by Herb Lin]

After reading the recent ARMS-D on the Stealth subject, particularly the
interesting message from Bryan Fugate where he says that "stealth
fighters and bombers have already gone into production", and in light of
some of the recent aircraft collisions, I couldn't help but wonder: has
anyone adequately considered the air traffic control consequences of not
being able to get a radar fix on a large, rapidly moving aircraft in a
high-density air traffic area?

For that matter, what about ground-radar-assisted-landing in poor
visibility at a military base?

Sometimes you want an aircraft to present a GOOD radar target.  As I was
writing this, I thought of the answer: the stealth aircraft would have to
have a strong beacon turned on in these circumstances.  I guess it's easy
enough to recreate a good target that way.  All I can say is that the
beacon had better be working in the circumstances I described.

------------------------------

Date:     Tue, 21 Oct 86 13:41:58 PDT
From:     mse%Phobos.Caltech.Edu@DEImos.Caltech.Edu (Martin Ewing)
Subject:  Missing engines & volcano alarms
To:       risks%Phobos.Caltech.Edu@DEImos.Caltech.Edu

We visited New Zealand a few years ago and went to the major skiing area
on the North Island (the name escapes me).  It is built on the slopes of
an active volcano.  There were prominent warnings for skiers of what to
do in case of an eruption alarm.  (Head for a nearby ridge.  Don't try
to outrun the likely mud/ash slide coming down the hill.) 

How do they get the alarm?  There is an instrument hut at the lip of the
crater connected to park headquarters by a cable.  The instruments
measure some parameter(s) or other (heat, acceleration, pressure?).
When something crosses a threshold, the warning alarms on the ski slopes 
are set off automatically. 

In fact, someone admitted, what would probably happen is that the
explosion would destroy the hut and cut the cable.  Loss of signal is
probably as good a diagnostic as anything else.
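
A minimal sketch in C of that fail-safe design (hypothetical, not the
actual New Zealand installation): treat the hut's signal as a heartbeat
and sound the alarm when the heartbeat stops, so that destruction of the
hut is itself what raises the warning.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define TIMEOUT 5   /* seconds of silence before sounding the alarm */

    /* Stub standing in for the cable from the crater hut; here the
       "signal" simply disappears 10 seconds after startup. */
    static int signal_present(time_t start)
    {
        return (time(NULL) - start) < 10;
    }

    int main(void)
    {
        time_t start = time(NULL);
        time_t last_heard = start;

        for (;;) {
            if (signal_present(start))
                last_heard = time(NULL);
            if (time(NULL) - last_heard >= TIMEOUT) {
                printf("LOSS OF SIGNAL -- sound the eruption alarm\n");
                return 0;
            }
            sleep(1);                    /* poll once per second */
        }
    }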

I can imagine a display on the DC-10 instrument panel inscribed with the
outline of the aircraft.  Little red lights come on when you lose
continuity on a wire to an engine, aileron, etc. -- like what happens
when you leave your door open on a Honda Civic.  What you do with this
data is another matter. 
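
Displaying the data, at least, is the easy part -- a minimal sketch in C
(hypothetical circuit names): one status bit per monitored wire, and a
cleared bit lights the red lamp.

    #include <stdio.h>

    #define N_CIRCUITS 4

    static const char *circuit[N_CIRCUITS] =
        { "engine 1", "engine 2", "left aileron", "right aileron" };

    /* Print the panel: a set bit means continuity, a cleared bit
       means the wire has parted and the red light comes on. */
    static void show_panel(unsigned continuity)
    {
        int i;
        for (i = 0; i < N_CIRCUITS; i++)
            printf("%-14s %s\n", circuit[i],
                   (continuity & (1u << i)) ? "ok" : "RED LIGHT");
    }

    int main(void)
    {
        unsigned all_ok = (1u << N_CIRCUITS) - 1;
        show_panel(all_ok & ~(1u << 0));  /* engine 1 wire has parted */
        return 0;
    }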

------------------------------

End of RISKS-FORUM Digest
************************