[mod.risks] RISKS DIGEST 4.18

RISKS@CSL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (11/26/86)

RISKS-LIST: RISKS-FORUM Digest, Tuesday, 25 November 1986 Volume 4 : Issue 18

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  RISKS, computer-relevance, where-to-place-the-blame, etc. (PGN)
  Verification and the UK proposal (Jim Horning)
  When the going gets tough, the tough use the phone... (Jerry Leichter)
  Re: 60 minutes reporting on the Audi 5000 (Eugene Miya)
  Minireviews of Challenger article and computerized-roulette book
     (Martin Minow)
  More on the UK Software-Verification Proposal (Bill Janssen)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
(Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM)
  (Back issues Vol i Issue j available in CSL.SRI.COM:<RISKS>RISKS-i.j.  MAXj:
  Summary Contents Vol 1: RISKS-1.46; Vol 2: RISKS-2.57; Vol 3: RISKS-3.92.)

----------------------------------------------------------------------

Date: Tue 25 Nov 86 18:58:38-PST
To: RISKS@CSL.SRI.COM
From: Peter G. Neumann <Neumann@CSL.SRI.COM>
Subject: RISKS, computer-relevance, where-to-place-the-blame, etc.

This is another note on the risks of running RISKS.  We get a variety of
contributions that are not included in RISKS, on the grounds of relevance,
taste, lack of objectivity, politicization, etc.  (Once in a while I get a
flame about censorship from someone whose message is not included, but I
tend to stand by the masthead guidelines.)  I also get an occasional
complaint about my judgement regarding RISKS messages that have been
included.  So, it is time for some further comments from your moderator.

One of the most important things to me in running RISKS is that there is a
social process going on, at many levels.  First, there is an educational
function, in raising the level of awareness among many computer professionals
and students, whether naive or sophisticated, whether young or old.  Second,
there is a communications function of letting people try out their ideas in
an open public forum.  They also have an opportunity to become more
responsible communicators -- combining both of those functions.  Also, there
is the very valuable asset of the remarkably widespread RISKS community
itself -- there is always someone who has the appropriate experience on the
topic at hand.  By the way, I try not to squelch far-out thinking unless it
is clearly out of the guidelines.  This sometimes leads to unnecessary
thrashing -- although I try to minimize that with some of my [parenthetical]
interstices.

The Audi case is one in which computer relevance is not at all clear.
However, the presence of microprocessors indicates that it is worth our
while discussing the issues here.  The Audi problem is of course a total
system problem (like so many other problems).  I tend to include those cases
for which there is a reasonable connection with computer technology, but not
necessarily only those.  There are various important issues that seem worth
including anyway -- even if the computer connection is marginal.  First,
there are total systems wherein there is an important lesson for us, both
technological and human.  Second, there are total systems that are NOT AT
PRESENT COMPUTER BASED or ONLY MARGINALLY COMPUTER BASED where greater use
of the computer might have been warranted.  (Nuclear power is a borderline
case that is exacerbated by the power people saying that the technology is
too critical [or sensitive?] for computers to be used.  THEY REALLY NEED
DEPENDABLE COMPUTING TECHNOLOGY.  Besides, then THEY could blame the
computer if something went wrong! -- see second paragraph down.)

There is an issue with computer-controlled automobiles (even if the computer
is clearly "not to blame" in a given case) as to whether the increased
complexity introduced by the mere existence of the computer has escalated
the risks.
But that is somewhat more subtle -- even though I think it is RISKS related...

The issue of simplistically placing blame on the computer, or on people (or
on some mechanical or electrical part), or whatever, has been raised here
many times.  I would like all RISKS contributors to be more careful not to
seek out a single source of "guilt".

There are undoubtedly a few people in our field who are bothered by
technological guilt.  There are others who are totally oblivious to remorse
if their system were to be implicated in an otherwise avoidable death.
However, the debates over blame, guilt, and reparation are also a part of
the "total systems" view that RISKS tries to take.

I try not to interject too many comments and not to alter the intended
meaning.  However, what YOU say reflects on YOU -- although it also reflects
on me if I let something really stupid out into the great Internet.  Also,
some discussions are just not worth starting (or restarting) unless
something really new comes along -- although newer readers have not been
through the earlier process, and that is worth something.

I have an awkward choice when a constructive contribution contains a value
judgement that is somewhat off the wall.  I sometimes edit the flagrant
comments out, despite my policy of trying to maintain the author's editorial
integrity.  I thought for a while about Clive Dawson's "knowing the cause
might eventually be tracked down to some software bug sent chills down my
spine."  The same could be said for the products of other technical
professionals such as engineers and auto mechanics.  (But that statement was
a sincere expression of Clive's feelings, so it was left in.)

[Apologies for some long-windedness.  I probably have to do this every now
and then for newer readers.]  PGN

------------------------------

Date: Tue, 25 Nov 86 11:37:49 pst
From: horning@src.DEC.COM (Jim Horning)
To: RISKS FORUM    (Peter G. Neumann -- Coordinator) <RISKS@CSL.SRI.COM>
Subject: Verification and the UK proposal (RISKS 4.17)

I find myself largely in agreement with Bard Bloom's comments in
RISKS 4.17. However, it seems to me that recent discussion has
overlooked one of the most important points I saw in the UK proposal:
verification is a way of FINDING errors in programs, not a way of
absolutely ensuring that there are none. (The same is true of testing.)

Thus the kinds of questions we should be asking are:

  - How many errors (in programs AND in specifications) can be found by
  presently available proof techniques? How many errors would be avoided
  altogether by "constructing the program along with its proof"?

  - What is the cost per error detected of verification compared with
  testing? Does this ratio change as software gets larger? as the
  reliability requirements become more stringent?

  - Do verification and testing tend to discover different kinds of errors?
  (If so, that strengthens the case for using both when high reliability
  is required, and may also indicate applications for which one or the other
  is more appropriate.)

  - Can (partial) verification be applied earlier in the process of
  software development, or to different parts of the software than testing?

  - Is there a point of diminishing returns in making specifications
  more complete? more precise? more formal? of having more independent
  specifications for a program?
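
A purely illustrative aside (not part of Jim's message): the point that
verification and testing are both ways of FINDING errors, not of guaranteeing
their absence, shows up even in a toy example.  The routine below, its name,
and its single test case are invented for illustration; exhaustively checking
the stated postcondition over a small range of inputs stands in for the
attempt to discharge a proof obligation.

  # Hypothetical sketch: the same buggy routine examined two ways.

  def integer_sqrt(n):
      """Intended spec: return the largest r with r*r <= n, for n >= 0."""
      r = 0
      while (r + 1) * (r + 1) < n:     # BUG: the comparison should be <= n
          r += 1
      return r

  # Testing: a single hand-picked case can pass and so miss the error.
  assert integer_sqrt(10) == 3         # happens to be correct

  # Verification-style checking: the postcondition r*r <= n < (r+1)*(r+1)
  # must hold for ALL n.  Checking it over a small range (a stand-in for a
  # proof attempt) exposes the first failing case, n = 1.
  for n in range(50):
      r = integer_sqrt(n)
      assert r * r <= n < (r + 1) * (r + 1), "postcondition fails at n=%d" % n

The failed assertion plays the role of a proof obligation that cannot be
discharged; it points directly at the faulty loop guard.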

I would dearly love to have convincing evidence that verification wins
all round, since it would indicate that my work on formal specification
is more valuable. But, to date, I haven't seen any convincing studies,
and the arguments I can offer have been around for 10 or 15 years.
(They look plausible. Why can't we prove them?)
                                                            Jim H.

------------------------------

Date: 25 NOV 1986 14:54:31 EST
From: <LEICHTER-JERRY@YALE.ARPA>
To: risks@csl.sri.com
Subject:  When the going gets tough, the tough use the phone...
  or, Would you trust your teen-aged computer with a phone of its own?

From Monday's (24-Nov-86) New York Times:

Lyons, Ore., a town of about 875 people about 25 miles east of Salem, the
state capital, has a small budget and a big problem.  The monthly city
budget is about $3,500.  Back in October, the public library, with an annual
budget of $1,000, installed a computer that made it possible to find a
requested book at any library in the county through one telephone call to
Salem.

After the trial run, no one knew that it was necessary to unplug
the computer.  It maintained the connection and ran up a bill of $1,328
with the Peoples' Telephone Company, the cooperative that runs the Lyons
phone system.

"It leaves a problem I've got to figure out," said Mayor Jerry Welter.  "I'm
going before the phone company board to ask them to forgive the bill, and I
don't know just how we'll manage if they won't do it."

------------------------------

Date: Mon, 24 Nov 86 22:43:49 pst
From: eugene@AMES-NAS.ARPA (Eugene Miya)
To: risks@sri-csl.ARPA
Subject: Re: 60 minutes reporting on the Audi 5000

It's interesting -- the four perspectives collected on this telecast.

  1) This was a subject broadcast several months ago on ABC 20/20.  No mention
  of the microprocessor problem was made at that time, but the idle problem
  was demonstrated.

  2) The microprocessor problem took very little time in the show, yet
  generated so much discussion on RISKS (as it probably should).

  3) I recall TWO deaths in the program, not just one, and probably more.  Two
  correspondents pointed out the dead child, but the others did not mention
  the gas station attendant who was dragged underneath the car when it lurched
  backward over 200 feet.  Five different views of the same show.  (Rashomon)
  Could we expect a computer to do better?  I hope so.

--eugene miya

------------------------------

Date: Tue, 25 Nov 86 15:22:31 pst
From: minow%bolt.DEC@src.DEC.COM (25-Nov-1986 1808)
To: "RISKS@csl.sri.com"@src.DEC.COM
Subject: Minireviews of Challenger article and a computerized-roulette book

"Letter from the Space Center" by Henry S. F. Cooper in the New Yorker,
November 10, 1986, pp. 83-114.  Discusses the Challenger accident and
the way it was investigated.  New (to me) information includes some
things that were known to the engineers before the accident, but not
taken into account when the decision to fly was made.  There is also
mention of a few things "hidden" in the appendices to the Presidential
Commission's report.

Nothing specific on computers, but a lot on the *management* of
technological risks, and -- as such -- would be interesting reading for the
RISKS community.

Book: The Eudaemonic Pie, by Thomas A. Bass.  Vintage Books (paper),
Houghton-Mifflin (hardbound).  ISBN 0-394-74310-5 (paper).  Relates the
engrossing tale of a bunch of California grad students who decided that
roulette is "just" an experiment in ballistics (with a bit of mathematical
chaos theory thrown in).  Unfortunately, the adventurers were better
physicists than engineers and their computer system, built into a pair of
shoes, never worked well enough to break the bank.  They had some good
moments, though.  The physicists went on to more and better things, and have
just published an article on chaos in the current Scientific American.

Martin

------------------------------

Date: Tue, 25 Nov 86 18:09:22 CST
From: Bill Janssen <janssen@mcc.com>
To: RISKS@CSL.SRI.COM
Subject: More on the UK Software-Verification Proposal

  > Bard Bloom in RISKS 4.17:

  > 1) Are existing programming languages constructed in a way that makes
  >    valid proofs-of-correctness practical (or even possible)?  I can
  >    imagine that a thoroughly-specified language such as Ada [trademark
  >    (tm) Department of Defense] might be better suited for proofs than
  >    machine language; there's probably a whole spectrum in between.
  > 2) Is the state of the art well enough advanced to permit proofs of
  >    correctness of programs running in a highly asynchronous, real-time
  >    environment?

Drs. K. Mani Chandy and Jayadev Misra of the University of Texas at Austin
have developed a language called UNITY, which allows one to write programs
for distributed asynchronous systems, and reason about the relationship
between the program and its specification, which may allow one to prove that
the program correctly implements the spec.  (More often, one proves it does
not...)  At least one compiler for UNITY exists.
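
A purely illustrative aside (not from Bill's message, and not UNITY syntax):
the flavor of the UNITY model -- a program as a set of guarded assignments
executed nondeterministically under a fairness assumption, with safety stated
as an invariant and progress as a property to be proved -- can be sketched as
follows.  All names here are invented, and the run-time assertions merely
stand in for what UNITY lets one prove from the program text.

  # Hypothetical sketch of the UNITY idea: sort an array by fair,
  # nondeterministic execution of guarded assignments of the form
  #   "a[i] > a[i+1]  -->  a[i], a[i+1] := a[i+1], a[i]"
  import random

  a = [3, 1, 4, 1, 5, 9, 2, 6]
  spec = sorted(a)                  # specification: a sorted permutation of a

  def enabled(i):                   # guard of statement i
      return a[i] > a[i + 1]

  def assign(i):                    # assignment part of statement i
      a[i], a[i + 1] = a[i + 1], a[i]

  indices = range(len(a) - 1)

  # Execute until no statement is enabled (the UNITY "fixed point").
  while any(enabled(i) for i in indices):
      assign(random.choice([i for i in indices if enabled(i)]))
      # Safety (invariant): a remains a permutation of the original input.
      assert sorted(a) == spec

  # At the fixed point the program meets its specification.
  assert a == spec

In UNITY proper, the invariant and the progress property (the program
eventually reaches its fixed point) would be proved from the statements
themselves rather than checked at run time.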

     [Further discussion on this probably belongs in Soft-Eng@XX.MIT.EDU.
      (See also various papers by Leslie Lamport.)  But I let this one
      through because proving properties of asynchronous programs is
      generally a very high-risk area.  Many asynchronous algorithms widely
      thought to be "correct" or "safe" or whatever are not...  PGN]

------------------------------

End of RISKS-FORUM Digest
************************