[comp.risks] RISKS DIGEST 7.38

RISKS@KL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (08/23/88)

RISKS-LIST: RISKS-FORUM Digest   Monday 22 August 1988   Volume 7 : Issue 38

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  British vs American safety rules (Henry Spencer)
  Another boundary case bug (Tom Lane)
  Retired couple jolted by $5 million electric bill (David Sherman)
  Hotel could get soaked in lawsuit? (Don Chiasson)
  RISKS contributions (PGN)
  Risks of CAD programs (Alan Kaminsky)
  Can current CAD/simulation methods handle long-term fatigue analysis?
    (Henry Spencer)
  Vincennes and Cascaded Inference (Carl Feehrer)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, and nonrepetitious.  Diversity is welcome.
CONTRIBUTIONS to RISKS@CSL.SRI.COM, with relevant, substantive "Subject:" line
(otherwise they may be ignored).  REQUESTS to RISKS-Request@CSL.SRI.COM.
FOR VOL i ISSUE j / ftp kl.sri.com / login anonymous (ANY NONNULL PASSWORD) /
  get stripe:<risks>risks-i.j ... (OR TRY cd stripe:<risks> / get risks-i.j ...
  Volume summaries in (i, max j) = (1,46),(2,57),(3,92),(4,97),(5,85),(6,95).

----------------------------------------------------------------------

Date: Fri, 19 Aug 88 23:51:36 EDT
From: attcan!utzoo!henry@uunet.UU.NET
Subject: British vs American safety rules

> ... He added that he thought British law and tradition were more
> protective of people and sensitive to safety concerns than in the USA.  For
> example, MOD regulations explicitly prohibit any cost saving that might
> increase hazard to life -- you are not allowed to trade lives off against
> money.

Britons should not consider this reassuring; it is, in fact, alarming.
This rule was obviously written by a politician (possibly not an elected
one or a recognized one, mind you...), not by an experienced engineer.
It is *always* necessary to trade lives off against money, because there
is no limit to the money that can be spent adding more 9's on the end of
99.9999% reliability.  What this rule is doing is guaranteeing that the
risks and tradeoffs will not be discussed openly!
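
The "adding more 9's" point can be made with a line of arithmetic (a sketch
only; the cost side is deliberately left out because nobody knows it, which
is rather the point): each extra nine removes a tenth as much absolute risk
as the one before it, while there is no reason to expect the effort needed
to obtain it to shrink correspondingly.

    # Illustrative arithmetic only; the cost of each additional nine is
    # omitted because it is unknown -- which is the point of the argument.
    for nines in range(3, 7):
        p_before = 10.0 ** -nines            # e.g. 3 nines -> 0.001 failure prob.
        p_after  = 10.0 ** -(nines + 1)      # one more nine
        print(f"{nines} -> {nines + 1} nines: absolute risk removed "
              f"{p_before - p_after:.0e}")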

Consider what is likely to happen.  Manufacturers, under pressure to keep
costs under control, will quietly and privately evaluate the tradeoffs.
Since they are forbidden to introduce changes that increase hazards, yet
may need to make changes as the customer (the government) shifts its
priorities around, the obvious thing to do is to start out with the
cheapest and *most hazardous* version.  If this can be gotten past the
inspectors somehow -- nobody should be surprised if manufacturers are
less than completely open about hazard analyses! -- then with luck, all
further changes that affect safety can be in the direction of improved
safety.  At higher cost, of course.  Assuming that it can be funded in
that year's budget, of course.

Is this rule really serving its intended purpose?

Henry Spencer @ U of Toronto Zoology  

------------------------------

Date: Sun, 21 Aug 88 16:19:35 EDT
From: Tom.Lane@ZOG.CS.CMU.EDU
Subject: Another boundary case bug

Just in case you thought this kind of thing no longer happens in 1988...

The Pittsburgh city water authority just switched over to a new billing
system.  My first new-format bill reads as follows:

	Previous reading: 988  (thousand gallons)
	Current reading:    2
	Amount used:        0

Sure enough, all customers whose meters "wrapped around" during the past
quarter received minimum bills.  "We didn't catch it until after the first
batch of bills went out," according to the customer assistance person I
spoke to.  Those customers who don't call in to ask about it will not be
billed for the difference until next quarter; the difference in my case was
only about $20, but I suspect that the authority may lose a good bit in
interest overall.

I find it interesting that the bill wasn't for -986K gallons.  Perhaps the
programmer actually thought about the case current reading < old reading,
and concluded it would always represent a reading error; or maybe it's just
a byproduct of some data format declaration.
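
For what it's worth, a minimal sketch of the fix, assuming a meter that
counts thousand-gallon units from 0 to 999 and then wraps (the real billing
system's data formats are unknown to me):

    # Sketch only: assumes a meter that counts 0..999 (thousand gallons)
    # and rolls over to 0 after 999.
    METER_MODULUS = 1000

    def usage(previous, current):
        """Thousand-gallons used between two readings, handling wraparound."""
        return (current - previous) % METER_MODULUS

    print(usage(988, 2))   # -> 14, not 0 and not -986

A very large result from this formula (current just below previous) would
still deserve a manual check, which may well be the "reading error" case the
programmer had in mind.
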
                              				tom lane

------------------------------

Date: Sat, 20 Aug 88 00:00:07 EDT
From: attcan!lsuc!dave@uunet.UU.NET     <David Sherman>
Subject: Retired couple jolted by $5 million electric bill

TAMPA (AP) - A retired couple got a jolt from their July electric bill --
$5,062,599.57 U.S.  An offer by Tampa Electric Co. for them to pay "budget"
monthly instalments of $62,582.27 didn't help.  Company officials apologized
to Jim and Winnie Schoelkopf after the mistake was blamed on a computer
operator.  The correct bill was for $146.76.  
[from the Toronto Star, 29 July 1988]

------------------------------

Date: Mon, 22 Aug 88 14:03:27 ADT
From:  Don Chiasson <G.CHIASSON@DREA-XX.ARPA>
Subject: Hotel could get soaked in lawsuit?

                 Faulty Bath Shower Blamed for Air Crash
                 From Canadian Aviation, Aug 88, p. 8

  A U.S. pilot has sued the Marriott hotel chain for $7 million, claiming
  that he was struck in the head by a faulty shower head, causing a flashback
  nine months later that led to the crash of his helicopter.  Joseph Mendes
  claims that Marriott was negligent because he was injured when struck by a
  pulsating shower head that came loose and fell on him in 1987 when he was
  staying at the Marriott hotel in Lexington, KY.  Mendes says that he "has
  suffered a psychic [sic] trauma known as and referred to as post-traumatic
  stress disorder which is continuing in nature."  His suit alleges that "as a
  direct and proximate result" of the alleged disorder, Mendes suffered a
  flashback while piloting a helicopter last March, "thereby losing control of
  the helicopter and causing it to crash to the ground."

What has this got to do with computers?  Without knowing more about the
incident, I have some difficulty in seeing a *falling* shower head as being a
"direct and proximate" cause.  If businesses and individuals must defend
themselves from suits such as this they will have to keep detailed records of
any incident that happens to a customer or visitor.  Did the incident happen
how, when and where he claims, and if so did he report it promptly?  Such
records are most likely to be kept on computers.  What are the implications for
privacy?

  [The usual privacy problems exist.  More significantly, the trustworthiness
  of computer evidence is always in doubt.  It is too easy to fudge the data
  subsequently, even in the presence of time-stamps, cryptoseals, etc.  (The 
  most serious problem is probably the increase in frivolous lawsuits.) PGN]


------------------------------

Date: Mon, 22 Aug 88 14:28:45 PDT
From: Peter G. Neumann <Neumann@KL.SRI.COM>
Subject: RISKS contributions [I try to sun-dry sundry messages...]

The backlog of contributions is enormous, but the topics and subject material
are really rather marginal.  There are at least 15 messages on VDTs (eyes,
backs, eyesight changes, etc.), and scads on the use of signatures, credit
cards, vending machines, and on into the night.  Much of it has drifted widely
from computer relevance, coherence, nonrepetitiousness, etc., and so I am
inclined to suggest we all take a rest for a while until some more exciting
material materializes.  I get more hate mail when I let marginal stuff through
than when I don't, so perhaps I'll try to keep the standards up.

------------------------------

Date: Mon, 22 Aug 88 10:12:09 EDT
From: ark%hoder@CS.RIT.EDU
Subject: Risks of CAD programs

My father has worked as a civil engineer for forty years.  Recently I had
a conversation with him on how civil engineering design methods have changed
since he started, especially with the recent widespread use of CAD
programs.  This made me think of a RISK in using such programs.

He described his latest design project.  A vacuum vessel had a large circular
opening that was covered with a flat plate.  The plate needed to have
several irregular holes at odd positions, for mounting pieces of equipment.
Since one side of the plate was under vacuum, the plate bulged inwards,
destroying the alignment between the mounted pieces of equipment.  His task
was to design a set of stiffening braces to control the bulge.

In the "old days," he said, a plate with stiffening braces and irregularly-
placed holes would be impossible to analyze.  It would have been easy to
determine the amount of bulge for a plate with braces but _without_ holes,
or with, say, one circular hole in the plate's center.  In fact, he could
look up the solution in any standard textbook.  The holes would increase the
bulge, but to an unknown extent.  He would have made a guess at the amount,
thrown in a generous safety factor, and designed the braces accordingly.

Nowadays, he uses a computer program (NASTRAN) to do finite-element analysis.
He built a model of his plate, holes and all, ran the machine for x minutes,
and found out exactly where and how much the plate would bulge for a given
vacuum level.  He then knew just how much stiffening would be needed.

Now for the RISK.  With a detailed picture of the exact stresses and
deflections on a particular structural member, the engineer can justify
designing with a smaller safety margin.  No longer are large safety factors
needed to compensate for the inability to do exact analysis.  Instead, one
can design with a smaller margin and reduce the cost (fewer or smaller
braces are needed).
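
A hedged sketch of the tradeoff, with entirely hypothetical numbers (no real
NASTRAN output or plate geometry): the safety factor is exactly the headroom
left over for whatever the analysis got wrong, so shrinking it on the
strength of an "exact" answer is a bet on the model and the program.

    # Hypothetical numbers only.
    allowable = 1.0                      # deflection the equipment can tolerate

    def design_deflection(safety_factor):
        # Stiffen the plate until the analysis predicts no more than this.
        return allowable / safety_factor

    old = design_deflection(3.0)         # generous factor, crude hand analysis
    new = design_deflection(1.25)        # slim factor, trusted FEA result

    # Suppose the analysis (hand guess, FEA model, or the program itself)
    # understates the true deflection by 50%:
    for name, target in [("factor 3.0 ", old), ("factor 1.25", new)]:
        actual = 1.5 * target
        verdict = "OK" if actual <= allowable else "EXCEEDS allowable"
        print(f"{name}: designed to {target:.2f}, actual {actual:.2f} -> {verdict}")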

Do practicing civil engineers reduce their safety margins these days because
they use computer-aided analysis?  How much?  How small a safety margin is
small enough?  How well-validated are the structural analysis programs in
common use?  Do they always give accurate answers?  Do engineers include in
their safety margins any consideration of inaccuracies or bugs in their
programs ("I'd better add x% in case my program's results are off")?  I'd
appreciate hearing from anyone who knows.

Alan Kaminsky, School of Computer Science, Rochester Institute of Technology

------------------------------

Date: Wed, 17 Aug 88 23:23:56 EDT
From: attcan!utzoo!henry@uunet.UU.NET
Subject: Can current CAD/simulation methods handle long-term fatigue analysis?

>...  Are current supercomputer simulation methods capable of
>handling the complexity of long-term stress, fatigue, corrosive environments,
>etc., all of which were (apparently) factors in the Aloha incident?  Also I am
>not at all sure that such things were handled any better before wide-spread use
>of CAD -- I am just asking the question.  Any aeronautical engineers out there?

Well, I'm not an aeronautical engineer, but maybe I'll do... :-)  As I
understand it, metal fatigue in general is poorly understood, and there
is really no way of calculating it.  The whole area is still very much
rule-of-thumb engineering plus empirical testing.  There are rules that
give a rough idea of the fatigue life of an airframe, after which a big
safety margin is added (we're talking factor of 2, not 10%).  Even this
is only a tentative number.  Most any large, volume-production aircraft
will have one of the early prototypes shunted off into a corner to be the
"fatigue test" specimen, which means that it spends years (literally)
having its wings bent back and forth and pressurization cycled up and down
and generally having stresses applied in a speeded-up, exaggerated simulation
of real service life.  The objective is to keep the fatigue-test aircraft
well ahead of all the real ones, while watching it carefully for signs
of fatigue cracks.
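
A hedged sketch of the bookkeeping this implies (the factor of 2 is from the
text above; the cycle counts are invented): the fleet is only cleared to fly
a fraction of what the accelerated test article has already survived.

    # Hypothetical cycle counts; the scatter factor of 2 is from the text above.
    SCATTER_FACTOR = 2

    def cleared_service_cycles(fatigue_test_cycles):
        """Cycles the fleet may accumulate, given how far the fatigue-test
        airframe has gotten without developing cracks."""
        return fatigue_test_cycles // SCATTER_FACTOR

    test_article_cycles = 60000       # pretend the test rig has reached this
    fleet_leader_cycles = 27000       # highest-time aircraft in service

    limit = cleared_service_cycles(test_article_cycles)
    print(f"fleet cleared to {limit} cycles; fleet leader at "
          f"{fleet_leader_cycles} ({limit - fleet_leader_cycles} cycles of margin)")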

Even so, one still gets surprises.  Not just the occasional accidents, but
also more mundane discoveries of cracks; fatigue-life estimates are not
trusted very much, and aircraft are inspected regularly.  Now and then
such an inspection yields a surprise, and the manufacturer or the FAA sends
the other users of that aircraft a telegram saying "inspect area XXX right
away and let us know if you find cracks".  This seldom makes the news, but
it happens with some frequency.

I guess this is not a bad example of how to manage a poorly-known risk...

				Henry Spencer @ U of Toronto Zoology
				uunet!attcan!utzoo!henry henry@zoo.toronto.edu

------------------------------

Date: 16 Aug 1988 07:32-EDT
From: CFEEHRER@G.BBN.COM
Subject: Vincennes and Cascaded Inference

Thank you for the comments and interest in the cascaded inference notion.  As a
newcomer to the Comp.Risks forum, I judged that the Bayesian approach to
decision making, if not cascaded inference, per se, had probably been kicked
around before, and I didn't quite know where to start.  I'll try here to
be a little more crisp in my arguments and provide some references to the
general area.  When I have something better than this 300 baud connection,
maybe I can push on!

        First ...  to get some of the fuzz out of the earlier arguments!:
(Please note that I am indebted to Clifford Johnson -- 11 Aug -- for forcing me
to make some of the assumptions clearer.)

        (1) Although the Vincennes affair provides fertile ground for the
imagination, little enough is available in the common media, my only source, to
get any kind of closure on the validity or truth of assertions.  After Seymour
Hirsch writes a book, I'll feel more comfortable with any position I take! I'm
drawn to the case only because I think it is altogether likely -- and I think
newspaper reports bear it out so far -- that not all of the information that
could have been brought to bear on the decision was used, and some of the
information that was used was "noisy" in ways that could have misled decision
makers if they didn't have means for coping with it.  Those characteristics can
be identified in almost any post-analysis of a complex decision situation,
whether things went well or badly, and I take it on faith that they can be
found here.  To observe them again would be almost trivial if it were not for
the fact that their ubiquity argues, at least to me, that we had better soon
figure out how to mitigate their impact.

        (2) I DO NOT want to argue that crucial decisions should be automated,
except where human and/or system response times are so long that non-automated
decisions are impractical.  I DO want to argue that decision makers should be
taught how to make the best use of all of the information they have and then be
provided with the best computerized aids to diagnosis and inference that we can
devise.  I think we put individual decision makers, whole systems, and maybe
the entire world at risk if we do otherwise.

        (3) We need to keep in mind that "hit" and "false alarm" rates are
inextricably associated.  If one MUST avoid a Stark, then one will occasionally
have a Vincennes.  There is no level of system perfection that can, in
principle, destroy that association.  Ironically, it may be that, unless we are
very careful in their design, the more complex our systems are made to be in
the interest of greater accuracy, the more false alarms may occur and need to be
dealt with.
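
A hedged sketch of point (3) in signal-detection terms (the numbers are
arbitrary; only the qualitative coupling matters): with the quality of the
evidence held fixed, any criterion that raises the hit rate raises the
false-alarm rate along with it.

    # Sketch: equal-variance Gaussian signal detection.  d' fixes how good
    # the evidence is; moving the criterion c trades misses for false alarms.
    from math import erf, sqrt

    def phi(x):                       # standard normal CDF
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    d_prime = 1.5                     # arbitrary "evidence quality"
    for c in (2.0, 1.0, 0.0, -1.0):   # progressively more trigger-happy criteria
        hit_rate = 1.0 - phi(c - d_prime)    # P(call it hostile | hostile)
        fa_rate  = 1.0 - phi(c)              # P(call it hostile | not hostile)
        print(f"criterion {c:+.1f}: hits {hit_rate:.2f}, false alarms {fa_rate:.2f}")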

        (4) I have little interest in defending Bayes Theorem or, for that
matter, any particular method of statistical inference (e.g., Neyman-Pearson
methods).  We know pretty well the limitations of probabilistic statements in
the context of low frequency events.  (Or even high frequency events, for that
matter: "So the probability of rain is .6! Do I take the umbrella or not?!")
And I certainly don't want to suggest that military commanders and CEOs and
surgeons should sit around with calculators and compute posterior odds,
F-ratios, t-statistics, etc.  If it makes any sense to do that, it should be done
automatically as part of a computer-based decision support system!
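
Along those lines, a minimal sketch of what "computed automatically" might
look like for a single cascaded step (the probabilities are invented; the
structure -- an inference from a report about a datum, where the report
itself is unreliable -- is the cascaded-inference setup the references in
(8) below formalize):

    # Invented numbers; the structure is two-stage (cascaded) inference:
    # hypothesis H ("hostile") -> datum D ("descending, closing fast")
    # -> report R, from a noisy sensor or operator, that D is true.
    p_H          = 0.10     # prior P(H)
    p_D_given_H  = 0.80     # P(D | H)
    p_D_given_nH = 0.20     # P(D | not H)
    p_R_given_D  = 0.90     # report reliability: P(R | D)
    p_R_given_nD = 0.30     # P(R | not D) -- the report can be wrong

    def p_R_given(h):
        p_D = p_D_given_H if h else p_D_given_nH
        return p_R_given_D * p_D + p_R_given_nD * (1.0 - p_D)

    prior_odds  = p_H / (1.0 - p_H)
    lr_perfect  = p_D_given_H / p_D_given_nH          # if D were known for sure
    lr_cascaded = p_R_given(True) / p_R_given(False)  # what the report really supports
    print(f"likelihood ratio if D were certain:      {lr_perfect:.2f}")
    print(f"likelihood ratio of the unreliable report: {lr_cascaded:.2f}")
    print(f"posterior odds on H: {prior_odds * lr_cascaded:.3f}")

The unreliability of the report attenuates its diagnostic impact (here from
4:1 down to roughly 1.9:1), which is exactly the impact-versus-reliability
relationship that Schum and DuCharme analyze.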

I DO want to suggest that the Bayesian model, and to a lesser extent, classical
N/P models, have been extremely useful in highlighting the fact that human
decision makers do not typically extract from data, whatever their quality, as
much information as is justified.  On some occasions, that "shortcoming"
results in the need for more data to be acquired and processed, thus incurring
additional costs in resources and time.  On other occasions, particularly if
time is critical, it can result in the choice of an incorrect hypothesis.

Now, there are two questions concerning that finding: Why?, and What can be
done about it?  I'd like to contribute some stuff on those questions and get
your reactions, but it's a big topic and a digression at this point.  If
anybody is interested, HOLLER!

        (5) I agree thoroughly with CJ that there is a welter of data and no
"unequivocal attack warnings" (until a hostile action has been taken) -- that's
precisely why it's an interesting problem.  A few years ago, several colleagues
and I were studying (with their cooperation) the decisions of nuclear power
plant operators who were on duty at times during which plant instabilities
occurred and who, for considerable periods, were in doubt about what had gone
wrong and what the current states of their plants were.  Two of these
instabilities had resulted in radioactive releases.  If you've been in a plant
when things start to go bad, you know something about the enormous volume of data
suddenly comes your way, the confusion and tension it can produce, and how
critical it can be that things get figured out quickly and correctly ...  you
don't simply "turn it off" and fix it later! I think the operators on Vincennes
must have endured a very similar situation.

        (6) I can also agree that (a) Captain's role (could) be reduced to
"second-guessing" the computer, although I know too little about SOPs in this
case to have much confidence in that judgment.  (If the system is built that
way and the SOP reads that way, serious mistakes have been made.  Similar
problems have arisen in the design of advanced tactical and commercial aircraft
and have become real concerns.)  I disagree that there is a moral issue in
second-guessing so long as there is total uncertainty and a guess is truly
called for.  I believe that there are moral issues in this whole thing but that
they lie elsewhere.

        (7) I still have difficulty, in principle, with the argument that there
cannot be (should not be?)  decision thresholds (or decision criteria, if you
will).  Ultimately, there must be the decision to shoot or not to shoot, to buy
or to sell, to cut or not to cut, to render a guilty or a not-guilty verdict,
etc.  In any of these situations, of course, one can almost always defer action
in the hope that more information may become available and clarify the issue.
But that choice is usually not without cost and, eventually, a course of action
must be taken that is based on what can be inferred.  A criterion for that
decision MUST be defined.  For me, the problem is not the concept of
"threshold", per se, but our inability to help the decision maker understand
whether or not it has been reached, and whether it CAN be reached at all
given the validity and reliability of the available data.
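
For concreteness, one standard way such a criterion gets defined is from the
relative costs of the two kinds of error (costs hypothetical; the formula is
the usual expected-loss break-even point):

    # Hypothetical costs.  Acting when H is false costs cost_false_alarm;
    # failing to act when H is true costs cost_miss; correct decisions cost
    # nothing.  Acting beats waiting when (1 - p) * cost_false_alarm is less
    # than p * cost_miss, which rearranges to the threshold below.
    cost_false_alarm = 9.0
    cost_miss        = 1.0

    threshold = cost_false_alarm / (cost_false_alarm + cost_miss)
    print(f"act only if P(H | data) exceeds {threshold:.2f}")   # 0.90 here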

There are some interesting data on the ordering of radiological tests that bear
on this last point.  Namely, it can be shown that, in order to be
"perfectly sure", some physicians may order tests to be performed that could
not, under the very most favorable circumstances, produce increments in
confidence great enough to affirm or disaffirm their preliminary diagnosis.  A
reason seems to be that many are untrained in the utilization of the "yield"
statistic -- a quantitative estimate of false positive and false negative rates
based on experience and generally available for standard diagnostic tests.  For
this knowledge may be substituted a naive belief that if enough tests are
performed, the truth will out (before the patient packs it in!).
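
The radiology point can be put in the same terms (sensitivity, specificity,
prior, and threshold are all hypothetical): before ordering a test, one can
ask whether even its most favorable outcome could lift the posterior past
the decision criterion; if it cannot, its "yield" is zero.

    # Hypothetical numbers.  Check whether the test's best possible outcome
    # could move the posterior past the decision threshold at all.
    prior       = 0.05    # P(disease) going in
    sensitivity = 0.85    # P(positive | disease)
    specificity = 0.80    # P(negative | no disease)
    threshold   = 0.90    # posterior needed before acting (e.g., operating)

    best_case_posterior = (prior * sensitivity) / (
        prior * sensitivity + (1.0 - prior) * (1.0 - specificity))

    print(f"posterior after a POSITIVE result: {best_case_posterior:.2f}")
    if best_case_posterior < threshold:
        print("even the most favorable result cannot reach the criterion;"
              " by itself, the test is not worth ordering")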

        (8) Here are some initial references on the cascaded
inference stuff.  Some may be hard to find, so I'll provide some
Xeroxed pages from the last reference if you drop me a SASE.  The
pages summarize some of the arguments in the references:

The earliest paper I'm aware of that sets the stage for a prescriptive
approach: Dodson, J.D.  Simulation system design for a TEAS simulation research
facility.  Report AFCRL 1112 and Planning Research Corporation, LA, Nov.  15,
1961.

A very concise/well-written statement of a Bayesian prescriptive approach (my
favorite!): Schum, D.A. and DuCharme, W.M.  Comments on the relationship
between the impact and the reliability of evidence.  Org.  Behav.  and Human
Perf., 1971, 6, 111-131.

Examples of attempts to provide empirical models of what untrained DMs actually
do:  Snapper, K.J.  and Fryback, D.G.  Inference based on unreliable reports.
J.  exp.  Psychol., 1971, 87, 401-404.

Gettys, C.F., Kelly, C.W., and Petersen, C.R.  Best guess hypothesis in
multi-stage inference.  Org.  Behav.  and Human Perf., 1973, 10, 364-373.

Funaro, J.F.  An empirical analysis of five descriptive models for cascaded
inference.  Rice Univ., Dept.  of Psychology Research Report Series, Report No.
74-2, May 1974.

A different approach, inspired by Bayesian thinking but developed within the
signal detection framework and applied to medical diagnosis (references in here
are also useful) -- Swets, J.A.  and Pickett, R.M.  Evaluation of Diagnostic
Systems: Methods from Signal Detection Theory.  New York: Academic Press, 1982.

Nickerson, R.S.  and Feehrer, C.E.  Decision Making and Training: A Review of
Theoretical and Empirical Studies of Decision Making and Their Implications for
the Training of Decision Makers.  Tech.  Rept.  No.  NAVTRAEQUIPCEN
73-C-0128-1, August, 1975.

        That's a start.  Maybe I can sort through the many others and make
recommendations if we really get into this area.  Some are fascinating!
                                                                         --Carl

------------------------------

End of RISKS-FORUM Digest 7.38
************************
-------