[comp.risks] RISKS DIGEST 6.35

RISKS@KL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (03/03/88)

RISKS-LIST: RISKS-FORUM Digest   Wednesday 2 March 1988   Volume 6 : Issue 35

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Double pay?  Thank the bank.  (Dave Horsfall)
  [Psychological Aspects of] Safe Systems (Nancy Leveson, Steve Philipson)
  Disappearing skills (Len Popp)
  Re: Slippery slopes and the legitimatization of illegitimacy (Bob English)
  Sins of RISKS and Risks of SINs (Robert Slade)
  Dumb Digital Leap Year Madness
    (Mark Jackson, Matthew Kruk, Brint Cooper, Robert Slade)
  Re: Virus security hole (Scot E. Wilcoxon)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM.
  For Vol i issue j, FTP SRI.COM, CD STRIPE:<RISKS>, GET RISKS-i.j.
  Volume summaries in (i, max j) = (1,46),(2,57),(3,92),(4,97),(5,85).

----------------------------------------------------------------------

Date: Fri, 26 Feb 88 12:10:06 est
From: munnari!stcns3.stc.oz.au!dave@uunet.UU.NET (Dave Horsfall)
Subject: Double pay?  Thank the bank.
Organization: Alcatel-STC Australia

From the Sydney Morning Herald, Friday 26th February:

  Double Pay?  Thank the bank.

  The Commonwealth Bank's computer gave many of its customers a raise
  yesterday -- in fact, it doubled their pay.  To make matters worse,
  as well as doubling the usual Thursday salary transfer, the computer
  doubled every other transaction.

  The bank's computer malfunctioned overnight and had to be controlled
  manually.  It finished up processing transactions twice.  Customers
  all over Australia with Keycard or cheque accounts found they had twice
  the amount debited or credited in their accounts.

  "These are the hazards of computing -- they are only limited by your
  imagination," said the bank's general manager, electronic data processing,
  Mr Peter Martin.

      [And I get only an occasional complaint that the KL mailer crashed in
      the middle of sending out RISKS, with the result being that you got
      TWO copies.  Sorry I can't do anything more exciting for you.  PGN]
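
The double-posting described above is the classic argument for idempotent
transaction processing: if every transaction carries a unique identifier and is
applied at most once, a replayed batch changes nothing.  The Python sketch
below is purely illustrative -- the names and numbers are invented, and it says
nothing about how the Commonwealth Bank's system actually works.

    # Hypothetical sketch: apply each transaction at most once, so that a
    # replayed batch cannot double-debit or double-credit an account.
    posted_ids = set()                 # in practice, a durable store
    balances = {"ACCT-1": 100.0}

    def post(txn_id, account, amount):
        # Apply a transaction once; ignore any replay carrying the same id.
        if txn_id in posted_ids:
            return False               # duplicate -- already applied
        balances[account] = balances.get(account, 0.0) + amount
        posted_ids.add(txn_id)
        return True

    batch = [("T1", "ACCT-1", 500.0), ("T2", "ACCT-1", -20.0)]
    for txn in batch:                  # the normal Thursday run
        post(*txn)
    for txn in batch:                  # an accidental second run: no effect
        post(*txn)
    assert balances["ACCT-1"] == 580.0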

------------------------------

Date: Tue, 01 Mar 88 18:56:04 -0800
From: Nancy Leveson <nancy%murphy.ics.uci.edu@ROME.ICS.UCI.EDU>
Subject: Psychological Aspects of Safe Systems           

In Risks 6.34, Scott Preece asks:

  > I'd be interested in knowing if the FAA has any research on the number of
  > accidents avoided because of stall warnings and ground-approach warnings as
  > opposed to the number happening because the relied-upon warning failed to
  > happen.

I have a little information, although not necessarily exactly answering this
question.  An article that appeared in a magazine for commercial pilots warned
against complacency and overtrust in computers by pilots.  It described several
serious incidents that occurred because pilots put too much confidence in
automatic control systems, even to the extent of rejecting their own external
evidence that the system was wrong.  Complacency and inattention appeared to
cause them to react to failures and errors in the automatic controls much more
slowly than they should have.  (See: Ternhem, "Automatic Complacency,"
Flight Crew, Winter 1981.)

Also, in Normal Accidents, Charles Perrow reports on a government study
of thousands of mishaps reported voluntarily by aircraft crews and support
personnel that concluded that the altitude alert system (an aural signal) had
resulted in decreased altitude awareness by the flight crews.  It claimed that
there were more incidents of "altitude busts" when the system was used than 
when it was not used.  The study recommended that the device be disabled for 
all but a few long-distance flights.   

None of this means that such systems should not be built and used, only that 
we need to understand when they can be useful and when they can be dangerous 
and to design them very carefully according to principles of cognitive 
psychology.  It may be easier to change the way the systems are designed 
than to try to change human nature.  The important choice may not be between
using such systems or not using them but between building them with or without
careful consideration of the humans who will be interacting with them.  If we 
do not yet know enough about the way that humans interact with machines, then 
perhaps this is as important a research topic as studying the technological 
aspects of design.

------------------------------

Date: Tue, 1 Mar 88 20:16:24 PST
From: Steve Philipson <steve@ames-aurora.arpa>
Subject: Safe Systems [RISKS-6.34]

Scott E. Preece (preece%fang@gswd-vms.Gould.COM) asks:
  >I'd be interested in knowing if the FAA has any research on the number of
  >accidents avoided because of stall warnings and ground-approach warnings as
  >opposed to the number happening because the relied-upon warning failed to
  >happen.

   Accidents that don't happen don't make it into FAA statistics.  
Sometimes though, the crews report to the NASA Aviation Safety 
Reporting System (ASRS) on things that went wrong and why.  

   The most frequent type of report filed concerns "altitude busts", which are
unauthorized deviations from an assigned altitude.  The most commonly
mentioned factor in these cases is too great a dependence on the "altitude
alerter", a device that gives an aural and visual alarm when an altitude
deviation occurs.  This device was intended to be a backup system; pilots are
supposed to monitor altitude as part of their primary duty.  The alerter is
only supposed to catch those events that escape the pilot's attention.  What
happens though is that this "backup system" becomes the primary method of
verifying altitude.  If the alerter is not set correctly or malfunctions,
there is no backup, and the deviation will escape detection.
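
   As a toy illustration only (the threshold and the names are invented, not
taken from any real avionics specification), the alerter's basic function
amounts to a simple deviation check.  The risk arises when this check, rather
than the pilot's own monitoring, becomes the primary defence:

    # Toy model of an altitude alerter: flag deviations from the assigned
    # altitude larger than some threshold.  All values are illustrative.
    def altitude_alert(assigned_ft, current_ft, threshold_ft=300):
        # True means the aural/visual alert should fire.
        return abs(current_ft - assigned_ft) > threshold_ft

    print(altitude_alert(assigned_ft=10000, current_ft=10450))  # True: alert
    print(altitude_alert(assigned_ft=10000, current_ft=10100))  # False: OK

If the alerter is mis-set or fails silently, this check never fires, and a
crew that has come to rely on it as the primary check has no backup at all.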

   Here is a short description of two fatal crashes where alerting systems did
not do the job as intended.  One involved a Mexican airliner.  The aircraft
was in a descent (possibly unknown to the crew) that, unchecked, would lead to
ground contact.  The alerter sounded its preliminary warning ("glide slope")
followed by the imperative "PULL-UP, PULL-UP".  The warning was viewed as
erroneous by the flight crew; the last words on the cockpit voice recorder
were "aw, shut-up gringo."

   The second story is better known.  The Northwest crash in Detroit likely
involved failure of the flap position warning system.  Preliminary evidence
indicates that the flaps were not set for takeoff.  Application of full power
with flaps not set should produce a warning horn alert.  This alert was not
heard on the cockpit voice recorder.  It was noted from the CVR that the crew
did not execute the checklists properly.  The checklist is typically regarded
as the primary means of verifying that all systems are set.  If a crew elects
not to use checklists and relies on warning systems, these systems become
primary systems and their failure becomes critical.

   We also have cases where crews received some warning, could not find the
cause, assumed warning system failure, and proceeded.  The crew later found
that the warning was correct, but did not discover its nature until after the
fact.

   We still don't know how many accidents are prevented by warning systems.
Research is performed (usually in simulators) to evaluate the effects of these
systems, but we don't know how well these simulations reflect the real world.
A senior airline pilot recently told me about some of his observations: a crew
will operate informally when there's only another pilot in the jumpseat, but
when joined by an FAA inspector, all procedures are by-the-book.

   One other thing to consider (not a high-tech risk, but important
nonetheless):  Recent legislation in California (Prop. 65) mandates the posting
of warning messages on every building where "detectable levels" of harmful or
carcinogenic substances may be found.  These warning messages are
information-free; they don't tell you that there is a safety problem or that
the area should be avoided.  The NORMAL (and correct) response is to ignore
the warning.  The real danger is that this conditions people to ignore warning
signs.  This could be very dangerous in a building where there are real
warning signs with real dangers to be avoided.

   Good luck in the real world.

------------------------------

Date: Tue Mar  1 12:36:23 1988
From: microsof!lenp@uunet.UU.NET
Subject: Disappearing skills  [Re: Matt Bishop, RISKS-6.33]
Organization: Microsoft Corporation, Redmond, WA

I've often heard people say this sort of thing, but I have never been
comfortable with the argument.  It sounds a bit like, "Kids these days don't
know nuthin'.  When I was a young 'un, I had to get up before I went to bed
and walk 25 miles through the snow to milk the bull."  I mean, when I
learned penmanship in elementary school (poorly), we weren't taught how to
inscribe cuneiform symbols into wet clay, or how to trim the end of a quill
pen.  And, you know, I don't notice the lack at all.

I'm not saying that schools should stop teaching multiplication next
September, but I am saying that the skills that we need to live from day to
day are changing, and always have been.  We should be worried if we're not
teaching our kids what they need (or want) to know, but if my great-
grandchildren never see a pencil and paper, they probably don't need to be
taught pencil-and-paper arithmetic in grade school.

What do you think? Is technology weakening us by causing important skills to
atrophy?  Or is our educational system "irrelevant"?  Where does one draw
the line?
                                        Len Popp

------------------------------

Date:           Wed, 2 Mar 88 00:39:38 PST
From:           Bob English <lcc.bob@SEAS.UCLA.EDU>
Subject:        Re: Slippery slopes and the legitimatization of illegitimacy

There are a few points to make here on David Thomasson's article in RISKS-6.33.

1) The point of this list is as much to identify possible risks as it is
    to identify likely risks.  The truth is that we don't know what is
    likely or unlikely, yet, but if no one even thinks of the possibilities,
    we're unlikely (BIG risk) to notice the problems until it's too late to
    prevent real damage.

2) All of the things you mention as "subject to abuse" are abused every day.
    The primary restraint on their abuse is the existence of laws and
    penalties discouraging those abuses.  If no one bothers to identify
    possible abuses, then those penalties will not exist.
      Police agencies have a long history of going after anything and
    everything they can get unless specifically prohibited by law.  Sometimes
    their purposes are legitimate, but sometimes they are not.  But if the
    activities are legal, what would stop them?

3) It isn't enough to look at the world around us and see how this one change
    would affect things.  The world is a fluid place, and laws, ethics, and
    modes of behavior change over time.
      Suppose, for example, that we had a national data-base capable of
    tracking all but the smallest purchases and transactions, and suppose
    that the data-base was dedicated to a single purpose, with legal
    barriers to keep it from being misused.  As long as the legal barriers
    were sound, we would have nothing to fear from it (well, most of us,
    anyway).
      But suppose the mood of the country were to swing, and people got so
    tired of urban crime, etc. that they were willing to do anything to
    combat it.  That legal barrier could suddenly become very tenuous.  If
    the system had not been built, then it might take several years to
    construct, time in which people might come to their senses.  But if it
    already existed, it might be put to use immediately.
                                                               --bob--
------------------------------

Date: Wed, 2 Mar 88 07:46:32 PST
From: Robert_Slade@mtsg.ubc.ca
Subject: Sins of RISKS and Risks of SINs

Re: the submissions from
   Risks of Believing in Technology (Matt Bishop)
   Slippery slopes and the legitimatization of illegitimacy (David
	Thomasson)
   File matching (Brint Cooper)
   More double troubles (Peter Capek)
in RISKS-FORUM volume 6 number 33

My first reaction on reading the initial announcement was the urge to assure
him that we are not quite the neo-Luddites he suspects.  Yes, there are
benefits to the use of such a system, and yes, it should not be killed out
of hand because of the potential (possible?)  risks (problems?).  However,
it is in the nature of RISKS that such an announcement be made.  Others will
follow.  I well remember the furor that raged when I first started reading
RISKS regarding "drive by wire" (and we are now seeing it again with the
Airbus A320).  Many important points were raised, but the most telling was
the fact that most of the concerns raised *were* being addressed by current
manufacturers in that reliable mechanical "fall-back" had been built in.
Contributions such as David's are, of course, part of the same process as
well; keeping us honest and on track.

Indeed, I found his example of phone company bills most interesting.  I
would, however, say that such an occurrence *is* a risk of having a phone,
and one should be aware of it in order to take precautions.  In my case, I
have on my desk as I write a bill from the B*nk *f C*mm*rce V*S* that my
wife and I checked through last night.  We do this regularly, as our answer
to the risk of having the bank be less careful with my money than they
insist I be with theirs.  (In our case, this is the second definite error in
three months, this bill having finally shown the reversal of charges and
interest from their last mistake.  One would be tempted to make attributions
of neglect and lack of intelligence to the data entry operators and their
supervisors, but of course to do so would be to run the "possibility" of a
suit for libel, so I shan't.)  This practice I maintain in spite of the
improbability of the occurrence of an error, as is demonstrated by the history
of my B*nk *f M*ntr**l M*st*rC*rd which has not had a false charge in more
than ten years.

In the case of "Lookout", the initial announcement may well serve as a
springboard to a valuable discussion unforeseen in its inauspicious
beginning.  Who could have predicted that the announcement of a computer
virus, seemingly isolated in Israel, could have sparked a discussion covering
paranoia, terrorism, and a host of related dangers?  The real value in the
discussion will be the assessment of the "actual" level of risk, and the steps
that can be taken (such as the possibility of turning the thing off) to
mitigate that risk.  (Can it be turned off?  Should it have an off switch?
Should the off switch be a combination, as in "drunk testing" ignition
systems?  Or should that be the way you turn it on?)

Regarding the storm over social security numbers, we had a case in Canada
(where the term is Social Insurance Number) some years back of a man who had
had his number "stolen" by another who was wreaking all manner of havoc with
it.  Taxes on the "crook's" earnings were being assessed to the original
holder and so forth.  In spite of the fact that this situation was widely
(nationally) known, the government for the longest time would not issue the
first man a new number, and at one point suggested he change his name in
order to get a new one. (The comics of the day predictably had a field day
with alternate suggestions, including quick trips to clinics in Denmark...)
 
Disclaimer: My employers completely repudiate my, or any other, opinions.

------------------------------

Date: 2 Mar 88 09:25:33 EST (Wednesday)
From: MJackson.Wbst@Xerox.COM
Subject: Dumb Digital Leap Year Madness

In Volume 6 : Issue 34 Mark Brader writes:

> All right now -- how many people reading this *haven't yet realized* that 
> their watches have to be set back 1 day, because they went from February 28 
> directly to March 1?

while in reference to the "illegal account expiration date generator"
problem at CMU Michael Wagner identifies

> Two different representations/algorithms for dates, (the 'date 
> interpreter', whatever that is, and the 'account creator') with different 
> handling for unusual cases.

as one of the contributory design mistakes.

An odd example of the intersection of these:  I glanced at my digital watch on
*Tuesday* and saw that it was incorrectly displaying March 2 as the date, so I
reset it.  But all day Monday it had been, presumably, incorrectly displaying
March 1; why had I not noticed the error earlier?

I believe the reason is that I *knew* Monday was "leap year day" and never
needed my watch to tell me it was February 29.  I doubtless checked the time on
numerous occasions without "seeing" the incorrect date, even though it is
continuously displayed.

But despite "knowing" it was leap year day I never thought to reset my watch!
"Two different representations/algorithms for dates" indeed.
                                                                    Mark

    [Last time we talked about calendar algorithms, someone commented that
    we should rather be talking about important problems.  First, in some
    critical systems little things like this could become devastating.  Second,
    if we can't get the simple stuff right, then what about the complicated
    stuff?  Sure, we try harder on the complicated stuff.  Phooey.  PGN]
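
For reference, the rules the errant watches (and the laptop mentioned below)
need are short.  A sketch, in Python and purely for illustration:

    def is_leap(year):
        # Gregorian rule; for a wristwatch, "year % 4 == 0" suffices until 2100.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def days_in_month(year, month):
        days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        return 29 if (month == 2 and is_leap(year)) else days[month - 1]

    assert is_leap(1988)                 # 1988 is a leap year
    assert days_in_month(1988, 2) == 29  # so February has 29 days

A watch that omits the leap-year test jumps from February 28 straight to
March 1; one that omits the month-length check can happily display February 31.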

------------------------------

Date: Wed, 2 Mar 88 11:44:54 PST
From: Matthew_Kruk@mtsg.ubc.ca
Subject: Risks of Leap Years and Dumb Digital Watches

Had no problems with mine. It's a Phoenix (who?) Quartz that I bought at
Sears for $20 (it came with a pocket calculator) about 5 years ago. I have
never had to bother adjusting it (except for daylight saving time) since I
initially set it and merely need to buy a battery once a year. Beats the
hell out of any watch that I ever had, paid more for, or that had some popular
"designer" name on it.
 
Moral: $20 and a pocket calculator are sometimes worth more than a $100 watch
       that flew out the window along with its time.
 
------------------------------

Date:     Wed, 2 Mar 88 11:44:37 EST
From:     Brint Cooper <abc@BRL.ARPA>
Subject:  Re: Risks of Leap Years and Dumb Digital Watches

Well, I had to set back my watch one day but only because it thought the
year was 1901 rather than 1988.  I forgot to reset the year when I had
the battery changed!

Perhaps this is a risk of not paying attention to technology?

------------------------------

Date: Wed, 2 Mar 88 09:06:06 PST
From: Robert_Slade@cc.sfu.ca
Subject: Leap years, watches and portables

Our brand new Sharp 4501 laptop thinks it is Wed., February 31, 1988.  This
must be a problem in the machine itself, as I have not yet booted the system
disk.

------------------------------

Date: 1 Mar 88 02:27:36 CST (Tue)
From: sewilco@datapg.mn.org (Scot E. Wilcoxon)
Subject: Re: Virus security hole
Organization: Data Progress, Minneapolis

In RISKS 6:31, Kevin Driscoll mentions that data can escape from a secure
area in unexpected ways.  With all the vandal viruses on the loose, an
obvious way of leaking data is by modulating the frequency or flow of
reloads from backup.  If "scout" virus got into an installation, it may
slowly provide information to anyone who can observe the reload efforts.

If the scout virus simply needs to emit one signal (meaning "There's Something
Very Interesting Here!"), it can force a reload large enough to be detectable.
The signal can be detected by listening carefully to any of the resulting
frustrated staff.  "Computer was down today" doesn't seem to carry any
information.

If the secret is more important than the workers, could a failure suspected of
being a deliberate signal cause people to pretend normal activity on a crashed
machine?  What a tangled net we weave...

Scot E. Wilcoxon	sewilco@DataPg.MN.ORG	ihnp4!meccts!datapg!sewilco

------------------------------

End of RISKS-FORUM Digest
************************
-------