[mod.risks] RISKS-3.27

RISKS@CSL.SRI.COM.UUCP (07/30/86)

RISKS-LIST: RISKS-FORUM Digest,  Tuesday, 29 July 1986  Volume 3 : Issue 27

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Whoops!  Lost an Area Code! (Clayton Cramer)
  Comet-Electra (RISKS-3.25) (Stephen Little)
  Comparing computer security with human security (Bob Estell)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
(Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM)
  (Back issues Vol i Issue j available in CSL.SRI.COM:<RISKS>RISKS-i.j.
  Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

Date: Mon, 28 Jul 86 11:29:10 pdt
From: voder!kontron!cramer@ucbvax.Berkeley.EDU (Clayton Cramer)
Subject: Whoops!  Lost an Area Code!
To: voder!sri-csl.arpa!risks

I had an interesting and aggravating experience this last Saturday.  The 707
area code billing system failed.  Completely.  For over five hours.

During that time, you could not dial into the 707 area code, dial out
of it, make local calls billed to a credit card, or get an operator.  
The ENTIRE area code.  Fortunately, the 911 emergency number doesn't 
go through the billing system, so I doubt any lives were lost or
threatened by this failure, but I shudder to think of how this could
happen.  My guess is someone cut over to a new release of software
and it just failed.

No great philosophical comments, but one of those discouraging examples
of the fragility of highly centralized systems.

Clayton E. Cramer

------------------------------

Date: Tue, 29 Jul 86 15:14:30 est
From: munnari!gucis.oz!edsel@seismo.CSS.GOV (S Little)
To: munnari!RISKS@CSL.SRI.COM
Subject: Comet-Electra (RISKS-3.25)

Initial design studies for a transatlantic turbojet-powered mail plane
were begun during World War II by de Havilland.  What eventually flew in
1949 was the prototype of a much larger airliner, the DH-106 Comet, so
computer involvement in the design is not an issue.  The test program
involved may have been adequate for forties technologies, but the jet-era
mileages and altitudes obviously revealed a new range of problems, which
have resulted in the more stringent certification procedures now applied.

Whatever the source of the disastrous crack propagation (said in one case
to be possibly a radio antenna fixing), the design change to rounded
windows was in response to this danger.  The only square-window Comets
remained in RAF service, unpressurized, for many years (Air International
vol. 12, no. 4, 1977).

Given that computer representation is limited by our understanding of a
design situation, is there a general concern with the performance of, inter
alia, flight simulators, which may accurately represent an inadequate
understanding of the behaviour of the system modelled?  I have been told of
one major accident in which the pilot followed the drill for a specific
failure, as practiced on the simulator, only to crash because a critical
common-mode feature of the system was neither understood nor incorporated
in the simulation.  I highly recommend Charles Perrow's "Normal Accidents"
for an analysis of the components of complexity in such situations.
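
To make the common-mode hazard concrete, here is a purely hypothetical
sketch in C (the pumps and shared line are invented for illustration) of a
simulator that models two redundant systems as independent while the real
aircraft couples them through a shared component:

    /* Hypothetical sketch: the simulator models two "redundant"
       hydraulic pumps as independent; the real aircraft feeds both
       through one shared line the modellers never captured. */
    #include <stdio.h>

    /* What the simulator believes: either pump alone is enough. */
    int sim_ok(int pump_a_failed, int pump_b_failed)
    {
        return !pump_a_failed || !pump_b_failed;
    }

    /* What the aircraft actually does: the shared line is a
       common-mode failure that defeats the redundancy. */
    int real_ok(int pump_a_failed, int pump_b_failed, int line_failed)
    {
        if (line_failed)
            return 0;
        return !pump_a_failed || !pump_b_failed;
    }

    int main(void)
    {
        /* The drill as practiced: pump A fails, switch to pump B. */
        printf("simulator says ok: %d\n", sim_ok(1, 0));      /* 1 */
        /* But the real fault was the shared line, so the drill,
           followed faithfully, still loses everything. */
        printf("aircraft says ok:  %d\n", real_ok(1, 0, 1));  /* 0 */
        return 0;
    }

The pilot's drill is "correct" inside the model; the accident lives in the
gap between sim_ok() and real_ok().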

I understand that the Shuttle autopilot is the subject of re-appraisal,
including expert-system derivation of responses to the large number of
relevant variables.  What are people's feelings about the induction of
knowledge in such areas: is it felt to increase or decrease risk via
computer?

Stephen Little, Computing & Information Studies,
                Griffith Uni, Qld, Australia.

------------------------------

Date: 29 Jul 86 08:29:00 PST
From: "143B::ESTELL" <estell%143b.decnet@nwc-143b.ARPA>
Subject: Comparing computer security with human security
To: "risks" <risks@csl.sri.com>

The question has been raised: Are there significant differences in the
quality of security in computer systems based on elaborate software models
[passwords, access lists, et al.] versus having human guards at the door?
E.g., humans can be bribed and computers can't, but computers can fail.

Hmmmmm... First let me admit a bias: I think the "MIT rule" applies: 
 No system can be better than the PEOPLE who design, build, and operate it.
[I call it that because that's where I first heard it, in '68.]

Aside from that bias, there seem to be some assumptions:
(1) People don't "fail" [at least not like computers do]; and
(2) Computers can't be "diverted" in the manner of a bribe.

Seems to me that people DO FAIL, somewhat like computers; i.e., we have
memory lapses [similar perhaps to incorrect data fetches?]; and we make
perception errors [similar perhaps to routing replies to the wrong CRT?].

And computers can be diverted.  Examples:

(1) A malicious agent, only wanting to deny others service on a computer,
    rather than gain access himself, can often find ways to exploit the
    priority structure of the system; e.g., some timesharing systems give 
    high priority to "login" sequences; attacking these with a "faulty 
    modem" can drain CPU resources phenomenally (a toy sketch of this
    starvation appears after these examples).

(2) There are some operating systems/security packages that fail in a
    combination of circumstances; I'm going to be deliberately vague here,
    in part because the details were shared with me with the understanding
    that I not broadcast them, in part because I've forgotten them, and in
    part because the exact info is not key to the discussion; but to
    continue:

	If the terminal input buffer is overrun [e.g., if the user-id or
	password is VERY long], and if the "next" dozen [or so] bytes
	match a "key string", then the intruder is allowed on; not only
	that, but at a privileged level.

    In other words, the code gets confused (the second sketch below gives
    the flavor of such a failure).  But isn't that what a person suffers
    when he trades his freedom, his honor, and all his future earnings
    [hundreds of thousands of dollars?] for a few "easy" tens of thousands
    of dollars now, in exchange for one false act?  I'm saying that most
    "bribes" aren't nearly large enough to let the "criminal" relocate
    somewhere safe from extradition and live a life of luxury ever after;
    instead, most bribes are only big enough to "buy a new car" or pay an
    overdue mortgage or medical bill.
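
To make example (1) concrete, here is a toy C sketch (purely hypothetical,
not any particular vendor's scheduler) of how a strict priority for login
processing lets one faulty modem starve all other work:

    /* Hypothetical sketch: "login" work always outranks user jobs,
       so a faulty modem that keeps re-dialing starves the machine. */
    #include <stdio.h>

    int main(void)
    {
        int modem_redialing = 1;  /* the faulty modem never gives up */
        int user_jobs = 10;       /* ordinary work waiting to run    */
        int jobs_served = 0;

        for (int tick = 0; tick < 1000; tick++) {
            if (modem_redialing) {
                /* top-priority login processing burns this tick on
                   a call that immediately drops and dials again    */
                continue;
            }
            if (user_jobs > 0) {
                user_jobs--;
                jobs_served++;
            }
        }
        printf("user jobs served: %d of 10\n", jobs_served); /* 0 */
        return 0;
    }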
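
And a minimal sketch of example (2), again hypothetical (the real details
above were deliberately withheld, so the buffer layout and the key string
here are invented): an unchecked read into a fixed-size input buffer
spills into adjacent bytes that the login code then trusts.

    /* Hypothetical sketch; the overrun is deliberately undefined
       behaviour, shown only to illustrate the mechanism. */
    #include <stdio.h>
    #include <string.h>

    struct login_state {
        char password[16];    /* fixed-size terminal input buffer */
        char next_bytes[16];  /* whatever happens to sit after it */
    };

    int main(void)
    {
        struct login_state s;
        memset(&s, 0, sizeof s);

        /* A VERY long "password" copied with no length check: the
           first 16 bytes fill password[]; the rest spill over.    */
        strcpy(s.password, "xxxxxxxxxxxxxxxxMAGIC-KEY");

        /* The confused login code examines the "next dozen or so"
           bytes and, finding the key string, grants privilege.    */
        if (strcmp(s.next_bytes, "MAGIC-KEY") == 0)
            printf("intruder admitted at a privileged level\n");
        return 0;
    }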

------

OR is the real risk in both cases [human and computer] that the most potent
penetrations are those that never come to light; e.g., the computer "bug"
that is so subtle that it leaves no traces; and the "human bribe" that is
so tempting that authorities [and victims] don't talk about it - precisely
because they don't want folks to know how much it can be worth?

Discussion and comments, please.             Bob

------------------------------

End of RISKS-FORUM Digest
************************