[mod.risks] RISKS-1.34

RISKS@SRI-CSL.ARPA (RISKS FORUM, Peter G. Neumann, Coordinator) (01/04/86)

RISKS-LIST: RISKS-FORUM Digest,  Saturday, 4 Jan 1986  Volume 1 : Issue 34

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
     under the auspices of the Association for Computing Machinery
                   Peter G. Neumann, moderator
      (and Chairman, ACM Committee on Computers and Public Policy)

Contents:
  C&P Computer Problems Foul 44,000 D.C. Phones (Mike McLaughlin)
  Putting the Man in the Loop; Testing SDI; Independent Battlestations
    (Jim McGrath)
  Failure probabilities in decision chains... independence (Edward Vielmetti)
  Pharmacy prescription systems (Normand Lepine)
  Masquerading (Paul W. Nelson)

Summary of Groundrules:
  The RISKS Forum is a moderated digest.  To be distributed, submissions should
  be relevant to the topic, technically sound, objective, in good taste, and 
  coherent.  Others will be rejected.  Diversity of viewpoints is welcome.  
  Please try to avoid repetition of earlier discussions.

(Contributions to RISKS@SRI-CSL.ARPA, Requests to RISKS-Request@SRI-CSL.ARPA)
(FTP Vol 1 : Issue n from SRI-CSL:<RISKS>RISKS-1.n)      

----------------------------------------------------------------------

Date: Sat, 4 Jan 86 13:52:34 est
From: mikemcl@nrl-csr (Mike McLaughlin)
To: RISKS@SRI-CSL.ARPA
Subject:  C&P Computer Problems Foul 44,000 D.C. Phones

(Excerpted from Washington Post, Friday, 3 Jan 86, pp D1 & D4)
	By Elizabeth Tucker, Washington Post Staff Writer

	Up to 44,000 business and residential phone lines in the District 
(of Columbia) did not work or worked intermittently (Thursday, 2 Jan 86) 
because of computer problems at Chesapeake & Potomac Telephone Co. (C&P)
	C&P... said the ... company had equipment trouble in its central 
office... between 2:20 and 4 pm.  The problem was fixed when (the company)
shut off connections to the 44,000 lines for a split second, and then 
turned the connections back on.  
	C&P has more than 780,000 phone lines in (DC). 
	(For) nearly two hours... customers often were unable to make or 
receive calls... The telephone company had not diagnosed the precise 
cause of the problem late yesterday....
	Neither the White House nor the General Services Administration...
reported problems...
	(GWU) Hospital experienced a delay in getting dial tones, but only 
for about 10 minutes... 
	...the Associated Press... could receive calls but not make them 
between 2 and 4 pm.... 
	"You don't know what's going on in terms of news... I thought 
someone cut the cables.  I was worried." (AP spokesman)  
	The Washington Post Co. also experienced problems... 
	One State department official ... "... heard corridor gossip 
[that people] weren't getting calls in or out."  
	The DC police... reported no problems in receiving 911 emergency 
calls, and said there was no appreciable drop-off in calls... C&P... said 
some people may have experienced problems reaching 911... "It could be 
that no one had problems with 911."...
	The problem is not considered usual... "They don't know what caused
the problem, but it's up and working fine . . . For all intents and purposes
they reset the system, turned off all the connections and then turned them 
back on again -- LIKE RESETTING A COMPUTER." (EMPHASIS supplied) 
	"They are researching and analyzing the tapes to see what caused 
the problem."... such problems can occur when heavy calling is taking place
... but that such was not the case (2 Jan 86).  
	"We ruled it out . . . A lot of people aren't working downtown... 
calling volumes are down dramatically."  The telephone system "sometimes
can get confused," and think there is heavy calling when there isn't...
			# # # 
(parentheses and close up ... are mine) 
[brackets and spaced . . . are the Post's, as are "quotes"] 
I restrained myself from editorial comments, except where I just had to go 
all caps for emphasis.   Mike McLaughlin

------------------------------

Date: Thu 2 Jan 86 21:43:54-EST
From: "Jim McGrath" <MCGRATH@OZ.AI.MIT.EDU>
Subject: Putting the Man in the Loop
To: risks@OZ.AI.MIT.EDU

I found the calculations involving SDI reliability interesting, as
well as the debate on SDI software.  But it appears as if people may be
making some aspects of the problem too hard.  (I hope I have not
missed this part of the conversation....)

Obviously some problems (precise aiming of weapons for instance)
demand computer control.  And the time constraints involved in boost
phase interception may require computer control.  But other aspects
(such as initial activation of weapons for mid-course and terminal
phase interception, target discrimination, neutralization of
counter-measures) could be made with substantial human input.  Thus no
need for monster AI programs to cope with all possible contingencies -
humans are ready made for that purpose.

The model to think of is a sophisticated computer game.  The human
operator(s) would take care of truly strange cases (rising moons,
flocks of interplanetary geese) and determine strategy and/or
provide input parameters for the actual computer controllers (e.g.
"Looks like they are using dummy decoys of the DUMDUM class - better
change certain probabilities in your expert systems target
discriminator in the following manner").  The trade-off here is
decreased reliance on sophisticated AI programs that we all concede
the state of the art is not capable of producing, and increased
reliance on software that provides an excellent interface to the human
operator.  That would seem to be the easier task (we already have
experience in designing control systems for high-performance jet
fighters).

Of course, this increases the problems associated with real time
secure communications, but you were going to have to face them anyway.

Jim

------------------------------

Date: Thu 2 Jan 86 21:45:01-EST
From: "Jim McGrath" <MCGRATH@OZ.AI.MIT.EDU>
Subject: Testing SDI
To: risks@OZ.AI.MIT.EDU

From Risks, Volume 1 : Issue 33:

> From: horning@decwrl.DEC.COM (Jim Horning)
> - The systems that you cite, and that he cited, are all ones where each
> component is in routine use under the exact circumstances that they
> must be reliable for. No matter how many independent subsystems the
> Lipton SDI is divided into, NONE of them will get this kind of routine
> use under conditions of saturation attack where reliability will be
> most critical. Thus there is a high probability that each of them would
> fail (perhaps in independent ways!).

This seems to be a common problem with any modern weapon system (or
even not so modern - it took WWI for the Germans to realize that the
lessons of the 1880's concerning rapid infantry fire (and thus the
rise of infantry over cavalry) did not take artillery development
adequately into account).  But this might be easier to manage than
most.

What if, after suitable advance notice, the SDI system was fully
activated and targeted against one of our periodic meteor swarms?
While not perfect targets, they would be quite challenging (especially
with respect to numbers!), except for boost phase, and CHEAP.  If the
system was regenerative (i.e. you only expended energy and the like),
then the total cost would be very low.

Meteors are just a casual example.  My point is that the costs of
partial (but system-wide) testing do not have to lie with the
targets (which many people seem to assume) as much as with weapons
discharge - which may be quite manageable.

Jim

------------------------------

Date: Thu 2 Jan 86 21:45:43-EST
From: "Jim McGrath" <MCGRATH@OZ.AI.MIT.EDU>
Subject: Independent Battlestations
To: risks@OZ.AI.MIT.EDU

From Risks, Volume 1 : Issue 33:

> From: Herb Lin <LIN@MC.LCS.MIT.EDU>
>> From: horning at decwrl.DEC.COM (Jim Horning)
>> More generally, I am interested in reactions to Lipton's proposal that
>> SDI reliability would be improved by having hundreds or thousands of
>> "independent" orbiting "battle groups," with no communication between
>> separate groups (to prevent failures from propagating), and separately
>> designed and implemented hardware and software for each group (to
>> prevent common design flaws from affecting multiple groups). 
> That is absurd on the face of it.  To prevent propagation of failures,
> systems must be truly independent.  To see the nonsense involved,
> assume layer #1 can kill 90% of the incoming threat, and layer #2 is
> sized to handle a maximum threat that is 10% of the originally
> launched threat.  If layer 1 fails catastrophically, you're screwed in
> layer #2.  Even if Layers 1 and 2 don't talk to each other, they're
> not truly independent.

True but his solution WOULD reduce the probability of the propagation
of "hard" errors (i.e. corrupting electronic communications), and the
whole independence approach should lead to increased redundancy so as
to deal with "soft" propagation of errors such as you cite.

Remember, you do not need to PREVENT the propagation of errors, just
reduce the probability enough so that your overall system reliability
is suitably enhanced.  I think the approach has merit, particularly
over a monolithic system, and should not be shot down out of hand.
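
[A small numeric sketch of the point above - my illustration, not from
either posting, and the failure model is a deliberate simplification:
layer 1 normally stops 90%, layer 2 stops 90% of what reaches it but is
sized for only 10% of the original launch, and a catastrophic layer-1
failure saturates layer 2.  Reducing the probability of that failure,
rather than eliminating it, directly reduces expected leakage.  PGN]

```python
# Sketch (illustrative assumptions, see note above): expected fraction
# of the threat leaking through a two-layer defense, as a function of
# the probability p that layer 1 fails catastrophically.

def expected_leakage(p_layer1_fails, threat=1.0):
    # Normal case: layer 1 passes 10%, layer 2 stops 90% of that.
    normal = 0.10 * threat * 0.10
    # Catastrophic case: the full threat reaches layer 2, which can
    # only handle the 10% of the launch it was sized for; the rest leaks.
    catastrophic = threat - 0.10 * threat
    return (1 - p_layer1_fails) * normal + p_layer1_fails * catastrophic

print(round(expected_leakage(0.0), 4))   # prints 0.01
print(round(expected_leakage(0.05), 4))  # prints 0.0545
```

Under these assumptions, cutting the propagation probability in half
roughly halves the catastrophic term - enhancement without prevention.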

Jim

------------------------------

Date: Thu, 2 Jan 86 21:32:32 EST
From: Edward_Vielmetti%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
To: risks@sri-csl.arpa
Subject: Re: Failure probabilities in decision chains and decision independence

> * IF the overall decision is correct if and only if all five
>   sub-decisions are correct, and
> * IF the sub-decisions are statistically independent, and
> * IF the probability that each sub-decision is correct is 0.9,
The weak link in the derivation of failure rates in decision chains is
the assumption that failure probabilities are statistically independent.
I think it could be argued that the failure probabilities are
correlated; that is, if sub-system A fails because of event X, sub-
systems B, C, D, and E will be more likely to fail than if A survives
the event.  This correlation could come about as a result of proximity,
similar hardware or software, or a general design likeness.  The effect
would be to increase the probability that the overall decision is
correct.  In the case where B is 9 times as likely to fail if A fails,
the probability of the system failing is 11%, not 19%:

           ! B fails ! B survives ! total !
-----------!---------!------------!-------!
A fails    !   0.09  !    0.01    !  0.10 !
A survives !   0.01  !    0.89    !  0.90 !
-----------!---------!------------!-------!
 total     !   0.10  !    0.90    !  1.00 !

P(A and B both survive)       = 0.89   (correlated case)
P(A survives) * P(B survives) = 0.81   (independence assumption)
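
[A minimal sketch (mine, not Vielmetti's) checking the arithmetic for
the two-subsystem case: "9 times as likely to fail if A fails" means
P(B fails | A fails) = 9 x 0.10 = 0.90.  PGN]

```python
# Each sub-decision fails with probability 0.10.
p_a_fails = 0.10
p_b_fails = 0.10

# Independent case: the chain succeeds only if both sub-decisions do.
p_fail_indep = 1 - (1 - p_a_fails) * (1 - p_b_fails)

# Correlated case: P(B fails | A fails) = 9 * P(B fails) = 0.90.
p_both_fail = p_a_fails * (9 * p_b_fails)          # 0.09
p_fail_corr = p_a_fails + p_b_fails - p_both_fail  # inclusion-exclusion

print(round(p_fail_indep, 2))   # prints 0.19
print(round(p_fail_corr, 2))    # prints 0.11
```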

Edward Vielmetti
University of Michigan
Edward_Vielmetti%UMich-MTS.Mailnet@MIT-Multics.ARPA

------------------------------

Date: Friday,  3 Jan 1986 09:54:49-PST
From: lepine%why.DEC@decwrl.DEC.COM  (Normand Lepine 225-6715)
To: risks@sri-csl.ARPA
Subject: Re: Pharmacy prescription systems

Dave Platt raises a number of issues concerning the use of automated systems
in pharmacies for checking drug interactions, contraindications, etc.
Most of the points he raises need careful consideration.  Any automated
medical or pharmaceutical system provides a useful tool to physicians,
pharmacists, and other health-care workers.  As Dave correctly notes, the
usefulness of the tool is directly related to the construction and
maintenance of the data base (or knowledge base, in expert-system
implementations).  Those using such a system must also recognize that it is
a tool, to avoid the unthinking dependency that can develop; but providing
tools that enable a pharmacist to check on items that might otherwise be
overlooked is valuable.  And the areas of liability, reliability, etc. must
be studied further.

I do, however, take exception to Dave's statement that he is concerned even
more by such systems than by MYCIN et al. because they are one step closer
to the consumer.  Medical expert systems, diagnosis aids, etc. are no
further removed from the consumer than a pharmacy system is.  Each of the
two types of systems is used as an aid by trained practitioners (the issue
of competency aside).  Pharmacists currently provide an extra step in the
prescription chain by doing manually what the automated systems assist with.
Their education and experience are a valuable adjunct to the prescribing
physician's and are only supplemented by these systems.  The consumer is no
more involved as a user of these systems than as a user of the medical
expert systems.

Normand Lepine

------------------------------

Date: Fri, 3 Jan 86 13:31:39 pst
From: ssc-vax!ssc-bee!nelson@uw-beaver.arpa
To: uw-beaver!RISKS@SRI-CSL
Subject: Masquerading

   In the course of the password discussion in RISKS, the
question of faking login procedures was raised.  This really
brings up the issue of non-system or untrusted software
masquerading as system software.  One method around this problem
is to implement a trusted communication path between the system
and the user.

   Trusted path communications is described in the DoD Trusted
Computer System Evaluation Criteria, (CSC-STD-001-83).  A few
definitions are in order.  The evaluation criteria define a
trusted computing base (TCB) as "The totality of protection
mechanisms within a computer system ...".

A trusted path is defined as:
      "A mechanism by which a person at a terminal can
   communicate directly with the Trusted Computing Base. This
   mechanism can only be activated by the person or the Trusted
   Computing Base and cannot be imitated by untrusted software."

An implementation of a trusted path could have a terminal driver
look for a combination of keystrokes from the user that would be
defined to mean "enter trusted path".  When the keystrokes are
received the handler would switch to a trusted function that
would process user requests such as logging in.  In this case the
terminal driver would be considered part of the trusted computing
base.  When users want to log in they enter the keystrokes for
the trusted path and could then log in with some greater
assurance that they are not being duped.
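
[The dispatch logic described above can be sketched as follows.  This
is a toy model of mine, not from the Criteria; the class and names are
invented, and the "enter trusted path" sequence is arbitrary.  PGN]

```python
# Toy sketch of a terminal driver watching for a secure attention key.
# Once the defined keystroke sequence arrives, input is routed to the
# TCB's trusted login function rather than to whatever untrusted
# program is currently reading the terminal.

SECURE_ATTENTION = "\x1d\x1d"   # hypothetical "enter trusted path" keys

class TerminalDriver:
    def __init__(self, untrusted_reader, trusted_login):
        self.untrusted_reader = untrusted_reader  # foreground program
        self.trusted_login = trusted_login        # part of the TCB
        self.buffer = ""

    def receive(self, keystroke):
        # Keep only the most recent keystrokes, enough to match the key.
        self.buffer = (self.buffer + keystroke)[-len(SECURE_ATTENTION):]
        if self.buffer == SECURE_ATTENTION:
            # Sequence matched: sever the untrusted program from the
            # terminal and hand control to the trusted path.
            return self.trusted_login()
        # Otherwise pass input through to the untrusted program as usual.
        return self.untrusted_reader(keystroke)
```

The essential property is that the branch to trusted_login is taken in
the driver, below any software the user might have been fooled into
talking to, so untrusted code never sees the login dialogue.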

   This approach does not eliminate the possibilities for
masquerading, but it makes them more difficult.  The bad guys would
most likely need some hardware installed in the terminal link to
steal passwords, making the trusted path (or any other mechanism,
for that matter) ineffective.  Installing masquerading
hardware on single systems should be a difficult proposition if
the systems are physically protected.  However, when networking
is involved the possibilities are enormous.  Any node in a path
could have bad guys duping unsuspecting users.  Users would be
asked to trust an awful lot more hardware and software.

   I saw a TV show where the bad guys attempt to steal funds
electronically transferred via satellite link by masquerading as
the receiver after disabling the real receiver.  We were asked to
believe that they used a small dish antenna and a Radio Shack
portable computer, but still...  Masquerading is definitely a
risk to society when the bad guys are determined enough.

                Paul W. Nelson (..ssc-vax!ssc-bee!nelson)

------------------------------

End of RISKS-FORUM Digest
************************
-------