[mod.risks] RISKS-1.35

RISKS@SRI-CSL.ARPA (RISKS FORUM, Peter G. Neumann, Coordinator) (01/06/86)

RISKS-LIST: RISKS-FORUM Digest,  Sunday, 5 Jan 1986  Volume 1 : Issue 35

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
     under the auspices of the Association for Computing Machinery
                   Peter G. Neumann, moderator
      (and Chairman, ACM Committee on Computers and Public Policy)

Contents: 
  SDI --
    Meteors as substitutes for nuclear war (Jim Horning, Dave Parnas)
    Putting a Man in the Loop (Jim McGrath, Herb Lin, JM again)
    Testing SDI (Herb Lin, Jim McGrath, HL again)
    Independent Battlestations (Herb Lin, Jim McGrath, HL again)
    The Goal of SDI; Politicians (Jim McGrath)
  Pharmacy prescription systems (Rodney Hoffman)
  How to steal people's passwords (Roy Smith)

          [*** For those of you who read BOTH ARMS-D and RISKS, there is much
               overlap in this issue.  There seem to be so few of you who do
               read both, that the annoyance is justified.  SDI clearly is
               relevant to RISKS, and thus I have resisted some suggestions
               that SDI material be confined to ARMS-D.  I have pruned
               somewhat, but it is hard to linearize the debate.  PGN ***]

Summary of Groundrules:
  The RISKS Forum is a moderated digest.  To be distributed, submissions should
  be relevant to the topic, technically sound, objective, in good taste, and 
  coherent.  Others will be rejected.  Diversity of viewpoints is welcome.  
  Please try to avoid repetition of earlier discussions.

(Contributions to RISKS@SRI-CSL.ARPA, Requests to RISKS-Request@SRI-CSL.ARPA)
(FTP Vol 1 : Issue n from SRI-CSL:<RISKS>RISKS-1.n)      

----------------------------------------------------------------------

From: horning@decwrl.DEC.COM (Jim Horning)
Date:  4 Jan 1986 1536-PST (Saturday)
To: RISKS@SRI-CSL
Cc: MCGRATH@OZ.AI.MIT.EDU, Horning@decwrl.DEC.COM
Subject: Meteors as substitutes for nuclear war

Jim,

Thanks for your comments. However, I do have to disagree with your
cheerful assessment of the chances for REALISTIC testing of any SDI
system or major subsystem. The problem is not that there is no testing
that could be done, but that any substantial change in the system's
environment is likely to provoke a new set of unexpected behaviors.

Certainly, an SDI that failed to shoot down a meteor swarm could be
judged faulty. (And certainly any experienced software engineer would
predict that it WOULD fail to shoot down its first meteor swarm.) But
what reason is there to believe that an SDI that had shot down every
meteor swarm since its deployment would act in the intended manner when
faced with a full-scale nuclear attack, which would certainly be
accompanied by both attacks on the SDI system itself and by extensive
counter-measures?

It is in the nature of counter-measures that you cannot be sure
in advance that you have full knowledge of the counter-measures that
your opponent will throw at you--especially in a first engagement.
Thus, there is in principle no way to adequately test your system's
response to counter-measures.

It is in the nature of distributed and real-time systems that the most
catastrophic failures tend to come in periods of heaviest load. Thus
the results of small-scale testing can't be extrapolated with confidence.
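
To make that concrete, here is a minimal simulation sketch (Python, purely
illustrative; the service time, deadline, and load figures are invented) of a
single server with a hard deadline.  At light load almost nothing misses the
deadline; near capacity almost everything does, which is why light-load test
results say little about behavior under a full-scale attack.

import random

def fraction_missing_deadline(arrival_rate, service_time=1.0,
                              deadline=5.0, n_jobs=100000, seed=1):
    """Fraction of jobs whose total time in the system exceeds the deadline."""
    random.seed(seed)
    clock = 0.0            # arrival time of the current job
    server_free_at = 0.0   # when the single server next becomes idle
    missed = 0
    for _ in range(n_jobs):
        clock += random.expovariate(arrival_rate)  # Poisson arrivals
        start = max(clock, server_free_at)         # wait if the server is busy
        server_free_at = start + service_time
        if server_free_at - clock > deadline:      # waiting + service too long
            missed += 1
    return missed / n_jobs

# Offered load as a fraction of capacity (capacity = 1 job per unit time):
for load in (0.3, 0.6, 0.9, 0.99):
    print("load %.2f: %5.1f%% miss the deadline"
          % (load, 100 * fraction_missing_deadline(load)))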

It is in the nature of destructive testing (which a full-scale nuclear
attack on an SDI certainly would be) that you can't test the thing that
you will ultimately rely on.  However, whatever you do test had better
be so assuredly similar to the ultimate system that you can be
confident in the extrapolation.

Finally, one brief non-software point:  I read a column the other day (Tom
Wicker?) that pointed out that every technology that would be effective
against ballistic missiles would be far more effective, far sooner, as an
anti-satellite system, since the behavior of satellites is more predictable,
and the attacker can pick his moment. Even without SDI, the US is far more
dependent on satellites for both defense and civilian uses than the USSR.
And any space-based SDI would make a very tempting ASAT target. So
developing the technology needed for the system INCREASES the risk of
relying on the system.

Jim H.

------------------------------

Date: Sat, 4 Jan 86 16:50:35 pst
From: vax-populi!dparnas@nrl-css.arpa (Dave Parnas)
To: nrl-css!RISKS@SRI-CSL.ARPA
Subject: Re: Meteors as substitutes for nuclear war
Cc: nrl-css!horning@decwrl.ARPA

  The comments that you reported on testing SDI, proposing that we test
it by shooting at periodic meteor swarms make me wonder how many of 
the people in our profession have trouble discriminating between 
real problems and arcade games.  Shooting at an easily predictable
non-evasive meteor has about the same relation to the real job of 
SDI as shooting at a target in a booth at a county fair has to
shooting at wolves in heavy brush from a moving airplane.

	If I had a computer program that had been tested by controlling a 
weapon at a county fair, I would not have much confidence in its performance
the first time that the B.C. Government tried to use it in its periodic 
wolf kills from light planes.  In fact, I hope someone sells that idea
to them.
	
Dave

------------------------------

Date: Sat 4 Jan 86 19:24:35-PST
From: Jim McGrath <J.JPM@Epic>
Subject: Re: Putting a Man in the Loop
To: "arms-d@mc"@Sushi
cc: "lin@mc"@Sushi
Reply-to: mcgrath%mit-oz@mit-mc.arpa

    From: Herb Lin <LIN@MC.LCS.MIT.EDU> [Date: Sat, 4 Jan 86 18:52:04 EST]
                                        [Herb's message is embedded.  PGN]

        From: Jim McGrath <MCGRATH at OZ.AI.MIT.EDU>
        The model to think of is a sophisticated computer game.  The 
        human operator(s) would take care of truly strange cases
        (rising moons, flocks of interplanetary geese)..

    But the major problem is not the things that the computer isn't
    sure about, but rather the things that it is sure about that are
    not true.  How would the human ever know to intervene?

I thought a bit about that, and have a suitable elaboration.
Basically, you require a "two key" system, with the computer holding
one key and a human operator/monitor another.  This is primarily for
the "go/no go" decision.  After an attack is acknowledged, you concede
the possibility of overkilling by the computer (taking out third party
satellites and the like) in return for the more immediate response to
attack provided by the computer.

This takes care of the computer going off half-cocked.  If you are
worried about the computer missing an actual attack, you can now set
the detection threshold low, trusting the human monitor to withhold
activation when it is not warranted.

Actually, this is too simple.  What you really want is to have the
hardware/software under a set of human operators, perhaps partitioned
to provide zone coverage.  The humans act as before, mainly as
checkpoints for activation decisions, overseeing strategy, sending
expert information to the computers as the situation unfolds so that
the software does not have to be a tactical genius.  Now a set of
human supervisors sit on top of the operators.  They have another
"key," and so can break ties on activation decisions (or even override
lower level decisions).  Their other missions are to advise operators
on developing strategy, keep the command authorities informed, and to
act as "free safety."  That is, they will have the authority to
override operator commands so that targets that find seams in the
zones (or similarly defy the operator/computer teams) will be
targeted for attack.  Normally they will access information at a much
higher level than an operator (the former will have to deal with
thousands of targets - the latter tens to low hundreds).
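
To make the control structure concrete, here is a rough sketch (Python,
purely illustrative; the zone names and the shape of the interfaces are
invented, not a design) of the activation rule just described: the computer
holds one key, an operator holds the other, and a supervisor can break ties
or act as free safety by overriding an operator.

def activation_decision(computer_key, operator_key, supervisor_key=None):
    """Two-key go/no-go rule with an optional supervisory override.

    computer_key, operator_key: True (turn the key) or False (withhold it).
    supervisor_key: None (silent), True (force go), or False (force no-go).
    Returns True if the weapons are to be activated.
    """
    if supervisor_key is not None:        # supervisor breaks ties / overrides
        return supervisor_key
    return computer_key and operator_key  # otherwise both keys are required

def may_engage(target_zone, operator_zones, activated, supervisor_override=False):
    """Firing at individual targets happens only after activation, and only
    within an operator's zone; the supervisor covers seams between zones."""
    if not activated:
        return False
    return target_zone in operator_zones or supervisor_override

# Example: the computer detects an attack, the operator concurs, the
# supervisor stays silent; one target sits in a seam between zones.
go = activation_decision(computer_key=True, operator_key=True)
print("activated:", go)
print("target in a covered zone:", may_engage("zone-3", {"zone-3"}, go))
print("target in a seam:", may_engage("zone-7", {"zone-3"}, go,
                                      supervisor_override=True))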

Other refinements can be advanced: easing or tightening the go/no-go
decision according to alert status and the like.  The main point is
that a man in the loop is a big win, since you get a preprogrammed
general-purpose computer which can take care of those "higher level"
decisions.  Response time is not a concern - seconds are not vital if
you have 20 minutes.  Only for boost phase interception do you run
into difficulties.

Jim

------------------------------

Date: Sat 4 Jan 86 21:47:31-PST
From: Jim McGrath <J.JPM@Epic>
Subject: Re: Putting a Man into the Loop
To: "lin@mc"@Sushi
cc: "arms-d@mc"@Sushi, "Risks@sri-csl"@Sushi, "mcgrath%oz@mc"@Sushi

    From: Herb Lin <LIN@MC.LCS.MIT.EDU>
    ...
        "go/no-go"

    You mean fire/don't fire?

I mean weapon activation.  Firing decisions for specific targets will
be made by computer, but the weapons themselves will be inert until
activated.

        After an attack is acknowledged, you concede the possibility
        of overkilling by the computer (taking out third party
        satellites and the like) in return for the more immediate
        response to attack provided by the computer.

    So your solution is that you kill everything, and don't do
    discrimination?

No.  I meant exactly what I said.  You concede that you might make a
mistake in firing (which was your original objection).  You do not aim
for making a mistake.  I explicitly said in the same message that one
of the jobs of human operators is to assist in real time parameter
adjustment so that the computer controlled weapons would be able to
discriminate better.

As I said earlier, boost phase poses a particular problem.  The only
thing I can see to do now is to trust in AI to give you a good initial
screen, and to augment this with a human authorized to override it
within a few seconds.  This could work well for limited periods of
time (such as alerts), but I have problems with it for extended
periods.

Jim

------------------------------

Date: Sat,  4 Jan 86 18:54:43 EST
From: Herb Lin <LIN@MC.LCS.MIT.EDU>
Subject:  Testing SDI
To: MCGRATH@OZ.AI.MIT.EDU
cc: LIN@MC.LCS.MIT.EDU, "RISKS-LIST:"@MC.LCS.MIT.EDU,
    ARMS-D@MC.LCS.MIT.EDU, risks@OZ.AI.MIT.EDU

    From: Jim McGrath <MCGRATH at OZ.AI.MIT.EDU>

    This [full-scale system testing -- HL] seems to be a common problem with 
    any modern weapon system (or
    even not so modern - it took WWI for the Germans to realize that the
    lessons of the 1880's concerning rapid infantry fire (and thus the
    rise of infantry over cavalry) did not take artillery development
    adequately into account).

And there have been disasters.  Only here, the disaster is bigger.

    What if, after suitable advance notice, the SDI system was fully
    activated and targeted against one of our periodic meteor swarms?
    While not perfect targets, they would be quite challenging (especially
    with respect to numbers!), except for boost phase, and CHEAP.  If the
    system was regenerative (i.e. you only expended energy and the like),
    then the total cost would be very low.

Interesting example, but problematic.  No kill assessment for one,
under some circumstances.  Entirely different signatures for another.

    Meteors are just a casual example.  My point is that the costs of
    partial (but system wide) testing does not have to lie with the
    targets (which many people seem to assume) as much as with weapons
    discharge - which may be quite manageable.

But if the tests are to be realistic, then the right targets are
essential, especially since a counter-measure is to try to fool with
the targets that the defense sees.

------------------------------

Date: Sat,  4 Jan 86 22:45:25 EST
From: Herb Lin <LIN@MC.LCS.MIT.EDU>
Subject:  Testing SDI
To: mcgrath@OZ.AI.MIT.EDU
cc: LIN@MC.LCS.MIT.EDU, ARMS-D@MC.LCS.MIT.EDU, risks@SRI-CSL.ARPA
In-reply-to: Msg of Sat 4 Jan 86 19:36:57-PST from Jim McGrath <J.JPM at Epic>
Message-ID: <[MC.LCS.MIT.EDU].773136.860104.LIN>

    From: Jim McGrath <MCGRATH at OZ.AI.MIT.EDU>

    remember that the major cost of the target simulation is in
    the boost phase.  Once the targets are in sub-orbit, it makes no
    difference whether they were fired independently by hundreds of
    expensive boosters or were accelerated from orbital velocity, after
    having been placed there originally through more economical means.
    Terminal phase tests are especially easy to do this way.  Only boost
    phase is intrinsically expensive.

I agree with your technical point.  But successful boost phase is what
SDI is all about.  The technology for dealing with mid-course and
terminal is ALREADY here.  You need boost phase so that you can thin
out the midcourse and terminal.

------------------------------

Date: Sat 4 Jan 86 19:36:57-PST
From: Jim McGrath <J.JPM@Epic>
Subject: Re: Testing SDI
To: "arms-d@mc"@Sushi, "risks@sri-csl"@Sushi
cc: "lin@mc"@Sushi, "mcgrath%oz@mc"@Sushi
Reply-to: mcgrath%mit-oz@mit-mc.arpa

    From: Herb Lin <LIN@MC.LCS.MIT.EDU>
       
        From: Jim McGrath <MCGRATH at OZ.AI.MIT.EDU>

        What if, after suitable advance notice, the SDI system was fully
        activated and targeted against one of our periodic meteor swarms?

    Interesting example, but problematic.  No kill assessment for one,
    under some circumstances.  Entirely different signatures for another.

It would test some aspects of the system on a system wide level (such
as detection and tracking), and would even provide good kill estimates
in some cases (KE weapons and small targets).  But as I said:

        Meteors are just a casual example.  My point is that the costs of
        partial (but system wide) testing does not have to lie with the
        targets (which many people seem to assume) as much as with weapons
        discharge - which may be quite manageable.

    But if the tests are to be realistic, then the right targets are
    essential, especially since a counter-measure is to try to fool
    with the targets that the defense sees.

True, but remember that the major cost of the target simulation is in
the boost phase.  Once the targets are in sub-orbit, it makes no
difference whether they were fired independently by hundreds of
expensive boosters or were accelerated from orbital velocity, after
having been placed there originally through more economical means.
Terminal phase tests are especially easy to do this way.  Only boost
phase is intrinsically expensive.

(That's two messages where I've come up with approaches to problems
that work on all phases except boost phase.  Although boost-phase
defense is initially attractive, perhaps concentrating more on
mid-course and terminal defense will ultimately prove more beneficial.)

Jim

------------------------------

Date: Sat,  4 Jan 86 18:59:03 EST
From: Herb Lin <LIN@MC.LCS.MIT.EDU>
Subject:  Independent Battlestations
To: MCGRATH@OZ.AI.MIT.EDU
cc: LIN@MC.LCS.MIT.EDU, "RISKS-LIST:"@MC.LCS.MIT.EDU,
    risks@OZ.AI.MIT.EDU

    > From: Herb Lin <LIN@MC.LCS.MIT.EDU>
    >> From: horning at decwrl.DEC.COM (Jim Horning)
    >> More generally, I am interested in reactions to Lipton's proposal that
    >> SDI reliability would be improved by having hundreds or thousands of
    >> "independent" orbiting "battle groups,"..
    > That is absurd on the face of it.  To prevent propagation of failures,
    > systems must be truly independent.  To see the nonsense involved,
    > assume layer #1 can kill 90% of the incoming threat, and layer #2 is
    > sized to handle a maximum threat that is 10% of the originally
    > launched threat.  If layer 1 fails catastrophically, you're screwed in
    > layer #2.  Even if Layers 1 and 2 don't talk to each other, they're
    > not truly independent.

    True but his solution WOULD reduce the probability of the propagation
    of "hard" errors (i.e. corrupting electronic communications), and the
    whole independence approach should lead to increased redundancy so as
    to deal with "soft" propagation of errors such as you cite.

    Remember, you do not need to PREVENT the propagation of errors, just
    reduce the probability enough so that your overall system reliability
    is suitably enhanced.  I think the approach has merit, particularly
    over a monolithic system, and should not be shot down out of hand.

This is the fundamental point of disagreement.  If SDI is just another
defensive system, then all that you say is right.  But it isn't.  I
will stop beating the perfect system horse when the SDI supporters
acknowledge that large-scale population defense can never be made
certifiably reliable.  
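
To spell out the arithmetic in the example quoted above, here is a rough
sketch (Python; the warhead count and the layer-2 capacity are invented round
numbers).  A second layer sized for the nominal output of the first has no
margin at all when the first layer's failure is catastrophic rather than
independent and graceful.

WARHEADS = 1000
LAYER2_CAPACITY = 100   # layer 2 sized for 10% of the original launch

def leakers(layer1_kill_fraction):
    """Warheads surviving both layers, assuming layer 2 stops at most
    LAYER2_CAPACITY of whatever reaches it."""
    reach_layer2 = WARHEADS * (1 - layer1_kill_fraction)
    return max(0, reach_layer2 - LAYER2_CAPACITY)

print("layer 1 works as designed (90%):", leakers(0.90), "leakers")
print("layer 1 degraded to 50%:        ", leakers(0.50), "leakers")
print("layer 1 fails outright (0%):    ", leakers(0.00), "leakers")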

------------------------------

Date: Sat 4 Jan 86 19:52:46-PST
From: Jim McGrath <J.JPM@Epic>
Subject: Re: Independent Battlestations
To: "risks@mc"@Sushi, "arms-d@mc"@Sushi
cc: "lin@mc"@Sushi, "mcgrath%oz@mc"@Sushi

What do you mean by "certifiably reliable?"  While politicians may
talk about 100% reliability, we are all scientists and engineers here
- we know that nothing, including such things as the sun rising, is
100% reliable.  You must really mean X% reliable, where X is a high
number (perhaps high enough so as to reduce to a very low probability
the chance of a single warhead getting through).  In that case,
independent battlestations, and other measures, might give you the
number you need.  I submit that it is too early to dismiss these
approaches out of hand, since you are really talking about a
quantitative difference and we do not have good numbers yet.
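
For a sense of what X has to be, here is a back-of-the-envelope sketch
(Python; the attack sizes and tolerated leak probabilities are invented, and
the assumption that warheads are intercepted independently is itself
optimistic):

# If each incoming warhead is intercepted independently with probability q,
# then P(at least one of N gets through) = 1 - q**N.  Solve for the q needed
# to keep that probability below a chosen tolerance.
def required_intercept_probability(n_warheads, tolerated_leak_probability):
    return (1 - tolerated_leak_probability) ** (1.0 / n_warheads)

for n in (1000, 10000):
    for tol in (0.5, 0.01):
        q = required_intercept_probability(n, tol)
        print("N = %5d, P(any leak) <= %4.2f  ->  per-warhead kill >= %.6f"
              % (n, tol, q))

How high a q is actually achievable is, of course, exactly where we disagree.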

Anyway, I am arguing for a highly reliable, but by no means perfect,
system.  My X would probably be lower than yours.  I really do think
that there is a difference between a few million dead (horrible, on
the scale of WW II) and hundreds of millions dead (utterly
unprecedented).  And while I am certain that we all, including the
public, would like as high an X as possible, most would agree that
losing a city or two and some missile bases/airfields would be a lot
better than losing everything.

Besides, complaints that politicians are lying do not sit well with
me.  Of course they are lying.  WE WANT THEM TO LIE.  Politicians who
tell the truth get kicked out of office.  Our entire posture of
extended deterrence is a joke, since we do not have the capability to
credibly back it up.  But you try to get someone elected promising
to reinstate the draft, raise the defense budget further, or pull back
our troops and cut Europe/Japan loose.

We have to make do with what we have.   

Jim

     [Do we have to have it just because we can make do with it?  PGN]

------------------------------

Date: Sat,  4 Jan 86 23:03:58 EST
From: Herb Lin <LIN@MC.LCS.MIT.EDU>
Subject:  Independent Battlestations
To: mcgrath@OZ.AI.MIT.EDU
cc: ARMS-D@MC.LCS.MIT.EDU, LIN@MC.LCS.MIT.EDU,
    RISKS@MC.LCS.MIT.EDU

    From: Jim McGrath <J.JPM at Epic>

    What do you mean by "certifiably reliable?"

A system whose performance is known in advance to be adequate to the
task.  I don't care if the number for reliability isn't 100%, just
high enough so that no one dies.

    We all, including the
    public, would like as high an X as possible, most would agree that
    losing a city or two and some missile bases/airfields would be a lot
    better than losing everything.

But that is not the goal of the SDI.

    Besides, complaints that politicians are lying do not sit well with
    me.  Of course they are lying.  WE WANT THEM TO LIE.  [...]

    We have to make do with what we have.

So you condone lying to the public as a tool of public policy?  How
would you like to acknowledge that publicly in a letter to the NY
Times?  Don't forget to add that you support SDI, and that truth
doesn't matter when you try to justify a weapon system -- never mind
what it actually does.  We can say that we will spend millions of
dollars on AIDS research since that will save lives, and spend the
money instead on nerve gas, which will also help to eliminate AIDS (by
killing homosexual soldiers).

Sorry; I believe that elected leaders have a responsibility to tell
the truth to the public, and to educate them away from fairy tales.  I
would rather see precious defense dollars go to create good anti-tank
weapons; that would have some chance of improving extended deterrence.

------------------------------

Date: Sat 4 Jan 86 21:52:13-PST
From: Jim McGrath <J.JPM@Epic>
Subject: The Goal of SDI
To: "lin@mc"@Sushi
cc: "arms-d@mc"@Sushi, "risks@sri-csl"@Sushi, "mcgrath%oz@mc"@Sushi

    From: Herb Lin <LIN@MC.LCS.MIT.EDU>

        We all, including the public, would like as high an X as
        possible, most would agree that losing a city or two and some
        missile bases/airfields would be a lot better than losing
        everything.

    But that is not the goal of the SDI.

Which does not mean it should not be supported for that reason.  Most
government programs have consequences (sometimes good, sometimes bad)
never conceived of in their initial purpose.  That does not mean you
ignore them when evaluating the program.

I simply do not follow your logic at all.  Do you want to score points
against Reagan and Company?  Or do you want to discuss strategic
defense, and SDI as it is developing?  I'm not interested in defending
Reagan, just developing defense and seeing that it is done the best
way possible.

Jim

------------------------------

Date: Sat 4 Jan 86 22:12:56-PST
From: Jim McGrath <J.JPM@Epic>
Subject: Politicians
To: "lin@mc"@Sushi
cc: "arms-d@mc"@Sushi, "risks@sri-csl"@Sushi, "mcgrath%oz@mc"@Sushi

    From: Herb Lin <LIN@MC.LCS.MIT.EDU>

        Besides, complaints that politicians are lying do not sit well
        with me.  Of course they are lying.  WE WANT THEM TO LIE.
        Politicians who tell the truth get kicked out of office....

    So you condone lying to the public as a tool of public policy? [...]

You are arguing from emotion (almost hysterically), not reason, which
I do not expect of you. I stated a fact: public officials must lie on
many (not all) issues in order to retain office.  (I could have said
"evade," or just "keep quiet about" if the word "lie" hits you so hard
- I see no functional distinction.)  This is one thing that everyone,
no matter what their policy perspective, agrees on (this comes from
several graduate seminars, and personal experience).  I did not say
that I liked that state of affairs much.  But I do not find it
reasonable to blame the politicians.  Rather, the fault lies with the
voters.

Unlike many of my friends in the social sciences, I do not
concentrate on the "oughts" of the world.  I focus on the empirical
evidence.  Perhaps it is the scientist in me.  So when I observe a
political system that punishes frank and honest talk about some issues
(usually those, like nuclear war and taxes, that are too horrible to
contemplate), I acknowledge this as a fact, and do not waste time
decrying it.  My decrying it is not (to the first approximation) going
to change human nature.  Thus my comment "we have to make do with what
we have."

    Sorry; I believe that elected leaders have a responsibility to
    tell the truth to the public, and to educate them away from fairy
    tales.  I would rather see precious defense dollars go to create
    good anti-tank weapons; that would have some chance of improving
    extended deterrence.

Come on now.  Leaders can only lead where people are, ultimately,
willing to go.  Just look at the nuclear freeze movement.  This is the
level at which the public thinks of nuclear war when it is forced to
think.

Finally, your last sentence shows that you missed my entire point.
Congress (i.e. the people) will not budget for the necessary increases
in conventional weapons (let alone the Europeans).  Ultimately it does
not matter what you or I like, it is what the people will accept.  And
if they act "irrationally," then I feel we cannot just sit back and
demand that our "leaders" make them change their minds, or that the
people change their stripes.  Instead we should focus on the possible
- which is, after all, what politics is all about.

Jim

----------------------------------------------------------------------

Date: 5 Jan 86 17:52:32 PST (Sunday)
From: Hoffman.es@Xerox.ARPA
Subject: Re: Pharmacy prescription systems
To: RISKS FORUM (Peter G. Neumann, Coordinator) <RISKS@SRI-CSL.ARPA>
cc: Hoffman.es@Xerox.ARPA

Not speaking legally, but just morally, I think any professional who
relies on a single source of advice when more are available is derelict.
In any life-critical situation, information should always be tested or
cross-checked whenever possible.

Would you exonerate a pharmacist (or a physician, for that matter) who
relied completely on a particular reference book which had a critical
error in it?  Or on the tape-recorded lecture of some (human,
less-than-perfect) instructor?  Or, in the case under consideration, on
some expert computer system?  I certainly wouldn't.
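
The cross-check need not be elaborate.  As a toy sketch (Python; the sources
and the drug pair are hypothetical), the rule is simply that disagreement
between independent references blocks the action and sends a human to look:

def check_interaction(drug_a, drug_b, sources):
    """Ask every available reference; act only if they all agree."""
    answers = {name: lookup(drug_a, drug_b) for name, lookup in sources.items()}
    if len(set(answers.values())) != 1:
        raise RuntimeError("sources disagree %r - refer to a pharmacist" % answers)
    return answers.popitem()[1]

# Hypothetical sources: an expert system and a printed reference table
# containing the kind of critical error described above.
expert_system = lambda a, b: (a, b) in {("warfarin", "aspirin")}
printed_table = lambda a, b: False

try:
    check_interaction("warfarin", "aspirin",
                      {"expert system": expert_system,
                       "printed table": printed_table})
except RuntimeError as problem:
    print(problem)   # the single-source error is caught instead of acted upon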

The pharmacist does not shoulder all the moral burden in this
hypothetical case.  Some of it belongs to the information source
(publisher, human instructor, or expert computer system).  Just not all
of it.  As the other commenters on the subject have noted, the user must
be carefully trained against unthinking dependency.  That's the single
most important factor.  

This is another instance of the urgent necessity to debunk the popular
myth of computer infallibility.  People are all too eager to stop
thinking and let others (parents, teachers, priests, politicians,
computers) make all their decisions.  And the others usually love the
role, too.  In the case of computer systems, we their builders are in
this seductive position and we must remember the inherent perils.

--Rodney Hoffman  

----------------------------------------------------------------------

Date: Sun, 5 Jan 86 16:56:19 est
From: allegra!phri!roy@seismo.CSS.GOV (Roy Smith)
To: allegra!seismo!sri-csl.arpa!risks@seismo.CSS.GOV
Subject: How to steal people's passwords
Cc: roy@seismo.CSS.GOV

	In Volume 1, Issue 34, nelson@uw-beaver.arpa talked about faking
login procedures and non-system software masquerading as system software.
Well, an interesting thing just happened to me along those lines.

	We've got several modems used for both dial-in and dial-out.  To
place an outgoing call, you disable logins on a line (a program, acucntrl,
does this for you) and use kermit to talk to the modem.  I had just done
this when somebody called in.  The modem, of course, didn't know logins
were disabled and answered the phone.

	At this point, nothing much prevented me from pretending to be a
login process and prompting the unsuspecting user for his login name and
password, then faking a burst of noise and breaking the connection so s/he
wouldn't suspect anything out of the ordinary.
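
Just to show how little is involved, here is a sketch of the whole exchange
as the caller would see it (Python, a stand-alone simulation run at an
ordinary terminal rather than wired to our modem line; the banner text is
only for illustration):

import getpass
import random
import string
import sys

def fake_login():
    sys.stdout.write("\r\nphri login: ")        # mimic the real banner
    sys.stdout.flush()
    username = sys.stdin.readline().strip()
    password = getpass.getpass("Password:")     # echo off, just like login
    # At this point the caller's name and password belong to whoever owns the
    # line; a real attacker would record them here.
    del username, password
    noise = "".join(random.choice(string.ascii_letters + "~#%&*{}")
                    for _ in range(40))
    sys.stdout.write(noise + "\r\nNO CARRIER\r\n")  # fake line noise, then "drop"

if __name__ == "__main__":
    fake_login()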

	Our PBX allows you to forward all your calls by dialing "12 NNN".
Until canceled, all incoming calls get routed to xNNN.  I could have had
the modem forward all its calls to my office phone.  Plug an Apple-II
w/modem into my phone line, and I'm all set to steal passwords for a night.
Fix it up in the morning, and nobody would suspect anything worse than a
modem temporarily going bonkers, which happens often enough.

Roy Smith <allegra!phri!roy>
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

------------------------------

End of RISKS-FORUM Digest
************************

-------