[mod.risks] RISKS-3.64 DIGEST

RISKS@CSL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (09/24/86)

RISKS-LIST: RISKS-FORUM Digest Wednesday, 24 September 1986 Volume 3 : Issue 64

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Sane sanity checks /  risking public discussion (Jim Purtilo)
  More (Maybe Too Much) On More Faults (Ken Dymond)
  Re: Protection of personal information (Correction from David Chase)
  Towards an effective definition of "autonomous" weapons
      (Herb Lin, Clifford Johnson [twice each])

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
(Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM)
  (Back issues Vol i Issue j available in CSL.SRI.COM:<RISKS>RISKS-i.j.
  Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

Date: Tue, 23 Sep 86 12:54:10 EDT
From: Jim Purtilo <purtilo@brillig.umd.edu>
To: RISKS@csl.sri.com
Subject: Sane sanity checks /  risking public discussion

  [Regarding ``sanity checks'']

Let us remember that there are sane ``sanity checks'' as well as the other 
kind. About 8 years ago, while a grad student at an Ohio university that 
probably ought to remain unnamed, I learned of the following follies:

The campus had long been doing class registration and scheduling via
computer, but the registrar insisted on a ``sanity check'' in the form of
hard copy.  Once each term, a dozen guys in overalls would spend the day
hauling a room full of paper boxes over from the CS center, representing a
paper copy of each document that had anything to do with the registration
process.  [I first took exception to this because their whole argument in
favor of "computerizing" was based on reduced costs, but I guess that should
be hashed out in NET.TREE-EATERS.]

No one in that registrar's office was at all interested in wading through
all that paper. Not even a little bit.

One fine day, the Burroughs people came through with a little upgrade to the
processor used by campus administration.  And some "unused status bits"
happened to float the other way.

This was right before the preregistration documents were run, and dutifully
about 12,000 students' preregistration requests were scheduled and mailed
back to them.  All of them were signed up "PASS/FAIL".  This was
meticulously recorded on all those trees stored in the back room, but no one
wanted to look.

I suppose a moral would be ``if you include sanity checks, make sure a sane
person would be interested in looking at them.''
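
To make the moral concrete, here is a minimal sketch of a sanity check
somebody might actually read - a one-page aggregate rather than a room
full of boxes.  The record layout, field name, and threshold below are
invented for illustration, not the university's actual system:

    # A one-page summary a human will actually glance at: 12,000
    # PASS/FAIL registrations out of 12,000 would leap off the page.
    # (Record layout, field name, and threshold are all invented.)
    from collections import Counter

    def grading_summary(registrations):
        """registrations: iterable of dicts with a 'grading_basis' field."""
        totals = Counter(r["grading_basis"] for r in registrations)
        for basis, count in sorted(totals.items()):
            print(f"{basis:10s} {count:6d}")
        # Flag anything wildly out of line with ordinary proportions.
        if totals.get("PASS/FAIL", 0) > 0.5 * sum(totals.values()):
            print("*** WARNING: PASS/FAIL fraction implausibly high ***")

    grading_summary([{"grading_basis": "PASS/FAIL"}] * 12000)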


  [Regarding break-ins at Stanford]

A lot of the discussion seems to revolve around ``hey, Brian, you got what
you asked for'' (no matter how kindly it is phrased).  Without editorializing
further either way, I'd like to make sure that Brian is commended
for sharing the experience.  It sure would be a shame if ``coming clean''
about a bad situation came to be viewed as itself constituting a risk...

               [I am delighted to see this comment.  Thanks, Brian!  PGN]

------------------------------

Date: 23 Sep 86 09:18:00 EDT
From: "DYMOND, KEN" <dymond@nbs-vms.ARPA>
Subject: More (Maybe Too Much) On More Faults
To: "risks" <risks@csl.sri.com>

The intuitive sense made by Dave Benson's argument in RISKS 3.50, that

  >We need to understand that the more faults found at any stage of
  >engineering software the less confidence one has in the final product.  
  >The more faults found, the higher the likelihood that faults remain.

seems to invite a search for confirming data, because there are also
counter-intuitive possibilities.  For example, there is the notion that the
earlier in the life cycle errors are detected, the cheaper they are to
remedy; there is a premium on finding faults early.  And there is the
further notion that with tools for writing requirements in some kind of
formal language that can be checked for syntactic and semantic completeness
and consistency, it is possible to detect at least some errors at the
requirements stage that might not otherwise have been caught till later.
So SE projects using these and similar methods for other stages in the life
cycle would tend to show more errors earlier.  Would the products from
these projects therefore be less reliable than others made with, say, more
traditional, less careful design and programming practice?
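
To put rough numbers on that first notion: relative cost-to-fix
multipliers of the sort commonly cited in the software engineering
literature give something like the following.  The specific multipliers
and fault counts here are invented for the sketch, not measured data:

    # Commonly cited shape of cost-to-fix growth across life-cycle
    # phases; the multipliers and fault counts are assumptions only.
    RELATIVE_COST = {"requirements": 1, "design": 5, "coding": 10,
                     "testing": 20, "operation": 100}

    def total_repair_cost(faults_by_phase):
        """faults_by_phase: {phase: number of faults found there}."""
        return sum(RELATIVE_COST[p] * n for p, n in faults_by_phase.items())

    # A project that surfaces MORE faults, earlier, can cost far less
    # to repair than one that finds fewer faults, later:
    early = {"requirements": 30, "design": 20, "testing": 5, "operation": 1}
    late  = {"requirements": 2, "design": 5, "testing": 25, "operation": 10}
    print(total_repair_cost(early), total_repair_cost(late))  # 330 1527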

Dave makes the further argument in RISKS 3.57:

  >Certain models of software failure place increased "reliability" on
  >software which has been exercised for long periods without fault. [...]

The models of software reliability exist to order our thinking about
reliability and to help predict the behavior of software systems based on
observation of failures up to the current time.  The models that show
failures clustered early in time and then tapering off later do indeed
model an intuition, but maybe not the one that more faults mean yet more
faults.  Hence the need for data.  I suspect that the reality as shown by
data, if it exists, would be more complex than intuition allows.  More
errors discovered so far may just mean better software engineering methods.
As for other engineering fields, the failure-vs.-time curve in manufactured
products is often taken to be tub-shaped (the familiar "bathtub curve"),
not exponentially decaying.  So more failures are expected at the beginning
and near the end of the useful life of a "hard" engineered product.  Of
course, "an unending sequence of irremediable faults" should be the kiss of
death for any product, whether from hard engineering or soft.  But the
trick is in knowing that the sequence is unending.  The B-17, I seem to
remember reading, had a rather rocky development road in the 1930s, yet was
not abandoned.  Was it just that the aeronautical engineers at Boeing then
had in mind some limit on the number of faults, and that this limit was not
exceeded?  It might be easy to say in hindsight.  On the other hand,
sometimes foresight, in terms of spotting a poor design at the outset,
makes a difference, as in the only Chernobyl-type power reactor outside the
Soviet bloc.  It was bought by Finland (perhaps this is what
"Finlandization" means?).  However, the Finns also bought a containment
building from Westinghouse.
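
For concreteness, the tub shape can be sketched as the sum of a decaying
"infant mortality" term, a constant base rate, and a growing "wear-out"
term.  The functional form and constants below are illustrative
assumptions only:

    # Illustrative bathtub hazard: high early (infant mortality), flat
    # in mid-life, rising again near end of life.  Constants invented.
    import math

    def bathtub_hazard(t, infant=0.5, base=0.05, wearout=0.002):
        """Failure rate at age t."""
        return (infant * math.exp(-t / 2.0) + base
                + wearout * math.exp(t / 20.0))

    for t in range(0, 101, 25):
        print(f"t={t:3d}  hazard={bathtub_hazard(t):.3f}")
    # An exponentially decaying rate, by contrast, would keep falling
    # and never turn back up near end of life.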

Ken Dymond

------------------------------

From: David Chase <rbbb@rice.edu>
To: Andy_Mondore%RPI-MTS.Mailnet@MIT-MULTICS.ARPA
Cc: risks@csl.sri.com
Date: Tue, 23 Sep 86 08:56:18 EDT
Subject: Re: Protection of personal information
                       [The two participants requested this clarification 
                        be included for the record...  PGN]

You misinterpreted my message in a small way; I was writing about a
university attended by a friend, NOT Rice University.  To my knowledge, Rice
has been very good about protecting its students' privacy.  My student
number is NOT my social security number, though the university has that
number for good reasons.  I do not want anyone to think that I was talking
about Rice.       David

------------------------------

Date: Tue, 23 Sep 1986  18:00 EDT
From: LIN@XX.LCS.MIT.EDU
To:   risks@CSL.SRI.COM
Subject: Towards an effective definition of "autonomous" weapons

         [THE FOLLOWING DISCOURSE INVOLVING CLIFF AND HERB IS LIKELY
          TO CONTINUE FOR A WHILE ON ARMS-D.  PLEASE RESPOND TO HERB LIN, 
          NOT TO RISKS ON THIS ONE.  HERB HAS VOLUNTEERED TO SUBMODERATE,
          AND THEN SUBMIT THE RESULTS TO RISKS.  PGN]

    From: Clifford Johnson <GA.CJJ at Forsythe.Stanford.Edu>

    An "autonomous weapon" [should be] defined to be any weapons system
    which is de facto preprogrammed to take decisions which, under the law 
    of nations, require the exercise of political or military discretion.

It's not a bad first attempt, and I think it is necessary to get a
handle on this.  Recognizing that you have done us a service in
proposing your definition, let me comment on it.

I don't understand what it means for a weapon to "take a decision".  Clearly
you don't intend to include a depth charge set to explode at a certain
depth, and yet a depth charge could "decide" to explode at 100 feet given
certain input.

What I think you object to is the "preprogrammed" nature of a weapon,
in which a chip is giving arming, targeting and firing orders rather
than a human being.  What should be the role of the human being in
war?  I would think the most basic function is to decide what targets
should be attacked.  Thus, one modification to your definition is

    An "autonomous weapon" [should be] defined to be any weapons 
    system which is preprogrammed to SELECT targets.

This would include things like roving robot anti-tank jeeps, and
exclude the operation of LOW (launch on warning) for the strategic forces.

But this definition would also exclude "fire-and-forget" weapons, and
I'm not sure I want to do that.  I want human DESIGNATION of a target
but I don't want the human being to remain exposed to enemy fire after
he has done so.  Thus, a second modification is 

    An "autonomous weapon" [should be] defined to be any weapons 
    system which is preprogrammed to SELECT targets in the absence of
    direct and immediate human intervention.

But then I note what a recent contributor said -- MINES are autonomous
weapons, and I don't want to get rid of mines either, since I regard
mines as a defensive weapon par excellence.  Do I add mobility to the
definition?  I don't know.

------------------------------

Date: Monday, 22 September 1986  21:43-EDT
From: Clifford Johnson <GA.CJJ at Forsythe.Stanford.Edu>
To:   Arms-Discussion                       [FORWARDED BY HERB LIN]
Subject: Towards an effective definition of "autonomous" weapons

There's great difficulty in defining "autonomous weapons" so as to isolate
whatever element seems intuitively "horrible" about robot-decided death.
But a workable definition is necessary if, as CPSR tentatively proposes,
such weapons are to be declared illegal under international law, as
chemical and nuclear weapons have been.  (Yes, the U.N. has declared even
the possession of nukes illegal, but it's not a binding provision.)

The problem is, of course, that many presently "acceptable" weapons already
discriminate targets indiscriminately, e.g., target-seeking munitions and
even passive mines.  Weapons kill, and civilians get killed too; that's war.
Is there an element exclusive to computerized weapons that is meaningful?

I don't have an answer, but feel the answer must be yes.  I proffer two
difficult lines of reasoning, derived from the philosophy of automatic
decisionmaking rather than from extant weapon systems.  First, weapon
control systems that may automatically select among target options based
upon a utility function (a point score) that weighs killing people against
destroying hardware would seem especially unconscionable.  Second (though
this presumes a meaningful definition of "escalation"), any weapons system
that has the capability to automatically escalate a conflict - and is
conditionally programmed to do so - would also seem unconscionable.
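
A deliberately toy sketch of what such point-score selection amounts to;
every field and weight is invented, and the point is precisely that the
morally loaded judgment is frozen into weights fixed before the battle:

    # Toy point-score target selection; all fields and weights invented.
    def utility(option, w_hardware=1.0, w_personnel=0.4):
        # The program, not a commander, trades destroying hardware
        # against killing people -- the element called unconscionable.
        return (w_hardware * option["hardware_value"]
                + w_personnel * option["personnel_value"])

    def select_target(options):
        return max(options, key=utility)

    print(select_target([{"hardware_value": 10, "personnel_value": 0},
                         {"hardware_value": 3, "personnel_value": 25}]))
    # Picks the second option: more casualties, higher "score".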

Into the first bracket would conceivably fall battle management software
and war games; into the second would fall war tools that in operation (de
facto) would take decisions which, according to military regulations, would
otherwise have required the exercise of discretion by a military commander
or politician.  The latter category would embrace booby-trap devices
activated in peacetime, such as mines and LOWCs (launch-on-warning
capabilities); and here there is the precedent of law prohibiting booby
traps that threaten innocents in peacetime.
Perhaps the following "definition" could stand alone as *the* definition of
autonomous weapons to be banned:

An "autonomous weapon" is defined to be any weapons system which is
de facto preprogrammed to take decisions which, under the law of
nations, require the exercise of political or military discretion.

This might seem to beg the question, but it could be effective - military
manuals and international custom are often explicit on each commander's
degree of authority/responsibility, and resolving whether a particular
weapon was autonomous would then be a CASE-BY-CASE DETERMINATION.  Note
that this could, and would, vary with the sphere of application of the
weapons system.  This is reasonable, just as there are circumstances in
which blockades or mining are "legal" and circumstances in which they are
"illegal."

Of course, a case-in-point would be needed to launch the definition.
Obviously, I would propose that LOWCs are illegal.  How about battle
management software which decides to engage seemingly threatening entities
regardless of flag, in air or by sea?  Any other suggestions?  Does anyone
have any better ideas for a definition?

------------------------------

Date: Tue, 23 Sep 1986  18:09 EDT
From: LIN@XX.LCS.MIT.EDU
To:   arms-d@XX.LCS.MIT.EDU, risks@CSL.SRI.COM
Subject: Towards an effective definition of "autonomous" weapons

In thinking about this question, I believe that ARMS-D and RISKS could
perform a real service to the defense community.  There is obviously a
concern among some ARMS-D and RISKS readers that autonomous weapons
are dangerous generically, and maybe they should be subject to some
legal restrictions.  Others are perhaps less opposed to the idea.

It is my own feeling that autonomous weapons could pose the same danger to
humanity that chemical or biological warfare poses, though they may be
militarily effective under certain circumstances.

I propose that the readership take up the questions posed by Cliff's recent
contribution:

    What is a good definition of an autonomous weapon?  

    What restrictions should be placed on autonomous weapons, and why?

    How might such limits be verified?

    Under what circumstances would autonomous weapons be militarily
    useful?

    Should we be pursuing such weapons at all?

    How close to production and deployment of such weapons are we?

Maybe a paper could be generated for publication?

------------------------------

Date: Tue, 23 Sep 86 18:16:46 PDT
From: Clifford Johnson <GA.CJJ@Forsythe.Stanford.Edu>
To: ARMS-D@XX.LCS.MIT.EDU                      [FORWARDED BY HERB LIN]
Subject:  Towards a definition of "autonomous" weapons

> I don't understand what it means for a weapon to "take a decision".
> Clearly you don't intend to include a depth charge set to explode at a
> certain depth, and yet a depth charge could "decide" to explode at 100
> feet given certain input.

My concept of "decision" embraces *all* manner of conditional
executions, delimited expressly by the customary law that recognizes that
certain "special" decisions require human participation.  I know of
no precedent suggesting that "aiming" at a properly comprehended
target requires such discretion, which is the logical mechanism that
eliminates depth charges from my definition.

> What should be the role of the human being in
> war?  I would think the most basic function is to decide what targets
> should be attacked.

The question I'm concerned with is "What should/could be the role of
law in precluding the unconscionable automation of war?"  With the
basic thrust on target selection I agree, but to include it I would
amend my definition by referencing "fatal consequences"
(a generalization of "targets"):

AN "AUTONOMOUS WEAPON" IS DEFINED TO BE ANY WEAPONS SYSTEM WHICH
IS DE FACTO PREPROGRAMMED TO TAKE DECISIONS WHICH, DUE TO THEIR
POTENTIALLY FATAL CONSEQUENCES, UNDER THE LAW OF NATIONS REQUIRE
THE EXERCISE OF HUMAN DISCRETION AT THE TIME THEY ARE TAKEN.

N.B. The "de facto" is important to exclude the excuse that a
human might "override" the weapon's decision, when, in practice,
he/she wouldn't be competent to do so.
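
A minimal sketch of that point, with all timings invented: a nominal
override exists, but the decision window is far shorter than any human
reaction time, so only the preprogrammed branch is reachable in practice.

    # The override branch exists on paper but is unreachable: the
    # system is de facto preprogrammed.  Timings are invented.
    DECISION_WINDOW_MS = 50      # system must commit within 50 ms
    HUMAN_REACTION_MS = 1500     # optimistic operator reaction time

    def decide(override_requested=True):
        if override_requested and HUMAN_REACTION_MS <= DECISION_WINDOW_MS:
            return "human decision"        # never taken in practice
        return "preprogrammed response"    # the de facto outcome

    print(decide())                        # always "preprogrammed response"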

> But then I note what a recent contributor said -- MINES are autonomous
> weapons, and I don't want to get rid of mines either, since I regard
> mines as a defensive weapon par excellence.  Do I add mobility to the
> definition?  I don't know.

It seems such confusions are inevitable when seeking a primarily
technological definition of "autonomous" weapons.  My "trick" is to
reverse the analysis, placing the burden upon the human judgment that
some kinds of decisions are unconscionably automated.  The nature and
context of the particular decision become the focus.  This isn't a
cop-out, but would build upon established military/international custom.
As I have intimated, it would seek precedents where particular decisions
would "initiate" or "escalate" a conflict, or would be beyond the
ordinary authority of the officer operating the system.

                             [RESPONSES TO LIN@XX.LCS.MIT.EDU please.]

------------------------------

End of RISKS-FORUM Digest
************************