[mod.politics.arms-d] Arms-Discussion Digest V7 #24

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (09/30/86)

Arms-Discussion Digest              Monday, September 29, 1986 10:42PM
Volume 7, Issue 24

Today's Topics:

                   knowledge and being co-opted...
           RE: TV aboard weapon, using fiber communication
        Re: 1st priority of gov't is defense or human rights??
               Proposed definition of autonomous weapon
               Autonomous Weapons (incl. neutron bomb)
        Autonomous weapons - source material and observations
                      Autonomous weapons - ROEs
                          Autonomous weapons
                            Administrivia

----------------------------------------------------------------------

Date: Sun, 28 Sep 1986  02:30 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: knowledge and being co-opted...


For the sake of discussion, here are two questions I have been
wondering about.  

1) What knowledge should everyone have about nuclear arms, security
policy, etc.?  I have asked this before, and ask it from time to time
to stimulate discussion.

2) A related question: does knowing about (or more strongly,
professionally participating in) matters related to defense make a
person part of the problem rather than part of the solution?  I have
heard the following argument: If you understand the minutiae of defense
and military issues, you are accepting the ground rules of the debate
(e.g., there is a threat, force is the best way to handle disputes,
and so on).  That makes you part of the problem, because the solution
lies with people NOT believing those things.

Comments welcome.

------------------------------

Date: Sun, 28 Sep 86 00:16:56 PDT
From: ihnp4!utzoo!henry@ucbvax.Berkeley.EDU
Subject:  RE: TV aboard weapon, using fiber communication

> As far as NIH causing delays in deployment:  Ten years isn't a delay;
> the thing is still in its infancy.  Normal deployment cycles are
> often more than 10 years.  One missile system I worked on was 5 years old
> when I joined the team; it's now seven years later and production is just
> now cranking up ...

Whether this *should* be considered "normal" is another story.  It's worth
remembering that it doesn't have to be this way.  To take an extreme case,
the Thor IRBM went from design sketches to initial deployment in 3 years.
(With a design/development team smaller than the team NASA now uses just
to *launch* the Thor-derived Delta...)

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

Date: Sun, 28 Sep 86 00:17:07 PDT
From: ihnp4!utzoo!henry@ucbvax.Berkeley.EDU
Subject:  Re: 1st priority of gov't is defense or human rights??

>  Recently somebody on ARMS-D said the primary purpose, top priority, of
>  any government is defense (actually several people said it).
>  But in a speech recently, Ronald Reagan seemed to contradict this:
>  he claimed the main purpose of a government, the justification for
>  its existence, is human rights.  Is he speaking without thinking again?

Even discounting the issues of what a government's priorities *should* be
vs. what they *are* vs. what the campaign rhetoric says, the two positions
are not incompatible.  Providing one's population with human rights such
as life and liberty requires defending it against hostile groups that would
deny one or the other.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

Date: Sun, 28 Sep 86 02:28:24 pdt
From: weemba@brahms.Berkeley.EDU (Matthew P Wiener)
Subject: Proposed definition of autonomous weapon

A weapon whose usage is not necessarily subordinate to political ends.

------------------------------

Date: Sun, 28 Sep 1986  09:00 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Autonomous Weapons (incl. neutron bomb)


    From: Clifford Johnson <GA.CJJ at Forsythe.Stanford.Edu>

    > ...  The neutron bomb has less blast (so it is less lethal to
    > structures) and more radiation (so it is more lethal to people).  No
    > discrimination involved.

    That surely *is* discrimination.  It doesn't matter that the device
    is non-digital.  This leads to a high autonomy rating in terms of
    "condition space" structure (e.g. IF HUMAN THEN DESTROY).  However,
    the weapon doesn't decide where or whether or how to explode, e.g.
    the *range* of the outcome space is small, given determinate arming,
    firing, and target acquisition processes.  Consequently, the
    autonomy rating is not high overall.

I think that the question of autonomy should involve what a machine
does, not what goes into its design by human beings.  Thus, the
condition space rule "If Human, then Destroy", though true, shouldn't
enter into its autonomy rating at all, because *people* have made that
decision.  It's not "tactical" discrimination taking place at the
operational level being done by machine, but rather "strategic"
discrimination taking place at the design level being done by people. 

Maybe the tactical/strategic distinction is useful?  I'm most
uncomfortable with things being highly automated when they involve
*operations*.
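
To make the tactical/strategic split concrete, here is a minimal
sketch (the second rule and all of the structure are invented for
illustration, not drawn from any fielded system):

    # Illustrative only: tag each conditional rule by who resolves it
    # and when.  On the view above, only machine-made, operational
    # decisions count toward a weapon's autonomy rating.

    conditions = [
        # (rule, decided_by, level)
        ("IF HUMAN THEN DESTROY", "people", "design"),                # strategic
        ("IF SENSOR CLASSIFIES TANK THEN FIRE", "machine", "operation"),  # tactical; invented
    ]

    def counts_toward_autonomy(condition):
        _, decided_by, level = condition
        return decided_by == "machine" and level == "operation"

    operational = [c for c in conditions if counts_toward_autonomy(c)]
    print(operational)  # only the sensor-driven rule remains; the neutron
                        # bomb, carrying just the design-time rule, rates low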

    ...  With regard to LOW, I have developed a substantial lexicon.
    Some of this will apply to autonomy in general.  For example, I
    distinguish between "manned" (positive human decision required),
    "monitored" (override capability provided), "tended" (human role
    limited to machine checks), "randomized," and many other varieties.
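
For concreteness, this lexicon maps naturally onto a simple
enumeration.  A sketch (the category names and the first three glosses
come from the lexicon above; the "randomized" gloss is an assumption):

    from enum import Enum

    class HumanRole(Enum):
        """Varieties of human control over a weapon, per the lexicon above."""
        MANNED     = "positive human decision required"
        MONITORED  = "override capability provided"
        TENDED     = "human role limited to machine checks"
        RANDOMIZED = "some conditions resolved by chance"  # gloss assumed
        # "... and many other varieties" -- the full lexicon is larger.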

I think this would be useful.  Please send me (Herb) a copy, not for
inclusion in the DIGEST.

Tnx.

------------------------------

Date: Sun, 28 Sep 1986  09:11 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Autonomous weapons - source material and observations


    From: jon at june.cs.washington.edu (Jon Jacky)

    This theme of using fewer troops runs throughout.  It is being
    promulgated to the naive public.

I'd like to understand why it is bad to imagine war using fewer troops
and more automation.  I can think of two reasons.  (1) More automation
means more chances for computer screwups, and therefore more losses
among innocent bystanders.  (2) Fewer troops make it easier to
commit forces to war, giving the image of a "sanitized" battlefield.

These two seem to be related.  

    Echoing Wood, the exhibit included the quote from Carl Sandburg's
    poem, "The People, Yes," that says "Sometime they'll give a war and
    nobody will come."  This was labelled, "New meaning in the 1980's"

    I find this last exhibit, especially, rather sleazy.

Yes, I agree.  But is your point that this military exhibit is falsely
turned into a peace exhibit by this commentary?

    In any situation, in which the
    possibility of war exists, many who favor war argue that it will not
    be too expensive, it can be gotten done quickly, it won't cost too
    much, etc.  Robot weapons provide fuel for this argument -- in the
    absence of any operational experience that confirms the argument, I
    might add.

So is the problem that these robot weapons will work, or that they
will not work, or that they are inherently flawed even if they work
perfectly?  (The same questions could be asked about SDI.  Are you
opposed to SDI because it is unfeasible, or because it is
unnecessary?)

------------------------------

Date: Sun 28 Sep 86 13:17:26-ADT
From:  Don Chiasson <CHIASSON@DREA-XX.ARPA>
Subject: Autonomous weapons - ROEs


     In discussing autonomous weapons, we must keep in mind what humans do.
Military commanders do not have unlimited autonomy.  They are guided by a
set of procedures called rules of engagement, or ROE's.  These rules will
vary depending on the location, the level of tension, available forces, and
so on.  I assume that commanders would want autonomous weapons to behave in
accordance with existing ROEs, and so the problem becomes one of deciding
if the ROEs are appropriate and, if so, implementing these existing rules.
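
As a sketch of what "implementing these existing rules" might look
like (every context, rule, and name below is hypothetical):

    # Hypothetical sketch: ROEs as context-dependent rules consulted
    # before any belligerent act.  Nothing here reflects real procedures.

    RULES_OF_ENGAGEMENT = {
        "peace":   {"may_fire_unordered": False},
        "tension": {"may_fire_unordered": False},
        "war":     {"may_fire_unordered": True},
    }

    def cleared_to_engage(context, direct_order, target_confirmed):
        """Apply the ROE for the current context."""
        roe = RULES_OF_ENGAGEMENT[context]
        if roe["may_fire_unordered"]:
            return target_confirmed   # wartime: identification still required
        return direct_order           # otherwise: only on a positive human decision

    print(cleared_to_engage("peace", direct_order=False, target_confirmed=True))  # False

    # The hard question remains: who sets `context`, over what channel,
    # and can an adversary spoof it?
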
     Three further comments:
     -  ROE's are context dependent, e.g. are we at war (and with whom!), is
        there a state of tension (what level?), or are we at peace?  How are
        these facts/assessments passed to the system without risk of
        sabotage?  Or are autonomous weapons to be deployed only in
        wartime?  
     -  To me a troubling aspect of the KAL007 incident, and one which
        did not seem to receive any discussion, was the detail of what ROEs
        the Soviets were using.  I would be very surprised if any western 
        bloc nations would allow shoot to kill procedures in peacetime.
     -  Standard procedures exist to cover most situations, military and
        otherwise.  It is my feeling that in many discussions of
        computerization, the debate sometimes tends to ignore existing
        practices and start out as if the problem had never been
        approached before.  Or am I just suffering from old-fogyism? 

                -Don
Disclaimer: I am not an employee of DREA.  These comments are not presented
        as those of DREA or my employer.

------------------------------

Date: Sun, 28 Sep 86 17:54:05 PDT
From: Clifford Johnson <GA.CJJ@Forsythe.Stanford.Edu>
Subject:  Autonomous weapons

> I think that the question of autonomy should involve what a machine
> does, not what goes into its design by human beings...
> Maybe the tactical/strategic distinction is useful?  I'm most
> uncomfortable with things being highly automated when they involved
> *operations*.

We don't really disagree here.  Let me recapitulate.  I defined an
autonomous weapon as "A SET OF DEVICES PRECONFIGURED TO
CONDITIONALLY EXECUTE A BELLIGERENT ACT."  It seems you wish to
exclude the "conditions" that the weapon is designed to interpret as
DIRECT ORDERS from its operator, and which it is designed to respond
to in a deterministic fashion.  The latter point corresponds to a
"compact" outcome space, and we agree it qualifies for a low
autonomy rating.  When the condition space is wholly of the
"presumably-operator-generated" type, and the outcome space compact,
I agree this represents a kind of "degenerate" or "baseline"
autonomy, but on logical grounds I would include it in the
definition rather than "artificially" restrict the definition to
digitized conditional execution occurring without further operator
instructions -- though, when it comes to analysing matters of
current alarm, this *degree* of autonomy is what is of greatest
concern.

The problem with your approach is that, in trying to exclude
consideration of "degenerate" autonomies, you are open to the valid
criticism that you have constructed an improper distinction.  For
example, your first sentence could be attacked as circular
and otherwise empty, for even robotic tanks are supposed to do only
what they are designed to do, and no more.  Yet, by your own
(implied) definition, a simple gun designed to be fired by humans
must be construed as autonomous when accidentally triggered by a
dog, which is not what it was designed to do.

My analysis would place "IF SAFETY OFF AND TRIGGER PULLED" in the
weapon's condition set, with a flag saying that this condition is
supposedly indicative of a direct operator instruction.  For more
autonomous weapons, such as the robot tank, there would be equivalent
(though more complex) operator generated conditions, plus a bunch of
"sensor-generated" conditions.

What you're saying is that A WEAPON IS AUTONOMOUS TO THE DEGREE THAT
IT IS PRECONFIGURED TO EXECUTE BELLIGERENT ACTS ACCORDING TO SENSOR
(NON-OPERATOR DRIVEN) CONDITIONS, right?  I recognize that this adds
greatly to the degree of autonomy.
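
Restated as a minimal sketch (the flagging scheme follows the
paragraphs above; the robot-tank conditions and the numeric measure
are illustrative assumptions):

    # Illustrative only: each condition carries a flag saying whether it
    # is presumably operator-generated; the degree of autonomy is read
    # off the sensor-driven share.

    simple_gun = [
        ("SAFETY OFF AND TRIGGER PULLED", "operator"),
    ]

    robot_tank = [   # conditions invented for illustration
        ("ARMED AND RELEASED BY OPERATOR", "operator"),
        ("SENSOR CLASSIFIES OBJECT AS TARGET", "sensor"),
        ("TARGET WITHIN WEAPON RANGE", "sensor"),
    ]

    def degree_of_autonomy(condition_set):
        """Share of conditions driven by sensors rather than the operator."""
        sensor_driven = sum(1 for _, source in condition_set if source == "sensor")
        return sensor_driven / len(condition_set)

    print(degree_of_autonomy(simple_gun))  # 0.0  -- the "baseline" case
    print(degree_of_autonomy(robot_tank))  # ~0.67 -- mostly sensor-driven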

These discussions, though abstract, are highly important.  For
example, the Fletcher commission suggested that the sensor input
comprising the
signatures of Soviet missiles could by a "verbal trick" be construed
as Presidential orders to fire.


------------------------------

Date: Mon, 29 Sep 1986  11:12 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Administrivia

==>> Someone please let this guy know he hasn't been getting the
Digest for weeks now.

	After 5 days (104 hours), your message has not yet been
    fully delivered.  Attempts to deliver the message will continue
    for 2 more days.  No further action is required by you.

	Delivery attempts are still pending for the following address(es):

	    Pfeiffer@NMSU (host: nmsu.csnet) (queue: nmsu)

------------------------------

End of Arms-Discussion Digest
*****************************