[mod.politics.arms-d] Arms-Discussion Digest V7 #22

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (09/27/86)

Arms-Discussion Digest              Friday, September 26, 1986 11:24PM
Volume 7, Issue 22

Today's Topics:

       Viking Landers worked the first time and met the specs.
     role of simulation - combat simulation for sale (from RISKS)
"Friendly" missiles and computer error -- more on the Exocet (from RISKS)
        Autonomous weapons - source material and observations
               Autonomous Weapons (incl. neutron bomb)
                 Looking for Arms Control Information
                             Phil and SDI

----------------------------------------------------------------------

Date: Wed, 24 Sep 86 18:01:49 pdt
From: Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
Subject:  Viking Landers worked the first time and met the specs.

Both Viking Landers worked in their first (and only) operation.  The
pre-operation testing simply ups one's confidence that the actual
operation will be successful.  Since the Viking Landers were the first
man-made objects to land on Mars, Murphy's Law should suggest to any
engineer that perhaps something might have been overlooked.  In actual
operation, nothing was.

Both Viking Mars shots had specifications for the length of time they
were to remain in operation.  While I do not recall the time span,
both exceeded the specification by years.  I do recall that JPL had to
scrounge additional funds to keep the data coming in from all the
deep-space probes, including the Vikings, as the deep space mechanisms
were all working for far longer than expected.
	
Surely any engineered artifact which lasts for longer than its
design specification must be considered a success.  Nothing
lasts forever, especially that most fragile of all artifacts, software.
Thus the fact that the Viking 1 Lander software was scrambled beyond
recovery some 8 years after the Mars landing only reminds one that
software is among the components of an artifact most likely to fail.
So I see nothing remarkable about this event, nor does it in any way
detract from judging both Viking Mars missions as unqualified engineering
successes.

------------------------------

Date: Thursday, 25 September 1986  20:10-EDT
From: jon at june.cs.washington.edu (Jon Jacky)
To:   risks at CSL.SRI.COM
Re:   role of simulation - combat simulation for sale

I came across the following advertisement in AVIATION WEEK AND SPACE TECHNOLOGY,
June 16, 1986, p. 87:

SURVIVE TOMORROW'S THREAT - <illegible> Equipment and Tactics Against Current
	and Future Threats

FSI's dynamic scenario software programs such as "War Over Land," "AirLand
Battle," and "Helicopter Combat" provide realistic simulation of a combat
environment.  These programs use validated threat data to evaluate the 
effectiveness of individual weapons or an integrated weapons system.  The 
easy-to-utilize programs are already in use by the Army, Navy, Air Force, and
many prime defense contractors.  Evaluate your system on a DoD-accepted model.
For more information, contact ... ( name, address, contact person).

(end of excerpt from ad)

The ad doesn't really say how you run this simulation, but kind of implies 
you can actually test real electronic warfare equipment with it.  Needless to
say, an interesting issue is, how comprehensive or realistic is this "validated
(by whom? how?) threat data?"  I checked the bingo card with some interest.
And this ad is just one example of the genre - p. 92 of the same issue 
advertises a product called "SCRAMBLE! Full mission simulators," showing 
several high-resolution out-the-window flight simulator displays of aerial
combat.

-Jonathan Jacky, University of Washington

------------------------------

Date: Thursday, 25 September 1986  21:23-EDT
From: Rob MacLachlan <RAM at C.CS.CMU.EDU>
To:   RISKS-LIST:, risks at CSL.SRI.COM
Re:   "Friendly" missiles and computer error -- more on the Exocet

   [We have been around on this case in the past, with the "friendly" theory
    having been officially denied. This is the current item in my summary list:
       !!$ Sheffield sunk during Falklands war, 20 killed.  Call to London
           jammed antimissile defenses.  Exocet on same frequency.  
           [AP 16 May 86](SEN 11 3)                 
    However, there is enough new material in this message to go at it once
    again!  But, please reread RISKS-2.53 before responding to this.  PGN]

    I recently read a book about electronic warfare which had some
things to say about the Falklands war incident of the sinking of the
Sheffield by an Exocet missile.  This has been attributed to a
"computer error" on the part of a computer which "thought the missile
was friendly."  My conclusions are that:
 1] Although a system involving a computer didn't do what one
    might like it to do, I don't think that the failure can reasonably
    be called a "computer error".
 2] If the system had functioned in an ideal fashion, it would
    probably have had no effect on the outcome.

The chronology is roughly as follows:

The Sheffield was one of several ships on picket duty, preventing
anyone from sneaking up on the fleet.  It had all transmitters
(including radar) off because it was communicating with a satellite.

Two Argentine planes were detected by another ship's radar.  They
first appeared a few miles out because they had previously been flying
too low to be detected.  The planes briefly activated their radars,
then turned around and went home.

Two minutes later a lookout on the Sheffield saw the missile's flare
approaching.  Four seconds later, the missile hit.  The ship eventually
sank, since salvage efforts were hindered by uncontrollable fires.

What actually happened is that the planes popped up so that they could
acquire targets on their radars, then launched Exocet missiles and
left.  (The Exocet is an example of a "Fire and Forget" weapon.  Moral
or not, they work.)  The British didn't recognize that they had been
attacked, since they believed that the Argentines didn't know how to
use their Exocet missiles.

It is irrelevant that the Sheffield had its radar off, since the
missile skims just above the water, making it virtually undetectable
by radar.  For most of the flight, it proceeds by internal guidance,
emitting no telltale radar signals.  About 20 seconds before the end
of the flight, it turns on a terminal homing radar which guides it
directly to the target.  The Sheffield was equipped with an ESM
receiver, whose main purpose is to detect hostile radar transmissions.

The ESM receiver can be preset to sound an alarm when any of a small
number of characteristic radar signals are received.  Evidently the
Exocet homing radar was not among these presets, since there would
have been a warning 20 sec before impact.  In any case, the ESM
receiver didn't "think the missile was friendly", it just hadn't been
told it was hostile.  It should be noted that British ships which were
actually present in the Falklands were equipped with a shipboard
version of the Exocet.
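
To make the point concrete, a toy version of such preset matching might look
like the following Python sketch.  It is illustrative only: the signature
parameters, the threat table, and the tolerance are invented and do not
describe the actual equipment or emitter data involved.

from dataclasses import dataclass

@dataclass
class RadarSignature:
    band_ghz: float        # carrier frequency
    pulse_rep_khz: float   # pulse repetition frequency

# Preset library: the handful of emitters the receiver has been told to alarm on.
# Note that no entry corresponds to the attacking missile's terminal homing radar.
THREAT_PRESETS = [
    (RadarSignature(9.3, 1.2), "hostile search radar"),
    (RadarSignature(10.1, 3.0), "hostile fire-control radar"),
]

def matches(a: RadarSignature, b: RadarSignature, tol: float = 0.1) -> bool:
    return (abs(a.band_ghz - b.band_ghz) <= tol
            and abs(a.pulse_rep_khz - b.pulse_rep_khz) <= tol)

def evaluate(intercept: RadarSignature) -> str:
    """Alarm only on a preset match; anything else is simply ignored.
    The receiver never concludes a signal is 'friendly' -- it just has no rule for it."""
    for preset, label in THREAT_PRESETS:
        if matches(intercept, preset):
            return "ALARM: " + label
    return "no preset match: no alarm"

# An intercept whose signature was never entered in the presets raises no alarm:
print(evaluate(RadarSignature(8.5, 2.5)))

The point of the sketch is that an emitter absent from the presets produces
no warning at all, which is quite different from being judged "friendly".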

If the failure was as deduced above, then the ESM receiver behaved
exactly as designed.  It is also hard to conceive of a design change
which would have changed the outcome.  The ESM receiver had no range
information, and thus was incapable of concluding "anything coming
toward me is hostile", even supposing the probably rather feeble
computer in the ESM receiver were capable of such intelligence.

In any case, it is basically irrelevant that the ESM receiver didn't
do what it might have done, since by 20 seconds before impact it was
too late.  The Sheffield had no "active kill" capability effective
against a missile.  Its anti-aircraft guns were incapable of shooting
down a tiny target skimming the water at near the speed of sound.

It is also possible to cause a missile to miss by jamming its radar,
but the Sheffield's jamming equipment was old and oriented toward
jamming Russian radars, rather than smart western radars which
weren't even designed when the Sheffield was built.  The Exocet has a
large bag of tricks for defeating jammers, such as homing in on the
jamming signal.

In fact, the only effective defense against the Exocet which was
available was chaff: a rocket-dispersed cloud of metalized plastic
threads which confuses radars.  To be effective, chaff must be
dispersed as soon as possible, preferably before the attack starts.
After the Sheffield, the British were familiar with the Argentine
attack tactics, and could launch chaff as soon as they detected the
aircraft on their radars.  This defense was mostly effective.

Ultimately the only significant mistake was the belief that the
Argentines wouldn't use Exocet missiles.  If this possibility had been
seriously analysed, then the original attack might have been
recognized.  The British were wrong, and ended up learning the hard
way.  Surprise conclusion: mistakes can be deadly; mistakes in war are
usually deadly.

I think that the most significant "risk" revealed by this event is the
tendency to attribute the failure of any system which includes a
computer (such as the British Navy) to "computer error".

------------------------------

Date: Fri, 26 Sep 86 10:45:23 PDT
From: Clifford Johnson <GA.CJJ@Forsythe.Stanford.Edu>
Subject:  Autonomous Weapons (incl. neutron bomb)

> I read with interest your proposed definition of proscribed autonomous
> weapons.  Unfortunately, I have a hard time grasping its consequences.
> Can you elaborate (either in a mail response to me, or a general posting)
> on what weapons would and would not be covered by your definition?

I wish I were free to research this one; there are a number of
avenues along which I'd proceed.  What I've done is suggest a
canonical framework for categorization, but much needs to be done to
develop it.  The general thrust is to broadly recognize that all
weapons are automata, and then show how some designs and
capabilities exceed legal/decent bounds more than others by virtue
of the nature of their translation of condition codes into
outcomes.  If I get time, I'll follow through.  Meanwhile, I would
note that I'd include a special subcategorization on "conditions"
which the device is preconfigured to construe as human instructions
(in the condition space description).  Thus, a gun may autonomously
fire if a dog steps on the trigger.

Leaving these knotty considerations aside, one case-in-point is
the neutron bomb:

>     I don't remember what the current situation with regard to the deployment
>     of the neutron bomb (sorry, enhanced radiation weapon) is, but doesn't that
>     'automatically' discriminate against human beings rather than hardware?
>
> No.  The neutron bomb has less blast (so it is less lethal to
> structures) and more radiation (so it is more lethal to people).  No
> discrimination involved.

That surely *is* discrimination.  It doesn't matter that the device
is non-digital.  This leads to a high autonomy rating in terms of
"condition space" structure (e.g. IF HUMAN THEN DESTROY).  However,
the weapon doesn't decide where or whether or how to explode, e.g.
the *range* of the outcome space is small, given determinate arming,
firing, and target acquisition processes.  Consequently, the
autonomy rating is not high overall.

If the neutron bomb itself "decided" (conditionally evaluated)
whether circumstances warranted its use, the range of the outcome
space would be vast, and the weapon deemed highly autonomous.
Moreover, there would have to be some point-score utility function
(the mapping function) upon which the conditional execution would
hinge.  This computation, which weighs the value of life against
hardware, might be judged unconscionable.  (Even where the utility
is weighed by humans, the calculated outcome has been protested as
unconscionable.)
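
A toy encoding of the framework might look like the following sketch; the
axes, weights, and example numbers are invented for illustration and are not
an established rating scheme.

from dataclasses import dataclass

@dataclass
class WeaponProfile:
    name: str
    condition_breadth: int      # 0-10: how broad a class of conditions triggers action
    outcome_range: int          # 0-10: how varied the outcomes are once triggered
    has_utility_function: bool  # does the device itself weigh outcomes before acting?

def autonomy_rating(w: WeaponProfile) -> float:
    """Combine the three axes into a single 0-10 figure (arbitrary weighting)."""
    score = 0.4 * w.condition_breadth + 0.4 * w.outcome_range
    if w.has_utility_function:
        score += 2.0            # the device conditionally evaluates whether to act at all
    return min(score, 10.0)

# The two cases discussed above, with invented numbers:
commanded = WeaponProfile("neutron bomb, human-commanded use", 8, 1, False)
deciding  = WeaponProfile("neutron bomb deciding its own use", 8, 9, True)

for w in (commanded, deciding):
    print(w.name, autonomy_rating(w))

With such a scoring, the bomb's broad condition space is offset by its narrow
outcome range, while a device that also evaluated whether to act at all would
rate much higher, as described above.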

The matter of classifying weapon autonomies obviously needs much
work - I'd much appreciate any citations anyone has on work already
done on this score.

P.S.  With regard to LOW, I have developed a substantial lexicon.
Some of this will apply to autonomy in general.  For example, I
distinguish between "manned" (positive human decision required),
"monitored" (override capability provided), "tended" (human role
limited to machine checks), "randomized," and many other varieties.
It's quite lengthy, so anyone interested should write me for a copy.
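
As a minimal sketch of the distinctions named above (illustrative only, and
far shorter than the full lexicon described):

from enum import Enum, auto

class SupervisionMode(Enum):
    MANNED = auto()      # a positive human decision is required before each action
    MONITORED = auto()   # the machine acts, but an override capability is provided
    TENDED = auto()      # the human role is limited to machine checks
    RANDOMIZED = auto()  # action includes a deliberately random element

def requires_positive_human_decision(mode: SupervisionMode) -> bool:
    """Only the 'manned' mode guarantees that a person decides before each action."""
    return mode is SupervisionMode.MANNED

print(requires_positive_human_decision(SupervisionMode.MONITORED))   # False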

------------------------------

Date: Fri, 26 Sep 86 11:22:09 PDT
From: jon@june.cs.washington.edu (Jon Jacky)
Subject: Autonomous weapons - source material and observations

I think it is important to report some statements by current and former 
US government officials regarding autonomous weapons.  The discussion here
is getting a bit too abstract, I fear.  In particular, observations that mines
and depth charges are autonomous are true in some trivial sense, but are clearly
not what these officials are talking about.  These statements also show that
robot weapons are motivated not so much by military needs as by domestic
political pressures of various kinds.

First, to dispose of the land-mine, depth-charge analogy.  In Spring 1983 DARPA
director Robert Cooper testified to the House Armed Services Committee, trying
to raise $600M for Strategic Computing (he got most of it).  He justified the
program in these words:

	In the early 1990's Autonomous systems such as advanced
	cruise missiles or undersea vehicles will be needed. Systems
	like these will require almost human-like capabilities to sense,
	reason, plan and navigate to fulfill their missions.  Above all, 
	they must be able to react effectively in the face of unexpected or
	unforeseen circumstances ...

Cooper's testimony goes on in this vein for quite a while, as does the 90-page
document, STRATEGIC COMPUTING, released by DARPA in October 1983.

Motivation: the nakedest statement appears in a very interesting 1981 paper
by Lowell Wood, titled "Conceptual basis for defense applications of super-
computers."  In this paper, Wood proposes applying S-1 architecture computers
built with wafer-scale integration technology in various battlefield weapons.
He opens the paper with the provocative question, "What if they gave a war and
no American had to come?"  He continues:

	Not only does the political cost of large armed forces continue to 
	climb, but the political toll of deploying them in harm's way has
	become almost unbearably high.  The economic consequences of large 
	American armed forces are nearly as daunting...  These considerations
	suggest that the US move to alter its defense posture toward one 
	involving substantially fewer men under arms and far smaller casualty
	rates and total casualties in the event of hostilities, while
	simultaneously attaining sufficiently greater overall force effectiveness
	... Is this possible?  It is suggested here that this is possible by
	aggressive use of battlefield robotics.

This theme of using fewer troops runs throughout.  It is being promulgated to the
naive public.  A few years ago I attended an exhibit at the local science 
museum, called "Chips and changes," about the impact of computers on everyday
life (this exhibit had many corporate sponsors and toured the country, including
San Francisco's exploratorium and other museums).  There was an exhibit titled
IN THE ARMY NOW.  Here are some excerpts:

	ATTENTION!  CHIPS MAKE GOOD SOLDIERS...  AND WHAT'S A SMART WAR?
	In computer lingo, smart means having some built-in computerized
	ability to receive input, "think it through," and use it to direct
	action.  A home dishwasher can be smart.  So can a bomb.  The next
	generation won't be smart, they'll be brilliant, and they'll be here
	within the decade.

	...Drones may conduct remote-controlled wars.  ... Fewer American 
	citizens would be needed to fight such a war ...

Echoing Wood, the exhibit included the quote from Carl Sandburg's poem, "The
People Yes," that says "Sometime they'll give a war and nobody will come."
This was labelled, "New meaning in the 1980's."

I find this last exhibit, especially, rather sleazy.  Sorry if I'm overreacting,
but my crap detector starts ringing whenever I find some official pronouncement
that suggests a war might not be too bad, really.  I think this is the crux of 
the attraction of autonomous weapons, and their real danger.  In any situation
in which the possibility of war exists, many who favor war argue that it will not
be too expensive, that it can be gotten done quickly, and so on.
Robot weapons provide fuel for this argument -- in the absence of any
operational experience that confirms the argument, I might add.

Another factor that encourages the development of robot weapons, apart from 
any considerations of military utility, is that most computer science funding
is controlled by DoD, so computer scientists end up assisting with these 
projects simply to get the work done that they feel is important.  For example,
the Connection Machine, Butterfly, Warp, and Non-Von (innovative parallel
computers) were all partially funded by DARPA Strategic Computing.  From what
I can tell, the scientists working on these are not particularly interested in
robot weapons, but the agency justified their funding to Congress on the grounds
that their work would make these weapons possible.  The fact that these eminent
scientists are in some sense participating in the effort may lend it more 
credibility than if the scientists were able to get the money from NSF and the
Pentagon had to justify the weapons on their own merits, rather than by 
spin-offs.

Another point: it has been implied in this digest (and elsewhere) that
robot weapons are somehow a "better" alternative to nuclear weapons.
Again, this is true in some trivial sense, but is mostly specious,
because in most cases where robot weapons would be used, nuclear
weapons would never be considered anyway.  Nuclear war is (I hope) a
fairly unlikely occurrence; on the other hand, wars equipped with
American-made armaments are being fought virtually continuously.  Face
it, the preferred mode of superpower conflict is through proxies in
relatively insignificant arenas.  A frequent need in these wars is a
means of clearing large numbers of enemy soldiers and other people
from territories of interest, and for this robot weapons seem to fit
the bill.  Such campaigns can be quite brutal, and I believe robots
could make them even worse.  By promising to make such campaigns
"cheap," I believe robot weapons could encourage or prolong conflicts
which otherwise might not begin, or would be finished more quickly.

-Jonathan Jacky
University of Washington

------------------------------

Date: Fri, 26 Sep 1986  23:00 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Looking for Arms Control Information


    From: Nancy Breen <njbreen at cch.bbn.com>

    Does anyone know of a good book or series of articles dealing with
    recent arms control issues?

Two good books, recently available, are "Nuclear Arms Control:
Background and Issues", by the National Academy of Sciences, National
Academy Press, and "International Arms Control: Issues and
Agreements", Coit Blacker and Gloria Duffy, Stanford University Press.

------------------------------

Date: Fri, 26 Sep 1986  23:15 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Phil and SDI

    From: Richard A. Cowan <COWAN>

    ; There are also quite a large number of people who think [SDI] is the
    ; only morally and ethically conceivable way of defending ourselves.

    There is quite a large number of people who will make a lot of money if
    SDI funding continues.  It is only human nature for these people to
    justify what they do by claiming it will defend "us."  

These two aren't inconsistent.  It's a red herring to claim that the
only ones who think the U.S. should buy weapons are those who will
profit from it.

    ; I can't accept a world in which defense means threatening to immolate
    ; millions of Russian men, women and children who have little say in the
    ; aggressive policies of 'their' government. 

    But American people also have little say.  They have a say through elected
    representatives, but they have no say in the Council on Foreign Relations,
    the Defense Science Board, the Institute for Defense Analysis, and other
    groups which act as a conduit for policy decisions based on economic
    considerations which are LARGELY REMOVED FROM THE DEMOCRATIC PROCESS.

I think that's a red herring.  I've interviewed informally with two of
the groups mentioned, and I can tell you that they are no more removed
from the democratic process than any other institution that hires and
fires people in this country.  They demand a certain kind of
expertise, but they don't ask about your political affiliations.  You
may say that being able to speak their language and understanding
their world view is itself a political statement.  Perhaps, but that
is a different argument, which I will be glad to take up with you if
you so desire.

    The
    US system may be more democratic, but the US population is also controlled,
    when you consider how the mass media (especially television!) narrows the
    debate and ratifies the existing distribution of power by trying hard to
    avoid giving credence to "controversial" positions.

My limited experience with electronic mass media (having been
interviewed twice for broadcast) is that they go out of their way to
accommodate controversial positions.  Indeed, my criticism is that they
have tried to polarize the debate even MORE than is justified.  They
were reluctant (but ultimately willing) to accept all of my
qualifiers, looking instead for journalistic "punch".

    ; There has got to be a better way to protect our right to be left
    ; alone, and it is worth trying to make it real.

    Finally, the idea that the US is merely trying to be "left alone" and is
    leaving the affairs of other countries alone is also absurd.  
    ... There is a better way to
    protect our right to be left alone, and that is to leave others alone!

These two statements are not inconsistent.  We do need military force
to protect our right to be left alone, and we also should not use
military force to the extent that we do to bother others.  Force
should be the option of last resort, not the option of first resort
and not an unacceptable option.

------------------------------

End of Arms-Discussion Digest
*****************************