[mod.politics.arms-d] Arms-Discussion Digest V7 #40

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (10/28/86)

Arms-Discussion Digest                 Monday, October 27, 1986 4:43PM
Volume 7, Issue 40

Today's Topics:

                            administrivia
                           Editorial on SDI
                              Soviet SDI
           Trusting scientists  (Response to Henry Spencer)
           Don't shoot until the database is authenticated
                     SDI assumptions (from RISKS)
                    SDI impossibility (from RISKS)
                     Questions about Military AI

----------------------------------------------------------------------

Date: Sat, 25 Oct 1986  08:41 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Administrivia

This guy is off the list, since CSNET has been bouncing him for many
weeks; if anyone out there knows him, he should be informed.

	8440827@WWU (host: wwu.csnet) (queue: wwu)

This guy has been bouncing for a shorter, but non-trivial, time.

	uci-arms-d@ICS.UCI.EDU (host: ics.uci.edu) (queue: smtp)

This address used to be good, but my mailer now won't recognize TOR as
a valid site.

                  si_mac_eki%vax.nr.uninett@tor.ARPA

------------------------------

Date: Sat, 25 Oct 1986  10:00 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Editorial on SDI

    From: decvax!utzoo!henry at ucbvax.Berkeley.EDU

    Hmmm.  If a group of aerospace and laser engineers were to express an
    opinion on, say, the mass of the neutrino, physicists would ridicule them.
    But when Nobel Laureates in Physics and Chemistry express an opinion on a
    problem of engineering, well, *that's* impressive.

I simply point out that the Manhattan Project was run by a bunch of
physicists.  The H-bomb was transformed from an 80-ton clunker to a
practical device by physicists.  These were "mere" engineering
problems too.

------------------------------

Date: Sat, 25 Oct 1986  10:08 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Soviet SDI

    >Date: Sun, 19 Oct 86 14:58:50 EDT
    >From: David_S._Allan%UB-MTS%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
    >Subject: Soviet SDI--Some facts, please
    >
    >     I have heard on numerous occasions that the Soviets are developing
    >their own SDI-type system, but I have not seen any facts to back this
    >claim. 

    From: decvax!seismo!prometheus!pmk at ucbvax.Berkeley.EDU

    The Soviet SDI program is classified, a sort of "Project Slavic Empire",
    and it is spread over military research institutes throughout the
    SU....

    Russians are doing it, and the Americans are talking about it.  At
    K.P. (Krasnaya Pachra) they are working on a pulsed fusion device 
    to drive pulsed DEW devices efficiently, and since it is quite
    compact and fires repeatably it would be a dandy item for a
    "space based" system.

Soviet military R&D has to be separated into two components -- one
designed for technology investigation and the other for actual
incorporation into specific weapon systems.  The Soviet military R&D
analysts I have spoken to believe that Soviet SDI-like activities (not
traditional ABM activities) are much more the first than the second.
One person characterized it as being able to put a laser from Edmund
Scientific into orbit and having the U.S. say they have an "operational
capability".

There's a big difference between a demonstrator and something that
presents a real military threat.

    We, on the other hand, have decided to
    deploy "now" (soon) and to hell with what is a few years down 
    stream that might really work and work cheaply.  

Hardly.  The primary criticism of the SDI from the pro-SDI people is
that they are not doing things fast enough (i.e., not deploying), and
are focusing TOO MUCH on the far term (as far as research is concerned).

------------------------------

Date: Sat, 25 Oct 86 23:34:12 pdt
From: Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
Subject: Trusting scientists  (Response to Henry Spencer)

Hmmm.  Seems to me that physicists ought to know something about
lasers and such...  Since the underlying concepts are based on physical
principles one would hope that the physicists could do the
calculations necessary to decide something about the feasibility of
SDI.  Do recall that there are scientists on both sides of the SDI
debate.  (As opposed to building a perpetual motion machine.  The
physicists all come down on one side, there is no debate, and there is
no US government project funded to try to build one.)

Would you trust astronomers?  Please read:
	
	George Field and David Spergel
	Cost of Space-Based Laser Ballistic Missile Defense
	Science (AAAS) v. 231 (21 March 1986), 1387-1393.

While nobody in SDIO has admitted it publicly, so far as I know,
this article by two Harvard astronomers killed the idea of the
orbiting laser battle stations.

Now you might think that with billions to spend, SDIO might have
itself done the rather crude calculations necessary to show the
foolishness of "death stars".  No, nobody in that shop, including all
the scientists and engineers in the contracting organizations, bothered
to ask the question.

So, you must ask: Who is asking the right questions and who has the
expertise to determine the answers?  Sometimes it is an engineer
(Parnas), sometimes it is a scientist (Field and Spergel), and
sometimes a little ghetto child crying itself to sleep for want of
supper in the Land of the Free and the Home of the Brave.

------------------------------

Date: Sun, 26 Oct 86 13:06:59 PST
From: Clifford Johnson <GA.CJJ@forsythe.stanford.edu>
Subject:  Don't shoot until the database is authenticated

Following on Sam Wilson's and Gary Chapman's reflections on the
loss of immediacy and therefore of culpability and accountability
in remote, mechanized killing...  Would a programmer be held
responsible were a software bug to kill hundreds of people years
later?  Obviously not.  Gary is right: if responsibility is to
be preserved in such arenas (if there's anything to preserve at
all, that is), it must be through legislation against the devices,
prohibiting certain modes of operation and application.  The
"whites of their eyes" argument is passé, and I think it misses the
critical point that it is from an internal judgment and not from an
external perception that human conduct *regulates* itself.

That is to say, it is the application of rules of engagement, or
some such "abstract" function, whose automation is legally
challengeable, and I think the "whites of their eyes" element is,
legally speaking, a red herring.  A "rule of engagement type" standard
is applied re the irresponsibility of setting lethal booby traps,
even when they kill an intruder whose actions may have warranted
lethal self-defense, for a decision-to-shoot is regarded as
necessarily requiring the real-time exercise of human judgment:

"The user of a device likely to cause death or serious bodily harm is
not protected from liability merely by the fact that the intruder's
conduct is such as would justify the actor, were he present, in
believing that his intrusion is so dangerous or criminal as to
confer upon the actor the privilege of killing or maiming him to
prevent it. ...  Even though the conduct of the intruder is such as
would have justified the actor in mistakenly believing the intrusion
to be of this character, there is the chance that the actor, if
present in person, would realize the other's situation.  An intruder
whose intrusion is not of this character is entitled to the chance
of safety arising from the presence of a human being capable of
judgment."  (Restatement (2nd) of Torts, Section 85, comment d;
see also West's Ann.Pen.Code, Section 197, subds. 1,2.)

It isn't so much that the intruder has a right to have his killer
see the whites of his eyes, as that there's a procedural requirement
to be met in opening fire, one that requires the real-time exercise of
human judgment.  In fact, that procedure could be confused rather
than led by "whites of the eyes" sensitivity.

FYI, in writing a footnote on autonomous weapons, the definition
I plumped for was "An autonomous weapon is a set of devices
preconfigured to execute a belligerent act according to digitally
evaluated conditions."  Restrictions on autonomy are then developed
in terms of the character of the conditional evaluation, rather than
of its physical consequences, which are approximately always death
plus a chance of innocent fatalities.


------------------------------

Date: Saturday, 25 October 1986  16:35-EDT
From: prairie!dan at rsch.wisc.edu (Daniel M. Frank)
To:   RISKS-LIST:, mod-risks%seismo.css.gov at rsch.wisc.edu
Re:   SDI assumptions
Organization: Prairie Computing, Madison, Wisconsin

   It seems to me that much of the discussion of SDI possibilities and
risks has gone on without stating the writers' assumptions about the
control systems to be used in any deployed strategic defense system.

   Is it presumed that SD will sit around waiting for trouble, detect
it, fight the war, and then send the survivors an electronic mail
message giving kill statistics and performance data?  Much of the
concern over "perfection" in SDI seems to revolve around this model
(aside from the legitimate observation that there is no such thing as
a leakproof defense).  Arguments have raged over whether software can
be adaptable enough to deal with unforeseen attack strategies, and so
forth.

   I think that if automatic systems of that sort were advisable or
achievable, we could phase out air traffic controllers, and leave the
job to computers.  Wars, even technological ones, will still be fought
by men, with computers acting to coordinate communications, acquire
and analyze target data, and control the mechanics of weapons system
control.  These tasks are formidable, and I make no judgement on which
are achievable, and within what limits.

   Both sides of the SDI debate have tended to use unrealistic models of
technological warfare, the proponents to sell their program, the opponents
to brand it as unachievable.  The dialogue would be better served by
agreeing on a model, or set of models, and debating the feasibility of
software systems for implementing them.

    Dan Frank,  uucp: ... uwvax!prairie!dan,  arpa: dan%caseus@spool.wisc.edu

------------------------------

Date: Saturday, 25 October 1986  14:54-EDT
From: David Chase <rbbb at rice.edu>
To:   RISKS-LIST:, risks at csl.sri.com
Re:   SDI impossibility

I don't know terribly much about the physics involved, and I am not
convinced that it is impossible to build a system that will shoot down most
of the incoming missiles (or seem likely enough to do so that the enemy is
less likely to try an attack, which is effective), but people seem to forget
another thing: SDI should shoot down ONLY incoming missiles.  This system
has to tread the fine line between not missing missiles and not hitting
non-missiles.
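
To make the tradeoff concrete -- a toy sketch only, with invented
"hostility score" distributions and no claim about any real
discrimination algorithm -- note that a single firing threshold
controls both error rates, so tightening it to spare weather
satellites also lets real missiles leak through:

    # Toy illustration of the discrimination tradeoff (Python).
    # The score distributions are invented for illustration; nothing
    # here models a real sensor, algorithm, or target set.
    import random
    random.seed(1)

    missiles     = [random.gauss(3.0, 1.0) for _ in range(10000)]
    non_missiles = [random.gauss(0.0, 1.0) for _ in range(10000)]

    for threshold in (0.5, 1.5, 2.5, 3.5):
        missed = sum(m < threshold for m in missiles) / len(missiles)
        fired  = sum(n >= threshold for n in non_missiles) / len(non_missiles)
        print("threshold %.1f: missiles leaked %5.1f%%, "
              "non-missiles engaged %5.2f%%"
              % (threshold, 100*missed, 100*fired))

No threshold makes both error rates vanish; raising it drives false
engagements toward zero only at the price of a growing leak rate
against real missiles.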

I admit that we will have many more opportunities to evaluate its behavior
on passenger airplanes, the moon, large meteors and lightning bolts than on
incoming missiles, but we eventually have to let the thing go more or less
on its own and hope that there are no disasters.  How effective will it be
on missiles once it has been programmed not to attack non-targets?  To
avoid disasters, it seems that we will have to publish its criteria for
deciding between targets and non-targets (how much is an international
incident worth?  One vaporized weather satellite, maybe?  If I were the
other side, you can be sure that I would begin to try queer styles of
launching my peaceful stuff to see how we responded).

I think solving both problems is what makes the software hard; it's easy
to shoot everything if you have enough guns.  We could always put
truckloads of beach sand into low orbit.

David

------------------------------

Date: Mon, 27 Oct 86 13:36:43 PST
From: toma@Sun.COM (Tom Athanasiou)
Subject: Questions about Military AI


It seems quite clear that, in the civilian sector, commercialization
has been a force for rationality in the development of AI.  The market
simply will not tolerate the kind of hype that has so long
characterized AI research.

There remains a lot to be said about the kinds of hype that the
market will tolerate.  And there remain a bunch of important
questions about AI R&D in the military.  If market forces are the
source of a new realism in commercial AI, and if those forces are
missing in the military, can we not expect that military AI will
continue to exhibit gonzo traits to a degree that is no longer
easily supportable in the commercial sector?

Obviously, this question goes beyond AI.  It seems that there are
probably MANY military technologies that don't work as well as they are
represented as working by their proponents and, in particular, by the
officers who have tied their careers to them.  I recall an article
about AWACS by Andrew Colburn in which he claimed that a major AWACS
demonstration had been faked (the data was played back from test
tapes).  And there's the case of the Pershing II.  Didn't it fail lots
of tests?  Didn't the US go ahead with its deployment because it was
politically necessary to do so, perhaps in the hope that the weakest
components could subsequently, and quietly, be upgraded in place?

The matter is further complicated by the fact that systems which
fail to meet their initial design goals can indeed work well
enough to find some role in America's ever expanding military
project.  SDI, for example, may find use as a light-speed ASAT
system.

My question is this:  What are the specific dynamics of
technological boondoggle in the military sector, and how do they
relate to the boondoggle dynamics of the civilian sector?  These
latter dynamics are, of course, fascinating in their own right --
everyone who's worked in the corporate sector knows how often the
dynamics of innovation are hobbled by incompetence, politics and
ideology.  The Pershing story is not without its analogs in
dozens of corporate MIS departments.

Still, in the civilian sector, the market will eventually make
itself felt.  There are exceptions, of course, and lots of room
for scams -- especially in the fraud-rife world of knowledge
engineering.  What about the military?  To what extent can we
expect that computer technologies that work poorly, if at all,
will find their way into the technological infrastructure of
war?  SDI may be the simple case here.  What about SCI, with its
"battle management systems"?  Does anyone really believe this
horseshit?  If so, why?  Are there interdepartmental rivalries
and other institutional dynamics implicated?  Does the military
bureaucracy predispose an acceptance of engineering myths --
myths about the extent to which complex and chaotic systems can
be captured within prestructured formal systems -- which are
losing their hold in the commercial sector?  To what extent are
institutional conflicts within the military -- for example,
conflicts between the "seat of the pants" guys and the military
engineers -- exacerbated by the introduction of speculative and
myth-laden technologies like expert systems?

One final aspect of the question.  It seems that military planners
have some reason for faith in speculative research.  The
Manhattan Project worked, after all, and it was probably seen by
many, at the time, as a pretty blue-sky business.  It's at least
possible that, for example, SCI's defenders know what they're
doing, and plan only to use the target applications as spurs to
relevant research.  But won't the whole business still take on a
life of its own?

Finally, who would know more about these and related questions?
Military sociologists?  Technology officers?  Books?

------------------------------

End of Arms-Discussion Digest
*****************************