[mod.politics.arms-d] Arms-Discussion Digest V7 #45

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (11/04/86)

Arms-Discussion Digest                 Monday, November 3, 1986 5:19PM
Volume 7, Issue 45

Today's Topics:

                              test bans
  Professionals and Social Responsibility for the Arms Race (from AILIST)
          The nondelegation of the subdelegation of *first* use
                      Boost phase interceptions
                              Soviet SDI
                           SDI Assumptions
            The Military and Automatic Humans (from RISKS)
           Unequivocal confirmation of detonation (4 msgs)

----------------------------------------------------------------------

Date: Sat, 1 Nov 86 20:56:53 EST
From: campbell%maynard.UUCP@harvisr.HARVARD.EDU
Subject: test bans
Reply-To: campbell%maynard.UUCP@harvisr.HARVARD.EDU (Larry Campbell)

>                                           ...  Granted, such treaties would
>      be verifiable, but they would be unenforceable.  What would we do if they
>      tested a few missiles?  Stop selling grain to them?  ...

Uh, how about testing a few missiles of our own?

>                      ...  Your proposed test bans would leave us stuck
>      with outdated and unreliable technology (read, expensive).  ...
>
>						Phil
>						prm@j.cc.purdue.edu

That sword cuts both ways -- the Soviets would also be "stuck with
outdated and unreliable technology".

Let's not forget a very important point.  You may not like the Soviets'
politics, but they are human beings.  It is in the self-interest
of BOTH nations to end the arms race.  In a nuclear war, everyone loses.
-- 
Larry Campbell       MCI: LCAMPBELL          The Boston Software Works, Inc.
UUCP: {alliant,wjh12}!maynard!campbell      120 Fulton Street, Boston MA 02109
ARPA: campbell%maynard.uucp@harvisr.harvard.edu     (617) 367-6846

------------------------------

Date: Saturday, 1 November 1986  23:26-EST
From: "dave brewer,  SD Eng, PAMI " <brewster%watdcsu.waterloo.edu at CSNET-RELAY.ARPA>
To:   ARMS-D
Re:   Professionals and Social Responsibility for the Arms Race

The Hagey Lectures at the University of Waterloo provide an
opportunity for a distinguished researcher to address the
community at large every year.  This year, Dr. Weizenbaum of
MIT was the chosen speaker, and he has just delivered two
keynote addresses entitled "Prospects for AI" and "The Arms
Race, Without Us".

The important points of the first talk can be summarized as:
  1) AI has good prospects from an investment perspective, since
     a strong commitment to marketing something called AI has
     been made.
  2) the early researchers did not understand how difficult
     the problems they addressed were, and so the early claims
     of the possibilities were greatly exaggerated.  The trend
     still continues, but on a reduced scale.
  3) AI has been a handle for some portion of the US military
     to hang SDI on, since whenever a "difficult" problem
     arises it is always possible to say, "Well, we don't
     understand that now, but we can use AI techniques to
     solve that problem later."
  4) the actual achievements of AI are small.
  5) the ability of expert systems to continuously monitor
     stock values and react has led to increased volatility
     and crisis situations in the stock markets of the world
     recently.  What happens if machine-induced technical trading
     drops the stock market by 20% in one day, or 50%?  (A toy
     sketch of the feedback involved appears below.)
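
To make the feedback mechanism concrete, here is a minimal sketch in
Python of how programmed sell rules reacting to the same price signal
can amplify a small decline.  All numbers (thresholds, impacts) are
invented for illustration; this models no real market or program:

    # Toy model: threshold-triggered program trading cascade.
    price = 100.0
    shock = -3.0                   # initial 3-point decline from outside news
    triggers = [97.0, 94.0, 88.0]  # invented stop-loss levels of three programs
    sell_impact = -4.0             # invented price impact of each forced sale

    price += shock
    for level in sorted(triggers, reverse=True):
        if price <= level:         # one program's threshold is hit...
            price += sell_impact   # ...its selling pushes the price lower,
                                   # which can trip the next program's threshold
    print(f"price after cascade: {price:.1f}")   # 100 -> 97 -> 93 -> 89

A 3-point external shock becomes an 11-point fall once the programs
are reacting to each other's sales rather than to any news.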

The important points of the second talk can be summarized as:
   1) not all problems can be reduced to computation; for
      example, how could you conceive of coding the human
      emotion of loneliness?
   2) AI will never duplicate or replace human intelligence
      since every organism is a function of its history.
   3) research can be divided into performance mode or theory
      mode.  An increasing percentage of research is now
      conducted in performance mode, despite possible desires
      to do theory-mode research, since funds (mainly military)
      are available for performance-mode research.
   4) research on "mass murder machines" is possible because
      the researchers (he addressed computer scientists
      directly, although extension to any technical or
      scientific discipline was implied) are able to
      psychologically distance themselves from the end use
      of their work.
   5) technical education that neglects language, culture,
      and history may need to be rethought.
   6) courage is infectious, and while it may not seem to be
      a possibility to some, the arms race could be stopped cold
      if an entire profession (e.g., computer scientists)
      refused to participate.
   7) the search for funds has led to an increased rate of
      performance-mode research, and has even induced many
      institutions to prostitute themselves to the highest bidder.
      Specific situations within MIT were used as examples.
      Weizenbaum had the graciousness to ignore related (albeit
      proportionally smaller) circumstances at this
      university.
   8) every researcher should assess the possible end use of
      their own research, and if they are not morally comfortable
      with this end use, they should stop their research.  Weizenbaum
      did not believe that this would be the end of all research,
      but if that were the case then he would accept this result.
      He specifically referred to research in machine vision, which he
      felt would be used directly and immediately by the military to
      improve their killing machines.  While not saying so, he implied
      that this line of AI should be stopped dead in its tracks.

Poster's comments:
  1) Weizenbaum seemed to be technically out of date in some areas,
     and admitted as much at one point.  Some of his opinions
     regarding the state of the art were suspect.
  2) His background, technical and otherwise, seems to predispose
     him to dismissing some technical issues a priori: a machine
     can never duplicate a human.  Why?  Because!
  3) His most telling point, and one often ignored, is that 
     researchers have to be responsible for their work, and should
     consider its possible end uses.  
  4) He did not appear to have thought through all the consequences
     of a sudden end to research, and indeed many of his solutions
     appear overly simplistic in light of the complicated world
     we live in.
  5) You have never seen an audience squirm as they did for the
     second lecture.  A once-premier researcher addresses his
     contemporaries and tells them they are ethically and morally
     bankrupt, and every member of the audience harbors at least
     some small buried doubt that maybe he is right.
  6) Weizenbaum intended the talks to be "controversial and
     provocative" and has achieved his goal within the U of W
     community.  While not agreeing with many of his points, I
     believe that the issues raised are relevant to the entire
     worldwide scientific community, and I have posted for
     this reason.

The main question that I see arising from the talks is: is it time
to consider banning, halting, slowing, or otherwise rethinking
certain AI or technical ventures, such as machine vision, as was
done in the area of recombinant DNA?

Disclaimer: The opinions above are mine and may not accurately
	    reflect those of U of Waterloo, Dr. Weizenbaum, or
	    anyone else for that matter.  I make no claims as
	    to the accuracy of the above summarization, and advise
	    that transcripts of the talks are available from some
	    place within U of W, but expect to pay for them because
	    that's the recent trend.

UUCP  : {decvax|ihnp4}!watmath!watdcsu!brewster
Else  : Dave Brewer, (519) 886-6657

------------------------------

Date: Sun, 2 Nov 1986  12:31 EST
From: LIN@XX.LCS.MIT.EDU
Subject: The nondelegation of the subdelegation of *first* use

    From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>

    I agree that the President can order "Fire back if they fire first!"
    but he cannot order "Fire back if they *might* have fired first!"
    With the delegation implicit in today's LOWC, the stark choice would
    de facto be, for the decisionmaker, to "verify OR respond."  But,
    after unequivocal confirmation of nuclear detonations (and that
    doesn't just mean the code word "NUDET" appearing in message traffic),
    then firing back is legal, albeit suicidal.

Let's push on this one a bit.  How would a President know that nuclear
weapons had in fact detonated?  By whatever process you propose, how
would anyone know if there had been an error in that process?  What
makes that process inherently more reliable than a sensor report?

What counts as verification of an attack warning?  What would *you*
say should be the criteria by which an attack is "confirmed"?

------------------------------

Date: Sun, 2 Nov 1986  12:41 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Boost phase interceptions

    From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>

    I don't know of any scenarios
    for it that flunk boost phase and get an acceptable shootdown rate
    thereafter.  Do you?

Another concept, discussed in candidate architectures, is one in which
you do very effective mid-course discrimination without doing
boost-phase intercept, perhaps using interactive discrimination with
neutral particle beams.  Then you multiply the targets by a factor of
only 10, rather than 100 or 1000.
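
To put rough numbers on that multiplication, here is a back-of-the-
envelope sketch in Python.  Every count below (boosters, RVs, decoys)
is an assumption invented for illustration, not an estimate of any
real force:

    # Illustrative arithmetic only; all counts are assumed.
    boosters = 1000          # assumed attacking boosters
    rvs_per_booster = 10     # assumed re-entry vehicles (RVs) per booster
    decoys_per_rv = 10       # assumed credible decoys deployed per RV

    # Boost-phase kill: one target per booster.
    boost_targets = boosters                                   # 1,000

    # No boost-phase kill, but effective mid-course discrimination:
    # only real RVs need be engaged -- roughly a 10x multiplication.
    midcourse_targets = boosters * rvs_per_booster             # 10,000

    # No discrimination: every decoy must be engaged as if real --
    # a 100x multiplication or worse.
    undiscriminated = midcourse_targets * (1 + decoys_per_rv)  # 110,000

    print(boost_targets, midcourse_targets, undiscriminated)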

------------------------------

Date: Sun, 2 Nov 1986  12:52 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Soviet SDI

    From: nike!rutgers!seismo!prometheus!root at cad.Berkeley.EDU

    >Date: Sat, 25 Oct 1986  10:08 EDT
    >From: LIN@XX.LCS.MIT.EDU
    >Subject: Soviet SDI

    > The Soviet military R&D analysts I have spoken to believe that 
    > Soviet SDI-like activities (not traditional ABM activities) are 
    > much more the first than the second. One person characterized 
    > it as being able to put a laser from Edmund Scientific into orbit
    > and having the U.S. say they have an "operational capability" .  

    Hmmmm, I find that "strange" since my experience is that Russians are
    more like "Texans" in descriptions of their assets and accomplishments
    (and throw weights).  

Sorry, I was not clear.  I was referring to people in the U.S. that
study and analyze the Soviet military R&D process.

    ...  However, I don't think the Soviet SDI Research Program is, by 
    any stretch of the imagination, a paper tiger (either Edmund Scientific 
    or "Toys Ya Us"). 

It isn't a paper tiger.  On the other hand, it isn't the enormous
threat that it's blown up to be.  It IS true that they do many things
"first", in the sense of demonstrating an initial operating
capability, but the U.S. is quite often (has usually been?) the first
to transition from that initial operating capability to a militarily
significant capability.

    Dividing the score board into "pro and anti" SDI almost brings tears to
    my eyes.  The real danger of "over polarization" is that the "anti-SDI 
    folks" are effectively on the "outs" and so they have a greatly diminished
    capacity to influence the "internal policy" of the SDIO.  That's where 
    the "REALLY CRITICAL Concerns (Nuclear/Non-nuclear, Costs, Safety, etc.)" 
    issues are.  Listen up Democrats!

In the view of critics, that is giving up the major argument, which is
that SDI, as currently construed, should not be going on at all.
Everyone, critics included, supports research.  Critics have concluded
that debates over which technology to use in a deployed system are
irrelevant, because NO technology offers the prospect of fulfilling
the President's stated goal of population defense.  Many critics DO
assert that there is technology available to meet other goals, but to
meet those goals, you don't need SDI.

------------------------------

Date: Monday,  3 Nov 1986 08:37:57-PST
From: jong%derep.DEC@decwrl.DEC.COM  (Steve Jong/NaC Pubs)
Re: SDI Assumptions

	[prairie!dan at rsch.wisc.edu]
	Do you think there are scenarios where the defense
	will fail absolutely?  That's quite an assumption!

By way of analogy, there have been many cases where some other
defense has failed more or less absolutely, and in ways that
represent our worst fears for SDI:

     o  The Maginot Line failed the French in 1940.  Hitler
	went over it (unplanned) and around it, through
	the Low Countries (politically unthinkable!), leaving
	French forces trapped -- er, impotent and obsolete.

     o	In turn, Hitler's Fortress Europe didn't keep out the
	Allies in 1944 (though that was hardly an absolute
	failure).

     o	The cliffs surrounding Quebec in the French and Indian
	War were thought impassable.  The attacking troops
	simply climbed them.

     o  The Trojans thought their city walls were impregnable.
	They were, but the Trojans were greedy.

     o  The British thought the jungles surrounding Singapore
	were impassable, and built their guns to point out
	to sea.  The Japanese drove their tanks through the
	rubber plantations without opposition.

     o  At one time, naval planners thought steel warships were
	invulnerable to air attack.  General Billy Mitchell
	proved them wrong by direct example.

     o  The HMS Hood was very heavily armored, but a lucky
	shell from the Bismarck penetrated to the magazine.
	There were three survivors.

     o  The Poles felt protected from the Germans in 1939
	by the finest cavalry in the world (or so I heard
	in a Polish documentary).

The last, perhaps, is a cheap shot, though I mean no insult.
My point is that once you build the defense, it's hard to change
it, and you're stuck with it.  The attacker can take his* time
about being clever and devising a workaround.  Sometimes it works
spectacularly.

	*Would anyone like me to rewrite this word to
	 "his or her"?  Or would women like to be kept out
	 of being mentioned as world-destroyers?

If you want to get deeper (:-) into science fiction, there's
always the unexpected vulnerability of the Death Star (Star Wars),
the USS Reliant (Star Trek II), etc.

------------------------------

Date: Wednesday, 29 October 1986  12:49-EST
From: nike!caip!uw-beaver!ssc-vax!wanttaja at cad.Berkeley.EDU (Ronald J Wanttaja)
To:   RISKS@CSL.SRI.COM, arms-d
Re:   The Military and Automatic Humans

After graduating about ten years ago, I entered the Air Force as a
Satellite Systems Engineer.  I was assigned to a unit operating a
particular NORAD satellite system...no names, no mission statements,
please.  A buddy DID almost start World War III one night, though.

My job was real-time and non-real-time analysis of mission data
from the spacecraft; the end result of my analysis was to advise the NORAD
Senior Director of the validity of the data.  A lot of factors had to be
incorporated in my analysis...in "N" seconds, I had to take into account
which spacecraft had reported, its health and status, DEFCON level, and
"numerous other mission critical elements."  Nudge, nudge...

Anyway, the job was highly dependent upon the experience of the analyst,
as well as his intuition...we had to have a FEEL for what was right.

Three years after I joined the squadron, the unit was reassigned from the
Aerospace Defense Command (ADCOM) to the Strategic Air Command (SAC).  Now,
SAC is the largest producer of automatic humans in the free world.  In a
word, SAC is checklist crazy...every task is broken down into the largest
possible number of subtasks.

SAC treats its checklists as a way to eliminate the human element.  Training
two people to work as a team is unnecessary...all they have to be able to do
is call off the proper steps from the checklist.  SAC uses simulators to
allow its people to practice every step, and to handle every contingency.
For instance, a missile launch officer has gone through the launch procedure in
the simulator dozens of times before he is placed in an actual control
room.  The opening sequence in WAR GAMES is an example of what SAC is trying
to avoid:  The crew must automatically perform its tasks, spending no time
thinking about what the consequences are.  The crew must not bring their
emotions into play, nor even any additional knowledge they may have.
Every action must be governed by a checklist step.

You can see what our problem was...how do you place "intuition" and "gut feel"
on a checklist?  Our job could not be performed by an automaton; we had to
call on experience and a deep understanding of system operation in order to
provide our assessment.  We argued, to no avail.  We had to have a checklist.
So we thought and thought, and broke the analysis task into as many
subelements as we could.  The last subelement was OPERATOR INTUITION.
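
As a purely hypothetical illustration (the step names below are
invented, not the actual Air Force checklist), the structure described
here might look something like this in Python:

    # Invented example: an event-assessment checklist whose final
    # subelement is human judgment -- the one step no automaton can run.
    EVENT_ASSESSMENT = [
        "Identify which spacecraft reported",
        "Check spacecraft health and status",
        "Note current DEFCON level",
        "Apply other mission-critical elements",
    ]

    def assess_event(analyst_intuition):
        for step in EVENT_ASSESSMENT:
            print("done:", step)      # mechanical steps, fully scriptable
        # The last subelement defers to the analyst's feel for the data;
        # the checklist can name it, but it cannot script it.
        return analyst_intuition()

The point of the last line is exactly the poster's: a checklist can
record that intuition was consulted, but it cannot reduce intuition to
a procedure.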

Did SAC complain?  Nahhhhh...they never read the thing.  Occasionally
they'd show up for Operational Readiness Inspections.  During the
simulation, their checklist called for them to verify that we had our EVENT
ASSESSMENT checklist open.  Their checklist didn't call for them to
actually read our checklists...

------------------------------

Date: Monday, 3 November 1986  12:17-EST
From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>
To:   LIN, arms-d
Re:   Unequivocal confirmation of detonation

REPLY TO 11/03/86 01:31 FROM LIN@XX.LCS.MIT.EDU: The nondelegation of the
subdelegation of *first* use

>      Unequivocal confirmation of nuclear detonations
>      doesn't just mean the code "NUDET" appearing in message traffic,
>
>  Let's push on this one a bit.  How would a President know that nuclear
>  weapons had in fact detonated?

Might well not be the President who makes this determination.
In my concept, multiple eye-witness accounts must be awaited.
Voice confirmations from *several* sources, besides agreement from
working sensors.

This confirmation should take time.  No retaliatory decision until
24 hrs. after the detonation would be good sense, in the nuclear
context.  At a minimum, I think a 30-minute break should be mandatory.
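
A minimal sketch of such a confirmation rule in Python (the threshold
of three reports and the field names are my own assumptions, filling
in "several sources" and the 30-minute break):

    # Illustrative only: thresholds and field names are assumed.
    MIN_VOICE_REPORTS = 3          # "several" independent eye-witness sources
    MIN_DELAY_SECONDS = 30 * 60    # the mandatory 30-minute break, at minimum

    def detonation_confirmed(voice_reports, sensor_readings, elapsed_seconds):
        """True only on cumulative, multi-source confirmation -- never on
        a single electronic message such as the code word NUDET."""
        enough_witnesses = len(voice_reports) >= MIN_VOICE_REPORTS
        sensors_agree = (len(sensor_readings) > 0 and
                         all(r["detonation"] for r in sensor_readings))
        waited = elapsed_seconds >= MIN_DELAY_SECONDS
        return enough_witnesses and sensors_agree and waited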

------------------------------

Date: Mon, 3 Nov 1986  12:27 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Unequivocal confirmation of detonation

    From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>

    Might well not be the President who makes this determination.
    In my concept, multiple eye-witness accounts must be awaited.

You mean of nuclear damage, right?  

OK.  How can the President know that the information he receives in
the way you propose is reliable and correct?

    Voice confirmations from *several* sources, besides agreement from
    working sensors.

Sensors?  You mean the nuclear detonation detection satellite?  Or what?

    This confirmation should take time.  No retaliatory decision until
    24 hrs. after the detonation would be good sense, in the nuclear
    context.  At a minimum, I think a 30-minute break should be mandatory.

So you propose that under no circumstances should any nuclear
retaliation occur until 24 hours after an attack on the US?

Actually, the waiting part of your proposal is endorsed by McNamara,
who told LBJ that if he received a report of a nuclear attack on the
US, he should do nothing until he had a personal report from McNamara,
who would fly to the alleged site of the attack and report back on an
eye-witness basis.

I don't think that is such a bad idea myself.

------------------------------

Date: Monday, 3 November 1986  13:00-EST
From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>
To:   LIN
Re:   Unequivocal confirmation of detonation

Defining "unequivocal confirmation of nuclear detonation" is
something I'd be happy to leave to the experts, with the
understandings that multiple eye-witness accounts of damage et
alia, and sensors from geiger counters to IONDs satellites, be
required.  Something a lot more substantial and cumulative than
sudden electronic messages.  The problem lies in the fact that no
central body to confirm detonation might survive.  The "confirmation"
might well have to be performed by isolated military commanders.

------------------------------

Date: Mon, 3 Nov 1986  15:07 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Unequivocal confirmation of detonation

    From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>
    Defining "unequivocal confirmation of nuclear detonation" is
    something I'd be happy to leave to the experts.

But these are the same guys who say that the present system is
OK.

Your proposal is the same as what we have now, with the very important
addition that there be real live people saying "I saw the blast"
reporting over the proper channels too.  I agree that such an addition
is a good thing to have.

My only complaint is that people also make mistakes, and that
no chain is error-free.  I think you would be on stronger intellectual
ground if you pushed less on the unreliability of computers
and more on the need for human confirmation.  The way you present
your case makes it seem that you don't want to allow the
President to do *anything*.

How would you feel about LOW if you had trained people monitoring and
interpreting the real-time data coming into NORAD, rather than these
data fusion computers?  Would that satisfy your objection?  Logically,
it should.

------------------------------

End of Arms-Discussion Digest
*****************************