[mod.politics.arms-d] Arms-Discussion Digest V7 #57

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (11/11/86)

Arms-Discussion Digest               Tuesday, November 11, 1986 2:11PM
Volume 7, Issue 57

Today's Topics:

                       Meteorite as A-explosion
                     First Strikes, Verifiability
             Launch on warning / nuclear victory metrics
       Eager retaliation ("prompt response", isn't that a home
                          Launch on warning 
             Yet more on SDI (Star Wars flawed #8-of-10)
                                 SDI
           Corrections on my factual uncertainties (2 msgs)

----------------------------------------------------------------------

From: hplabs!pyramid!utzoo!henry@ucbvax.Berkeley.EDU
Date: Mon, 10 Nov 86 19:42:14 pst
Subject: Meteorite as A-explosion

> ... Besides, the
> conversion of 1 gram of anti-matter (a cube less than 1 cm on a side)
> to energy would produce 9 x 10^20 ergs of energy, which is probably
> enough to split the earth in two...

Nothing so drastic; the erg is a pretty small unit.  1 gram of antimatter
plus 1 gram of normal matter would give about a 40 kiloton explosion, as
I recall.
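
The arithmetic is a quick check (sketched here in Python; the 4.184e12
joules-per-kiloton figure is the standard TNT-equivalence convention):

    m = 2.0e-3      # kg annihilated: 1 g antimatter + 1 g ordinary matter
    c = 2.998e8     # speed of light, m/s
    E = m * c**2    # about 1.8e14 J, i.e. roughly 1.8e21 ergs
    KT = 4.184e12   # joules per kiloton of TNT
    print(E / KT)   # about 43 -- consistent with "about a 40 kiloton explosion"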

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

From: rutgers!meccts!meccsd!mvs@seismo.CSS.GOV
Date: Tue, 11 Nov 86 03:36:03 EST
Subject: Re: Arms-Discussion Digest V7 #51
Reply-To: meccsd!mvs@seismo.CSS.GOV (Michael V. Stein)

In article <8611092151.AA18053@ucbvax.Berkeley.EDU> ARMS-D@XX.LCS.MIT.EDU writes:
>
>Subject: on the perfectability of SDI
>Date: 07 Nov 86 19:45:33 EST (Fri)
>From: dm@bfly-vax.bbn.com
>	2) What is a likely Soviet reaction to our building SDI AND
>	retaining our 40,000 warheads aimed at them?

The US has approximately 26,000 nuclear warheads.  Of these about
13,000 could hit the Soviet Union.  The Soviet arsenal is thought to
consist of somewhere between 22,000 and 33,000 warheads.  (Data from
"Nuclear Battlefields").

>Put yourself in the shoes of a Russian leader.  There is the USA, with
>an SDI system that is clearly only partially effective, and will
>almost undoubtedly let several hundred of your warheads through.  Yet
>they also have a lot of MX and Trident missiles that really look like
>they're first strike weapons, plus those Pershing-2 missiles just 10
>minutes from your capital.  

A first strike will almost certainly involve a counter-force strike.
The goal of a first strike is to eliminate the enemy's ability to
fight back with his military forces.  Trident missiles are simply not
accurate enough for this sort of mission and shouldn't be included in
this list.  

>...Is it cheaper
>and more reliable than the options?  Considering that one of the
>options is a verifiable treaty reducing arms, I don't think so.

Great, but name just *one* time that we have had a verifiable treaty
with the Soviet Union with regard to arms control.  A truly verifiable
treaty requires on-site verification, something the Soviets have
always rejected.  They explicitly rejected the concept in the Baruch
Plan, in Eisenhower's Open Skies proposal, etc.  

Like just about all major US nuclear programs, SDI was started mostly
because of actual or suspected Soviet work in the field.  And as long
as there is no true verification, this is the way the world will
continue to work.
---
Michael V. Stein
Minnesota Educational Computing Corporation - Technical Services

UUCP	ihnp4!meccts!mvs

------------------------------

Date: Tuesday, 11 November 1986  00:32-EST
From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>
To:   LIN
Re:   Launch on warning / nuclear victory metrics

REPLY TO 11/10/86 19:49 FROM LIN@XX.LCS.MIT.EDU: Re: nuclear victory metrics

   I suppose I would risk everything for fear that we'd get nuked out.
   Look, that is the problem with nukes in the first place.  If you
   believe in deterrence, it means that you have to be willing to
   promise that you will do something irrational *if* deterrence fails
   and war breaks out.

But before any (other) first-use, deterrence hasn't failed?  After
first-use, I agree my case becomes weaker (though I still hold to
it).  But I'm adamantly opposed to operating any LOWC prior to
first-use.  More precisely, I see the operation of a LOWC as a form
of first-use.

What's your position on the first-use argument?

------------------------------

Date: 11 Nov 86 08:47:51 EST (Tuesday)
From: MJackson.Wbst@Xerox.COM
Subject: Eager retaliation ("prompt response", isn't that a home

From Herb Lin (V7 #54):

"It is precisely because. . .I judge the odds of such an occurrence to
be very low that I want to be very sure that I have solid and conclusive
evidence.  If nuclear attack were a very likely thing, I might require
less evidence."

But of course the danger is that, to all appearances, at least some of
the individuals most intimately involved with strategic nuclear
command-and-control do *not* view a Soviet attack as a low-probability
event, and that they *might* require less evidence than would otherwise
be reasonable.  Apropos of this whole issue, does anyone know if delayed
response scenarios are "losers" in the Pentagon's strategic wargaming?

(This would seem to bear on the observation made a while back by someone
that LUCA [Launch Under Confirmed Attack] was promptly subsumed under
simple LUA [Launch Under Attack].  This was not explained at the time; I
took the comment to mean that the use of the word "confirmed" only
served to emphasize the ambiguity likely to be present in a real-world
situation, hence to call the concept into question.  How about LUAWRRSA
[Launch Under an Attack We're Really Really Sure About]?)

Mark

------------------------------

Date: Tue, 11 Nov 1986  08:49 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Launch on warning 


    From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>

       I suppose I would risk everything for fear that we'd get nuked out.
       Look, that is the problem with nukes in the first place.  If you
       believe in deterrence, it means that you have to be willing to
       promise that you will do something irrational *if* deterrence fails
       and war breaks out.

    But before any (other) first-use, deterrence hasn't failed?  After
    first-use, I agree my case becomes weaker (though I still hold to
    it).  But I'm adamantly opposed to operating any LOWC prior to
    first-use.  More precisely, I see the operation of a LOWC as a form
    of first-use.

That's why I want to maintain an OPTION for LOW, but not a POLICY.
The two ARE different; having an option means that the other guy must
worry that you might, but making it not a policy means that you can
avoid most of the risks.  

I believe deterrence to have failed when missiles are on the way to
the U.S. -- and I don't want to get into the argument that sensors
could be faulty.  HOWEVER I make the judgment that missiles are
coming, that judgment is sufficient for the purposes of this argument.

I don't see the operation of LOWC at all as a form of first use.  You
could argue (I would not) that it is a form of threatened first use,
but that's not the same thing.

------------------------------

Date: Monday, 3 November 1986  08:06-EST
From: Jane Hesketh <jane%aiva.edinburgh.ac.uk at Cs.Ucl.AC.UK>
To:   ARMS-D
Re:   Star Wars flawed #8-of-10

                     The Limitations of Artificial Intelligence
                             and the Distinction between
                     Special-Purpose and General-Purpose Systems

                                      Alan Bundy

             Abstract

             The Battle Management System (BMS) for SDI will  be  a  huge
             computer  program which can never be tested in a real battle
             but only in a simulated one.  Critics have  argued  that  it
             will,   therefore,   be  inherently  unreliable  -  reacting
             unpredictably to unforeseen circumstances.

                 Can this limitation be overcome by the incorporation  of
             artificial  intelligence  (AI)  technology,  so that the BMS
             will react sensibly to the unforeseen? We argue that such  a
             facility  is  so  far  beyond the capabilities of current AI
             technology as to be unrealisable in the  development  period
             of SDI.

             A Short History of AI

             The areas of AI that one might hope would provide techniques
             for  coping  automatically with unforeseen circumstances are
             the representation of knowledge and automatic reasoning with
             this knowledge.

                 When it comes to automating the processes of  reasoning,
             AI  has  had most success in the area of deduction, i.e. the
             kind of reasoning involved in proving mathematical  theorems
             in  a  formal  logic.  Even here success has been limited to
             fairly straightforward  theorems  due  to  the  problems  of
             controlling  the  search for a proof through the explosively
             large space of legal steps. Forays have also been made  into
             the  areas  of:  uncertain  reasoning, analogical reasoning,
             default reasoning, and  a  few  other  kinds  of  `plausible
             reasoning' - but these areas are only in their infancy.
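
                  The scale of that search is easy to see with a toy
             count.  The sketch below (Python, not part of the paper;
             the fixed branching factor is an assumption that real
             provers only approximate) just raises the number of legal
             inferences per step to the depth of the proof:

                 # With b legal inferences per step, a proof of depth d
                 # hides somewhere in a space of b**d candidates.
                 def search_space(branching, depth):
                     return branching ** depth

                 print(search_space(10, 5))   # 100000
                 print(search_space(10, 20))  # 10**20 -- beyond brute force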

                 When it comes to representing knowledge, AI has had some
             success   in   the   representation  of  properties  of  and
             relationships between simple physical objects.  However,   a
             large number of tricky areas remain, e.g. the representation
             of: shape, time, liquids, beliefs of others, etc,  which  we
             are only beginning to understand.  In addition, there is the
             sheer scale of the problem.  No existing AI system  contains
             more  than  a  minute  fraction of the knowledge, especially
             common-sense knowledge, known to the average human.

                 This has limited AI  systems  to  small,  self-contained
             domains  in which fairly crude reasoning is adequate.  Early
             AI work concentrated on `toy' domains, the  most  famous  of
             which was the `blocks world' - a domain of children's bricks
             in which the principal action was stacking.  Until the  mid-
             70s AI researchers were pretty depressed about this.  It was
             felt that the initial  predictions  about  AI  progress  had
             proved  to  be  hyperbole,  that  we  had only scratched the
             surface of the problem, and that  major  breakthroughs  were
             required. This is still true.

                 What changed the climate of opinion from  depression  to
             enthusiasm  was  the  realisation, during the 70s, that many
             commercially  important  domains  were  `toy'  ones  in  the
             technical   sense   described   above.    This  led  to  the
             development of expert systems.  For the  most  part,  expert
             systems use automated reasoning techniques that were already
             well understood 10 to 20 years ago,  and  apply  them  to  a
             narrow  domain  of  specialised  expertise  in areas such as
             fault diagnosis.  They  are  not  capable  of  the  kind  of
             common-sense  reasoning  about  shape  and  time involved in
             navigating a car  in  a  busy  street,  nor  of  jumping  to
             plausible  conclusions about the intentions of an adversary,
             nor of using analogy to  apply  a  wide  range  of  previous
             experience  to  a  new problem.  Expert systems are special-
             purpose.

             Is Battle Management a `Toy' Domain?

             For an AI-based BMS to cope with unforeseen circumstances it
             would have to be a general-purpose system.  It might have to
             reason  about  the  nature,  purpose  and   destination   of
             previously  unknown  objects.  It might have to reason about
             the intentions of an  adversary,  taking  into  account  the
             general  political  situation.  It might have to use analogy
             to cope with an unforeseen situation by adapting an existing
             plan.   Thus  battle  management  is  not  the sort of `toy'
             domain that lends itself to expert systems technology.

                 This is not to say that a rule-based BMS  could  not  be
             built  -  it could, but it would be a special purpose system
             not able to cope with unforeseen circumstances.   One  could
             regard  the  problem  of detecting and reacting to a missile
             attack as one of fault diagnosis and correction and adapt an
             existing expert system shell to the task.  But the resulting
             BMS would be subject to precisely the same criticisms  about
             its reliability as a conventional BMS - in fact, more so.

                 The behaviour of an expert system  is  inherently   less
             predictable  than  that  of a conventional program.  Because
             the rules can be combined in  many  different  ways  by  the
             inference  mechanism  according  to  the  circumstances, the
             order of rule firing may be different from  any  anticipated
             by  the  programmer.  This  unpredictability is increased if
             certainty factors are used to influence the search  strategy
             (as they usually are).
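
                  That order-dependence can be seen with a toy rule
             set.  The sketch below (Python, purely illustrative; the
             facts and rule names are invented, not drawn from any
             real system) has two rules matching the same facts, so
             the conclusion reached depends on which one the inference
             mechanism happens to try first:

                 # Facts observed; both rules below match them.
                 facts = {"unknown", "fast"}

                 rules = [
                     ({"unknown", "fast"}, "warhead"),
                     ({"unknown"},         "decoy"),
                 ]

                 def forward_chain(facts, rules):
                     # Fire the first rule whose conditions all hold.
                     facts = set(facts)
                     for conditions, conclusion in rules:
                         if conditions <= facts:
                             facts.add(conclusion)
                             break
                     return facts

                 # In listed order the first rule fires: "warhead".
                 print(forward_chain(facts, rules))
                 # Reverse the order and "decoy" is concluded instead.
                 print(forward_chain(facts, list(reversed(rules))))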

                 There is some hope that by using logic-based  rules  and
             checking the meaning of each rule, the  program  would  behave
             correctly whatever order its  rules  were  fired  in.
             Such  a program would serve as its own formal specification.
             But it would be no easier to get this specification  correct
             than  it  would to correctly specify a conventional program.
             In fact, it would be harder, since  this  specification  has
             the  additional  constraint  of  simultaneously serving as a
             (logic) program.

             Could a General-Purpose BMS Ever be Built?

             The  objections  raised  above  are  based  on  current   AI
             technology.   What  are the chances that new developments in
             AI will overcome them and make  it  possible  to  develop  a
             general-purpose BMS?

                 It is an article of faith among most AI workers that  it
             is  possible  to  understand  general-purpose, common-sense,
             human reasoning to the extent  that  it  can  be  automated,
             although  some  critics of AI (e.g. Dreyfus) argue that that
             can never be done.  What is universally agreed is that there
             are  some  major  theoretical  breakthroughs required in the
             development  of  knowledge  representation   and   plausible
             reasoning   techniques   before  it  can  be  achieved.   AI
             researchers have been striving to make  these  breakthroughs
             since  the  50s.  Progress has been made, but it is slow and
             piecemeal.  The situation is not analogous  to  VLSI  (say),
             where  the  basic  theory  is understood and hence rapid and
             steady improvements  in  size  and  speed  can  be  reliably
             predicted, provided sufficient money is made available.  New
             theory is required and it is impossible to guarantee results
             by pouring in money (*) or by any other means.

                 Even were the breakthroughs to occur  and  the  AI-based
             BMS  to be built, how could we be sure that it would respond
             `correctly'  to  unforeseen  circumstances?   Thompson   has
             pointed  out that before we empower humans (e.g. a judge) to
             take decisions which affect other people we subject them  to
             tests  not  only  of  technical  expertise but also of basic
             humanity.   The  latter  is  usually  implicit  and  largely
             assumed  on  the  basis  of  a  shared  human  experience of
             upbringing and emotion.  That is, we tend to assume that the
             candidate   shares   a   common   morality,  responsibility,
             humanity, etc, unless this  is  undermined  by  his  or  her
             actions.   No  such  assumption  would  be  reasonable  in a
             computer program which did not share human  experience.  How
             could we test for it?
             ____________________
                (*) Money is necessary, of course, but it is  not  suffi-
             cient.

             Conclusion

             Under these circumstances it would be folly to predicate the
             success  of  a  multi-billion dollar programme on the timely
             occurrence of several major breakthroughs in AI.

             Information about the author

             Dr Alan Bundy is a Reader in the  Department  of  Artificial
             Intelligence  at  the University of Edinburgh. He has been a
             researcher in artificial intelligence since 1971 and  has  a
             major   international   reputation  in  the  field.   He  is
             currently the Chair of the Board of Trustees of  IJCAI  Inc,
             the  body  which  organises  the  major AI conference. He is
             Conference Chair of the next such conference, IJCAI-87.   He
             is also a member of the SERC Computer Science Sub-Committee,
             which reviews SERC research grants in  AI,  and  is  on  the
             editorial board of the foremost AI Journal. He has published
             many research papers and books in AI,  including  papers  on
             its nature and methodology.

------------------------------

Date: Tue, 11 Nov 1986  10:45 EST
From: LIN@XX.LCS.MIT.EDU
Subject: SDI

    From: cfccs at HAWAII-EMH

    Those arguing against SDI seem to want the funding cut.  They want a
    nice slow progress that will assure no quantum jumps in technology.
    One that will leave plenty of money for the projects they can't seem
    to get funds for.

By what criterion would you decide what is the proper level of funding?

    If you aren't
    against SDI R&D (notice the D stands for development), what are you
    arguing about?

I AM arguing against SDI D, to the extent that it violates the ABM
treaty.  I am arguing that SDI R should not be conducted at the level
that it currently enjoys.

    If your real concern is that money will be wasted on
    elaborate shows of outdated technology, then argue that!

It is not only that, though it is that.  It isn't "outdated"
technology, but rather meaningless demonstrations of technological
fluff -- i.e., technology that isn't meaningful but that looks good to
TV cameras.

------------------------------

Subject: Corrections on my factual uncertainties
Date: 11 Nov 86 12:26:42 EST (Tue)
From: dm@bfly-vax.bbn.com

Hank Walker (Hank.Walker@gauss.ece.cmu.edu) has kindly corrected some
of my factual errors in my recent postings on ``waiting a while'' and
the imperfectability of SDI.  

First, on waiting a while.  Someone had said that taking as long as 24
hours to confirm that the Soviets had attacked would be absurd, that
it would be the equivalent of unilateral disarmament on our part,
since the Soviets would be able to use that 24 hours to completely
destroy our forces, including tracking down and eliminating all of our
nuclear submarine force.  I pointed out that missing just ONE of our
Trident submarines would leave ~400 warheads for retaliation, and that
I presumed Pentagon planners had taken that thought into account and
had provided for it.  Hank sent me a message correcting my numbers (in
my defense, I made it clear that I was uncertain about the numbers in
my original posting -- I'm not a professional at this, I just do it as
a hobby):


    A Trident submarine carries 24 Trident I C4 missiles, each probably
    carrying 8 warheads.  The Trident II D5 missile under development
    would carry 10 warheads over a longer range.  A Poseidon submarine
    carries 16 missiles.  These missiles are either Poseidon missiles,
    which can carry up to 14 warheads, but probably carry only 10
    warheads, or C4 missiles with 8 warheads.  Poseidon missiles are
    gradually being replaced by C4 missiles on some fraction of the fleet.

    ...a Trident carries 24 missiles, currently with 8 warheads each, 10
    in the future.  I believe these are Mark 12A warheads with 300
    kilotons, same as the Minuteman III and MX.  If they aren't on the C4
    missile, they will be on the D5 missile.  The Poseidon missile carries
    10 warheads of about 80 kilotons each.  The US is allowed a total of
    656 SLBMs, or 656*8 = 5248 warheads.  The plan is for 20 Trident and
    11 Poseidon subs, out of the total of 41 allowed subs.  The D5
    missiles have the added advantage that their range is sufficient to
    hit targets from almost anywhere in the northern hemisphere or Indian
    Ocean, so the subs will be essentially invulnerable.

    I once read that the routine was that if a sub didn't hear the Moffett
    Field VLF stay-alive signal for three days, it would sample the air for
    radioactivity, and then launch against its target list for that
    situation.  This is the US's ultimate dead-man switch.  Of course all
    submarines have the capability of launching their missiles whenever the
    crew feels like doing it.

    There is of course the issue of civil defense evacuation by the
    Soviets, but if nuclear winter is true, it won't matter anyway since
    their first strike would probably be sufficient to cause it.

So, Tridents only carry 192-240 warheads, Poseidons about the same
number (hmm, I thought one of the arguments against the Trident was
that, under SALT-II limits, because the Trident carried more warheads
it meant we were limited to fewer platforms, meaning fewer subs for
the Soviets to hunt down.  I guess I was wrong.  Of course, if you
could put Poseidon missiles with their 14 warheads into a Trident
submarine, you'd get 336 warheads on a Trident...).  Nevertheless, I
stand by my original assertion that missing just one submarine would
result in unacceptable (even to the inventors of the Gulag) damage to
the Soviet Union (read: the end of the Soviet Way of Life).  After
all, once upon a time (mid-Sixties, before MIRV) Robert McNamara
estimated that we'd only need a couple-hundred warheads, period.
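
The arithmetic behind those figures, using only the loadings quoted
above (a quick sketch in Python; the tube and warhead counts are
Hank's numbers, not independently checked):

    TRIDENT_TUBES  = 24
    POSEIDON_TUBES = 16

    print(TRIDENT_TUBES * 8)    # 192 -- Trident boat, C4s at 8 warheads each
    print(TRIDENT_TUBES * 10)   # 240 -- Trident boat, D5s at 10 warheads each
    print(TRIDENT_TUBES * 14)   # 336 -- hypothetical 14-warhead loading
    print(POSEIDON_TUBES * 10)  # 160 -- Poseidon boat, 10 warheads per missile
    print(656 * 8)              # 5248 -- SALT ceiling of 656 SLBMs at 8 each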

All this goes to argue that the notion of holding back our retaliation
for a few days is a perfectly ``reasonable'' course of action.

The second correction Hank sent regarded the Pershing II missile,
which I had described as a threat to the Russian capital:

    Everything I have read indicates that the Pershing II missile does not
    have the range to hit Moscow from Germany.  It can reach into the
    western part of Russia or the Ukraine.

Oh well, so drop the Pershings as a first-strike threat.  That still
leaves us with the MX missile and the Trident missile, neither of
which would I want aimed at three quarters of my nuclear forces and my
command and control centers.  Particularly if the other side has an
SDI capable of mopping up what the Tridents and MXs missed.

------------------------------

Date: Tue 11 Nov 86 13:03:31-EST
From: Herb Lin <LIN@XX.LCS.MIT.EDU>
Subject: Re: Corrections on my factual uncertainties

From: <dm@bfly-vax.bbn.com>

    Of course, if you could put Poseidon missiles with their 14 warheads
    into a Trident submarine, you'd get 320 warheads on a Trident...).

My understanding is that you can load 14 warheads onto a Trident I or
a Trident II, but that we have chosen not to do so.

    The second correction Hank sent regarded the Pershing II missile,
    which I had described as a threat to the Russian capital:

	Everything I have read indicates that the Pershing II missile does not
	have the range to hit Moscow from Germany.  It can reach into the
	western part of Russia or the Ukraine.

    Oh well, so drop the Pershings as a first-strike threat.

Maybe.  The *Soviets* claim that the Pershing II *can* strike Moscow.
Maybe true, and maybe false, and maybe both.  I have heard that one
U.S. missile carried ballast to shorten its range so that it would
not count against some range limit.

Besides, there are ICBM launch control centers in the Western S.U.

------------------------------

End of Arms-Discussion Digest
*****************************