[mod.politics.arms-d] Arms-Discussion Digest V7 #75

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (12/01/86)

Arms-Discussion Digest                Monday, December 1, 1986 10:50AM
Volume 7, Issue 75

Today's Topics:

                               TCI goal
                          Launch on warning
           !!! Kremlin is purging dimwitted scientists !!!
                      acetylene anti-tank weapon
                          Launch on warning
                  AI and the Arms Race (from AILIST)

----------------------------------------------------------------------

Date: 30 Nov 1986 14:41:48-EST
From: Hank.Walker@gauss.ECE.CMU.EDU
Subject: TCI goal

I should clarify.  The goal of the five-year TCI (Tau Ceti Initiative) is to
demonstrate the engineering feasibility of sending a crew of humans from
Earth to Tau Ceti and having them return alive.  At the end of the TCI,
engineering development will begin.  If you think this is just around the
corner (say 30-50 years), I'd like to hear your ideas on accomplishing it.
Describe any fundamental breakthroughs you plan on using.
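
For a rough sense of scale: Tau Ceti is roughly 12 light-years away, so
even at a tenth of the speed of light (ignoring the time spent
accelerating and decelerating) the one-way trip alone takes on the
order of 120 years, and the round trip well over two centuries.  That
is the gap any proposal has to close.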

------------------------------

Date: Sun, 30 Nov 1986  15:33 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Launch on warning


    From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>

    >   The USAF *implements* national policy, which is supposed to be
    >   done in a way that conforms to guidance provided by the National
    >   Command Authority.  If they are in fact subverting it, then I
    >   see the basis for your complaint, but then the NCA is not the
    >   target of your complaints; it should be the USAF.

    Weinberger is the target for my complaint -- the USAF reports to him,
    as do the other organizations responsible for the presently operated
    LOWC.  His order could prohibit operation of the LOWC.

Then it is not the Air Force that sets policy.  In that case, you have
to look at the statements of the NCA.  What the USAF says on this is a
red herring, since they don't set policy.

    >   I'm still not coming through.  Does the phrase "LOW policy"
    >   mean: (1) "policy governing whether we will attempt to execute
    >   an LOW" or (2) "policy that we will in fact attempt to execute
    >   an LOW".

    According to my definition, it means (1).  According to your
    definition, it means (2).  That's my understanding.

OK.  Then you should not use the phrase "LOW policy" to mean #2.  

    >       ... we agree that any set of procedures that is preset to *attempt*
    >       LOW in some circumstances is fairly called having a policy of
    >       launch on warning?
    >
    >   No.  It is a policy *ON* (the subject of) LOW.  The specific
    >   circumstances are what constitute the policy.

    Hmm.  It seems you've just answered "yes", not "no".  I did mean
    that the circumstances would be predefined, so it fits your reply
    re policy.

A policy *OF* LOW means that when we receive warning, we will launch.
A policy *ON* LOW (identical to your usage of the phrase "LOW policy",
though not mine) means that when we receive warning, we MIGHT (try to)
launch, depending on the circumstances.

    >   But then you have to deal with the fact that ALL systems for
    >   providing information to the NCA can be faulty and are not 100%
    >   reliable.

    With a war in full swing, of course decisions must
    be made on less than perfect information.

But during peacetime, decisions are also made on less than perfect
information.

    [LOWC] concerns conflict *initiation*, or nuclear
    *escalation*.

When missiles are on the way, conflict has already started; it is no
longer a question of initiation, if you grant that the sensors are
correct.  I know you don't grant that, but then the point I raised
above must be answered.

    if you know of any
    other such dangerous, conflict initiation/escalation (and therefore
    illegal) sets of procedures, I'd likely sue against them also,
    except the LOW issue has my hands full already.

Then you will be suing against the entire doctrine of flexible
response, which posits that nuclear weapons can and will be used to
backstop a failed conventional defense.

The LOW issue is just a subset of the general escalation argument, and
in my opinion, you should put that point up front.

    the Atomic Energy Act does not give the Pres. any right to
    decide to use nukes in peacetime, any more than he can use any
    other kind of bomb in peacetime.

But the Constitution DOES give him the right to use bombs in peacetime,
to repel attack.

    >       Would you support a peacetime 30-minute timelock on MX/Minuteman?
    >
    >   I'd have to hear better arguments than I have heard so far, but
    >   under some circumstances, I would be willing to support such
    >   locks.  However, I cannot support going any farther than that.

    What circumstances do you have in mind?

I'm not sure I have any particular circumstances in mind.  I just
haven't heard a convincing argument that I should exclude them.  I
have heard convincing arguments that it is desirable to maintain an
LOW option, and I haven't been convinced by your arguments that we
should not.

------------------------------

Date: Mon, 1 Dec 86 01:31:38 PST
From: weemba@brahms.berkeley.edu (M P Wiener)
Subject: !!! Kremlin is purging dimwitted scientists !!!

The following is shamelessly stolen from the 2 Dec 1986 edition of the
WEEKLY WORLD NEWS.  (You couldn't have missed that issue while shopping:
it had the banner headlines about the five-week long pregnancy [Bulgarian
natch] and a recipe for cooking Thanksgiving turkeys in the dishwasher.)
 -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
    << Lame-brained Russians try to fix computer -- with a hammer >>

	!!!	 Kremlin is purging dimwitted scientists      !!!

    Soviet officials launched a massive investigation into the training
  of technical personnel after a repairman tried to fix a sophisticated
  missile guidance system with a hammer, a screwdriver and an oil can.
    A recent East German defector, Dr. Hermann Franz, blew the lid off
  the shameful state of Soviet technical know-how in a scathing letter
  to top science journals upon his arrival in the West.
    The computer scientist, who is now living in France, claims there
  is a very real danger that a poorly-trained Russian technician might
  accidentally start World War 3.
    ``The repairman with the oil can is a glaring example of their in-
  eptitude,'' said the expert.  ``He was assigned to one of the most
  sensitive missile bases in the U.S.S.R.
    ``And yet, when he was called on to repair a circuit problem in a
  computer console, he showed up with carpenter's tools.
    ``First he walked over and kicked it.  Then he said, `Something is
  stuck.'  I thought he was joking until he started squirting oil and
  blew every circuit in the control center.
    ``It took six weeks to repair the damage -- six weeks to do a job
  that qualified technicians could have done in a matter of days.''
    Horrifyingly, the missile base near the foot of the Ural Mountains is
  armed with some of the Soviet Union's most powerful intercontinental
  missiles and nuclear warheads, Dr. Franz said.
    A Soviet Air Force spokesman angrily denied the allegations, call-
  ing Soviet technicians ``the finest in the world.''
    One highly-placed military source conceded that Soviet training
  programs are being investigated.  But he insisted that the investi-
  gation was routine.
    Meanwhile, Dr. Franz has called on Western politicians and scien-
  tists to pressure the Soviets into monitoring the work of their tech-
  nicians more closely.
    ``The specter of nuclear holocaust is frightening enough,'' he
  said, ``without having to worry about some dimwit starting the war
  that would kill us all.''
						          -- Derek Clontz
 -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
I don't quite follow the logic of that last quotation.  Personally I'm
more worried about some of the "dimwits" at the other end of the nuclear
chain of command.

I have two questions:

Q1) Can anyone identify a quote top science journal unquote that is
publishing Dr Franz' letter?  (Heck, while we're on a roll, can anyone
confirm the "Clark Gable's our god, says lost island tribe" story?)

Q2) What is known/believed about Soviet failsafe mechanisms?

ucbvax!brahms!weemba	Matthew	P Wiener/UCB Math Dept/Berkeley CA 94720
  "S.H.I.E.L.D. trained me in HARD SCIENCE, Clint--not TIME TRAVEL--!"

------------------------------

From: wesm@mitre-bedford.ARPA
Subject: Re: acetylene anti-tank weapon
Date: Mon, 01 Dec 86 08:10:34 EST

>>From: Jef Poskanzer <unisoft!charming!jef@ucbvax.Berkeley.EDU>
>>
>>Very interesting.  Did the article say anything about what effect
>>acetylene would have on gas turbine engines?

	The article does not specifically mention gas turbine engines, but
goes on to say...

	"In binary munition form (calcium carbide and water), it can stop not
only most land vehicles, but most of the low-end-mix ships used by our
adversaries. It is capable of disabling all vessels in service with the Soviet
Bloc states of Albania, Algeria, Bulgaria (except for two Riga-class
frigates), Cuba, Ethiopia, East Germany (except sail training ships), North
Korea, Libya, Nicaragua, Poland (except one Kotlin-SAM and the odd trawler),
Romania, Syria, Vietnam, South Yemen, and Yugoslavia (except midget submarines
and sail training ships)."

	I would infer from this that not all engine types are affected, but
that the vast majority are.  This last piece was obviously written from a
naval point of view.  The article also says that the gas is untraceable when
properly dispensed and will cause no loss of life.  The sabotage
possibilities are endless.
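
	For what it's worth, the binary reaction presumably being relied
on here is the ordinary carbide-lamp hydrolysis,

		CaC2 + 2 H2O  ->  C2H2 + Ca(OH)2

i.e. calcium carbide plus water gives off acetylene gas (plus slaked
lime), which matches the "calcium carbide and water" binary form
described in the article.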

	Wes Miller      wesm@mitre-bedford.arpa

------------------------------

Date: Sunday, 30 November 1986  19:21-EST
From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>
To:   LIN, arms-d
Re:   Launch on warning

Lin>  A policy *OF* LOW means that when we receive warning, we will
Lin>  launch.  A policy *ON* LOW (identical to your usage of the
Lin>  phrase "LOW policy", though not mine) means that when we receive
Lin>  warning, we MIGHT (try to) launch, depending on the
Lin>  circumstances.

I think it's valuable for you to expand your first definition as
follows:  A policy *OF* LOW means that when we receive warning, we
will *try to* launch.

Lin>  But during peacetime, decisions are also made on less than
Lin>  perfect information.

But not a decision to initiate war, and not by the wrong people.

>    you will be suing against the entire doctrine of flexible
>    response, which posits that nuclear weapons can and will be used
>    to backstop a failed conventional defense.
>    The LOW issue is just a subset of the general escalation
>    argument, and in my opinion, you should put that point up front.

I am suing against first-use, but I leave out other escalatory
steps, e.g., from demo-nuc to tac-nuc, and from tac-nuc to
IC-exchange.

>    But the Constitution DOES give him the right to use bombs in peacetime,
>    to repel attack.

Only to repel *sudden* attack, when there's not enough time for
Congress to act; and not to repel a non-attack.

>    I'm not sure I have any particular circumstances in mind
>    (for a 30-minute timelock on MX/Minuteman).  I just
>    haven't heard a convincing argument that I should exclude them.  I
>    have heard convincing arguments that it is desirable to maintain an
>    LOW option, and I haven't been convinced by your arguments that we
>    should not.

Are you convinced that the day-to-day operation of a LOWC is
justified to protect against a bolt-from-the-blue decapitation
scenario, or conversely?

------------------------------

Date: Wednesday, 26 November 1986  12:58-EST
From: eugene at titan.arc.nasa.gov (Eugene Miya N.)
To:   AIList, arms-d
Re:   AI and the Arms Race

>Will Clinger writes:
>In article <2862@burdvax.UUCP> blenko@burdvax.UUCP (Tom Blenko) writes:
>>If Weizenbaum or anyone else thinks he or she can succeed in weighing
>>possible good and bad applications, I think he is mistaken.
>>
>>Why does Weizenbaum think technologists are, even within the bounds of
>>conventional wisdom, competent to make such judgements in the first
>>place?
>
>Is this supposed to mean that professors of moral philosophy are the only
>people who should make moral judgments?  Or is it supposed to mean that
>we should trust the theologians to choose for us?  Or that we should leave
>all such matters to the politicians?
>
>Representative democracy imposes upon citizens a responsibility for
>judging moral choices made by the leaders they elect.  It seems to me
>that anyone presumed to be capable of judging others' moral choices
>should be presumed capable of making their own.
>
>It also seems to me that responsibility for judging the likely outcome
>of one's actions is not a thing that humans can evade, and I applaud
>Weizenbaum for pointing out that scientists and engineers bear this
>responsibility as much as anyone else.
>
>William Clinger

The problem here began in 1939.  It is the problem of science's
relationship to the rest of democracy and society.  Before that time
science was a minor player.  That is when the physics community (in
the persons of Leo Szilard and Eugene Wigner) went to Albert Einstein
and said: look at these developments in nuclear energy and look where
Nazi Germany is going.  He in turn, as a public figure (like Carl
Sagan in a way), went to Roosevelt.  Science has never been the same.
[Note we also get more money for science from government than ever:
see the discussion on funding math in which Halmos was quoted.]

What Tom did not point out is whether or not scientists and engineers
have "more" responsibility.  Some people say that since they are in
the know, they have MORE responsibility; others say no, this is a
democracy, so they have EQUAL responsibility, but judgments MUST be
made by its citizens.  In the "natural world," many things are not
democratic (is gravity autocratic?)... well, these are not the right
words, but they illustrate the point that man's ideas are sometimes
feeble.

While Weizenbaum may or may not weigh moral values, he is in a
unique position to understand some of the technical issues, and he
should properly steer the understanding of those weighing moral
decisions (as opposed to letting them stray): in other words, yes,
to a degree he DOES weigh them, and yes, he DOES color the argument
with his moral values.  [The moral equivalent of making moral judgments.]

An earlier posting pointed out that the molecular biologists
restricted specific types of work at the Asilomar meeting years ago.
In the journal Science, it was noted that much of the community feels,
looking back, that it shot itself in the foot, and that current
research is being held back.  I would hope that the AI community would
learn from the biologists' experience and either not restrict research
(perhaps too ideal) or at least not end up gagging itself.  Tricky
issue; why doesn't someone write an AI program to decide what to do?
Good luck.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  {hplabs,hao,nike,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

------------------------------

End of Arms-Discussion Digest
*****************************