[mod.politics.arms-d] Arms-Discussion Digest V7 #51

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (11/08/86)

Arms-Discussion Digest              Saturday, November 8, 1986 12:58PM
Volume 7, Issue 51

Today's Topics:

                       prompt response (2 msgs)
                      on the perfectibility of SDI
                       24 hour waiting period?
                           SDI Assumptions
                       Mid-course Interceptions
                            Administrivia
   comments posted to Weizenbaum's recent speech (from Vision List)
          killing someone if we had to look them in the eye
                              test bans
      Professionals and Social Responsibility for the Arms Race

----------------------------------------------------------------------

Date: Friday, 7 November 1986  18:33-EST
From: cfccs at HAWAII-EMH
To:   LIN
Re:   prompt response

How would the "entire target set" be coordinated without
communications?  What if a critical part of the target set was to have
been covered by a submarine that was destroyed?  What about their
submarine force continuing to destroy our remaining cities and
scattered forces?  What about the homeland which is now a radioactive
wasteland?  What you are painting is a remote chance at a
come-from-behind victory in which the U.S. submarine forces save the
day by destroying the rest of the world, sealing their own fate.

Now if we go back to the 24-hour wait before launch...let's not let
the hope of Soviet victory exist so we won't have to try it to settle
an argument.  To say "launch-on-warning" does not mean we *must*
launch when we receive a warning signal.  It simply means we have
warned that if we feel we are being attacked, we will retaliate.  It
also prevents an attack coming in the form of a nuclear test!

Gary Holt
CFCCS @ HAWAII-EMH

------------------------------

Date: Sat 8 Nov 86 12:57:25-EST
From: Herb Lin <LIN@XX.LCS.MIT.EDU>
Subject: prompt response

    From: cfccs at HAWAII-EMH
    
    How would the "entire target set" be coordinated without
    communications?  What if a critical part of the target set was to have
    been covered by a submarine that was destroyed?

Under the type of attack on the U.S. you posited, what matters is to
eliminate the Soviet Union as a functioning national entity.  Under
this type of response, there is no critical part of the target set.

    What you are painting is a remote chance at a
    come-from-behind victory in which the U.S. submarine forces save the
    day by destroying the rest of the world, sealing their own fate.

This statement says it all.  Under these circumstances, there is no
such thing as victory, for either us or them.  The whole point is
deterrence, which if it holds saves the day.  If it fails, the day is
lost, by whatever metric of "victory" you choose.
    
    To say "launch-on-warning" does not mean we *must*
    launch when we receive a warning signal.  It simply means we have
    warned that if we feel we are being attacked, we will retaliate.

I do believe that the U.S. should maintain an LOW option, for the
reason that it is a hedge against breakthroughs in ASW.

    It also prevents an attack coming in the form of a nuclear test!

Other things prevent that.  

------------------------------

Subject: on the perfectibility of SDI
Date: 07 Nov 86 19:45:33 EST (Fri)
From: dm@bfly-vax.bbn.com

Proponents of SDI are often puzzled (or aggravated) that opponents
keep harping on how you can't make a ``perfect'' defense.  The defense
doesn't have to be perfect, it just has to be ``pretty good'', they
think.

Well, this opponent of SDI keeps insisting on perfection for a couple
of reasons:

	1) That's what the President (and the Secretary of Defense)
	are promising the people.  The taxpayers think they're getting
	a perfect defense (remember the rainbow umbrella peace-shield
	commercials on TV two years ago?).  If they knew they were 
	getting something that was less than perfect, particularly 
	something that was only good for defending missile silos, 
	they might want to explore cheaper options, like disarmament.

Big deal, Presidents have always had hidden agendas, and have always
misled the public.  The electorate and Congress need to be informed
that they are being sold a bill of goods.

	2) What is a likely Soviet reaction to our building SDI AND
	retaining our 40,000 warheads aimed at them?

I want to explore this second issue in some detail.

Assume we build an SDI system, what do we do with OUR nuclear weapons?

Do we dismantle them?  We have SDI, so we won't need the nuclear
weapons to retaliate against a Russian attack, because we're
invulnerable.  But wait, you say -- the defense isn't perfect.  We
still need to retain our warheads in order to DETER the Russians from
attacking us.

Well, fine, except when Reagan was selling SDI, one of his arguments
was that it was a moral alternative to mutually assured destruction.
Now we've still got MAD, plus SDI.  Goodbye moral SDI.

Do we therefore NOT dismantle them?  Do we retain them?  Okay, let's
look at the consequences of that.

Put yourself in the shoes of a Russian leader.  There is the USA, with
an SDI system that is clearly only partially effective, and will
almost certainly let several hundred of your warheads through.  Yet
they also have a lot of MX and Trident missiles that really look like
they're first strike weapons, plus those Pershing-2 missiles just 10
minutes from your capital.  Even worse, three-quarters of your
warheads are on land-based sitting duck ICBMs, and you have only a
handful of warheads on relatively secure submarines (the Russians have
only about 20 submarines at sea at a time, partly because they don't
trust their submarine commanders to come back).

Gosh, those Americans sure look like they're gearing up for a first
strike.  Maybe the only purpose of SDI is to protect them from YOUR
retaliation with the few weapons you'll have left after their first
strike.  It may not be any good against an all-out attack on your
part, but it's plenty good for mopping up any remnants of your nuclear
forces you might try to use for retaliation.  

(This is the obvious reply to the rhetorical question, ``If SDI's no
good, how come the Russians fear it so much?'' -- They fear it partly
BECAUSE it's no good as a defense against a first strike, but it sure
looks like it might work against a ragged retaliatory strike.)

You have two courses of action open to you:

	1) Maybe you should launch your weapons on warning of an 
	American attack, in hopes of getting some of your warheads 
	through the SDI to their targets in the US.  (The Russians 
	have announced that, in the face of deployment of SDI, they'll
	have to go to a launch-on-warning policy.)

	2) Maybe you should build more missiles, as well, to 
	be better assured of having more surviving weapons to
	use in retaliation to an American first strike, in hopes
	of getting some weapons through their SDI.

In sum, SDI plus missiles on our side means more Russian weapons on a
hair trigger.  To paraphrase an anti-Pershing-2 pamphlet I once
saw, ``Are Russian computers any good?  You'll be betting your life
they are...''  I hope Russian computers are better at distinguishing a
flock of geese or a rising moon from a missile attack than ours have
been.

So SDI plus missiles on our side is a lose, because the situation is
much less stable and the likelihood of an accident or misjudgement
triggering a nuclear attack on the US is much greater.

To be an (expensive) IMPROVEMENT over the existing situation, we have
to be willing to trust SDI enough to give up most of our missiles.  In
particular, a President has to be willing to bet the life of every
American and the survival of the American Way on the reliability of
SDI, enough to give up the option of retaliation.

That's why opponents of SDI keep harping on the imperfectibility of
the SDI system.  An imperfect SDI, in the presence of thousands of
missiles, will just make things worse.

If you don't believe me, would you believe Freeman Dyson?  In
``Weapons and Hope'' he goes through very much the same argument I
have above, and reaches the same conclusion, that SDI in the face of
thousands of warheads is a horrible mistake.  

Dyson thinks that SDI in the face of a few hundred warheads is
probably a good thing, and recommends we reduce arms first, then
deploy SDI.  I might agree to this provided both sides eliminated
ICBMs (something that it is possible to verify with existing
technology).  With an SDI system, Soviet cheating on the ICBM ban
wouldn't be a serious problem.

With bombers and cruise missiles it is much harder to launch a first
strike.  Before your bomber or cruise missile got to Omaha, SAC's B-52s
would be in the air.  This is an important fact (mentioned in the IEEE
Spectrum special issue on verification and arms control this year),
because limitations on bombers and cruise missiles (particularly
cruise missiles) are much harder to verify, so they'd be harder to
limit with a treaty.

Let's look at SDI in a cost-benefit analysis.  It is one of a number
of techniques we can apply in our search for security.  Is it cheaper
and more reliable than the options?  Considering that one of the
options is a verifiable treaty reducing arms, I don't think so.
Particularly since, as I've argued in this message, it appears that
for SDI to make us more secure, it must be PRECEDED by arms reduction.

------------------------------

Date:     Fri, 7 Nov 86 21:11 EDT
From:     "Paul F. Dietz" <DIETZ%slb-test.csnet@RELAY.CS.NET>
Subject:  24 hour waiting period?


Someone has advanced the position that the US should wait 24 hours
after a nuclear attack before retaliating.  A response was that
this would eliminate US bomber forces.  The proponent claimed US
bombers could stay up this long, pointing to the fact that NEACP (sp?)
can stay up three days.

Well, I looked that up; the command plane can stay up for *at most*
three days, given sufficient in-air refueling.  The three day time
limit comes from the engines running out of lubricant.  After a
large nuclear strike there would likely be little in-air refueling
capability left.

There *is* one strategic weapons system that could still be flying
three days after an attack: the nuclear airplane!  Too bad we
didn't build it... (:-)).

------------------------------

Subject: SDI Assumptions
Date: Fri, 07 Nov 86 18:45:59 -0800
From: crummer@aerospace.ARPA

This is a comment on Prairie Dan's note of October 27.

Engineers are people who build things.  No engineer, except in his
spare time, addresses the question "Can X ever, ever be done?"  He is
only interested in making the judgement as to whether he can do X NOW
or not.  A research engineer is interested in whether or not he can
figure out how to do X NOW.  A physicist is interested in how the
physical world works, not, as a physicist, how to build anything (that
is for the engineer to do).

A physicist will say to the engineer, "I just found out that the world
works like this: ... ." and the engineer says, "That's interesting.
That means that I should be able to make a widget to do X.  I am going
away to see if I can figure out how to build it or maybe I'll discover
that I can't build it NOW."

An example: According to the general theory of relativity an
anti-gravity effect would be produced by an accelerated mass.  A
device to utilize this effect would be a massive torus spinning with a
constant angular acceleration.  Such a device generating a field
strong enough to lift itself off the surface of the earth, however,
would have the cross-sectional area of a football field, be made of
collapsed matter, and would have to be accelerated at an angular rate
of the order of hundreds of thousands of radians per second per second
depending on the value of its large diameter.  After calculating these
"ballpark" figures, the engineer says, without fear of contradiction
by the serious engineering community, "Well, I can't build this device
NOW and I am not even interested in embarking on an engineering effort
to do any investigating of such a device.  It is at least 40 orders of
magnitude away from possibility."  Star Wars is not this far away from
reality.  The sensor technology, for instance, is maybe only 4 - 6
orders of magnitude away from reality: where thousands of 64 x 64
arrays of sensors of a specified sensitivity are needed, not one such
sensor, let alone a full array, has yet been made.
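
The torus figures can be sanity-checked with a rough back-of-envelope
sketch.  Everything here beyond the ~10^5 rad/s^2 angular acceleration
is an assumption for illustration: the Lense-Thirring-style scaling
a ~ (G/c^2) * I * alpha / r^2 is a crude simplification rather than a
real general-relativistic calculation, and the 50 m major radius and
5 m minor radius are invented, football-field-scale numbers.

```python
import math

# Back-of-envelope check of the spinning-torus "anti-gravity" numbers.
# Assumed scaling (illustrative only): induced acceleration
#   a ~ (G / c^2) * I * alpha / r^2
# where I is the torus's moment of inertia and alpha its angular
# acceleration.  This is NOT a rigorous GR computation.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
g = 9.8         # acceleration needed to lift the device, m/s^2

r = 50.0        # torus major radius, m (assumed, football-field scale)
alpha = 1.0e5   # angular acceleration, rad/s^2 (order given in the text)

# Moment of inertia required to get a = g at radius r:
I = g * r**2 * c**2 / (G * alpha)

# Thin-ring approximation: I ~ M * r^2, so the required mass is
M = I / r**2    # comes out around 1e23 kg -- small-moon scale

# Mean density if that mass fills a torus of minor radius ~5 m (assumed)
a_minor = 5.0
volume = 2 * math.pi**2 * r * a_minor**2
density = M / volume   # comes out far above nuclear density

print(f"required mass:   {M:.1e} kg")
print(f"implied density: {density:.1e} kg/m^3")
```

Even with these generous simplifications, the required mass is on the
order of a small moon and the implied density exceeds that of nuclear
matter, which is consistent with the "collapsed matter" and
"tens of orders of magnitude" language above.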

All it takes, however, to start such a project is for a popular
president to speak it into existence.  In place of the process
outlined above, we get: "Here's money, now DO IT!"  Feasibility cannot be spoken
into existence no matter HOW popular the president is!

  --Charlie

------------------------------

Subject: Mid-course Interceptions
Date: Fri, 07 Nov 86 19:22:41 -0800
From: crummer@aerospace.ARPA

>Date: Sun, 2 Nov 1986  12:41 EST
>From: LIN@XX.LCS.MIT.EDU
>Subject: Boost phase interceptions
>
>    From: Clifford Johnson <GA.CJJ at forsythe.stanford.edu>
>
>    I don't know of any scenarios
>    for it that flunk boost phase and get an acceptable shootdown rate
>    thereafter.  Do you?
>
>Another concept, discussed in candidate architectures, is one in which
>you do very effective mid-course discrimination without doing
>boost-phase intercept, perhaps using interactive discrimination with
>neutral particle beams.  Then you only multiply the targets by a
>factor of 10, rather than 100 or 1000.

With the miserable signal/noise ratio that a sensor would have looking
at a cool target against a star field, how would these particle beams
be pointed?  I'm sure the beam can't be fired and steered at the rates
necessary to "paint the sky" and find the objects even if interactive
discrimination would work.  Herb, do you have any more information on
the mid-course fantasy?

  --Charlie  

------------------------------

Date: Sat, 8 Nov 1986  08:07 EST
From: LIN@XX.LCS.MIT.EDU
Subject: Administrivia

    Message failed for the following:
    APRI1801%UA.BITNET@WISCVM.WISC.EDU: 550 Unknown Host 'UA.BITNET'
    FJOHNSO3%UA.BITNET@WISCVM.WISC.EDU: 550 Unknown Host 'UA.BITNET'
    RSHEPHE%UA.BITNET@WISCVM.WISC.EDU: 550 Unknown Host 'UA.BITNET'

These guys have been removed from the list until UA becomes better
known to my mailer.

------------------------------

Date: Sat, 8 Nov 1986  08:13 EST
From: LIN@XX.LCS.MIT.EDU
Subject: comments posted to Weizenbaum's recent speech

The following comments appeared on VISION-LIST regarding the
Weizenbaum speech I posted on Arms-d.

                    ==============================

Jay Glicksman's comments:

The main question that I see arising from the talks is: is it time
to consider banning, halting, slowing, or otherwise rethinking
certain AI or technical adventures, such as machine vision, as was
done in the area of recombinant DNA?



Greg Orr's comments <orr@ads.arpa>:

both weizenbaum and wang have interesting arguments on the moral
and military implications of machine vision research, i would like
to add an economic theme to this discussion.

Assumptions

1. even without intense development in weapons technology
the WEST is PROBABLY safe from invasion or intimidation by the
Comunists for X years, if nominal force maintenance and
improvements in logistics, training, organization, etc are made.

2. the differential payoffs to doing machine vision
and other sorts of research for business purposes rather than for
military purposes would be significant (arguments about the
efficiency of spinoff vs directed development apply here.  as does
the argument for reverse spinoff ie. there are very few aspects of
research into remote sensing technology (for earth resources,
pollution monitoring...) that could not be utilized by the
military).  the money not spent on weapons research would be spent
on civilian research or not spent and the deficit would
be reduced (improving our nation's long-term economic prospects).

3. while russia and usa have been building up large arsenals and
fighting wars for the last 20 years, japan and others have been
making progress towards economic control of the world.  they are
not the only ones making economic/technological progress, but they
are concentrating more on this aspect of international relations
than many other countries and are having quite a bit of success.

4. we may be able to threaten to bomb japan if they don't give us
favorable trade concessions, marketing rights, ... ; but this
seems about as unlikely as actually bombing them because our trade
deficit is too large.  aside from the spinoffs of military
research into commercial products (something the japanese pick up
on almost as readily as do the developers of the technology) and
support of research labs and training of scientists, the only
international economic benefits of our military research are our
weapons sales to other countries (and this may be a very costly
activity in the long term). 

5. we are losing an economic/technological competition while
maintaining a military stalemate (though not all the reasons for
this are military).  to win this sort of war we must use all our
resources efficiently (human research talent and R&D budgets).  

suggestion

	give up on the WEAPONS OF DEATH research for X years and
apply the resources of our country (and other countries in the
WEST) to strengthening our economic and social infrastructures.
if the 50% or so of our research scientists and engineers that
work for the military are as good as they claim (well maybe not
that good, but adequate anyway), then perhaps in X= 10 or 20
years we would actually have the security our military
establishment is always telling us they are working towards,
and morally and economically better lives as well.

soapbox commentary

there are many ways one can influence the political
decisions of a democracy (legally); voting, jury duty, supporting
a political party, running for office, writing, teaching and
leading an exemplary life.  a good citizen should be involved in
more than just one of these activities, and for more than just one
reason.  standing up for moral principles in research may make a
difference to a small community of researchers, but without some
coupling into the overall political process it is the sort of act
that satisfies the conscience of a few without changing much in the
underlying social problems that lead to such moral crises.

pragmatically i would worry much more about how i can influence
others to adopt sane, responsible and humanitarian views about the
future of the world, than about improving the tracking efficiency
of an overpriced missile designed to take out an overvalued tank.
i guess i believe in the cliche "guns don't kill people, people do."
to which i would add - smarter guns may kill more people more
efficiently, but smarter people would need to kill fewer people.

now down from the soapbox and into the voting booth.

			\|/
		       >GLO<
			/|\

------------------------------

Date: Sat, 8 Nov 86 09:55:39 PST
From: ihnp4!utzoo!henry@ucbvax.Berkeley.EDU
Subject:  RE: killing someone if we had to look them in the eye

> Of course we'd kill them.  Modern (and not so modern) armies are desensitized
> to the violence they commit.

Also, when one is in the front lines, shooting at the opposition seems like
a good way to survive.  Soldiers do get desensitized to violence, but this
doesn't turn them into rabid killing machines, the apparent views of some
liberals notwithstanding.  Combat-experienced infantrymen, at least from
Western cultures, are generally pacifists; patriotism and the glory of war
cut little ice with them.  They fight because it's suicidal not to in such
an environment, and they stay in that environment because they don't want
to leave their buddies short-handed.  (The effectiveness of combat infantry
deteriorates badly if the men haven't trained together long enough to form
"buddy" relationships, which is why smart commanders put a high priority on
training together *as a unit* before going into combat, regardless of the
level of experience of the troops.)  This doesn't necessarily apply to
elite units or non-Western soldiers, mind you.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

Date: Sat, 8 Nov 86 09:55:46 PST
From: ihnp4!utzoo!henry@ucbvax.Berkeley.EDU
Subject: test bans

> Let's not forget a very important point.  You may not like the Soviets'
> politics, but they are human beings.  It is in the self-interest
> of BOTH nations to end the arms race.  In a nuclear war, everyone loses.

Let's not forget another very important point:  an arms race and a nuclear
war are not the same thing.  Since the Soviets are human beings, few of
them will dispute that it is in their self-interest to avoid nuclear war.
The contention that ending the arms race is vital to achieving this goal
is plausible, not an obvious fact, and there won't be nearly such
unanimity there.  (Personally I think this contention is probably
correct, but there is room for debate and uncertainty.)

Also, even if we stipulate that it is in the self-interest of a nation
to end an arms race, nations as a whole do not make decisions; their leaders
do.  The self-interest of a nation's leaders does not necessarily coincide
with the self-interest of the nation.  Using the self-interest of a nation
to infer what that nation's leaders are likely to do is dubious.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

Date: Sat, 8 Nov 86 09:55:53 PST
From: ihnp4!utzoo!henry@ucbvax.Berkeley.EDU
Subject:   Professionals and Social Responsibility for the Arms Race

> ... This year, Dr. Weizenbaum of MIT was the chosen speaker...
> The important points of the second talk can be summarized as :
>    1) not all problems can be reduced to computation, for
>       example how could you conceive of coding the human
>       emotion loneliness.

I don't want to get into an argument about it, but it should be pointed
out that this is debatable.  Coding the emotion of loneliness is difficult
to conceive of at least in part because we don't have a precise definition
of what the "emotion of loneliness" is.  Define it in terms of observable
behavior, and the observable behavior can most certainly be coded.

>   2) AI will never duplicate or replace human intelligence
>      since every organism is a function of its history.

This just says that we can't exactly duplicate (say) human intelligence
without duplicating the history as well.  The impossibility of exact
duplication has nothing to do with inability to duplicate the important
characteristics.  It's impossible to duplicate Dr. Weizenbaum too, but
if he were to die, I presume MIT *would* replace him.  I think Dr. W. is
on very thin ice here.

>    5) technical education that neglects language, culture,
>       and history, may need to be rethought.

Just to play devil's advocate, it would also be worthwhile to rethink
non-technical education that covers language, culture, and history while
completely neglecting the technological basis of our civilization.

>    8) every researcher should assess the possible end use of
>       their own research, and if they are not morally comfortable
>       with this end use, they should stop their research...
>       He specifically referred to research in machine vision, which he
>       felt would be used directly and immediately by the military for 
>       improving their killing machines...

I'm afraid this is muddy thinking again.  *All* technology has military
applications.  Mass-production of penicillin, a development of massive
humanitarian significance, came about because of massive military funding
in World War II, funding justified by the tremendous military significance
of effective antibiotics.  (WW2 was the first major war in which casualties
from disease were fewer in number than those from bullets etc.)  It's hard
to conceive of a field of research which doesn't have some kind of military
application.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

End of Arms-Discussion Digest
*****************************