[mod.politics.arms-d] Arms-Discussion Digest V6 #108

ARMS-D-Request@XX.LCS.MIT.EDU.UUCP (06/13/86)

Arms-Discussion Digest                  Thursday, June 12, 1986 8:11PM
Volume 6, Issue 108

Today's Topics:

                            Administrivia
                        Save 5%?  Why bother?
               Modelling disarmament and the arms race
                 Prisoner's Dilemma and the Arms Race
                          Sgt York software
                          Sgt. York software
                       Scorpions-in-the-bottle
                             debate style
                      Third World dictatorships

----------------------------------------------------------------------

Date: Thu, 12 Jun 1986  13:39 EDT
From: LIN@XX.LCS.MIT.EDU
Subject:  Administrivia

From time to time I get messages indicating bad addresses and the
like.  Because ARMS-D is redistributed from various points, I am
often unable to track down these addresses.  In the future, I will
place those that I can't find into the digest, and ask the local
redistribution sites to handle them.  I would make a personal
appeal, but often I don't know the right contact point.  Please
cooperate.

The first of these is

DCSMITH@SRI-KL.ARPA: No such directory name

------------------------------

Date: 0  0 00:00:00 CDT
From: <mooremj@eglin-vax>
Subject: Save 5%?  Why bother?
Reply-To: <mooremj@eglin-vax>

Warning: this note contains no facts whatsoever, only personal opinions.

There have been several recent articles which imply that a defense (SDI
or otherwise) which saves 5% of the U.S. population from nuclear attack
is worthwhile.  It may sound heartless, but my response to that is:

			Why bother?

If the nuclear winter theory is correct, an attack large enough to kill
95% of the population will certainly trigger a nuclear winter, thereby
killing most or all of the remaining life on earth.  Even if the nuclear
winter theory is false, a 95% kill would leave our present society in ruins;
I believe that the many books and movies that depict post-holocaust life as
"nasty, brutish, and short" are close to the mark; if anything, they may be
too optimistic.  I have no desire to live in that kind of world.  Fortunately,
I'll probably be in the other 95% (and so will you, if you're reading this).

Defense against massive nuclear attack must be highly successful, or else it
is pointless.  Talk about saving 5% (or 10%, or X%) is frightening.  Implicit
in the idea that saving 5% is worthwhile is the idea that losing 95% is worth
planning for, and hence "acceptable" in some sense.  If such a loss is 
considered "acceptable", nuclear attack will be a much more viable strategy 
than it would be otherwise.

				Martin Moore

------------------------------

Date: Thu, 12 Jun 86 09:02:06 pdt
From: weemba@brahms.berkeley.edu (Matthew P. Wiener)
Subject: Modelling disarmament and the arms race

I'm all in favor of careful studies of the consequences, both long and
short term, to be done *before* engaging in any disarmament, unilateral
or bilateral.

I'm also strongly in favor of careful studies of the consequences, both
long and short term, of an arms race *before* engaging in such.

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

PS-  I did not save a copy of my long article on the Prisoner's Dilemma
     that you just received, so please be careful!

------------------------------

Date: Thu, 12 Jun 86 08:39:23 pdt
From: weemba@brahms.berkeley.edu (Matthew P. Wiener)
Subject: Prisoner's Dilemma and the Arms Race

>>An interesting game theory question arises.  Is it better to cooperate
>>or confront in the long run?   [summary of some experiments]
>>              Over the long run, tit-for-tat produced better outcomes
>>for both parties.
>
>This is in fact what is known as the Prisoner's Dilemma.  I believe
>that the nuclear arms race is in fact the Prisoner's Dilemma.
>Unfortunately, one or both sides do not seem to know the optimal
>long-term strategy (cooperation).

The above is far too simplistic.  I give only a tiny part of what can be
done when studying the arms race from a game theoretic point of view.

An analysis of the arms race, verification, and deterrence in general
via two player games is given in [1].  A more theoretical study of this
and other unstable two player games is in [2].

[Abbr's: PD == Prisoner's Dilemma, Ch == Chicken, t/t == tit-for-tat]

PD is not really an accurate model of the current arms race.  One
difficulty is that in the arms race one does not know what the other
"player" has in fact played.  [1] rejects PD on other grounds, although he uses it
to model other superpower crises, including the threat of nuclear war used
by Nixon near the end of the Yom Kippur War.  For the arms race itself, he
suggests Ch is the appropriate game.

		Prisoner's Dilemma             Chicken
	   \  2
	  1  \    Coop    Defect            Coop    Defect
	       \ ----------------          ----------------
	Coop    |  BB       DA            |  BB       CA
		|                         |
	Defect  |  AD       CC            |  AC       DD

		      Payoffs: A is best, D is worst.

Superficially the same, they are definitely distinct in operation and
strategies.  In PD, once one player has decided the other will defect, then
so will the second; in Chicken it may or may not pay to follow suit.  Also,
within each model, it does make a difference just how far apart the payoffs
A-D are: just how much bigger is A over B, B over C, and C over D?

Actually, even within PD/Ch, it is not at all clear that cooperation is
the rational long-term strategy.  By rational here I do not mean *sane*,
of course, but only the technical he-who-has-the-most-toys-when-he-dies
sense of the term within game theory.  Indeed, game theoretic analyses
usually lead to probabilistic strategies.  In particular, it may very well
be rational to have a random number generator attached to the doomsday
button, but it certainly isn't *sane*.

Cooperation is definitely not a very rational one-shot PD/Ch strategy.  It
is unstable, in that both players are tempted to move from cooperation to
non-cooperation.  Non-cooperation, unfortunately, is stable.
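
To make those stability claims concrete, here is a small sketch in
Python (my own illustration, not anything from [1]).  The numeric
payoffs 3,2,1,0 are an arbitrary assumption respecting A > B > C > D;
as noted above, the actual gaps matter.

    # Encode both games and enumerate the pure-strategy equilibria:
    # cells where neither player gains by deviating unilaterally.
    A, B, C, D = 3, 2, 1, 0      # illustrative payoff values only
    COOP, DEFECT = 0, 1

    # matrix[row_move][col_move] = (row player's payoff, column's)
    GAMES = {
        "Prisoner's Dilemma": [[(B, B), (D, A)],
                               [(A, D), (C, C)]],
        "Chicken":            [[(B, B), (C, A)],
                               [(A, C), (D, D)]],
    }

    def pure_equilibria(m):
        eq = []
        for r in (COOP, DEFECT):
            for c in (COOP, DEFECT):
                row_ok = m[r][c][0] >= max(m[x][c][0] for x in (COOP, DEFECT))
                col_ok = m[r][c][1] >= max(m[r][x][1] for x in (COOP, DEFECT))
                if row_ok and col_ok:
                    eq.append(("CD"[r], "CD"[c]))
        return eq

    for name, m in GAMES.items():
        print(name, pure_equilibria(m))
    # Prisoner's Dilemma: only (D,D).  Chicken: (C,D) and (D,C) --
    # mutual defection is an equilibrium in PD but not in Chicken,
    # which is part of why the two games play out so differently.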

But what about long-run PD/Ch and the arms race and t/t?

Firstly, you have to be careful about what you mean by calling t/t the
best strategy.  It's the best when averaged against *all* competing
strategies, which is not really relevant to the actual arms race.  There
exist strategies that uniformly defeat t/t, but fail spectacularly
otherwise.  That is, t/t owes its reputation to its robustness (and its
simplicity).

Secondly, the real world is better modelled probabilistically.  Thus, if
one pitted two realistic t/t's against each other, rather than the pure
ones used in the experiments referred to, one of the two would at some
point randomly defect, leading to a bloody alternation of defection and
cooperation until one of them randomly called uncle and peace returned.
Not very optimal, although it may be realistic.
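
A quick simulation shows the effect.  Again this is only a sketch of
mine, not the cited experiments themselves, and the 1% accident rate
is an arbitrary assumption.

    import random

    # Two "noisy" tit-for-tat players in iterated PD: each intends to
    # echo the opponent's previous move, but with probability NOISE
    # accidentally plays the opposite.  Payoffs use A,B,C,D = 3,2,1,0.
    PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
              ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}
    NOISE = 0.01

    def flip(m):
        return 'D' if m == 'C' else 'C'

    def noisy_tft(rounds=1000, seed=1):
        random.seed(seed)
        last1 = last2 = 'C'              # both start by cooperating
        score1 = score2 = 0
        for _ in range(rounds):
            m1 = last2 if random.random() >= NOISE else flip(last2)
            m2 = last1 if random.random() >= NOISE else flip(last1)
            p1, p2 = PAYOFF[(m1, m2)]
            score1 += p1
            score2 += p2
            last1, last2 = m1, m2
        return score1, score2

    print(noisy_tft())
    # A single accidental defection starts an alternating
    # (C,D),(D,C),(C,D),... echo of retaliations that persists until
    # a second accident happens to cancel it -- the bloody
    # alternation described above.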

Thirdly, existing threats are of variable and not always accurately
perceived credibility.  Thus, your strategy might be very good in computer
simulation, but fail spectacularly if your opponent does not realize what
you are in fact doing and/or threatening.  This point is similar to the
mutual ignorance mentioned near the beginning.

Fourthly, the actual strategies and their expected values are usually
computed in terms of actual numeric values for the payoffs A-D.  It should
be obvious that the assignment of such values is not objective, nor can
one expect them to be constant over time.

I could go on, but I think I've made it clear that the true situation
is very very muddled.  But I suppose we all knew that anyway.

So why study these models?  Do they tell us anything?

There's a preliminary caution that, like statistics, these simple models
can be manipulated to give any end result one wants, and then be used to
"justify" any policy recommendations.  Typical of such mental looseness
is D. Hofstadter's _Metamagical Themas_.  Nevertheless, [1] believes that
understanding the dynamics of PD/Ch can be used to suggest approaches to
arms control.  The author thinks that
	... the great anguish that the apparent irrationality
	of deterrence has caused can perhaps be partially al-
	leviated by an understanding that perilous games like
	Chicken need not be fixed in concrete but are, instead,
	subject to manipulation that may enable the players to
	avoid humiliating subjugation or even more gruesome
	consequences.				 [1,pp 148-9]

References:
[1]	Brams, Steven J.
	Superpower Games.
	Yale University Press, 1985.
  ** Readable as long as you can handle high school math.

[2]	Campbell, Richmond; Sowden, Lanning (editors).
	Paradoxes of Rationality and Cooperation.
	University of British Columbia Press, 1985.
  ** Gets very technical quickly--not for the weak of heart.

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720
Any equilibrium--even if only an equilibrium of mutual exhaustion--would
make it easier to reach an enforceable settlement.	-Richard M Nixon

------------------------------

Date: Wednesday, 11 June 1986  01:52-EDT
From: decvax!bellcore!genrad!panda!wjh12!maynard!campbell at ucbvax.berkeley.edu
To: ARPA!RISKS at ucbvax.berkeley.edu, arms-d@xx.lcs.mit.edu
Re:   Sgt York software

In RISKS 3.4, Mike McLaughlin (mikemcl@nrl-csr) and Ken Laws (laws@sri-ai)
dispute the Sergeant York latrine fan story. [...]

I quote from a story by Gregg Easterbrook in the November 1984 issue of
_The Washington Monthly_:

    During a test one DIVAD locked on to a latrine fan.  Michael Duffy,
    a reporter for the industry publication _Defense Week_, who broke this
    aspect of the story, received a conference call in which Ford officials
    asked him to describe the target as a "building fan" or "exhaust fan"
    instead.

_The Washington Monthly_ and _Defense Week_ are both reputable publications.
Does anyone have a citation for a retraction in _Defense Week_, or should we
assume that the TV networks swallowed Ford's story whole?

Larry Campbell                             The Boston Software Works, Inc.
ARPA: campbell%maynard.uucp@harvard.ARPA   120 Fulton Street, Boston MA 02109
UUCP: {alliant,wjh12}!maynard!campbell     (617) 367-6846

------------------------------

Date: Wednesday, 11 June 1986  12:48-EDT
From: Marc Vilain <MVILAIN at G.BBN.COM>
To: risks at SRI-CSL.ARPA, arms-d@xx.lcs.mit.edu
cc:   mvilain at G.BBN.COM, reid%oz at MC.LCS.MIT.EDU
Re:   Sgt. York software

Here is some information on the DIVAD software that hasn't appeared yet in
this forum.  [It] is abstracted from a longer note compiled by Reid
Simmons from material he received from Gregg Easterbrook (both his article
in the Atlantic, and personal communications).

According to Easterbrook, the DIVAD did target a latrine exhaust fan in
one series of tests.  The target was displayed to the gunners that man
the DIVAD.  But the Sgt. York did not shoot at the latrine, or even
swivel its turret in the latrine's direction, having prioritized the
target as less important than other targets in its range.

In another series of tests (Feb. 4 1984), U.S. and British officials
were to review the DIVAD as it took on a rather cooperative target: a
stationary drone helicopter.  On the first test run, the DIVAD swiveled
its turret towards the reviewing stand as "brass flashed" and the
officials ducked for cover.  It was stopped only because an interlock
was put in place the night before to prevent the turret from being able
to point at the reviewing grandstand.  Afterwards, the DIVAD shot in the
general direction of the helicopter but the shells traveled only 300
yards.  The official explanation is that the DIVAD had been washed the
night before, screwing up its electronics.  Easterbrook wonders what
would happen if it rained in Europe when the DIVAD was being used.

Easterbrook goes on to claim that the snafus the DIVAD experienced were
very much due to software.  The main problem was that the pulse-Doppler
tracking radar and target acquisition computer were a very poor match.
Easterbrook claims that the hard problem for the software (tracking
fast, maneuvering planes) was easiest for the pulse-Doppler radar, which
needs a moving target.  On the other hand, the hard part for the radar
(detecting stationary helicopters) was the easiest for the software to
aim at.  The DIVAD mixed two opposing missions.

Easterbrook goes on to say that human gunners are often more successful
than their automated counterparts.  They can pick up on visual cues, such
as flap position on approaching aircraft, to determine what evasive
maneuvers the enemy might make.  These kinds of cues are not visible to
things like pulse-Doppler radars.  Further, evasive courses of action
are hard for human gunners to counter, but even harder for target
tracking algorithms (again the lack of visual cues is a disadvantage).
For example, the DIVAD expected its targets to fly in a straight line
(which my military friends tell me is not too likely in real combat).

There is lots more to the Sgt. York story, not all of which is relevant
here.  If there is a moral to be drawn specifically for RISKS, it's
that as advanced as our technology may be, it may not always be the
match of the problems to which it is applied.  This was certainly the
case with the unfortunate DIVAD.

marc vilain

------------------------------

Date: Thu, 12 Jun 86 11:41:02 PDT
From: Clifford Johnson <GA.CJJ@SU-Forsythe.ARPA>
Subject:  Scorpions-in-the-bottle

> This is in fact what is known as the Prisoner's Dilemma.  I believe
> that the nuclear arms race is in fact the Prisoner's Dilemma.
> Unfortunately, one or both sides do not seem to know the optimal
> long-term strategy (cooperation).

Both the U.S. and U.S.S.R. do understand the optimization problem.
Nuclear war, preceding crises, conventional war, the President, the
USSR supreme command, the rest of the world, and so on, have all
been game-theoretically programmed for the DOD by the RAND Strategy
Assessment Center.  For example, a decision to go to war is nicely
preprogrammed, as explained in Treatment Of Escalation In The RAND
Strategy Assessment Center, RAND N-1969-DNA, 1983, at 19-35:

"The specific characteristics of the adversaries, including their
military capabilities, political objectives, and behavioral or
doctrinal features, are required to estimate the decision
probabilities, outcome probabilities, and the values of the
outcomes... ** the decision on whether to begin a conflict can be
determined by calculating the expected utility of the conflict... **
the expected utility of a chance node is defined as the probability
of an outcome times the utility of the outcome, summed over all
branches at the node...  If the expected utility of the conflict is
greater than the utility of the status quo, which can be set to
zero, the decision would be to begin the conflict."
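
In code, the quoted decision rule is just a one-line expectation.  The
outcome probabilities and utilities below are invented placeholders of
mine; RAND's actual numbers are not public.

    # Expected utility of a chance node: the probability of an outcome
    # times the utility of the outcome, summed over all branches.  The
    # status quo utility is normalized to zero, as in the quote.
    outcomes = [                      # (label, probability, utility)
        ("quick victory",     0.20,  100.0),
        ("stalemate",         0.50,  -20.0),
        ("escalation/defeat", 0.30, -200.0),
    ]

    expected_utility = sum(p * u for _, p, u in outcomes)
    U_STATUS_QUO = 0.0

    print("E[U(conflict)] =", expected_utility)    # 20 - 10 - 60 = -50
    print("begin conflict" if expected_utility > U_STATUS_QUO
          else "keep status quo")                  # -> keep status quo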

The model, known as the RSAC system, can take decisions such as:

    SITUATION: ... BLUE detects the launching of numerous RED
               ICBMs, SSBNs...
    SPECIAL INSTRUCTIONS:  (Do a Force look-ahead)
               IF (Sufficient time-to-RED-ICBM-impact remains)...
               THEN (Launch BLUE ICBMs immediately (LUA))...
               ELSE (Ride out the initial RED attack...)
(The Mark II Red And Blue Agent Control Systems For The RAND
Strategy Assessment Center, RAND N-1838-DNA, 1983.)
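
The quoted rule is simple enough to transcribe.  In this sketch the
time threshold and the force look-ahead are placeholders of mine; the
real values and models are in the classified originals.

    LUA_THRESHOLD = 10.0   # minutes to RED ICBM impact; placeholder

    def force_lookahead(minutes_to_impact):
        # Placeholder for the "Force look-ahead": does sufficient
        # time-to-RED-ICBM-impact remain to launch under attack?
        return minutes_to_impact >= LUA_THRESHOLD

    def blue_response(minutes_to_impact):
        if force_lookahead(minutes_to_impact):
            return "Launch BLUE ICBMs immediately (LUA)"
        return "Ride out the initial RED attack"

    print(blue_response(25.0))   # -> launch under attack
    print(blue_response(4.0))    # -> ride out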

But it can also take less time-urgent decisions, and its
conventional war flowcharts lead inexorably to a decision box
captioned "Use nucs?"  (See, e.g., RAND R-2945, 1983, at 25.)
Naturally, it proved impossible on the basis of reasonable utility
measures to get the model to escalate over the first-use threshold,
etc., and a decision had to be made whether to preserve the model's
rigor or to fudge the utilities so that it escalated according to DOD
demands.  The latter course was taken, despite the *conscious*
realization that the strategies generated were certainly suboptimal:

"A basic element of decision theory is the idea of utility values:
values representing the relative desirability of alternative
outcomes.  The reality of implicit utilities is easily demonstrated:
if general nuclear war were absolutely unacceptable, then we would
have to surrender whenever we approached the nuclear barrier -- at
least, in a "rational" calculation.  In fact, the United States
would take substantial risks regarding general nuclear war rather
than submit.  There is likely to be resistance to assigning
numerical utilities to outcomes.  This reluctance may be diminished
somewhat once it is realized that utilities can be constructed to
reflect beliefs about a decisionmaker's willingness to gamble on
alternative outcomes.  It is not necessary to make statements like
"this outcome is x times as bad as that outcome." ...  For example,
one could deduce utilities for model purposes without referring to
them directly ... the point is that the objectionable aspects of
specifying utility values can to a large extent be averted by
translating the problem into queries about subjective indifference
points for intuitively understandable tradeoffs...  (I)t may be more
appropriate for the RSAC to go directly to the "bottom line rules of
behavior," even if those rules might, upon further analyses, appear
to be irrational or *at least suboptimal by many criteria*.  Usual
heuristic modeling (i.e. this "fix") is, on the other hand, less
realistic than decision analysis in its failure to consider
uncertainties and the role of odds...  (I)t will be possible to
specify more exactly the requirements that the escalation model must
specify.  We suspect, however, that the RSAC's Red and Blue Agents
will use heuristic rules for deciding which alternatives to choose
in decision trees; there will probably be no explicit use of utility
sanctions except for background research."
(Treatment Of Escalation, supra, at 24-25,38-39.)

As suspected, and as recounted in, for example, RAND P-6763, the
DOD, who own the program, demanded the forced, suboptimal
escalation space, despite such cautions as:

"An even more perverse application is to demonstrate the efficacy of
a "solution" or "strategy" that is inherently flawed.  Such games
are at best expensive ways of demonstrating that the proposal is a
bad one. At worst, with heavy game overcontrol to achieve
the foregone conclusion, the game is subsequently cited as proof of
the notion's efficacy."
(On Free-Form Gaming, RAND N-2322-RC, 1985.)

Lest the RSAC system be thought irrelevant, it is noted that RAND
redesigned the SIOP in 1961 (Ellsberg especially) to include options
other than spasm response, and that this assignment arose from
Ellsberg's game-theoretical model of deterrence, that is, the
Prisoner's Dilemma.  Also, RAND designs SIOP target sets (generally by
optimizing *comparative*, rather than absolute, damage functions --
i.e. U.S. minus Soviet damages), and in particular the LOW target
sets.  (RAND IN-24214-AF, 1979, now top secret.)

The message is that the administration does know, or has some
objective comprehension of, strategies optimal according to game
theory, but deliberately overrides them.  I think this is important to
appreciate.


------------------------------

Date: Thu, 12 Jun 86 10:23:22 pdt
From: Steve Walton <ametek!walton@csvax.caltech.edu>
Subject: debate style

"One horse laugh is worth 10,000 syllogisms."--H.L. Mencken, as quoted by
tireless debunker of pseudoscience, Martin Gardner

------------------------------

Date: Thu, 12 Jun 86 10:38:48 pdt
From: Steve Walton <ametek!walton@csvax.caltech.edu>
Subject: Third World dictatorships

Charles Crummer commented recently that we don't have a moral leg to
stand on (or words to that effect) if we continue to support
repressive regimes while condemning the Soviets for having one.  I'm
afraid that Jeane Kirkpatrick's distinction between authoritarian and
totalitarian regimes is both accurate and useful, and submit as
evidence recent events in the Philippines and Haiti as contrasted with
the far worse repression in Vietnam, Laos, and Cambodia.  The latter
group's governments have not fallen, and will not fall, due to
internal popular uprisings.  Even Pol Pot (and before him, Hitler) was
removed only by externally applied force.

It certainly appears that the real problem is that the governments
which support us are not repressive enough.  The ones which are
repressive enough are nearly all regimes which call themselves
socialist and are supported by the USSR.

Steve Walton, Ametek Computer Research Division
(standard disclaimer)

------------------------------

End of Arms-Discussion Digest
*****************************