[mod.politics.arms-d] Arms-Discussion Digest V6 #109

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (06/13/86)

Arms-Discussion Digest                    Friday, June 13, 1986 8:55AM
Volume 6, Issue 109

Today's Topics:

                   Scorpions-in-the-bottle (5 msgs)
        An additional SDI problem: sensor technology (2 msgs)
                            Debate styles
----------------------------------------------------------------------

Date: Thu, 12 Jun 1986  20:41 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Scorpions-in-the-bottle

the whole business about calculating desired outcomes on a
game-theoretic basis assumes that it is possible to design/construct
utility functions appropriate to the problem.  I have read Ellsberg's
model of deterrence, and in fact I have a long critique of it.  When
someone first told me of the model, after reading it I thought it had
been assigned as an example of how NOT to do deterrence research.  Its
most significant error is that it -- just as many other papers in game
theory -- assumes utility functions that can be treated just like
ordinary (cardinal) numerical data.

In fact, utility functions are more appropriately *preference*
functions -- i.e., ordinal data -- and the rules of ordinary
arithmetic do not apply to these.  

When you treat ordinal data as cardinal data, you can generate any
result you want.
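To see why, consider a small illustrative sketch (the outcomes, lotteries, and numbers below are invented for the example, not taken from any model discussed here): two cardinal encodings of the *same* ordinal preference ranking yield opposite expected-utility conclusions.

```python
# Ordinal preferences over three outcomes: A preferred to B preferred to C.
# Two cardinal encodings consistent with that *same* ordering:
u1 = {"A": 4, "B": 3, "C": 1}
u2 = {"A": 100, "B": 3, "C": 1}  # identical ordering, different spacing

def expected_utility(u, lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[o] for o, p in lottery.items())

# Gamble X: 50/50 chance of the best or worst outcome.
# Gamble Y: the middle outcome for sure.
X = {"A": 0.5, "C": 0.5}
Y = {"B": 1.0}

# Under u1, Y beats X (3.0 vs 2.5); under u2, X beats Y (50.5 vs 3.0) --
# opposite conclusions drawn from identical ordinal data.
print(expected_utility(u1, X), expected_utility(u1, Y))  # 2.5 3.0
print(expected_utility(u2, X), expected_utility(u2, Y))  # 50.5 3.0
```

Any monotone relabeling of an ordinal scale is equally legitimate, so an analyst can pick the spacing that ratifies the conclusion he wants.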

------------------------------

Date: Thursday, 12 June 1986  23:29-EDT
From: Clifford Johnson <GA.CJJ at SU-Forsythe.ARPA>
To:   LIN, arms-d@xx.lcs.mit.edu
Re:   Scorpions-in-the-bottle

> In fact, utility functions are more appropriately *preference*
> functions -- i.e., ordinal data -- and the rules of ordinary
> arithmetic do not apply to these.

I disagree entirely.  There are, within certain bounds, utility
functions that have numerical correlates.  I think one of the best
in nuclear exchange models is the number of people killed.
It's a good objective measure, and it does mean something to
say that 100 million people killed is ten times as bad as 10 million
people killed.  But such measures as are reasonably arrived at by
this approach are such as to numerically prohibit first use.

It is fallacious to proceed from the fact that nuclear weapons
might well be first used to the conclusion that the model and
the utility function are wrong.  The utility function is correct,
and the model is correct.  Ellsberg's fallacy, which you seem to
go along with, is to make utility functions mirror human folly,
in which case the analysis becomes foolish.

------------------------------

Date: Thu, 12 Jun 1986  23:58 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Scorpions-in-the-bottle

    From: Clifford Johnson <GA.CJJ at SU-Forsythe.ARPA>

    ....  There are, within certain bounds, utility
    functions that have numerical correlates.

I misspoke.  Certainly not *all* utility functions are ordinal in
nature.  But unless you can define one as corresponding to some
physical quantity that can be measured (such as number of deaths),
they are meaningless in an arithmetic context.

    ... one of the best
    in nuclear exchange models is the number of people of killed.

Ellsberg's model (if it is the one in his "Crude Models" paper) NEVER
uses this as a utility function.

Many economists have told me that the reason utility functions were
introduced in the first place was to fudge over the fact that
assigning well-defined metrics to a U.F. is an intractable process
over which people will disagree endlessly.  Its utility arises
precisely from the fact that you can define UFs in such a way that
DOES support your intuitive ideas about the way the world works -- in
short, that utility functions DO, and are intended to, mirror human
folly.  You don't *need* utility functions in the first place if you
have a well-defined function like number of deaths.

    ...  But measures (such as # of deaths?) as are reasonably arrived at
    by this approach, are such as to numerically prohibit first use.

If the models really DO say that damage limitation is *impossible*
with a disarming first use, then they are wrong.  You can work the
numbers yourself.  The problem is that while damage limitation may be
possible in the sense of limiting deaths from a hundred million to
several tens of millions, those are *still* unacceptable losses, and
the utility functions must reflect those beliefs.

------------------------------

Date: Friday, 13 June 1986  05:21-EDT
From: Clifford Johnson <GA.CJJ at SU-Forsythe.ARPA>
To:   LIN
Re:   Scorpions-in-the-bottle

>     ...  But measures (such as # of deaths?) as are reasonably arrived at
>     by this approach, are such as to numerically prohibit first use.
>
> If the models really DO say that damage limitation is *impossible*
> with a disarming first use, then they are wrong.  You can work the
> numbers yourself.

I say the models say that crossing the nuclear threshold vis a
vis conflict with the Soviet Union brings into play a substantial
probability of annihilation-size damages.  There is argument as to
whether damage-limitation is possible -- and very good arguments that
nuclear war cannot be controlled.  Certainly, nuclear war, once
initiated, *might* not be controlled, and it is the introduction
of *probabilities of relatively astronomical damages* that trips up
the models, or rather, that trips up those who would apply them
to justify first-use.
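The arithmetic behind that point can be sketched in a few lines (the probability and casualty figures below are invented placeholders, not numbers from any of the models under discussion): even a modest chance that escalation goes uncontrolled dominates the expectation for a "damage-limited" first strike.

```python
# All figures below are hypothetical placeholders for illustration only.
def expected_deaths(p_uncontrolled, limited, annihilation):
    """Expected deaths when control fails with probability p_uncontrolled."""
    return (1 - p_uncontrolled) * limited + p_uncontrolled * annihilation

# "Damage-limited" outcome: 30 million dead if control holds.
# Uncontrolled escalation: 500 million dead.
# Even a 10% chance of losing control swamps the limited case.
first_use = expected_deaths(0.10, 30e6, 500e6)
print(first_use)  # about 77 million expected deaths, dominated by the tail
```

The tail term contributes 50 million of the total, so the conclusion is driven almost entirely by the probability assigned to losing control, which is exactly the quantity nobody can measure.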

> The problem is that while damage limitation may be
> possible in the sense of limiting deaths from a hundred million to
> several tens of millions, those are *still* unacceptable losses, and
> the utility functions must reflect those beliefs.

I agree, but those are acceptable losses according to the present
administration; and this administration dictates that the utility
functions applied in models must not reflect those losses.  I agree
with you that they *should* reflect this belief of ours, which is
little more than a translation of "all men are equal."

------------------------------

Date: Fri, 13 Jun 1986  08:55 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Scorpions-in-the-bottle

    From: Clifford Johnson <GA.CJJ at SU-Forsythe.ARPA>

    ... the models say that crossing the nuclear threshold vis a
    vis conflict with the Soviet Union brings into play a substantial
    probability of annihilation-size damages.

What models are you talking about??  My comments refer to what I
thought was the Ellsberg model in "Crude Choices", and that model says
NOTHING about the probability of escalation.  It discusses only
whether or not it is better to go first or to go second, and under
what circumstances either is true.

    ... it is the introduction
    of *probabilities of relatively astronomical damages* that trips up
    the models, or rather, that trips up those who would apply them
    to justify first-use.

You may be able to say that certain scenarios are more or less likely,
but lacking any empirical evidence one way or another, you can't
assign real values to those probabilities in any meaningful way.  It
is that fact that makes the whole business suspect.

    > deaths [on the order of]
    > several tens of millions are *still* unacceptable losses, and
    > the utility functions must reflect those beliefs.

    I agree, but those are acceptable losses according to the present
    administration; and the utility functions that are applied
    in models must not reflect the losses, is what this administration
    dictates.

I have never seen a statement from this Administration to the effect
that deaths of 70 M were acceptable; moreover, I don't believe it.  If
you have, please provide a citation.

    Utility functions *should* reflect this
    belief of ours, which is little more than a translation of "all
    men are equal."

I was not clear.  "Unacceptable" must always be qualified to
"unacceptable to whom?"  I left the term unqualified not to suggest
that there is an absolute standard of what acceptable and unacceptable
are, but rather to suggest that the "to whom" part had to be filled in
on the basis of assumptions made by the creator (or user) of the
model.  These models say nothing more than "if a decision maker has
a given utility function, this is how he ought to proceed to maximize
his gain according to that utility function."  If the current
administration has a different utility function than mine, the model
will predict different things for their behavior as opposed to my
behavior. 

------------------------------

Date: Thu, 12 Jun 86 22:32:55 PDT
From: jon@uw-june.arpa (Jon Jacky)
Subject: Re: An additional SDI problem: sensor technology

> (Eugene Miya writes:) ... Where there are various groups watchdogging
> computing, but the more hardware oriented, EE areas such as radar have
> fewer opposition elements.

Sensors and signal processing comprise a larger portion of 
the SDI effort than anything else, according to many reports.

The most informative comments I have heard were by Michael Gamble, a 
vice president (I think) at Boeing, and head of that company's 'Star Wars'
research programs. He administers about half a billion dollars worth of
contracts.  In a talk to the Seattle chapter of the IEEE on Nov. 14, 1985,
he noted that the total SDI budget requests for fiscal years 1985 through 
1990 would total about $30 billion, broken down as follows:  Sensors $13B,
directed energy weapons $7B, kinetic energy weapons $7B, Battle Management
$1B, Survivability $2B.  Sensors comprise almost half the total. (I do not
know whether these proportions are maintained in the somewhat reduced 
budgets that get approved.)

Gamble also explained why he thought missile defense was once again 
plausible, after being debunked in the early 70's.  "What has changed 
since then?" he asked rhetorically, and gave five answers, three of which 
involved sensors: first, long wave infrared detectors and associated cooling
systems, which permit small warheads to be seen against the cold of space;
second, "fly's eye" mosaic sensor techniques (like the ones used on the 
F-15 launched ASATS and in the 1984 "homing overlay experiment") -- these
are said to "permit smaller apertures" (I didn't catch the significance of
that);  and third, low-medium power lasers for tracking, designation, and
homing.  The other two factors were long-life space systems and powerful
onboard computing capabilities.

There is a large computing component in the sensor field: digital signal
processing.  However, this area is not so well known to computer science
types.  Boeing's largest SDI contract - over $300M - is for the "Airborne
Optical Adjunct," an infrared telescope and a lot of computers mounted 
in a 767 airliner, apparently for experiments in sensing and battle
management for midcourse and terminal phase.  Two of the systems people
involved in this project gave a seminar at the UW computer science department
last January.  They mentioned that the signal processing was being handled
by the sensor people and they just regarded it as a black box.

I can think of two reasons why this area has received relatively little 
attention.  First, there were no galvanizingly absurd statements about sensors
from relatively prominent SDI proponents - nothing like James Fletcher 
calling for "ten million lines of error-free code," or all that bizarre stuff
in the Fletcher report and elsewhere about launching pop-up X-ray lasers 
under computer control.  Second, there is a lot of secrecy in the sensor area--
unlike battle management, where the important issues do not turn on classified
material.  Gamble noted that "there is not that much that is classified about
SDI, except things like, 'How far can you see?  How far do you have to see?'"
Needless to say, talking in detail about sensors would reveal how much we know
about Soviet warhead characteristics, how good our early warning systems 
really are, and so forth.

-Jonathan Jacky
University of Washington

------------------------------

Date: Fri, 13 Jun 1986  08:33 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: An additional SDI problem: sensor technology

There are other reasons that sensors are relatively non-controversial.
For one, sensors have significant applications for non-BMD uses.
For example, you would like very precise warning and attack assessment
information if you want to support a strategy of fighting very precise
and controlled and limited nuclear wars.  For another, sensors aren't
as sexy as new weapons.

------------------------------

Date: Fri 13 Jun 86 08:55:43-ADT
From:  Don Chiasson <CHIASSON@DREA-XX.ARPA>
Subject: Debate styles

From Herb Lin: 
> ....  Is it better to cooperate or confront in the long run?  An
> interesting [article] suggests that a strategy of always cooperating
> EXCEPT when you have been confronted (and then replying by confronting
> ONCE) -- a strategy called tit-for-tat retaliation -- ...

It is also necessary to know at what level of debate/confrontation the
players perceive the game to be played.  For this theory to work, both
players must understand what is confrontation and what isn't.  It is
the *perception* that matters; your perception and his may be
different.  There are people whose *normal* mode of operation is
stronger than others' confrontation mode.  Such situations make the
game difficult to play, be the game who gets the last piece of cake on
the plate or global thermonuclear war.
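The perception problem can be shown with a toy simulation (the intensity scale, thresholds, and player parameters here are all invented for the sketch): two tit-for-tat players each judge the other's move against their *own* threshold, and when one side's normal mode already exceeds the other's threshold, retaliation never ends.

```python
# Toy model: each round a player acts at some intensity; an opponent's
# move counts as "confrontation" only if it exceeds the observer's own
# threshold.  Tit-for-tat: confront this round iff you perceived
# confrontation last round.

def play(rounds, a, b):
    """Run tit-for-tat where each side judges moves by its own threshold."""
    history = []
    a_retaliate = b_retaliate = False
    for _ in range(rounds):
        a_move = a["confront"] if a_retaliate else a["normal"]
        b_move = b["confront"] if b_retaliate else b["normal"]
        history.append((a_move, b_move))
        # Each side's *perception* of the other's move drives next round.
        a_retaliate = b_move > a["threshold"]
        b_retaliate = a_move > b["threshold"]
    return history

# B's *normal* mode is already stronger than A's confrontation threshold,
# so A reads routine behavior as an attack and the spiral locks in.
A = {"normal": 3, "confront": 8, "threshold": 5}
B = {"normal": 6, "confront": 9, "threshold": 7}
print(play(5, A, B))
# -> [(3, 6), (8, 6), (8, 9), (8, 9), (8, 9)]
```

Both sides follow the "cooperate unless confronted" rule exactly; the permanent mutual retaliation comes entirely from the mismatch in what each perceives as confrontation.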
          Don

------------------------------

End of Arms-Discussion Digest
*****************************