[mod.politics.arms-d] Arms-Discussion Digest V7 #38

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (10/23/86)

Arms-Discussion Digest               Thursday, October 23, 1986 4:24PM
Volume 7, Issue 38

Today's Topics:

                                 LLNL
        Risks from Expert Articles on SDI (3 msgs from RISKS)
               An SDI Debate from the Past (from RISKS)
                          Autonomous Superman
                       Pin down what 80% means
                    Editorial on SDI (from RISKS)
          Stealth vs. ATC / SDI Impossibility? (from RISKS)
                          SDI Impossibility
                        Re:  SDI Impossibility
             Star Wars, Lying, and the John Doe Syndrome
                      RAND-ABEL's one-way street
                          SDI Impossibility

----------------------------------------------------------------------

Date: Tue, 21 Oct 86 13:55:27 PDT
From: Steve Walton <ametek!walton@csvax.caltech.edu>
Subject: LLNL

Since the subject of the location and names of national laboratories
has come up, I thought I'd throw this out.  In the October 1986
Physics Today is a letter to the editor from Mary B. Lawrence, widow
of Ernest O. Lawrence, asking that his name be removed from the
Lawrence Livermore National Laboratory.  She feels that her husband's
memory is ill served by his name being attached to a prominent weapons
laboratory, and that the weapons lab is often confused with its
Berkeley namesake: "This
sort of thing is very damaging to the Lawrence Berkeley Laboratory,
the direct descendant of the original Radiation Laboratory and
Lawrence's true and proper memorial."  Unfortunately, it would take an
act of Congress, since the name Lawrence Livermore National Laboratory
appears in the Military Authorization Act of 1979.  She asks the
scientific community for help in bringing this about.

------------------------------

Date: Tuesday, 21 October 1986  09:39-EDT
From: parnas%qucis.BITNET at WISCVM.WISC.EDU
To:   RISKS at CSL.SRI.COM, arms-d
Re:   Risks from Expert Articles (RISKS-3.82)

   Andy Freeman criticizes the following by Michael L. Scott,
"Computers have no such abilities.  They can only deal with situations
they were programmed in advance to expect."  He writes, "Dr. Scott
obviously doesn't write very interesting programs.  :-) Operating
systems, compilers, editors, mailers, etc. all receive input that
their designers/authors didn't know about exactly."

   Scott's statement is not refuted by Freeman's.  Scott said that the
computer had to have been programmed, in advance, to deal with a
situation.  Freeman said that sometimes the programmer did not expect
what happened.  Scott made a statement about the computer.  Freeman's
statement was about the programmer.  Except for the anthropomorphic
terms in which it is couched, Scott's statement is obviously correct.

   It appears to me that Freeman considers a program interesting only
if we don't know what the program is supposed to do or what it does.
My engineering education taught me that the first job of an engineer
is to find out what problem he is supposed to solve.  Then he must
design a system whose limits are well understood.  In Freeman's
terminology, it is the job of the software engineer to rid the world
of interesting programs.

   Reliable compilers, editors, etc., (of which there are few) are all
designed on the basis of a definition of the class of inputs that they
are to process.  We cannot identify the actual individual inputs, but
we must be able to define the class of possible inputs if we are to
talk about trustworthiness or reliability.  In fact, to talk about
reliability we need to know, not just the set of possible inputs, but
the statistical distribution of those inputs.
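
   A minimal executable sketch of that last point (the program, the
"oracle" that defines correct behavior, and the input distribution
are all hypothetical stand-ins):

    import random

    def program(x):
        return 2 * x                  # the system whose reliability we assess

    def oracle(x):
        return x + x                  # the specification: correct output for x

    def sample_input():
        # The statistical distribution of inputs; here, uniform integers.
        return random.randint(-1000, 1000)

    def estimated_reliability(trials=100000):
        ok = 0
        for _ in range(trials):
            x = sample_input()
            if program(x) == oracle(x):
                ok += 1
        return ok / trials            # fraction of inputs handled correctly

    print(estimated_reliability())    # 1.0 for this trivial example

Change the input distribution and the estimate changes; that is why
the distribution belongs in any reliability claim.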

Dave Parnas

------------------------------

Date: Tuesday, 21 October 1986  09:16-EDT
From: LIN
To:   Andy Freeman <ANDY at SUSHI.STANFORD.EDU>
cc:   RISKS at CSL.SRI.COM, arms-d
Re:   Risks from Expert Articles

    From: Andy Freeman <ANDY at Sushi.Stanford.EDU>

    Operating systems, compilers, editors, mailers, etc. all receive input
    that their designers/authors didn't know about exactly.  

When was the last time you used a mailer, operating system, compiler,
etc. that you trusted to work *exactly* as documented on all kinds of
input?  (If you have, pls share it with the rest of us!)

    It can be argued that SDI isn't understood well enough for humans to make
    the correct decisions (assuming super-speed people), let alone for them to
    be programmed.  That's a different argument, and Dr. Scott is (presumably)
    unqualified to give an expert opinion.  His expertise does apply to the
    "can the SDI decision be programmed correctly?" question, which he spends
    just one paragraph on.

You are essentially assuming away the essence of the problem by
asserting that the specs for the programs involved are not part of the
programming problem.  You can certainly SAY that, but that's too
narrow a definition in my view.

------------------------------

Date: Tuesday, 21 October 1986  17:40-EDT
From: Andy Freeman <ANDY at Sushi.Stanford.EDU>
To:   LIN
cc:   RISKS at CSL.SRI.COM, arms-d
Re:   Risks from Expert Articles

Herb Lin writes:

    When was the last time you used a mailer, operating system, compiler,
    etc. that you trusted to work *exactly* as documented on all kinds of
    input?  (If you have, pls share it with the rest of us!)

The programs I use profit me, that is, their benefits to me exceed
their costs.  The latter include their failures (as well as mine).  A
similar metric applies to weapons in general, including SDI.  (Machine
guns jam too, but I'd rather have one than a sword in most battle
conditions.  The latter are, for the most part, obsolete, but there
aren't perfect defenses against them.)

Lin continued with:

    You are essentially assuming away the essence of the problem by
    asserting that the specs for the programs involved are not part of the
    programming problem.  You can certainly SAY that, but that's too
    narrow a definition in my view.

Sorry, I was unclear.  Specification and implementation are related,
but they aren't the same.  There are specs that can't be implemented
acceptably (as opposed to perfectly).  Some specs can't be implemented
acceptably in some technologies, but can in others.  (This can be
context dependent.)  Dr. Scott's expertise applies to the question of
whether a given spec can be programmed acceptably, not whether there
is a spec that can be implemented acceptably.  Much of the spec,
including the interesting parts of the definition of "acceptable", is
outside CS, and (presumably) Dr. Scott's expertise.

Another danger (apart from simplification to incorrectness) of expert
opinion articles is unwarranted claims of expertise.  Dr. Scott
(presumably) has no expertise in directed energy weapons, yet he claims
that they can be used against cities and missiles in silos.  Both
proponents and opponents of SDI usually agree that it doesn't deal
with cruise missiles.  If you can kill missiles in silos and attack
cities, cruise missiles are easy.

-andy

------------------------------

Date: Tuesday, 21 October 1986  11:03-EDT
From: "DYMOND, KEN" <dymond at nbs-vms.ARPA>
To:   risks <risks at csl.sri.com>
Re:   An SDI Debate from the Past

While looking something up in Martin Shooman's book on software 
engineering yesterday, I came across the following footnote (p.495):

    Alan Kaplan, the editor of Modern Data magazine, posed the question,
    "Is the ABM system capable of being practically implemented or is
    it beyond our current state-of-the-art ?"  The replies to this
    question were printed in the January and April 1970 issues of the
    magazine.  John S. Foster, director of the Office of Defense
    Research and Engineering, led the proponents, and Daniel D.
    McCracken, chairman of Computer Professionals against ABM, led
    the opposition.

It's startling that the very question that so interests us today was
put 15 or so years ago; to make it the exact question, all you have
to do is change the 3 letters of the acronym.  And this was 3 (?)
generations ago in computer hardware terms (LSI, VLSI, VHSIC ?) and
some indeterminate time in terms of software engineering (I can't
think of anything so clear-cut as circuit size to mark progress in
software).  International politics, however, seems not to have
changed much at all.

I'll try to track down those articles (Modern Data no longer exists,
having become Mini-Micro Systems in 1976), but in the meantime can anyone 
shed light on this debate from the dim past ?

(BTW, Shooman comments "Technical and political considerations were
finally separated, and diplomatic success caused an abrupt termination
of the project." p. 498)

------------------------------

Date:       22 Oct 86  12:10:48 bst
From: S.WILSON%UK.AC.EDINBURGH@ac.uk
Subject:    Autonomous Superman

   I can't remember the exact quote, but one of the Martians in
C. S. Lewis' "Out of the Silent Planet" says something like, "I
think you lost something when you learned to kill at a
distance": he would happily risk his life hunting a savage fish
with a spear from a small boat but couldn't understand why human
beings should want to shoot each other. (Actually, now I think
about it, it may have been the earth man who said it, but the
sentiment remains the same.)

   Most of our modern weapons development has had the effect, if
not the explicit aim, of insulating people from the effects of
their decision to fight.  I remember seeing a reenactment of a
Civil War battle (English Civil War, 17th Century).  Firearms
were just coming in but the major weapons were still swords and
pikestaffs and to kill a man you had to go up to within 2 or 3
feet, look him in the face and try to hack pieces off him.
Firearms increasingly take away the intimacy of battle: you
point a rifle at someone some hundreds of feet away, pull the
trigger and he falls down.  With artillery and missiles you push
a button, turn a key or whatever and you never even see the
result.  The idea of robot warriors pushes the result of one's
action another (final?) step away - you send someone else off to
do the killing and it isn't even a real person!

   Looking at it dispassionately, if you've got to fight wars
and the idea is to kill the enemy with as little risk to
yourself as possible then the autonomous weapon seems like a
good idea.  However I think we need to bring in the moral and
emotional dimensions to the question.  My original comment about
the neutron bomb (V7 #19) was intended to point out that
Clifford Johnson's intuitive horror about weapons intended to
kill people rather than damage materiel could be stimulated by
current technology rather than something in the future.

   So, to put a question: if we had to look the victims in the
face while we used our weapons, would we use them at all?  Even
if the other guy did?  I think my answer would be no, but I'd be
much happier that I'd reached the right decision about killing
someone if I did have to look him in the eye while I did it than
if I did it at a distance of several thousand miles or at the
moral distance of having a robot do it for me.

Sam Wilson, ERCC, University of Edinburgh, Scotland

------------------------------

Date: 1986 October 23 00:07:04 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: Pin down what 80% means

LIN> Date: Wed, 15 Oct 1986  12:45 EDT
LIN> From: LIN@XX.LCS.MIT.EDU
LIN> Subject: Fossedal asserts 80%+ effective SDI imminent 

LIN> What do you mean by "effectiveness"?  I believe that in the Falklands,
LIN> over 80% of the Sidewinder missiles fired hit their targets.  But 80%
LIN> of the incoming warplanes were not shot down.  In other words, I sort
LIN> of understand what 80% effective might mean in the context of BMD.
LIN> I'm not sure I understand it in the context of other weapons.

I have always used "effectiveness" of a defense to refer to the
proportion of incoming active targets that get destroyed or otherwise
fail to reach their target due to the defense. Any other definition of
"effectiveness" would be misleading unless spelled out, and probably
useless even if spelled out. I don't care if we send a million pellets
in reverse-orbit just to kill 400 ICBMs. I don't care if only 0.04% of
the pellets actually hit ICBMs, the remaining 99.96% being failures.
What I care about is whether 50% or 90% or 99% or 99.9% of the ICBMs
are hit, the remaining 50% (200) or 10% (40) or 1% (4) or 0.1% (0 or 1)
ICBMs hitting our cities.  (Of course without arms reduction first, we
have 4000 warheads heading this way and nothing short of about 99%
effectiveness of defense would make much of a qualitative difference.)
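
The arithmetic, spelled out (a minimal sketch using just the
illustrative numbers above):

    icbms = 400
    for effectiveness in (0.50, 0.90, 0.99, 0.999):
        leakers = icbms * (1 - effectiveness)  # ICBMs reaching their targets
        print(f"{effectiveness:.1%} effective -> {leakers:.0f} get through")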

Shall we all agree to use this enemy-target-destroyed measure of
effectiveness instead of some other measure in our ARMS-Discussions?

------------------------------

Date: Wednesday, 22 October 1986  11:51-EDT
From: scott at rochester.arpa
To:   ANDY at Sushi.Stanford.EDU, RISKS at CSL.SRI.COM
cc:   scott at rochester.arpa
Re:   Editorial on SDI

RISKS-3.82 contains a response from Andy Freeman to an editorial
I posted to RISKS-3.81.  Andy and I have also exchanged a fair amount
of personal correspondence in the past couple of days.  In that
correspondence he maintains that I have disguised a political argument
as expert opinion.  This from his posting to RISKS:

> Most op-ed pieces written by experts (on any subject, supporting any
> position) simplify things so far that they're actually incorrect.  The
> public may be ignorant, but they aren't stupid.  Don't lie to them.
> (This is one of the risks of experts.)

I do not believe that I have oversimplified anything.  I certainly haven't
lied to anybody (let's not get personal here, ok?).

When technical arguments disagree with government policy, it is standard
practice to dismiss those arguments as "purely political."  Almost everything
that a citizen says or does in a democratic society has political overtones,
but those overtones do not in and of themselves diminish the technical
validity of an argument.  "The emperor has no clothes!" can be regarded
as a highly political statement.  It is also technically accurate.

In my original editorial, I declared that we could not be certain that
the software developed for SDI would work correctly, 1) because we don't
know what 'correctly' means, and 2) because even if we did, we wouldn't
be able to capture that meaning in a computer program with absolute
certainty.  Andy takes issue with point 1).  My words on the subject:

   > Human commanders cope with unexpected situations by drawing on their
   > experience, their common sense, and their knack for military
   > tactics.  Computers have no such abilities.  They can only deal with
   > situations they were programmed in advance to expect.

This is the statement Andy feels is 'actually incorrect'.  His words:

> Operating systems, compilers, editors, mailers, etc. all receive input
> that their designers/authors didn't know about exactly.  Some people
> believe that computer reasoning is inherently less powerful than human
> reasoning, but it hasn't been proven yet....
>
> It can be argued that SDI isn't understood well enough for humans to
> make the correct decisions (assuming super-speed people), let alone
> for them to be programmed.  That's a different argument and Dr. Scott
> is (presumably) unqualified to give an expert opinion.

Very true, the designers of everyday programs don't know about their
input *exactly*, but they *are* able to come up with complete
characterizations of valid inputs.  That is what counts.  The "inputs"
to SDI include virtually anything the Soviets can do on the planet or
in outer space.  It does not require an expert to realize that there is
no way to characterize the set of all such actions.  A command interpreter
is free to respond "invalid input; try again"; SDI is not.
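
To make the contrast concrete, a toy sketch (not anyone's actual
design; the command set is invented):

    VALID_COMMANDS = {"status", "help", "quit"}  # a complete characterization

    def interpreter(line):
        # A command interpreter may reject anything outside its input class.
        if line not in VALID_COMMANDS:
            return "invalid input; try again"
        return "executing " + line

    def battle_manager(observation):
        # No complete characterization of "anything the Soviets can do"
        # exists, and rejecting the input is not an option: the system
        # must respond correctly to whatever it observes.
        raise NotImplementedError

    print(interpreter("statsu"))      # -> invalid input; try again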

I stand by the technical content of my article: SDI cannot provide
an impenetrable population defense.  Impenetrability requires certainty,
and that we can never provide.  Though the White House has kept
debate alive in the minds of the public, it is really not an issue
among the technically literate.  Almost no one with scientific credentials
is willing to maintain that SDI can defend the American population
against nuclear weapons.  There are individuals, of course (Edward Teller
springs to mind), but in light of the evidence I must admit to a personal
tendency to doubt their personal or scientific judgment.  Certainly
there is no groundswell of qualified support to match the incredible
numbers of top-notch physicists, engineers, and computer scientists
who have publicly declared that population defense is a myth.

What we do see are large numbers of individuals who believe that the
SDI program should continue for reasons *other* than perfect population
defense.  It is possible to make a very good case for developing
directed energy and kinetic weapons to keep the U.S. up-to-date in
military technology and to enhance our defensive capabilities.

My editorial is not anti-SDI; it is anti-falsity in advertising.
Those who oppose SDI will oppose it however it is sold.  Those who
support it will find it very tempting to allow the "right" ends to
be achieved (with incredible budgets) through deceptive means, but
that is not how a democracy is supposed to work.  Let the public know
what SDI is all about, and let us debate it for what it is.

------------------------------

Date: Wednesday, 22 October 1986  12:52-EDT
From: Douglas Humphrey <deh at eneevax.umd.edu>
To:   arms-d, risks at csl.sri.com
Re:   Stealth vs. ATC / SDI Impossibility? 


Stealth vs. ATC - The general public does not seem to know a lot about
the Air Traffic Control system and how it works. In controlled
airspace, such as that around large airports, a Terminal Control Area
(TCA) is defined, which only aircraft equipped with a Transponder may
enter. In reality, the rules and flavors concerned with this whole
process are very complex and aren't needed here. If you are really
interested, go to Ground School.  The transponder replies to the
interrogation of the ATC radar, providing at least a bright radar
image, and in more sophisticated systems the call sign of the
aircraft, heading, altitude, etc. Thus, the concept of Stealth vs. ATC
is not a real issue. If the stealth aircraft is flying under Positive
Control of ATC, then it will have the transponder. If it does not have
one, then it had better stay out of busy places or it is illegal and
the pilot sure as hell will have his ticket pulled.


SDI Impossibility?  - I have a good background in physics, computing
(software and VLSI hardware) and a lot of DEW (Directed Energy
Weapons), and I have yet to hear ANYONE explain WHY SDI is impossible.
I hear all this about the complexity of the software, but I used to be
part of a group that supported a software system of over 20 million
lines of code, and it rarely had problems. Admittedly, we wrote
simulators for a lot of the load since we did not want to try
experimental code out on the production machines, but we never had a
simulator fail to correctly simulate the situation. There were over
100 programmers supporting this stuff, and it was properly managed and
it all worked well.  Is someone suggesting that the incoming target
stream can not be simulated ?  Why not ? We do it now on launch
profile simulations involving the DEW (Distant Early Warning) network
and a lot of other sensor systems.  Is someone suggesting that PENAIDS
(Penetration Aids) can not be simulated ?  Why not ? We do it now
also. Worst case studies just treat all of the PENAIDS as valid
targets. If you can intercept THAT mess, then you can stop anything !

I get the feeling that people are assuming that the SDI software is
going to be one long chunk of code running on one machine, and that if
it ever sees anything that is not what it expects it's going to do a
HALT and stop the entire process. Wrong. I wouldn't build a game that
way, much less something like SDI.

So. The Challenge. People out there who think it is Impossible, please
identify what is impossible. Pointing systems ? Target acquisition ?
Target Classification ? Target discrimination ? Destruction of the
targets ?  Nobody is saying that it is easy. Nobody is saying that our
current level of technology is capable of doing it all perfectly. But
it sure isn't (in my opinion) impossible.
    
[.. stuff about missing engines omitted from arms-d..]

Doug Humphrey
Digital Express Inc.

------------------------------

Date: Thu, 23 Oct 1986  08:47 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: SDI Impossibility

    From: Douglas Humphrey <deh at eneevax.umd.edu>

    SDI Impossibility?  - I have a good background in physics, computing
    (software and vlsi hardware) and a lot of DEW (Directed Energy
    Weapons), and I have yet to hear ANYONE explain WHY SDI is impossible.

Tell us what you mean by SDI, and it can be explained or not.
Every technical analyst believes it is possible to build something
that will destroy some missiles.  No analyst believes it is possible
to build something that will destroy all missiles.  The question is
whether or not the ability to destroy some missiles is worth what you
must pay to get it.

    I hear all this about the complexity of the software, but I used to be
    part of a group that supported a software system of over 20 million
    lines of code, and it rarely had problems. 

But it sometimes did.  How much would you have been willing to bet
that the problems would not arise at critical times when you could not
do debugging?

    we wrote
    simulators for a lot of the load since we did not want to try
    experimental code out on the production machines, but we never had a
    simulator fail to correctly simulate the situation. 

I'll bet you didn't simulate something with which you had no
experience.  Judging that a simulator runs correctly requires some way
of knowing what "correct" looks like.  No one has such experience with
a real nuclear war.

    There were over
    100 programmers supporting this stuff, and it was properly managed and
    it all worked well.

Given the current estimates of SDI software size, the total
programming team might be an order of magnitude bigger.  100
programmers would be tiny.

    Is someone suggesting that the incoming target stream can not be
    simulated ?  Why not ? We do it now on launch profile simulations
    involving the DEW (Distant Early Warning) network and a lot of other
    sensor systems.

But ballistic missile attacks would be straightforward to simulate
now, because there are no defenses.  If you assume that the Soviets do
nothing differently, then maybe you could (though I personally doubt
that).
But the Soviets will react, and what gives you the confidence that you
can predict their new tactics?

       Is someone suggesting that PENAIDS (Penetration Aids) can not be
    simulated ?  Why not ? We do it now also.

Penaids that we know about we can simulate.  Penaids that we don't
know about we can't.

    Worst case studies just treat all of the PENAIDS as valid targets. If
    you can intercept THAT mess, then you can stop anything !

But you can't. Current threat cloud estimates range from a low of
30,000 to a high of a few million.  If you spend enough money, you
might be able to kill everything, but it seems unlikely that you can
kill them all with just a few thousand platforms in 20 minutes.
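
The back-of-the-envelope numbers (illustrative values drawn from the
estimates above):

    objects = 300000          # a mid-range threat cloud estimate
    platforms = 3000          # "a few thousand platforms"
    minutes = 20              # roughly the engagement window

    per_platform = objects / platforms   # 100 objects assigned to each
    rate = per_platform / minutes        # 5 kills per platform per minute
    print(f"each platform must score {rate:.0f} kills per minute, "
          f"sustained for {minutes} minutes, with no misses")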

    I get the feeling that people are assuming that the SDI software is
    going to be one long chunk of code running on one machine, and that if
    it ever sees anything that is not what it expects it's going to do a
    HALT and stop the entire process.

No critic has said this.  The fear is that the system will do
something that it should not do, of which halting is only one example.
The problem is that you can't predict what that something will be.

    So. The Challenge. People out there who think it is Impossible, please
    identify what is impossible. Pointing systems ? Target acquisition ?
    Target Classification ? Target discrimination ? Destruction of the
    targets ?

The hard thing is not any of these, and that fact illustrates the
primary issue in software as well.  The hard thing is knowing what the
Soviets will do; that places the specification of our software's
requirements in their hands, and they are unlikely to tell us what
they will do.
You've mentioned essentially the analog of implementation details --
serious, complicated, hard, maybe (or maybe not) impossible.  But
that's assuming a cooperative opponent.  

It seems that the real question on which we disagree is one raised by
the recent discussion of Scott's editorial and Freeman's response.
Computer programs handle a variety of inputs, even if we can't specify
in precise detail the exact sequence of bits that are input.  However,
our ability to write computer programs that do this is dependent on
our ability to formulate general rules that characterize the essential
features and regularities in the bit stream.  That is one reason why
writing compilers is easier than writing automatic translators from
English to French; rules for computer languages are easy, rules for
natural language are hard (and maybe impossible).
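
The compiler half of that comparison really is mechanical.  Here is a
complete rule for a made-up toy language (assignments like "x=1;");
nothing comparable can be written for English:

    import re

    # One rule characterizes every valid program in this toy language.
    GRAMMAR = re.compile(r"^([a-z]+=\d+;)+$")

    def valid(text):
        return GRAMMAR.match(text) is not None

    print(valid("x=1;y=22;"))   # True  -- inside the characterized class
    print(valid("x = one;"))    # False -- outside it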

Similarly, all military systems function in unknown environments,
i.e., environments that cannot be specified down to the last detail.
When these systems function as expected, the system designers must
have correctly predicted the essential features of the operating
environment -- you could say that they have been able to formulate
general rules that characterize the essential features and
regularities of the environment.

Critics of SDI have no faith that it is possible to capture the
essential features of ALL possible Soviet responses to SDI.  As a
non-critic of SDI, do you think we can?  Or do you think that this
criterion is too strong?

------------------------------

Date: Thu, 23 Oct 86 13:41:41 EDT
From: Douglas Humphrey <deh@eneevax.umd.edu>
Subject: Re:  SDI Impossibility

To LIN : It does seem that before people start making statements about the
         possibility of achieving a goal, they ought to define the goal
         as clearly as possible. I fear that it is this definition of the
         Primary, Secondary, etc. goals of an ABM related system that will
         (and does) cause people on both sides of the issue to declare 
         'jihad' and flame with great energy. This is an emotional issue,
         whereas I consider the actual science and engineering to be a
         challenge and a lot of (if I can use the word) fun.

         Concerning the technologies that this will be implemented in,
         I am reminded of two things. First, a friend of mine started
         writing a large system for an IBM 370/75 a long time back,
         knowing that the machine would be nowhere near fast enough
         to do the video processing things that he wanted to do. Orders
         of magnitude too slow. But that was OK, because it took him
         almost 10 years to develop the methodology for pattern
         recognition. By the time he had a commercial product from his
         research, it was running on an IBM 3084 and was more than fast
         enough. The same code processes some 4 million checks a day
         somewhere in the Federal Reserve system, and he is rich.
         Second, not too long ago
         I was pretty impressed with an Altair system that I owned. Seemed
         like a pretty fair amount of power, all things considered. Now we
         can joke about putting a bunch of 68030 chips together on our
         desks. I wonder what the '040 will do, or maybe the '050. Any bets
         that an '050 will be on the drawing boards by 1990 ? (maybe it is
         now). How in the world do we know what we will be using in the 
         way of technology when (if) this SDI stuff gets built ?

         Oops, was that a 360/75? I can't remember!

Doug

------------------------------

Subject: Star Wars, Lying, and the John Doe Syndrome
Date: Thu, 23 Oct 86 11:05:27 -0800
From: crummer@aerospace.ARPA

I think that most people would agree, if asked, that administration
spokesmen (or spokeswomen) lie to the public.  Not always, just when
they think or are told it is necessary.  For this reason I, for one,
don't believe anything that comes from the administration unless it is
corroborated by independent reports.  This bothers me a lot.  It is
actually a breakdown of the utility of language as a vehicle to convey
truth, and it limits language to the realm of persuasion, i.e.
"image" creation.

Witness the repackaging of the Reykjavik meeting.  First we were told
not to expect anything (what do we pay Mr. Reagan for, anyway?).  Next
our hopes rose as the media reported rumors of real breakthrough
activity.  Then the disappointment: our hopes were dashed as soon as
we saw Reagan's face as he left the meetings (a picture is worth more
than all the hype words that followed).  Finally the repackaging:
"Reagan was magnificent, I have never been more proud of my
president!"...  The Gipper, the Messiah from the proletariat of "John
Does," the immaculate embodiment of the "common man," can do no wrong.
This is a replay of the old Gary Cooper/Barbara Stanwyck movie "Meet
John Doe," in which Gary Cooper, selected at random from the masses,
becomes their Messiah, speaking eternal truth and wisdom and perfectly
reflecting the will of the people.

It is common knowledge that Star Wars was proposed by Reagan without
consultation with his scientific advisors.  Since the arts of science
and engineering do not have the benefit of anything equivalent to
mathematical proof, they must be based on the "engineering judgment"
of competent, serious workers.  Reagan's Star Wars proposal stands
outside the realm of truth or falsity, since it is a question.  The
answer, yes or no, will come from the best judgment of serious
experts, or it will come from the uncompromising physical world.
There really are things that can be stated that will NEVER be
possible.  What these are may not be clear at any given time.

If Star Wars is not used as a bargaining chip, I think that it will
soon crumble to little or nothing but spin-offs.  Already Pete Worden
of the SDIO has said that the orbiting laser battle station idea has
been scrapped, and in a recent debate at Stanford he also said that
the entire boost phase intercept concept might be scrapped and the
problem worked at the deployment, midcourse, and reentry phases!

  --Charlie

------------------------------

Date: Thu, 23 Oct 86 12:39:01 PDT
From: Clifford Johnson <GA.CJJ@forsythe.stanford.edu>
Subject:  RAND-ABEL's one-way street

I had some feedback on the RSAC system's modeling of international
relations.  I can verify that the "automatic response limit," which
determines the level below which a nonsuperpower will automatically
accede to its superpower's requests, is one-way.

This asymmetry is flowcharted in The RAND-ABEL Programming Language,
RAND R-3274-NA, Aug. 1985, at 5.  Although there are two-way arrows
labelled "Requests" between the superpowers and the Systems Monitor
module, the arrows labelled "Requests" between the superpowers and
nonsuperpowers are strictly one-way.  The thought that nonsuperpowers
might make a request of a superpower simply slipped by the nation's
strategic planners.
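
In data-structure terms, the asymmetry is just a missing edge.  A
sketch of the flowchart as an edge list (the names are mine, not
RAND's):

    # (sender, receiver) pairs along which "Requests" may flow.
    REQUEST_EDGES = {
        ("superpower", "systems_monitor"),
        ("systems_monitor", "superpower"),   # two-way with the monitor
        ("superpower", "nonsuperpower"),     # one-way: no reverse edge
    }

    def may_request(sender, receiver):
        return (sender, receiver) in REQUEST_EDGES

    print(may_request("superpower", "nonsuperpower"))  # True
    print(may_request("nonsuperpower", "superpower"))  # False -- the omission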

------------------------------

Date: Thu, 23 Oct 1986  16:22 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: SDI Impossibility

    From: Douglas Humphrey <deh at eneevax.umd.edu>

    To LIN : It does seem that before people start making statements about the
             possibility of achieving a goal, they ought to define the goal
             as clearly as possible.

Good.  Now, when you tell us that you do not believe that "SDI" is
impossible, what do *you* believe is possible to do?  Are there
any "SDI-type "things that you believe to be impossible?

------------------------------

End of Arms-Discussion Digest
*****************************