[mod.politics.arms-d] Arms-Discussion Digest V7 #12

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (09/16/86)

Arms-Discussion Digest              Tuesday, September 16, 1986 9:05AM
Volume 7, Issue 12

Today's Topics:

                            F-16 software
                            F-16 software
                        [nancy: F-16 software]
             [jon: Upside-down F-16's and "Human error"]
                    [preece%ccvaxa: F-16 software]
[CMP.WERNER: Captain Midnight & military satellites (Mother Jones, October 86)]
                  [mmdf: Failed mail  (msg.a007930)]
          [benson%wsu.csnet: Flight Simulators Have Faults]
           [gwhisen%ccvaxa: Flight Simulators Have Faults]
                  [mmdf: Failed mail  (msg.a022856)]
                       [eugene: F-16 software]
             [Doug_Wade%UBC.MAILNET: re. F-16 Software.]
                          Autonomous weapons
                      One student's view of SDI
           [chapman: "Unreasonable behavior" and software]
                 "Unreasonable behavior" and software

----------------------------------------------------------------------

Date: Tue, 16 Sep 1986  00:07 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: F-16 software

Date: Thursday, 4 September 1986  02:59-EDT
From: allegra!utzoo!henry at ucbvax.Berkeley.EDU
To:   RISKS-LIST:, allegra!CSL.SRI.COM!RISKS at ucbvax.Berkeley.EDU
Re:   F-16 software

Phil Ngai writes:

> It sounds very funny that the software would let you drop a bomb on the wing
> while in inverted flight but is it really important to prevent this? ...

This issue actually is even more complex than it sounds, because it may be
*desirable* to permit this in certain circumstances.  The question is not
whether the plane is upside down at the time of bomb release, but which way
the bomb's net acceleration vector is pointing.  If the plane is in level
flight upside-down, the vector points into the wing, which is a no-no.  But
the same thing can happen with the plane upright but pulling hard into a
dive.  Not common, but possible.  On the other side of the coin, some
"toss-bombing" techniques *demand* bomb release in unusual attitudes,
because aircraft maneuvering is being used to throw the bomb into an
unorthodox trajectory.  Toss-bombing is common when it is desired to bomb
from a distance (e.g. well-defended targets) or when the aircraft should
be as far away from the explosion as possible (e.g. nuclear weapons).
Low-altitude flight in rough terrain at high speeds can also involve quite
violent maneuvering, possibly demanding bomb release in other than straight-
and-level conditions.
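
A minimal sketch in C of the distinction Henry draws (hypothetical, not the
actual F-16 logic): a release interlock keyed to the load factor normal to
the wing -- which way the store will accelerate relative to the airframe --
rather than to roll attitude.  The margin value is an assumption for
illustration only.

    /* Hypothetical sketch, not the actual F-16 release logic.  A store
     * separates cleanly only if the net acceleration carries it away from
     * the aircraft, so the interlock looks at normal load factor rather
     * than at whether the aircraft happens to be inverted. */
    #include <stdbool.h>

    /* normal_load_g: load factor along the aircraft's lift axis, in g.
     * +1.0 in upright level flight; negative in inverted level flight or
     * in an upright pushover, where a released store would rise into the
     * wing. */
    bool release_permitted(double normal_load_g)
    {
        const double min_separation_g = 0.2;   /* assumed safety margin */
        return normal_load_g > min_separation_g;
    }

An attitude-only test ("not inverted") gets both cases wrong: it would permit
release during an upright negative-g pushover and forbid some toss-release
attitudes that are perfectly safe.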

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

Date: Tue, 16 Sep 1986  00:32 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: F-16 software

Date: Friday, 5 September 1986  13:19-EDT
From: rti-sel!dg_rtp!throopw%mcnc.csnet at CSNET-RELAY.ARPA
To:   RISKS-LIST:
Re:   F-16 software
Apparently-To: mcnc!csl.sri.com!risks

> It sounds very funny that the software would let you drop a bomb on the wing
> while in inverted flight but is it really important to prevent this? Is it
> worth the chance of introducing a new bug to fix this very minor problem?

>      [The probability is clearly NONZERO.  It is very dangerous to start
>       making assumptions in programming about being able to leave out an
>       exception condition simply because you think it cannot arise.  Such
>       assumptions have a nasty habit of interacting with other assumptions
>       or propagating.  PGN]

It is also dangerous to start making assumptions about the ways in which
the system will be used.  Can you really not think of a reason why one
would want to "drop" a bomb while the dorsal surface of the plane points
towards the planet's center (a possible interpretation of "inverted")?
I can think of several.

I am trying to make the point that the gross simplification of
"preventing bomb release while inverted" doesn't map very well to what I
assume the actual goal is: "preventing weapons discharge from damaging
the aircraft".  This is yet another instance where the assumptions made
to simplify a real-world situation to manageable size can easily lead to
design "errors", and is an architypical "computer risk" in the use of
relatively simple computer models of reality.

In addition to all this, it may well be that one doesn't *want* to
prevent all possible modes of weapons discharge that may damage the
aircraft...  some of them may be useful techniques for use in extreme
situations.

   The more control,
   The more that requires control.
   This is the road to chaos.
                                --- PanSpechi aphorism {Frank Herbert}

Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw

------------------------------

Date: Tue, 16 Sep 1986  00:35 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [nancy: F-16 software]

Date: Monday, 8 September 1986  12:53-EDT
From: Nancy Leveson <nancy at ICSD.UCI.EDU>
To:   RISKS-LIST:, risks at csl.sri.com
Re:   F-16 software

Wayne Throop writes:  

   >it may well be that one doesn't *want* to prevent all possible 
   >modes of weapons discharge that may damage the aircraft ... some of
   >them may be useful techniques for use in extreme situations.

This raises some extremely important points that should be remembered
by those attempting to deal with risk.

   1) nothing can be made 100% safe under all circumstances.  In papers I
      have written I have pointed out that safety razors and safety matches
      are not completely safe, they are only *safer* than their alternatives.
      Drinking water is usually considered safe, but drinking too much water
      can cause kidney failure.  
   
   2) the techniques used to make things safer usually involve
      limiting functionality or design freedom and thus involve tradeoffs
      with other desirable characteristics of the product.

All we can do is attempt to provide "acceptable risk."  What is "acceptable" 
will depend upon moral, political, and practical issues such as how much
we are willing to "pay" for a particular level of safety.

I define "software safety" as involving procedures to ensure that the
software will execute within a system context without resulting in
unacceptable risk.  This implies that when building safety-critical systems, 
one of the first and most important design problems may be in identifying
the risks and determining what will be considered acceptable risk for that
system.  And just as important, our models and techniques are going to have
to consider the tradeoffs implicit in any attempt to enhance safety and
to allow estimation of the risk implicit in any design decisions.  
If we have such models, then we can use them for decision making, including 
the decision about whether acceptable risk can be achieved (and thus
whether the system can and should be built).  If it is determined that
acceptable risk can be achieved, then the models and techniques should 
provide help in making the necessary design decisions and tradeoffs.
The important point is that these decisions should be carefully considered
and not subject to the whim of one programmer who decides in an ad hoc
fashion whether or not to put in the necessary checks and interlocks.

      Nancy Leveson
      Information & Computer Science
      University of California, Irvine

------------------------------

Date: Tue, 16 Sep 1986  00:36 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [jon: Upside-down F-16's and "Human error"]

Date: Monday, 8 September 1986  19:55-EDT
From: jon at uw-june.arpa (Jon Jacky)
To:   RISKS-LIST:, risks at CSL.SRI.COM
Re:   Upside-down F-16's and "Human error"

> (... earlier postings mentioned "fly-by-wire" F-16 computer would 
> attempt to raise landing gear while aircraft was sitting on runway,
> would attempt to drop bombs while flying inverted, and other such 
> maneuvers -- in response to pilot's commands

These are regarded as errors?  Maybe I'm missing something, but it sounds 
like the right solution is to remind the pilots not to attempt obviously
destructive maneuvers.  I detect a notion floating about that software 
should prevent any unreasonable behavior.  This way lies madness.  Do we have 
to include code to prevent the speed from exceeding 55 mph while taxiing down
an interstate highway?

My point is, if you take the approach that the computer is supposed to check
for and prevent any incorrect behavior, then you have saddled yourself with
the task of enumerating every possible thing the system should NOT do.  Such a 
list of prohibited behaviors is likely to be so long it will make the 
programming task quite intractable, not to mention that you will never get all
of them.

I suggest that the correct solution is the time-honored one: the operator must
be assumed to possess some level of competence; no attempt is made to 
protect against every conceivable error that might be committed by a flagrantly
incompetent or malicious operator.

Note that all non-computerized equipment is designed this way.  If I steer my
car into a freeway abutment, I am likely to get killed.  Is this a "design
flaw" or an "implementation bug?"  Obviously, it is neither.  People who are
drunk or suicidal are advised not to drive.

This relates to the ongoing discussion about "human error."  This much-abused
term used to refer to violations of commonly accepted standards of operator
performance -- disobeying clear instructions, attempting to work when drunk, 
things like that.  Apparently it has come to refer to almost any behavior which,
in retrospect, turns out to have unfortunate consequences.  It is sometimes 
applied to situations for which the operator was never trained, and which the 
people who installed the system had not even anticipated.  

When abused in this way, the term "human error" can be a transparent attempt
to deflect blame from designers and management to those with the least control
over events.  Other times, however, it is evidence of genuine confusion over
who is responsible for what.  Right at the beginning, designers must draw a
clear line between what the automated system is supposed to do and what the
operators must do.  This may require facing the painful truth that there 
may be situations where, if the operator makes a mistake, a real disaster
may occur.  The choice is then one of ensuring the trustworthiness of the
operators, or finding an alternative approach to the problem that is more
robust.  

I suggest that if additional computer-based checking against operator errors
keeps getting added on after the system has been installed, it is evidence that
the role of the operator was not very clearly defined to begin with.

-Jonathan Jacky
University of Washington

------------------------------

Date: Tue, 16 Sep 1986  00:36 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [preece%ccvaxa: F-16 software]

Date: Monday, 8 September 1986  10:36-EDT
From: Scott E. Preece <preece%ccvaxa at GSWD-VMS.ARPA>
To:   RISKS-LIST:, RISKS at CSL.SRI.COM
Re:   F-16 software

> From: amdcad!phil@decwrl.DEC.COM (Phil Ngai)

> It sounds very funny that the software would let you drop a bomb on the
> wing while in inverted flight but is it really important to prevent
> this?

Others have already pointed out that sometimes you may WANT to
release the bomb when inverted.  I would ask the more obvious
question: Would a mechanical bomb release keep you from releasing
the bomb when inverted?  I tend to doubt it.  While it's nice
to think that a software controlled plane should be smarter than
a mechanical plane, I don't think it's fair to cite as an error
in the control software that it isn't smarter than a mechanical
plane...

If, in fact, the mechanical release HAD protected against inverted
release, I would have expected that to be part of the specs for
the plane; I would also expect that the acceptance tests for the
software controlled plane would test all of the specs and that
the fault would have been caught in that case.

scott preece
gould/csd - urbana
uucp:	ihnp4!uiucdcs!ccvaxa!preece
arpa:	preece@gswd-vms

------------------------------

Date: Tue, 16 Sep 1986  00:38 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [CMP.WERNER: Captain Midnight & military satellites (Mother Jones, October 86)]

Date: Monday, 8 September 1986  01:01-EDT
From: Werner Uhrig <CMP.WERNER at R20.UTEXAS.EDU>
To:   RISKS-LIST:, telecom at R20.UTEXAS.EDU, risks at R20.UTEXAS.EDU
Re:   Captain Midnight & military satellites (Mother Jones, October 86)

[ pointer to article in print:  Mother Jones, Oct '86 Cover Story on Satellite
  Communications Security (or lack thereof) ]

(p.26)	CAPTAIN MIDNIGHT, HBO, AND WORLD WAR III - by Donald Goldberg
	John "Captain Mignight" MacDougall has been caught but the flaws he
exposed in the U.S. military and commercial ssatellite communications system
are still with us and could lead to far scarier things than a $12.95 monthly
cable charge.

(p.49)	HOME JAMMING: A DO-IT-YOURSELF GUIDE - by Donald Goldberg
	What cable companies and the Pentagon don't want you to know.

PS: Donald Goldberg is described as "senior reporter in Washington, D.C., for
the syndicated Jack Anderson column."

[ this is not an endorsement of the article, just a pointer.
  you be the judge of the contents. ]

------------------------------

Date: Tue, 16 Sep 1986  00:49 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [mmdf: Failed mail  (msg.a007930)]

Date: Monday, 1 September 1986  08:36-EDT
From: VAX Memo Service (MMDF 4/84) <mmdf at VAX.BBN.COM>
Sender: mmdf at VAX.BBN.COM
To:   ARMS-D-Request
Re:   Failed mail  (msg.a007930)

    Your message could not be delivered to
'nbreen@BBN.COM (host: bbn.com) (queue: smtp)' for the following
reason:  ' (USER) Unknown user name in "nbreen@BBN.COM"'


    Your message begins as follows:

Received: from xx.lcs.mit.edu by VAX.BBN.COM id a007930; 1 Sep 86 8:28 EDT
Date: 1 Sep 86 08:13-EDT
From: Moderator <ARMS-D-Request@XX.LCS.MIT.EDU>
Subject: Arms-Discussion Digest V7 #10
To: ARMS-D@XX.LCS.MIT.EDU
Reply-To: ARMS-D@XX.LCS.MIT.EDU

Arms-Discussion Digest                Monday, September 1, 1986 8:13AM
Volume 7, Issue 10
...

------------------------------

Date: Tue, 16 Sep 1986  00:52 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [benson%wsu.csnet: Flight Simulators Have Faults]

Date: Sunday, 31 August 1986  02:08-EDT
From: Dave Benson <benson%wsu.csnet at CSNET-RELAY.ARPA>
To:   RISKS-LIST:, risks%csl.sri.com at CSNET-RELAY.ARPA
Re:   Flight Simulators Have Faults

I mentioned the F-16 RISKS contributions to my Software Engineering class
yesterday.  After class, one of the students told me the following story about
the B-1 Flight Simulator. The student had been employed over the summer to
work on that project, thus having first-hand knowledge of the incident.

It seems that when a pilot attempts to loop the B-1 Flight Simulator,
the (simulated) sky disappears.  Why?  Well, the simulated aircraft
pitch angle was translated by the software into a visual image by
taking the trigonometric tangent somewhere in the code.  With the
simulated aircraft on its nose, the angle is 90 degrees and the
tangent routine just couldn't manage the infinities involved.  As I
understand the story, the monitors projecting the window view went blank.
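
A toy reconstruction in C of the failure mode as described (the actual
simulator code is not public): projecting the horizon through tan(pitch)
diverges as pitch approaches 90 degrees, while clamping the angle away from
the singularity keeps the image defined through a loop.

    /* Toy illustration of the reported failure, not the B-1 simulator code.
     * Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    #define DEG2RAD (3.14159265358979323846 / 180.0)

    /* Naive projection: tan() blows up as pitch -> 90 degrees. */
    double horizon_offset_naive(double pitch_deg, double focal_len)
    {
        return focal_len * tan(pitch_deg * DEG2RAD);
    }

    /* Guarded version: clamp the angle just short of the singularity so the
     * computation stays finite when the simulated aircraft is on its nose. */
    double horizon_offset_guarded(double pitch_deg, double focal_len)
    {
        const double limit = 89.9;
        if (pitch_deg >  limit) pitch_deg =  limit;
        if (pitch_deg < -limit) pitch_deg = -limit;
        return focal_len * tan(pitch_deg * DEG2RAD);
    }

    int main(void)
    {
        for (double p = 88.0; p <= 92.0; p += 1.0)
            printf("pitch %5.1f  naive %14.4g  guarded %12.4g\n",
                   p, horizon_offset_naive(p, 1.0),
                   horizon_offset_guarded(p, 1.0));
        return 0;
    }

A real image generator would presumably reformulate the projection so the
looping case is handled geometrically rather than merely clamped, but the
point is that the singular direction has to be anticipated somewhere.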

Ah, me.  The B-1 is the first aircraft with the capability to loop?  Nope,
it's been done for about 70 years now...  The B-1 Flight Simulator is the
first flight simulator with the capability to allow loops?  Nope, seems to
me I've played with a commercially available Apple IIe program in which a
capable player could loop the simulated Cessna 180.  $$ to donuts that
military flight simulators with all functionality in software have been
allowing simulated loops for many years now.

Dick Hamming said something to the effect that while physicists stand on one
another's shoulders, computer scientists stand on one another's toes.  At
least standing on toes is better than this failure to do as well as a game
program...  Maybe software engineers dig one another's graves?

And this company wants to research Starwars software...  Jus' beam me up,
Scotty, there's no intelligent life here.

------------------------------

Date: Sun, 14 Sep 86 22:40:51 pdt
From: Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>

Subject: I found one! (A critical real-time application worked the first time)

Last spring I issued a call for hard data to refute a hypothesis which I,
perhaps mistakenly, called the Parnas Hypothesis:
	No large computer software has ever worked the first time.
Actually, I was only interested in military software, so let me repost the
challenge in the form I am most interested in:
	NO MILITARY SOFTWARE (large or small) HAS EVER WORKED IN ITS FIRST
	OPERATIONAL TEST OR ITS FIRST ACTUAL BATTLE.
Contradict me if you can. (Send citations to the open literature
to benson@wsu via csnet)

Last spring's request for data has finally led to the following paper:
	Bonnie A. Claussen, II
	VIKING '75 -- THE DEVELOPMENT OF A RELIABLE FLIGHT PROGRAM
	Proc. IEEE COMPSAC 77 (Computer Software & Applications Conference)
	IEEE Computer Society, 1977
	pp. 33-37

I offer some quotations for your delectation:

	The 1976 landings of Viking 1 and Viking 2 upon the surface of
	Mars represented a significant achievement in the United States
	space exploration program. ... The unprecedented success of the Viking
	mission was due in part to the ability of the flight software
	to operate in an autonomous and error free manner. ...
	Upon separation from the Orbiter the Viking Lander, under autonomous
	software control, deorbits, enters the Martian atmosphere,
	and performs a soft landing on the surface. ... Once upon the surface,
	... the computer and its flight software provide the means by
	which the Lander is controlled.  This control is semi-autonomous
	in the sense that Flight Operations can only command the Lander
	once a day at 4 bit/sec rate.

(Progress occurred in a NASA contract over a decade ago, in that)

	In the initial stages of the Viking flight program development,
	the decision was made to test the flight algorithms and determine
	the timing, sizing and accuracy requirements that should be 
	levied upon the flight computer prior to computer procurement.
	... The entire philosophy of the computer hardware and
	software reliability was to "keep it simple."  Using the
	philosophy of simplification, modules and tasks tend toward 
	straight line code with minimum decisions and minimum
	interactions with other modules.

(It was lots of work, as)

	When questioning the magnitude of the quality assurance task,
	it should be noted that the Viking Lander flight program development
	required approximately 135 man-years to complete.

(But the paper gives no quantitative data about program size or complexity.)

Nevertheless, we may judge this as one of the finest software engineering
accomplishments to date.  The engineers on this project deserve far more
plaudits than they've received.  I know of no similar piece of software
with so much riding upon its reliable behavior which has done so well.
(If you do, please do tell me about it.)

However, one estimates that this program is on the order of kilolines of FORTRAN
and assembly code, probably less than one hundred kilolines.  Thus
Parnas will need to judge for himself whether or not the Viking Lander
flight software causes him to abandon (what I take to be) his hypothesis
about programs not working the first time.

It doesn't cause me to abandon mine because there were no Martians shooting
back, as far as we know...

David B. Benson, Computer Science Department, Washington State University,
Pullman, WA 99164-1210  csnet: benson@wsu

------------------------------

Date: Tue, 16 Sep 1986  01:01 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [gwhisen%ccvaxa: Flight Simulators Have Faults]

Date: Tuesday, 2 September 1986  11:35-EDT
From: Gary Whisenhunt <gwhisen%ccvaxa at GSWD-VMS.ARPA>
To:   RISKS-LIST:, RISKS at CSL.SRI.COM
Re:   Flight Simulators Have Faults

    I developed flight simulators for over 7 years and could describe many such
bizarre incidents.  I seriously doubt that the sky went blank in the B-1
simulator when it was delivered to the government.  Military simulators have
formal acceptance tests that last for months.  The last one that I worked on
had a test procedure over 12 inches thick.  To point out a failure during
testing (or more likely development) seems meaningless.  Failures that make
it into the actual product are what should be of concern.
    Most flight simulators procured by the Air Force and the Navy require
Mil-Std 1644 or Mil-Std 1679 to be followed when developing software.  These
standards detail how software is to be developed and tested.  The standards
are fairly strict and exhaustive.  This is to ensure product correctness 
even if it incurs greater costs.  It would be an interesting study for a 
class in Software Engineering.
    The greatest risk that I see from flight simulators (especially
military ones) is that the simulator often lags behind the aircraft in
functionality by a year or 2.  Simulators require design data to be frozen
at a certain date so that the simulator can be designed using consistent,
tested data.  After 2 years of development, the aircraft may have changed
functionally (sometimes in subtle ways) from the simulator design.  The
effect is much more dramatic for newer aircraft than it is for more
established ones.  The simulator is upgraded, but during the upgrade period
pilots train on a simulator that is mildly different from their aircraft.
    As for the effectiveness of simulators, I've been told by more than one
pilot that the simulator saved his life because he was able to practice
malfunction conditions in the simulator that prepared him for a real emergency
that occurred later.

Gary Whisenhunt
Gould Computer Systems Division
Urbana, Ill.

    [I thought that by now these simulators were designed so that they could
     be driven by the same software that is used in the live aircraft -- a
     change in one place would be reflected by the same change in the other,
     allowing the application code to change without having to modify the
     simulator itself.  Maybe not...  PGN]

------------------------------

Date: Tue, 16 Sep 1986  08:20 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [mmdf: Failed mail  (msg.a022856)]

Date: Tuesday, 16 September 1986  01:40-EDT
From: BRL Memo Service (MMDF 4/84) <mmdf at BRL.ARPA>
Sender: mmdf at BRL.ARPA
To:   arms-d-request at BRL.ARPA
Re:   Failed mail  (msg.a022856)

    Your message could not be delivered to
'MCLAUGHLINJR@a.isi.edu (host: a.isi.edu) (queue: smtp)' for the following
reason:  ' Unknown user - MCLAUGHLINJR@a.isi.edu'


    Your message begins as follows:

Received: from brl-vgr.arpa by SMOKE.BRL.ARPA id a022830; 16 Sep 86 1:24 EDT
Received: from XX.LCS.MIT.EDU by VGR.BRL.ARPA id aa00462; 16 Sep 86 1:10 EDT
Date: 15 Sep 86 23:51-EDT
From: Moderator <ARMS-D-Request@XX.LCS.MIT.EDU>
Subject: Arms-Discussion Digest V7 #11
To: ARMS-D@XX.LCS.MIT.EDU
Reply-To: ARMS-D@XX.LCS.MIT.EDU
Message-ID:  <8609160112.aa00462@VGR.BRL.ARPA>

Arms-Discussion Digest              Monday, September 15, 1986 11:51PM
Volume 7, Issue 11
...

------------------------------

Date: Tue, 16 Sep 1986  08:24 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [eugene: F-16 software]

Date: Wednesday, 10 September 1986  16:17-EDT
From: eugene at AMES-NAS.ARPA (Eugene Miya)
To:   RISKS-LIST:, risk at sri-csl.ARPA
Re:   F-16 software

It seems F-16's are a hot topic everywhere.  I think it's a novelty
thing, like computers, except for aeronautics.

> I am trying to make the point that the gross simplification of
> "preventing bomb release while inverted" doesn't map very well to what I
> assume the actual goal is: "preventing weapons discharge from damaging
> the aircraft".  This is yet another instance where the assumptions made
> to simplify a real-world situation to manageable size can easily lead to
> design "errors", and is an architypical "computer risk" in the use of
> relatively simple computer models of reality.
> 
> Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw

Excellent point.

Several things strike me about this problem.  First, the language used
by writers up to this point doesn't include words like "centrifugal force"
and "gravity."  This worries me about the training of some computer people
for jobs like writing mission critical software [Whorf's "If the word
does not exist, the concept does not exist."]  I am awaiting a paper
by Whitehead which I am told talks about some of this.

It can certainly be acknowledged that there are uses which are novel
(Spencer cites "lob" bombing, and others cite other reasons [all marginal]),
but equal concern must be given to straight-and-level flight AND those
novel cases.  In other words, we have to assume some skill on the part of
pilots [Is this arrogance on our part?]. 

Another problem is that planes and Shuttles do not have the types of sensory
mechanisms which living organisms have.  What is damage if we cannot
"sense it?"  Sensing equipment costs weight.  I could see some interesting
dialogues ala "Dark Star."

Another thing is that the people who write simulations seem to have
great difficulty discriminating between the quality of their simulations
and the "real world" in the presence of incomplete cues (e.g., G-forces,
visual cues, etc.) when relying solely on things like instrument displays
[e.g., pilot: "Er, you notice that we are flying on empty tanks?" disturbed
pilot expression,  programmer: "Ah, it's just a simulation."]
Computer people seem to be "ever the optimist."  Besides, would you ever
get into a real plane with a pilot who's only been in simulators?

Most recently, another poster brought up the issue of autonomous weapons.
We had a discussion of this at the last Palo Alto CPSR meeting.
Are autonomous weapons moral?  If an enemy has a white flag or hands up,
is the weapon "smart enough" to know the Geneva Convention (or is that too
moral for the programmers of such systems)?

On the subject of flight simulators: I visited Singer Link two years
ago (We have a DIG 1 system which we are replacing).  I "crashed" underneath
the earth and the polygon structure became more "visible."  It was like
being underneath Disneyland.

--eugene miya			sorry for the length, RISKS covered a lot.
  NASA Ames Research Center
  President
  Bay Area ACM SIGGRAPH

------------------------------

Date: Tue, 16 Sep 1986  08:26 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [Doug_Wade%UBC.MAILNET: re. F-16 Software.]

Date: Wednesday, 10 September 1986  14:42-EDT
From: Doug_Wade%UBC.MAILNET at MIT-MULTICS.ARPA
To:   RISKS-LIST:, risks at csl.sri.com
Re:   re. F-16 Software.

Reading comments about putting restraints on jet performance within
the software reminded me of a  conversation I had a few years ago
at an air-show.
In talking to a pilot who flew F-4's in Vietnam, he mentioned that
the F-4 specs said a turn exerting more than, say, 8 G's would cause
the wings to "fall off."  However, in avoiding SAMs or ground fire
they would pull perhaps double this with no such result.
  My comment to this is: what if an 8G limit had been programmed into
the plane (if it had been fly-by-wire)?  Planes might have been hit and
lost which otherwise were saved by violent maneuvers. With a SAM targeted
on your jet, nothing could be lost by exceeding the structural limitations
of the plane since it was a do-or-die situation.
I'm sure 99.99% of the lifetime of a jet is spent within designed
specifications, but should software limit the plane the one time
a pilot needs to override this constraint?
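
A minimal sketch in C of the tradeoff being asked about, assuming a
hypothetical fly-by-wire command limiter with an explicit pilot override
rather than a hard ceiling.  The numbers are illustrative, not F-4 or F-16
figures.

    /* Hypothetical sketch, not any real flight-control law: the limiter
     * normally holds the commanded load factor to the structural limit,
     * but an explicit override lets the pilot trade airframe life for
     * survival in a do-or-die situation. */
    #include <stdbool.h>

    double limit_g_command(double pilot_cmd_g, bool pilot_override)
    {
        const double structural_limit_g = 8.0;    /* illustrative spec limit */
        const double override_limit_g   = 12.0;   /* assumed ultimate margin */

        double ceiling = pilot_override ? override_limit_g : structural_limit_g;
        return (pilot_cmd_g > ceiling) ? ceiling : pilot_cmd_g;
    }

Whether such an override belongs in the control laws at all is exactly the
policy question raised above; the sketch only shows that "limit" and "never
exceed" need not be the same design decision.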

------------------------------

Date: Tue, 16 Sep 1986  08:31 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Autonomous weapons


    From: eugene at AMES-NAS.ARPA (Eugene Miya)

    ... another poster brought up the issue of autonomous weapons.
    We had a discussion of this at the last Palo Alto CPSR meeting.
    Are autonomous weapons moral?  If an enemy has a white flag or hands up,
    is the weapon "smart enough" to know the Geneva Convention (or is that too
    moral for the programmers of such systems)?

What do you consider an autonomous weapon?  Some anti-tank devices are
intended to recognize tanks and then attack them without human
intervention after they have been launched (so-called fire-and-forget
weapons).  But they still must be fired under human control.  *People*
are supposed to recognize white flags and surrendering soldiers.

------------------------------

Date: Tue, 16 Sep 1986  08:48 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: One student's view of SDI


    From: GROSS at BCVAX3.BITNET (Rob Gross)
    *****  Forwarded message  *****

    A Computer Science Student's View of SDI:

    What is SDI's purpose?

    Answer:  SDI's purpose is to defend the U.S. from ICBMs that enter the
    upper ionosphere by using laser or particle beam weapons that would
    destroy the incoming missiles.

    This all sounds great!! No more threat of Nuclear War!!  But this is
    not the fifties, where the vast majority of warheads were in
    intercontinental ballistic missiles!!!

False.  They were mostly in bombers then.

    I recall reading that less
    than 25% of all nuclear weapons are now ICBMs.  

True for the U.S., not true for the Soviets, who have 75% of their
warheads in ICBMs.

------------------------------

Date: Tue, 16 Sep 1986  08:55 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [chapman: "Unreasonable behavior" and software]

Date: Tuesday, 9 September 1986  17:28-EDT
From: Gary Chapman <chapman at russell.stanford.edu>
To:   RISKS-LIST:, RISKS at CSL.SRI.COM
Re:   "Unreasonable behavior" and software

Jon Jacky wrote:

	I detect a notion floating about that software should 
	prevent any unreasonable behavior.  This way lies mad-
	ness.  Do we have to include code to prevent the speed
	[of an F-16] from exceeding 55 mph while taxiing down
	an interstate highway?

I certainly agree with the thrust of this.  But we should note that there is
plenty of evidence that coding in prohibitions on unreasonable behavior will
be required, particularly in the development of "autonomous" weapons that
are meant to combat the enemy without human "operators" on the scene.

Here's a description of a contract let by the United States Army Training and
Doctrine Command (TRADOC), Field Artillery Division, for something called a
"Terminal Homing Munition" (THM):

	Information about targets can be placed into the munitions
	processor prior to firing along with updates on meteorologi-
	cal conditions and terrain.  Warhead functioning can also be
	selected as variable options will be available.  The intro-
	duction of VHSIC processors will give the terminal homing
	munitions the capability of distinguishing between enemy and
	friendly systems and finite target type selection.  Since
	the decision of which target to attack is made on board the
	weapon, the THM will approach human intelligence in this area.
	The design criteria is pointed toward one munition per target
	kill.

(I scratched my head along with the rest of you when I saw this;  I've always
thought if you fire a bullet or a shell out of a tube it goes until it hits
something, preferably something you're aiming at.  But maybe the Army has
some new theories of ballistics we don't know about yet.)

As Nancy Leveson notes, we make tradeoffs in design and functionality for
safety, and how many and what kinds of tradeoffs are made depends on ethical,
political and cost considerations, among other things.  Since, as Jon Jacky
notes, trying to prohibit all unreasonable situations in code is itself un-
reasonable, one wonders what sorts of things will be left out of the code
of terminal homing munitions?  What sorts of things will we have to take into
account in the code of a "warhead" that is supposed to find its own targets?
What level of confidence would we have to give soldiers (human soldiers--we
may have to get used to using that caveat) operating at close proximity to
THMs that the things are "safe"?

I was once a participant in an artillery briefing by a young, smart artillery
corps major.  This officer told us (a bunch of grunts) that we no longer needed
"forward observers," or guys attached to patrols to call in the ranges on
artillery strikes.  In fact, said the major, we don't need to call in our
artillery strikes at all--his methods had become so advanced that he would
just know where and when we needed support.  We all looked at him like he had
gone stark raving mad.  An old grizzled master sergeant who had been in the Army
since Valley Forge I think, got up and said, "Sir, with all due respect, if I
find out you're in charge of the artillery in my sector, I will personally come
back and shoot you right between the eyes."  (His own form of THM "approaching
human intelligence", no doubt.) (I wouldn't be surprised if this major wrote
the language above.)

What is "unreasonable" behavior to take into account in coding software?  The
major's or the sergeant's?
							-- Gary Chapman

------------------------------

Date: Tue, 16 Sep 1986  09:01 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: "Unreasonable behavior" and software


    From: Gary Chapman <chapman at russell.stanford.edu>
    	Information about targets can be placed into the munitions
    	processor prior to firing along with updates on meteorologi-
    	cal conditions and terrain.  Warhead functioning can also be
    	selected as variable options will be available.  The intro-
    	duction of VHSIC processors will give the terminal homing
    	munitions the capability of distinguishing between enemy and
    	friendly systems and finite target type selection.  Since
    	the decision of which target to attack is made on board the
    	weapon, the THM will approach human intelligence in this area.
    	The design criteria is pointed toward one munition per target
    	kill.

    (I scratched my head along with the rest of you when I saw this;
    I've always
    thought if you fire a bullet or a shell out of a tube it goes until it hits
    something, preferably something you're aiming at.  But maybe the Army has
    some new theories of ballistics we don't know about yet.)

The THM is an example of what the army calls a "fire-and-forget"
munition. A human being fires it in the general direction of the
target, and then the munition seeks out its target without further
intervention.  The munition has mechanisms to alter its course from a
ballistic trajectory.

    What level of confidence would we have to give soldiers (human soldiers--we
    may have to get used to using that caveat) operating at close proximity to
    THMs that the things are "safe"?

That is indeed the question.  My own guess is that THMs and other
smart munitions will never be able to distinguish between friend and
foe.  That's why most current concepts are directed towards attacking
enemy forces deep behind enemy lines, where you can ASSUME that
anything you see is hostile.

------------------------------

End of Arms-Discussion Digest
*****************************
16-Sep-86 09:46:35-EDT,37018;000000000000
Date: Tuesday, 16 September 1986  09:20-EDT
From: The Mailer Daemon <Mailer>
To:   ARMS-D-Request
Re:   XX:<MAILQ>[--QUEUED-MAIL--].NEW-ARMSD.1

No such host as "LEAR.STANFORD.EDU",
bad queue file follows:
-------
_XX.LCS.MIT.EDU
ARMS-D-Request
A.CS.CMU.EDU
ARMS-D-MSGS
Andreas.Nowatzyk
Thomas.Rodeheffer
ACC.ARPA
BBOARD.ARMSD
ALMSA-1.ARPA
wmartin
AMES.ARPA
arms-d-local
ANDREW.CMU.EDU
arpadigests
APG-1.ARPA
arms-d
ARDEC-LCSS.ARPA
BECK
ATHENA.MIT.EDU
bkdavis
gkalonji
B.ISI.EDU
ISI-ARMS-D
BBN-VAX.ARPA
arms-d
BRL.ARPA
arms-d-redist
Berkeley.edu
mtxinu!sybase!marcy
CCA.CCA.COM
ima!inmet!brianu%cca-unix.arpa
CRYS.WISC.EDU
pal
CS.UCL.AC.UK
ucl-arms-d
CSL.SRI.COM
BBOARD-ARMS-D
CSNET-RELAY.ARPA
8440827%wwu.csnet
BROCK%sc.intel.com
Cerys%TI-CSL
EFREEMAN%HERC%rca.com
MAZUR%gmr.com
arms-d.umass-coins
armslist%lsu
dietz%slb-doll.csnet
dietz@slb-test.csnet
kjs%tufts.csnet
rpg%brown.csnet
uci-arms-d.uci
velu%umcp-cs
arms-d%farg.umich
CSVAX.CALTECH.EDU
ametek!walton
engvax!ymir!ben
CVL.UMD.EDU
sven
DECWRL.DEC.COM
wachsmuth%gva04.DEC
DEEP-THOUGHT.MIT.EDU
ARMS-D-BBOARD
DREA-XX.ARPA
ARMS-D
E.ISI.EDU
SAC.NEACP
ERNIE.BERKELEY.EDU
tedrick
HAMLET.CALTECH.EDU
ARMS-D-USERS
HARVARD.HARVARD.EDU
arms-d-incoming
HAWAII-EMH.ARPA
cfccs
HT.AI.MIT.EDU
armsd
IBM.COM
rlg2
JPL-VLSI.ARPA
august
KESTREL.ARPA
arms-d
LANL.ARPA
Post-Arms-D
LBL-RTSG.ARPA
jef
LEAR.STANFORD.EDU
ARMS-D
LLL-CRG.ARPA
arms-d-lll
LLL-MFE.ARPA
greyzck terry%e.mfenet
LOGICON.ARPA
arms
LOUIE.UDEL.EDU
dist-arms-d
MC.LCS.MIT.EDU
BGL
FFM
REM
RHB
SASWX
MULTICS.MIT.EDU
JSLove
Schiller
PCO-disty
arms-disty
NCSC.ARPA
steve
NJITCCCC.BITNET
Marty
NPRDC.ARPA
west
NRL-AIC.ARPA
bbd-arms-d
NRL-CSS.ARPA
arms-d
NWC-143B.ARPA
estell
schwartz
NYU.ARPA
ARMS_D
OFFICE-1.ARPA
ARMS-D.MDC
OZ.AI.MIT.EDU
EGK
FONER-magazines
sidney
wayne
R20.UTEXAS.EDU
CS.DSTUART
RED.RUTGERS.EDU
Carter
MCGREW%RU-BLUE
RICE.EDU
hudel
RSCH.WISC.EDU
herb
SDCSVAX.UCSD.EDU
jvz
SEISMO.CSS.GOV
harvard!bu-cs!sam
SRI-NIC.ARPA
Arms-D
SU-RUSSELL.ARPA
chapman
SU-SHASTA.ARPA
arms-d-redistribution
SUN.COM
dirk%words
UCBVAX.BERKELEY.EDU
post-arms-d
sun!edh
sun!oscar!wild
sun!thales!toma
ucscc!cpsr
USC-ECL.ARPA
LOCAL-ARMS-D
UW-JUNE.ARPA
jon
VX.LCS.MIT.EDU
RAE
WASHINGTON.ARPA
Borning
WHARTON-10.ARPA
maarten
WISCVM.WISC.EDU
26421079%NMSUVM1.BITNET
APRI1801%UA.BITNET
ARMS-L%KLA.WESLYN%WESLEYAN.BITNET
ATSWAF%UOFT01.BITNET
C0144%CSUOHIO.BITNET
CN0001ER%UKCC.BITNET
COMPSCI%WSUVM1.BITNET
CS0250EI%UKCC.BITNET
CWM%PSUVM.BITNET
DEP%SLACVM.BITNET
FJOHNSO3%UA.BITNET
FQOJ%CORNELLA.BITNET
Flash%UMass.BITNET
GA.CJJ%Stanford.BITNET
GROSS%BCVAX3.BITNET
KEN%NJITCCCC.BITNET
KRAMER%ANLHEP.BITNET
LA%DDAESA10.BITNET
MEK-MK%FINHUT.BITNET
MT354TMW%YALEVMX.BITNET
NETNEWS%ULKYVX.BITNET
RMADSEN%NORUNIT.BITNET
RSHEPHE%UA.BITNET
SRCHP%SLACVM.BITNET
UBIQUI%TUCC.BITNET
YBMCU%CUNYVM.BITNET
oth104%bostonu.bitnet
XEROX.COM
ArmsDiscussion^.x
XX.LCS.MIT.EDU
*ps:<arms-d>archive.current
BBARMS-D
LIN
YANNIS
joseph
ZARATHUSTRA.THINK.COM
art
aerospace.arpa
arms-d
csnet-relay.arpa
ARMS%ti-eg
gosset.wisc.edu
carwardi
mitre-bedford.arpa
arms-d
nswc-wo.ARPA
jlynch
su-forsythe.arpa
gd.aml

Date: 16 Sep 86 09:05-EDT
From: Moderator <ARMS-D-Request@XX.LCS.MIT.EDU>
Subject: Arms-Discussion Digest V7 #12
To: ARMS-D@XX.LCS.MIT.EDU
Reply-To: ARMS-D@XX.LCS.MIT.EDU

Arms-Discussion Digest              Tuesday, September 16, 1986 9:05AM
Volume 7, Issue 12

Today's Topics:

                            F-16 software
                            F-16 software
                            F-16 software
                 Upside-down F-16's and "Human error"
                            F-16 software
  Captain Midnight & military satellites (Mother Jones, October 86)
                            Administrivia
                    Flight Simulators Have Faults
        A critical real-time application worked the first time
                    Flight Simulators Have Faults
                            F-16 software
                            F-16 Software
                          Autonomous weapons
                      One student's view of SDI
                 "Unreasonable behavior" and software
                 "Unreasonable behavior" and software

----------------------------------------------------------------------

Date: Thursday, 4 September 1986  02:59-EDT
From: allegra!utzoo!henry at ucbvax.Berkeley.EDU
To:   arms-d, RISKS@CSL.SRI.COM
Re:   F-16 software

Phil Ngai writes:

> It sounds very funny that the software would let you drop a bomb on the wing
> while in inverted flight but is it really important to prevent this? ...

This issue actually is even more complex than it sounds, because it may be
*desirable* to permit this in certain circumstances.  The question is not
whether the plane is upside down at the time of bomb release, but which way
the bomb's net acceleration vector is pointing.  If the plane is in level
flight upside-down, the vector points into the wing, which is a no-no.  But
the same thing can happen with the plane upright but pulling hard into a
dive.  Not common, but possible.  On the other side of the coin, some
"toss-bombing" techniques *demand* bomb release in unusual attitudes,
because aircraft maneuvering is being used to throw the bomb into an
unorthodox trajectory.  Toss-bombing is common when it is desired to bomb
from a distance (e.g. well-defended targets) or when the aircraft should
be as far away from the explosion as possible (e.g. nuclear weapons).
Low-altitude flight in rough terrain at high speeds can also involve quite
violent maneuvering, possibly demanding bomb release in other than straight-
and-level conditions.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

Date: Friday, 5 September 1986  13:19-EDT
From: rti-sel!dg_rtp!throopw%mcnc.csnet at CSNET-RELAY.ARPA
To:   RISKS@csl.sri.com, arms-d
Re:   F-16 software

> It sounds very funny that the software would let you drop a bomb on the wing
> while in inverted flight but is it really important to prevent this? Is it
> worth the chance of introducing a new bug to fix this very minor problem?

>      [The probability is clearly NONZERO.  It is very dangerous to start
>       making assumptions in programming about being able to leave out an
>       exception condition simply because you think it cannot arise.  Such
>       assumptions have a nasty habit of interacting with other assumptions
>       or propagating.  PGN]

It is also dangerous to start making assumptions about the ways in which
the system will be used.  Can you really not think of a reason why one
would want to "drop" a bomb while the dorsal surface of the plane points
towards the planet's center (a possible interpretation of "inverted")?
I can think of several.

I am trying to make the point that the gross simplification of
"preventing bomb release while inverted" doesn't map very well to what I
assume the actual goal is: "preventing weapons discharge from damaging
the aircraft".  This is yet another instance where the assumptions made
to simplify a real-world situation to manageable size can easily lead to
design "errors", and is an architypical "computer risk" in the use of
relatively simple computer models of reality.

In addition to all this, it may well be that one doesn't *want* to
prevent all possible modes weapons discharge that may damage the
aircraft...  some of them may be useful techniques for use in extreme
situations.

   The more control,
   The more that requires control.
   This is the road to chaos.
                                --- PanSpechi aphorism {Frank Herbert}

Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw

------------------------------

Date: Monday, 8 September 1986  12:53-EDT
From: Nancy Leveson <nancy at ICSD.UCI.EDU>
To:   arms-d, risks at csl.sri.com
Re:   F-16 software

Wayne Throop writes:  

   >it may well be that one doesn't *want* to prevent all possible 
   >modes weapons discharge that may damage the aircraft ... some of
   >them may be useful techniques for use in extreme situations.

This raises some extremely important points that should be remembered
by those attempting to deal with risk.

   1) nothing can be made 100% safe under all circumstances.  In papers I
      have written I have pointed out that safety razors and safety matches
      are not completely safe, they are only *safer* than their alternatives.
      Drinking water is usually considered safe, but drinking too much water
      can cause kidney failure.  
   
   1) the techniques used to make things safer usually involve
      limiting functionality or design freedom and thus involve tradeoffs
      with other desirable characteristics of the product.

All we can do is attempt to provide "acceptable risk."  What is "acceptable" 
will depend upon moral, political, and practical issues such as how much
we are willing to "pay" for a particular level of safety.

I define "software safety" as involving procedures to ensure that the
software will execute within a system context without resulting in
unacceptable risk.  This implies that when building safety-critical systems, 
one of the first and most important design problems may be in identifying
the risks and determining what will be considered acceptable risk for that
system.  And just as important, our models and techniques are going to have
to consider the tradeoffs implicit in any attempt to enhance safety and
to allow estimation of the risk implicit in any design decisions.  
If we have such models, then we can use them for decision making, including 
the decision about whether acceptable risk can be achieved (and thus
whether the system can and should be built).  If it is determined that
acceptable risk can be achieved, then the models and techniques should 
provide help in making the necessary design decisions and tradeoffs.
The important point is that these decisions should be carefully considered
and not subject to the whim of one programmer who decides in an ad hoc
fashion whether or not to put in the necessary checks and interlocks.

      Nancy Leveson
      Information & Computer Science
      University of California, Irvine

------------------------------

Date: Monday, 8 September 1986  19:55-EDT
From: jon at uw-june.arpa (Jon Jacky)
To:   arms-d, risks at CSL.SRI.COM
Re:   Upside-down F-16's and "Human error"

> (... earlier postings mentioned "fly-by-wire" F-16 computer would 
> attempt to raise landing gear while aircraft was sitting on runway,
> would attempt to drop bombs while flying inverted, and other such 
> maneuvers -- in response to pilot's commands

These are regarded as errors?  Maybe I'm missing something, but it sounds 
like the right solution is to remind the pilots not to attempt obviously
destructive maneuvers.  I detect a notion floating about that software 
should prevent any unreasonable behavior.  This way lies madness.  Do we have 
to include code to prevent the speed from exceeding 55 mph while taxiing down
an interstate highway?

My point is, if you take the approach that the computer is supposed to check
for and prevent any incorrect behavior, then you have saddled yourself with
the task enumerating every possible thing the system should NOT do.  Such a 
list of prohibited behaviors is likely to be so long it will make the 
programming task quite intractable, not to mention that you will never get all
of them.

I suggest that the correct solution is the time-honored one: the operator must
be assumed to possess some level of competence; no attempt is made to 
protect against every conceivable error that might be committed by a flagrantly
incompetent or malicious operator.

Note that all non-computerized equipment is designed this way.  If I steer my
car into a freeway abutment, I am likely to get killed.  Is this a "design
flaw" or an "implementation bug?"  Obviously, it is neither.  People who are
drunk or suicidal are advised not to drive.

This relates to the ongoing discusssion about "human error."  This much-abused
term used to refer to violations of commonly accepted standards of operator
performance -- disobeying clear instructions, attempting to work when drunk, 
things like that.  Apparently it has come to refer to almost any behavior which,
in retrospect, turns out to have unfortunate consequences.  It is sometimes 
applied to situations for which the operator was never trained, and which the 
people who installed the system had not even anticipated.  

When abused in this way, the term "human error" can be a transparent attempt
to deflect blame from designers and management to those with the least control
over events.  Other times, however, it is evidence of genuine confusion over
who is responsible for what.  Right at the beginning, designers must draw a
clear line between what the automated system is supposed to do and what the
operators must do.  This may require facing the painful truth that there 
may be situations where, if the operator makes a mistake, a real disaster
may occur.  The choice is then one of ensuring the trustworthiness of the
operators, or finding an alternative approach to the problem that is more
robust.  

I suggest that if additional computer-based checking against operator errors
keeps getting added on after the system has been installed, it is evidence that
the role of the operator was not very clearly defined to begin with.

-Jonathan Jacky
University of Washington

------------------------------

Date: Monday, 8 September 1986  10:36-EDT
From: Scott E. Preece <preece%ccvaxa at GSWD-VMS.ARPA>
To:   arms-d, RISKS at CSL.SRI.COM
Re:   F-16 software

> From: amdcad!phil@decwrl.DEC.COM (Phil Ngai)

> It sounds very funny that the software would let you drop a bomb on the
> wing while in inverted flight but is it really important to prevent
> this?

Others have already pointed out that sometimes you may WANT to
release the bomb when inverted.  I would ask the more obvious
question: Would a mechanical bomb release keep you from releasing
the bomb when inverted?  I tend to doubt it.  While it's nice
to think that a software controlled plane should be smarter than
a mechanical plane, I don't think it's fair to cite as an error
in the control software that it isn't smarter than a mechanical
plane...

If, in fact, the mechanical release HAD protected against inverted
release, I would have expected that to be part of the specs for
the plane; I would also expect that the acceptance tests for the
software comtrolled plane would test all of the specs and that
the fault would have been caught in that case.

scott preece
gould/csd - urbana
uucp:	ihnp4!uiucdcs!ccvaxa!preece
arpa:	preece@gswd-vms

------------------------------

Date: Monday, 8 September 1986  01:01-EDT
From: Werner Uhrig <CMP.WERNER at R20.UTEXAS.EDU>
To:   arms-d, risks at csl.sri.com
Re:   Captain Midnight & military satellites (Mother Jones, October 86)

[ pointer to article in print:  Mother Jones, Oct '86 Cover Story on Satellite
  Communications Security (or lack thereof) ]

(p.26)	CAPTAIN MIDNIGHT, HBO, AND WORLD WAR III - by Donald Goldberg
	John "Captain Mignight" MacDougall has been caught but the flaws he
exposed in the U.S. military and commercial ssatellite communications system
are still with us and could lead to far scarier things than a $12.95 monthly
cable charge.

(p.49)	HOME JAMMING: A DO-IT-YOURSELF GUIDE - by Donald Goldberg
	What cable companies and the Pentagon don;t want you to know.

PS: Donald Goldberg is described as "senior reporter in Washington, D.C., for
the syndicated Jack Anderson column

[ this is not an endorsement of the article, just a pointer.
  you be the judge of the contents. ]

------------------------------

Date: Tue, 16 Sep 1986  00:49 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: [mmdf: Failed mail  (msg.a007930)]

==>> Someone at BBN please help:

    Date: Monday, 1 September 1986  08:36-EDT
    From: VAX Memo Service (MMDF 4/84) <mmdf at VAX.BBN.COM>
    Sender: mmdf at VAX.BBN.COM
    To:   ARMS-D-Request
    Re:   Failed mail  (msg.a007930)

	Your message could not be delivered to
    'nbreen@BBN.COM (host: bbn.com) (queue: smtp)' for the following
    reason:  ' (USER) Unknown user name in "nbreen@BBN.COM"'

==>>  Someone please help from BRL:


    Date: Tuesday, 16 September 1986  01:40-EDT
    From: BRL Memo Service (MMDF 4/84) <mmdf at BRL.ARPA>
    Sender: mmdf at BRL.ARPA
    To:   arms-d-request at BRL.ARPA
    Re:   Failed mail  (msg.a022856)

	Your message could not be delivered to
    'MCLAUGHLINJR@a.isi.edu (host: a.isi.edu) (queue: smtp)' for the following
    reason:  ' Unknown user - MCLAUGHLINJR@a.isi.edu'

------------------------------

Date: Sunday, 31 August 1986  02:08-EDT
From: Dave Benson <benson%wsu.csnet at CSNET-RELAY.ARPA>
To:   arms-d, risks@csl.sri.com
Re:   Flight Simulators Have Faults

I mentioned the F-16 RISKS contributions to my Software Engineering class
yesterday.  After class, one of the students told me the following story about
the B-1 Flight Simulator. The student had been employed over the summer to
work on that project, thus having first-hand knowledge of the incident.

Seems when a pilot attempts to loop the B-1 Flight Simulator that
the (simulated) sky disappears.  Why?  Well, the simulated aircraft
pitch angle was translated by the software into a visual image by
taking the trigonometric tangent somewhere in the code.  With the
simulated aircraft on its nose, the angle is 90 degrees and the
tangent routine just couldn't manage the infinities involved.  As I
understand the story, the monitors projecting the window view went blank.

Ah, me.  The B-1 is the first aircraft with the capability to loop?  Nope,
its been done for about 70 years now...  The B-1 Flight Simulator is the
first flight simulator with the capability to allow loops?  Nope, seems to
me I've played with a commercially available Apple IIe program in which a
capable player could loop the simulated Cessna 180.  $$ to donuts that
military flight simulators with all functionality in software have been
allowing simulated loops for many years now.

Dick Hamming said something to the effect that while physicists stand on one
another's shoulders, computer scientists stand on one another's toes.  At
least on the toes is better than this failure to do as well as a game
program...  Maybe software engineers dig one another's graves?

And this company wants to research Starwars software...  Jus' beam me up,
Scotty, there's no intelligent life here.

------------------------------

Date: Sun, 14 Sep 86 22:40:51 pdt
From: Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
Subject: I found one! (A critical real-time application worked the first time)

Last spring I issued a call for hard data to refute a hypothesis which I,
perhaps mistakenly, called the Parnas Hypothesis:
	No large computer software has ever worked the first time.
Actually, I was only interested in military software, so let me repost the
challenge in the form I am most interested in:
	NO MILITARY SOFTWARE (large or small) HAS EVER WORKED IN ITS FIRST
	OPERATIONAL TEST OR ITS FIRST ACTUAL BATTLE.
Contradict me if you can. (Send citations to the open literature
to benson@wsu via csnet)

Last spring's request for data has finally led to the following paper:
	Bonnie A. Claussen, II
	VIKING '75 -- THE DEVELOPMENT OF A RELIABLE FLIGHT PROGRAM
	Proc. IEEE COMPSAC 77 (Computer Software & Applications Conference)
	IEEE Computer Society, 1977
	pp. 33-37

I offer some quotations for your delictation:

	The 1976 landings of Viking 1 and Viking 2 upon the surface of
	Mars represented a significant achievement in the United States
	space exploration program. ... The unprecented success of the Viking
	mission was due in part to the ability of the flight software
	to operate in an autonomous and error free manner. ...
	Upon separation from the Oribiter the Viking Lander, under autonomous
	software control, deorbits, enters the Martian atmosphere,
	and performs a soft landing on the surface. ... Once upon the surface,
	... the computer and its flight software provide the means by
	which the Lander is controlled.  This control is semi-autonomous
	in the sense that Flight Operations can only command the Lander
	once a day at 4 bit/sec rate.

(Progress occured in a NASA contract over a decade ago, in that)

	In the initial stages of the Viking flight program development,
	the decision was made to test the flight algorithms and determine
	the timing, sizing and accuracy requirements that should be 
	levied upon the flight computer prior to computer procurement.
	... The entire philosophy of the computer hardware and
	software reliability was to "keep it simple."  Using the
	philosophy of simplification, modules and tasks tend toward 
	straight line code with minium decisions and minimum
	interactions with other modules.

(It was lots of work, as)

	When questioning the magnitude of the qulity assurance task,
	it should be noted that the Viking Lander flight program development
	required approximately 135 man-years to complete.

(But the paper gives no quantitative data about program size or complexity.)

Nevertheless, we may judge this as one of the finest software engineering
acomplishments to date.  The engineers on this project deserve far more
plaudits than they've received.  I know of no similar piece of software
with so much riding upon its reliable behavior which has done so well.
(If you do, please do tell me about it.)

However, one estimates that this program is on the order of kilolines
of FORTRAN and assembly code, probably less than one hundred
kilolines.  Thus Parnas will need to judge for himself whether or not
the Viking Lander flight software causes him to abandon (what I take
to be) his hypothesis about programs not working the first time.

It doesn't cause me to abandon mine because there were no Martians shooting
back, as far as we know...

David B. Benson, Computer Science Department, Washington State University,
Pullman, WA 99164-1210  csnet: benson@wsu

------------------------------

Date: Tuesday, 2 September 1986  11:35-EDT
From: Gary Whisenhunt <gwhisen%ccvaxa at GSWD-VMS.ARPA>
To:   arms-d, RISKS at CSL.SRI.COM
Re:   Flight Simulators Have Faults

    I developed flight simulators for over 7 years and could describe many such
bizarre incidents.  I seriously doubt that the sky went blank in the B-1
simulator when it was delivered to the government.  Military simulators have
formal acceptance tests that last for months.  The last one that I worked on
had a test procedure over 12 inches thick.  To point out a failure during
testing (or more likely development) seems meaningless.  Failures that make
it into the actual product are what should be of concern.
    Most flight simulators procured by the Air Force and the Navy require
Mil-Std 1644 or Mil-Std 1679 to be followed when developing software.  These
standards detail how software is to be developed and tested.  The standards
are fairly strict and exhaustive.  This is to ensure product correctness
even if it incurs greater costs.  It would be an interesting study for a
class in Software Engineering.
    The greatest risk that I see from flight simulators (especially
military ones) is that the simulator often lags behind the aircraft in
functionality by a year or two.  Simulators require design data to be frozen
at a certain date so that the simulator can be designed using consistent,
tested data.  After two years of development, the aircraft may have changed
functionally (sometimes in subtle ways) from the simulator design.  The
effect is much more dramatic for newer aircraft than it is for more
established ones.  The simulator is upgraded, but during the upgrade period
pilots train on a simulator that is mildly different from their aircraft.
    As for the effectiveness of simulators, I've been told by more than one
pilot that the simulator saved his life because he was able to practice
malfunction conditions in the simulator that prepared him for a real emergency
that occurred later.

Gary Whisenhunt
Gould Computer Systems Division
Urbana, Ill.

    [I thought that by now these simulators were designed so that they could
     be driven by the same software that is used in the live aircraft -- a
     change in one place would be reflected by the same change in the other,
     allowing the application code to be changed without having to modify the
     simulator itself.  Maybe not...  PGN]
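
(One way to picture the arrangement PGN describes: write the flight
application against an abstract I/O interface, and let the simulator supply
its own implementation of that interface, so a change to the flight logic
automatically shows up in both places.  The Python sketch below is purely
illustrative -- the interface and all names are invented, and no actual
simulator is claimed to work this way.)

    from abc import ABC, abstractmethod

    class AircraftIO(ABC):
        """Interface the flight application code is written against."""
        @abstractmethod
        def read_pitch_deg(self) -> float: ...
        @abstractmethod
        def set_elevator(self, deflection: float) -> None: ...

    class SimulatorIO(AircraftIO):
        """Backed by a simulated flight-dynamics model rather than real hardware;
        a LiveAircraftIO class would instead talk to the actual sensor/actuator bus."""
        def __init__(self, model):
            self.model = model
        def read_pitch_deg(self) -> float:
            return self.model.pitch_deg
        def set_elevator(self, deflection: float) -> None:
            self.model.apply_elevator(deflection)

    def pitch_hold_step(io: AircraftIO, target_deg: float, gain: float = 0.1) -> None:
        # One iteration of a trivial pitch-hold loop; the identical code runs
        # against either the live-aircraft or the simulator implementation.
        error = target_deg - io.read_pitch_deg()
        io.set_elevator(gain * error)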

------------------------------

Date: Wednesday, 10 September 1986  16:17-EDT
From: eugene at AMES-NAS.ARPA (Eugene Miya)
To:   arms-d, risk at sri-csl.ARPA
Re:   F-16 software

It seems F-16's are a hot topic everywhere.  I think it's a novelty
thing, like computers, but for aeronautics.

> I am trying to make the point that the gross simplification of
> "preventing bomb release while inverted" doesn't map very well to what I
> assume the actual goal is: "preventing weapons discharge from damaging
> the aircraft".  This is yet another instance where the assumptions made
> to simplify a real-world situation to manageable size can easily lead to
> design "errors", and is an architypical "computer risk" in the use of
> relatively simple computer models of reality.
> 
> Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw

Excellent point.

Several things strike me about this problem.  First, the writers up to
this point haven't used words like "centrifugal force" and "gravity."
This worries me about the training of some computer people for jobs like
writing mission-critical software [Whorf's "If the word does not exist,
the concept does not exist."]  I am awaiting a paper by Whitehead which
I am told talks about some of this.

It can certainly be acknowledged that there are uses which are novel
(Spencer cites "lob" bombing, and others cite other reasons [all marginal]),
but equal concern must be given to straight-and-level flight AND those
novel cases.  In other words, we have to assume some skill on the part of
pilots [Is this arrogance on our part?].

Another problem is that planes and Shuttles do not have the types of sensory
mechanisms which living organisms have.  What is damage if we cannot
"sense it?"  Sensing equipment costs weight.  I could see some interesting
dialogues a la "Dark Star."

Another thing is that the people who write simulations seem to have
great difficulty discriminating between the quality of their simulations
and the "real world" in the presence of incomplete cues (e.g., G-forces,
visual cues, etc.) when relying solely on things like instrument displays
[e.g., pilot: "Er, you notice that we are flying on empty tanks?" disturbed
pilot expression, programmer: "Ah, it's just a simulation."]
Computer people seem to be "ever the optimist."  Besides, would you ever
get into a real plane with a pilot who's only been in simulators?

Most recently, another poster brought up the issue of autonomous weapons.
We had a discussion of this at the last Palo Alto CPSR meeting.
Are autonomous weapons moral?  If an enemy shows a white flag or hands up,
is the weapon "smart enough" to know the Geneva Convention (or is that too
moral for programmers of such systems)?

On the subject of flight simulators: I visited Singer Link two years
ago (We have a DIG 1 system which we are replacing).  I "crashed" underneath
the earth and the polygon structure became more "visible."  It was like
being underneath Disneyland.

--eugene miya			sorry for the length, RISKS covered a lot.
  NASA Ames Research Center
  President
  Bay Area ACM SIGGRAPH

------------------------------

Date: Wednesday, 10 September 1986  14:42-EDT
From: Doug_Wade%UBC.MAILNET at MIT-MULTICS.ARPA
To:   arms-d, risks at csl.sri.com
Re:   F-16 Software.

Reading comments about putting restraints on jet performance within
the software reminded me of a conversation I had a few years ago
at an air show.
A pilot who flew F-4's in Vietnam mentioned that the F-4 specs said a
turn exerting more than, say, 8 G's would cause the wings to "fall off".
However, in avoiding SAMs or ground fire they would pull perhaps double
this with no such result.
  My question is: what if an 8G limit had been programmed into
the plane (had it been fly-by-wire)?  Planes might have been hit and
lost which otherwise were saved by violent maneuvers.  With a SAM targeted
on your jet, nothing could be lost by exceeding the structural limitations
of the plane, since it was a do-or-die situation.
I'm sure 99.99% of the lifetime of a jet is spent within design
specifications, but should software limit the plane the one time
a pilot needs to override this constraint?
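
(Wade's question can be made concrete with a small sketch: a fly-by-wire
limiter can be a hard clamp, or a soft limit with a pilot-commanded override
for do-or-die situations.  The Python below is a toy illustration; the limits
and the override switch are hypothetical, not taken from any real aircraft.)

    DESIGN_LIMIT_G = 8.0      # hypothetical structural design limit
    OVERRIDE_LIMIT_G = 12.0   # hypothetical ceiling when the pilot overrides

    def commanded_g(pilot_demand_g, override_engaged=False):
        # With a hard limit, the software never honors a demand beyond the
        # design limit.  With the override engaged (say, a paddle switch),
        # the limit is relaxed so a violent evasive maneuver remains possible,
        # at the pilot's own risk.
        limit = OVERRIDE_LIMIT_G if override_engaged else DESIGN_LIMIT_G
        return max(-limit, min(limit, pilot_demand_g))

    print(commanded_g(11.0))                         # clamped to 8.0
    print(commanded_g(11.0, override_engaged=True))  # 11.0 allowed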

------------------------------

Date: Tue, 16 Sep 1986  08:31 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: Autonomous weapons


    From: eugene at AMES-NAS.ARPA (Eugene Miya)

    ... another poster brought up the issue of autonomous weapons.
    We had a discussion of this at the last Palo Alto CPSR meeting.
    Are autonomous weapons moral?  If an enemy shows a white flag or hands up,
    is the weapon "smart enough" to know the Geneva Convention (or is that too
    moral for programmers of such systems)?

What do you consider an autonomous weapon?  Some anti-tank devices are
intended to recognize tanks and then attack them without human
intervention after they have been launched (so-called fire-and-forget
weapons).  But they still must be fired under human control.  *People*
are supposed to recognize white flags and surrendering soldiers.

------------------------------

Date: Tue, 16 Sep 1986  08:48 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: One student's view of SDI


    From: GROSS at BCVAX3.BITNET (Rob Gross)
    *****  Forwarded message  *****

    A Computer Science Student's View of SDI:

    What is SDI's purpose?

    Answer:  SDI's purpose is to defend the U.S. from ICBMs that enter the
    upper ionosphere by using laser or particle beam weapons that would
    destroy the incoming missiles.

    This all sounds great!! No more threat of Nuclear War!!  But this is
    not the fifties, where the vast majority of warheads were in
    intercontinental ballistic missiles!!!

False.  They were mostly in bombers then.

    I recall reading that less
    than 25% of all nuclear weapons are now ICBMs.  

True for the U.S., not true for the Soviets, who have 75% of their
warheads in ICBMs.

------------------------------

Date: Tuesday, 9 September 1986  17:28-EDT
From: Gary Chapman <chapman at russell.stanford.edu>
To:   arms-d, RISKS at CSL.SRI.COM
Re:   "Unreasonable behavior" and software

Jon Jacky wrote:

	I detect a notion floating about that software should 
	prevent any unreasonable behavior.  This way lies mad-
	ness.  Do we have to include code to prevent the speed
	[of an F-16] from exceeding 55 mph while taxiing down
	an interstate highway?

I certainly agree with the thrust of this.  But we should note that there is
plenty of evidence that coding in prohibitions on unreasonable behavior will
be required, particularly in the development of "autonomous" weapons that
are meant to combat the enemy without human "operators" on the scene.

Here's a description of a contract let by the United States Army Training and
Doctrine Command (TRADOC), Field Artillery Division, for something called a
"Terminal Homing Munition" (THM):

	Information about targets can be placed into the munitions
	processor prior to firing along with updates on meteorologi-
	cal conditions and terrain.  Warhead functioning can also be
	selected as variable options will be available.  The intro-
	duction of VHSIC processors will give the terminal homing
	munitions the capability of distinguishing between enemy and
	friendly systems and finite target type selection.  Since
	the decision of which target to attack is made on board the
	weapon, the THM will approach human intelligence in this area.
	The design criteria is pointed toward one munition per target
	kill.

(I scratched my head along with the rest of you when I saw this;  I've always
thought if you fire a bullet or a shell out of a tube it goes until it hits
something, preferably something you're aiming at.  But maybe the Army has
some new theories of ballistics we don't know about yet.)

As Nancy Leveson notes, we make tradeoffs in design and functionality for
safety, and how many and what kinds of tradeoffs are made depends on ethical,
political and cost considerations, among other things.  Since, as Jon Jacky
notes, trying to prohibit all unreasonable situations in code is itself
unreasonable, one wonders what sorts of things will be left out of the code
of terminal homing munitions.  What sorts of things will we have to take into
account in the code of a "warhead" that is supposed to find its own targets?
What level of confidence would we have to give soldiers (human soldiers--we
may have to get used to using that caveat) operating in close proximity to
THMs that the things are "safe"?

I was once a participant in an artillery briefing by a young, smart artillery
corps major.  This officer told us (a bunch of grunts) that we no longer needed
"forward observers," or guys attached to patrols to call in the ranges on
artillery strikes.  In fact, said the major, we don't need to call in our
artillery strikes at all--his methods had become so advanced that he would
just know where and when we needed support.  We all looked at him like he had
gone stark raving mad.  An old grizzled master sergeant, who had been in the
Army since Valley Forge I think, got up and said, "Sir, with all due respect, if I
find out you're in charge of the artillery in my sector, I will personally come
back and shoot you right between the eyes."  (His own form of THM "approaching
human intelligence", no doubt.) (I wouldn't be surprised if this major wrote
the language above.)

What is "unreasonable" behavior to take into account in coding software?  The
major's or the sergeant's?
							-- Gary Chapman

------------------------------

Date: Tue, 16 Sep 1986  09:01 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: "Unreasonable behavior" and software


    From: Gary Chapman <chapman at russell.stanford.edu>
    	Information about targets can be placed into the munitions
    	processor prior to firing along with updates on meteorologi-
    	cal conditions and terrain.  Warhead functioning can also be
    	selected as variable options will be available.  The intro-
    	duction of VHSIC processors will give the terminal homing
    	munitions the capability of distinguishing between enemy and
    	friendly systems and finite target type selection.  Since
    	the decision of which target to attack is made on board the
    	weapon, the THM will approach human intelligence in this area.
    	The design criteria is pointed toward one munition per target
    	kill.

    (I scratched my head along with the rest of you when I saw this;
    I've always
    thought if you fire a bullet or a shell out of a tube it goes until it hits
    something, preferably something you're aiming at.  But maybe the Army has
    some new theories of ballistics we don't know about yet.)

The THM is an example of what the army calls a "fire-and-forget"
munition. A human being fires it in the general direction of the
target, and then the munition seeks out its target without further
intervention.  The munition has mechanisms to alter its course from a
ballistic trajectory.
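
(For readers wondering what "mechanisms to alter its course from a ballistic
trajectory" might look like, here is a deliberately simplified Python sketch
of one classic homing scheme, proportional navigation: command lateral
acceleration in proportion to how fast the line of sight to the target is
rotating.  It illustrates the general idea only; nothing here is claimed
about how any actual THM is guided, and all names are invented.)

    def los_rate(own_pos, own_vel, tgt_pos, tgt_vel):
        # Rotation rate (rad/s) of the line of sight from munition to target
        # in a 2-D plane: (r x v) / |r|^2, with relative position r and
        # relative velocity v.
        rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
        vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
        r2 = rx * rx + ry * ry
        return (rx * vy - ry * vx) / r2 if r2 > 0.0 else 0.0

    def pn_lateral_accel(los_rate_rad_s, closing_speed_m_s, nav_gain=3.0):
        # Proportional navigation: steer hard enough to drive the line-of-sight
        # rate toward zero, which puts the munition on a collision course.
        return nav_gain * closing_speed_m_s * los_rate_rad_s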

    What level of confidence would we have to give soldiers (human soldiers--we
    may have to get used to using that caveat) operating at close proximity to
    THMs that the things are "safe"?

That is indeed the question.  My own guess is that THMs and other
smart munitions will never be able to distinguish friend from
foe.  That's why most current concepts are directed towards attacking
enemy forces deep behind enemy lines, where you can ASSUME that
anything you see is hostile.

------------------------------

End of Arms-Discussion Digest
*****************************
