[mod.risks] RISKS-3.30

RISKS@CSL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (08/05/86)

RISKS-LIST: RISKS-FORUM Digest,  Monday, 4 August 1986  Volume 3 : Issue 30

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Ozone hole undetected (Jeffrey Mogul)
  Re: Risks of CAD (Henry Spencer)
  Comment on Hartford Civic Roof Design (Richard S D'Ippolito)
  Expert system to catch spies (Larry Van Sickle)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
(Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM)
  (Back issues Vol i Issue j available in CSL.SRI.COM:<RISKS>RISKS-i.j.
  Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

From: mogul@decwrl.DEC.COM (Jeffrey Mogul)
Date:  4 Aug 1986 1058-PDT (Monday)
To: Risks@csl.sri.com
Subject: Ozone hole undetected

Although I, too, am relying on memory, I'm pretty sure that the article Bill
McGarry mentioned was published in The New Yorker sometime during the past
two or three months.                   [Also something in Science a few issues
                                        ago on the phenomenon itself...  PGN]

My understanding is that it was not so much a case of the researchers
believing the satellite instead of other evidence, but rather that the
researchers who ran the satellite must not have been too terribly interested
in what was going on over the poles.  After all, if they were interested, I
would think they might have been bothered by large empty spots in their data.

As to Bill's being disturbed that "the satellite would observe this huge
drop in the ozone level year after year and just throw the results away", I
think this imputes a certain level of intelligence to the computer system
that probably isn't there.  I'd bet that their computer spits out maps of
the ozone layer, but probably doesn't have any facility to spot trends.

Still, it's obvious that a little more care in the decision to discard
anomalous data would have gone a long way.  When humans throw away
anomalous results, at least they realize that they are doing so [although
not always consciously; see Stephen Jay Gould's "The Mismeasure of Man".]
When a computer throws away anomalous data, the user might not be aware that
anything unusual is going on.  A good program would at least remark that it
has thrown away some fraction of the input data, to alert the user that
something might be amiss.
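
As a rough sketch of that idea (hypothetical C, invented for this note; it
is not the actual satellite software, and the "plausible" range limits are
made-up numbers):

    /* Hypothetical sketch only -- not the real ozone-mapping software.
     * Filter readings to a plausible range, but tell the user how much
     * was discarded instead of throwing it away silently.             */
    #include <stdio.h>

    #define LOW  180.0          /* assumed plausible range, Dobson units */
    #define HIGH 650.0

    int filter(double *in, int n, double *out)
    {
        int i, kept = 0;

        for (i = 0; i < n; i++)
            if (in[i] >= LOW && in[i] <= HIGH)
                out[kept++] = in[i];

        if (kept < n)
            fprintf(stderr,
                "warning: discarded %d of %d readings (%.1f%%) as anomalous\n",
                n - kept, n, 100.0 * (n - kept) / n);

        return kept;
    }

    int main(void)
    {
        double in[] = { 310.0, 295.0, 120.0, 305.0 };  /* 120.0 is "anomalous" */
        double out[4];
        int kept = filter(in, 4, out);

        printf("%d readings kept\n", kept);
        return 0;
    }

Even a one-line warning of that sort would have told someone that polar
readings were being rejected wholesale.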

------------------------------

From: decwrl!decvax!utzoo!henry@ucbvax.Berkeley.EDU
Date: Sun, 3 Aug 86 03:17:32 edt
To: decvax!CSL.SRI.COM!RISKS
Subject: Re: Risks of CAD

Alan Wexelblat comments:

> Petroski also fears that inadequate computer simulation is replacing crucial
> real testing...

One can see examples of the sort of engineering this produces in many pieces
of high-tech US military equipment.  In recent times, the criteria used
to evaluate a new military system have increasingly drifted away from straight
field-test results and toward complex and arbitrary scoring schemes only
vaguely related to real use.  Consider how many official reports on the
Sergeant York air-defence gun concluded, essentially, "no serious problems",
when people participating in actual trials clearly knew better.  Some of this
was probably deliberate obfuscation -- juggling the scoring scheme to make
the results look good -- but this was possible only because the evaluation
process was well divorced from the field trials.  Another infamous example
is the study a decade or so ago which seriously contended that the F-15 would
have a kill ratio of several hundred to one against typical opposition.
These are conspicuous cases because the evaluation results are so grossly
unrealistic, but a lot of this goes on, and the result is unreliable equipment
with poor performance.

It should be noted, however, that there is "real testing" and real testing.
Even the most realistic testing is usually no better than a fair facsimile
of worst-case real conditions.  The shuttle boosters superficially looked
all right because conditions had never been bad enough to produce major
failure.  The Copperhead laser-guided antitank shell looks good until you
note that most testing has been in places like Arizona, not in the cloud and
drizzle more typical of a land war in Europe.  Trustworthy test results
come from real efforts to produce realistic conditions and vary them as much
as possible; witness the lengthy and elaborate tests a new aircraft gets.
Even if the results of CAD do get real-world testing, one has to wonder
whether those tests will be scattered data points to "validate" the output
of simulations, as opposed to thorough efforts to uncover subtle flaws that
may be hiding between the data points.

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

Date: 4 Aug 1986 00:33:41-EDT
From: Richard.S.D'Ippolito@sei.cmu.edu
Subject: Comment on Hartford Civic Roof Design
Apparently-To: Risks@SRI-CSL.ARPA

I would like to point out that Alan Wexelblat's comment on inadequate use of
computers for CAD might be somewhat misleading regarding the roof modelling
for the Hartford Civic Center. The problem was that the program user
selected the wrong model for the beam connection to be used. When the 
program was re-run with the correct model, it predicted the collapse in 
precisely the mode in which it actually occurred. I'm not sure that was clear from
the wording in Mr. Wexelblat's comment, i.e., that the modelling was 
improperly done by the operator (GIGO again!).

Richard D'Ippolito, P.E.
Carnegie-Mellon University
Software Engineering Institute
(412)268-6752
rsd@SEI.CMU.EDU

------------------------------

Date: Wednesday, 23 July 1986  22:39-EDT
From: CS.VANSICKLE at R20.UTEXAS.EDU
To:   AIList                                  [REMAILED TO RISKS BY HERB LIN]
Re:   Expert system to catch spies

Today's (July 23, 1986) Wall Street Journal contains an editorial by Paul M.
Rosa urging the use of expert systems to identify potential spies (actually
traitors).  Mr. Rosa is a lawyer and a former intelligence analyst.  Since
virtually all American traitors sell out for money, an expert system
embodying the expertise of trained investigators could examine credit
histories, court files, registers of titled assets such as real estate and
vehicles, airline reservations, telephone records, income tax returns, bank
transactions, use of passports, and issuance of visas.  The system would
look for suspicious patterns and alert counter-intelligence officials for
further investigation.
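
As a purely hypothetical illustration of the kind of rule such a system
might embody (invented for this note; not taken from Mr. Rosa's editorial
or from any real system, and the thresholds are arbitrary):

    /* Hypothetical rule-based screen over financial records. */
    #include <stdio.h>

    struct record {
        char  *name;
        double income, debt, unexplained_deposits;
        int    foreign_trips, travel_authorized;
    };

    /* Print the reasons (if any) this made-up record might merit a look. */
    int screen(struct record *r)
    {
        int flags = 0;

        if (r->unexplained_deposits > 10000.0) {
            printf("%s: large deposits with no declared source\n", r->name);
            flags++;
        }
        if (r->debt > 5.0 * r->income) {
            printf("%s: debt far out of proportion to income\n", r->name);
            flags++;
        }
        if (r->foreign_trips > 6 && !r->travel_authorized) {
            printf("%s: frequent unauthorized foreign travel\n", r->name);
            flags++;
        }
        return flags;
    }

    int main(void)
    {
        struct record r = { "J. Doe", 30000.0, 200000.0, 25000.0, 8, 0 };

        if (screen(&r))
            printf("%s: refer for further investigation\n", r.name);
        return 0;
    }

The rules themselves could be this simple; assembling and linking the
records they examine is the harder part.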

There are some obvious considerations of privacy and legality, but those are
probably best discussed on another bulletin board.  Mr. Rosa says the system
would be used only on the 4.3 million people who hold security clearances,
who have consented to government scrutiny.

According to Mr. Rosa, "the obstacles to implementation are not
technological," and "the system could be implemented quickly and cheaply."
He predicts that the Soviets, working through their extensive international
banking network, will use the same techniques to identify potential
recruits.  He also says that the FBI has three expert systems for monitoring
labor rackets, narcotics shipments, and terrorist activities.

Any reactions?  Is this doable?  It strikes me as more of a data collection
problem than an expert system problem.  Is there anyone who knows more about
the FBI expert systems and can talk about them?

Larry Van Sickle
cs.vansickle@r20.utexas.edu
Computer Sciences Dept.
U of Texas at Austin
Austin, TX 78712

------------------------------

End of RISKS-FORUM Digest
************************