[mod.risks] RISKS-3.18

RISKS@CSL.SRI.COM (RISKS FORUM, Peter G. Neumann, Coordinator) (07/09/86)

RISKS-LIST: RISKS-FORUM Digest,  Tuesday, 8 July 1986  Volume 3 : Issue 18

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Computer Crime in Scandinavia (Martin Minow)
  Re: Risks from inappropriate scale of energy technologies (Henry Spencer)
  Sensor technology and disinformation (Eugene Miya)
  Educating to prevent RISKS (Steven Gutfreund)
  Rash of 'Undeliverable mail' (Chuck Price)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
(Contributions to RISKS@SRI-CSL.ARPA, Requests to RISKS-Request@SRI-CSL.ARPA.)
  (Back issues Vol i Issue j available in SRI-CSL:<RISKS>RISKS-i.j.
  Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

Date: 04-Jul-1986 0922
From: minow%pauper.DEC@decwrl.DEC.COM  (Martin Minow, DECtalk Engineering ML3-1/U47 223-9922)
To: risks@csl.sri.com
Subject: Computer Crime in Scandinavia

From the Danish newspaper, Information, (I think on 31-May-1986):

Datatapping -- The Oslo [Norway] firm, Finn Solvang A/S, has reported a
Danish engineer to the police in Denmark for an attempt to get a woman
employed by the firm to tap the company's computer system for valuable
information on customer lists and design.  The woman was offered money and
instruction on how she could do the work during a weekend.  The engineer is
employed by a Danish firm which had collaborated with the Norwegian, but
which became a competitor at the beginning of the year.

Martin Minow

(In my note on Chernobyl, I accidentally translated the Danish
word "chokerade" as "choked" when it should be "shocked" -- that's
what comes from writing with my fingers and not my mind.  Funny
that my spelling checker didn't catch it...

A few native speakers of Danish confirmed that the sentence I wasn't
too certain of was reasonably translated.  One said that a better
translation might have been "the mathematical models used were
completely wrong," making it more of a design failure than a
programming bug.

Martin.)

------------------------------

From: decwrl!decvax!utzoo!henry@ucbvax.Berkeley.EDU
Date: Fri, 4 Jul 86 21:18:30 edt <RETRY OF MUCH EARLIER FAILED TRANSMISSION>
To: decvax!CSL.SRI.COM!RISKS
Subject: Re: Risks from inappropriate scale of energy technologies

>   I think that we should be pursuing a policy course which develops
> technology that can be put safely in the hands of non-technical people.
> This might take the form of small burners which use the methanol from
> organic wastes, windmills, or non-electrical solar collectors, to name a few
> possibilities.  Localized, distributed technologies have many advantages,
> including ease of repair, localization of risk from outage, and major
> reductions in distribution losses and cost of distribution equipment and
> labor...

Let us not forget that distributed technologies create their own new
categories of risks.  The advantage of centralized resources is that much
more attention can be given to keeping them safe, and they do not have to
be designed to be utterly idiot-proof.  (Although it helps...)

Automatic collision avoidance for airliners is imminent, while for cars it
is far away.  Why?  Because such a system for cars would have to be cheap,
easy to install and maintain, and 99.999999% reliable in a wide range of
conditions despite being maintained at long, irregular intervals by
largely unskilled people.  Although all these characteristics certainly
would be desirable for airliner systems, they are not *necessary*.  Airlines
can afford relatively expensive systems needing frequent attention, and can
ensure that they are given regular checkouts by skilled personnel.  An
airliner system can also assume that a qualified pilot, prepared for the
possibility of mechanical failure, is minding the store at all times.
(Such assumptions are not invariably true even for airliners; the point
is that they are seldom or never true for cars.)
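
Henry's 99.999999% figure can be made concrete with a rough back-of-envelope
sketch (the fleet size and trips-per-day numbers below are assumptions for
illustration, not figures from his note):

```python
# Why per-trip reliability for a car collision-avoidance system must be
# extreme: even at 99.999999%, a large fleet sees failures every day.
CARS = 1.5e8            # assumed fleet size (order of magnitude)
TRIPS_PER_DAY = 2       # assumed trips per car per day
FAILURE_PROB = 1e-8     # 99.999999% per-trip reliability

expected_failures_per_day = CARS * TRIPS_PER_DAY * FAILURE_PROB
print(expected_failures_per_day)   # roughly 3 fleet-wide failures per day
```

Even eight nines of reliability leaves a few failures per day across a
national fleet; any lower figure would make such failures routine.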

Even disregarding this specific example, a quick look at accident rates for
car travel and air travel yields interesting results for the "distributed
is better" theory.  Does anyone seriously believe that the level of safety
attention routinely given to aircraft could possibly be given to cars?

Don't forget to compute the accident potential of distributed technologies.
Methane is an explosion hazard, as witness the safety considerations for
virtually any appliance using natural gas (natural gas is essentially
straight methane).  Windmills and solar-heat collectors don't have that
problem, at least, but they do require maintenance and they are generally
far enough off the ground to present a risk of accidental falls.  (Last
I heard, falls were the #2 [after auto accidents] cause of accidental
death.)  One can argue about whether lots of little accidents are preferable
to a few big ones, but dead is dead either way if you're one of the victims.
And it's not clear that the overall death rates are lower for distributed
systems.
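
The "lots of little accidents versus a few big ones" point can be put in
numbers (every figure below is invented purely for illustration):

```python
# Toy comparison of expected annual deaths, centralized vs. distributed.
# Centralized: rare, large accidents at a few big plants.
deaths_per_big_accident = 100      # assumed
years_between_accidents = 10       # assumed: one major accident per decade
centralized = deaths_per_big_accident / years_between_accidents

# Distributed: many small installations, each with a tiny fatal-accident risk.
installations = 1e7                # assumed: household burners, windmills, etc.
fatal_prob_per_year = 1e-6         # assumed per-installation risk
distributed = installations * fatal_prob_per_year

print(centralized, distributed)    # roughly 10 vs. 10 expected deaths/year
```

With these (invented) numbers the expected annual tolls come out equal,
which is exactly the point: distribution changes the shape of the risk,
not necessarily its total.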

There is also the question of voluntarily-assumed risks versus ones one
cannot avoid, but it seems to me that this case doesn't really present much
of a dichotomy.  If nobody builds central power plants, I really have little
choice about whether I assume the risks of generating my own power.  Yes,
I can avoid them at the cost of major inconvenience (doing without), but I
could also avoid most of the risks of centralized power at the cost of
major inconvenience (move to Fiji).

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

------------------------------

From: Eugene Miya <eugene@ames-aurora.arpa>
Date:  7 Jul 1986 1519-PDT (Monday)
To: arms-d@mit-xx
Cc: risks@sri-csl
Subject: Sensor technology and disinformation

As the person who started the SDI sensor technology question, which has had
a couple of follow-ons to Arms-d, permit me to make one comment and raise
one question to which Charlie Crummer@aerospace only alludes.

First, despite advances in sensor technology, IR systems cannot get around
the "3-body, hidden object" problem.  Given a sensor and a target, if an
intervening "warmer object" passes in between, the target disappears.  This
is an azimuth ambiguity.  It sounds trivial, but it is not, especially when
the intervening object might be air (which does have temperature), or a
mist, or some other non-massive solid.  My intent is only to point this out,
not to give a tutorial on IR remote sensing.
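
The occlusion problem can be sketched as a toy model (the angles,
temperatures, and angular-resolution threshold below are all invented for
illustration, not properties of any real sensor):

```python
# Toy model of the "3-body, hidden object" azimuth ambiguity: an IR sensor
# sees only angle and apparent temperature, so a warmer object near the
# target's line of sight masks the target.

def target_visible(target_az, intervening_az, target_temp,
                   intervening_temp, resolution=0.5):
    """Return False when a warmer object lies within the sensor's
    angular resolution (degrees) of the target's azimuth."""
    same_line_of_sight = abs(target_az - intervening_az) < resolution
    return not (same_line_of_sight and intervening_temp > target_temp)

# A warm plume (even warm air or mist) crossing the line of sight:
print(target_visible(30.0, 30.2, 300.0, 320.0))  # False: target disappears
print(target_visible(30.0, 45.0, 300.0, 320.0))  # True: off-axis, still seen
```

Note the masking object need not be solid at all, which is why the
ambiguity is harder than it first sounds.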

Second, the Administration has stated a policy of disinformation with regard
to SDI, and letters denouncing it have appeared in Aviation Week.  My
question is: if we as scientists announce something as "disinformation," as
one of Charlie's comments did, what are all of the consequences?  I can
think of several, including counter-announcements, the usual financial
thumbscrews applied to funding agencies, Ellsberg-type operations, and so
forth.  The problem is that this is not a leak of information, and it's not
clear to me that the SDIO can prosecute it like an espionage case.  Is
Charlie irresponsible for revealing disinformation?  Are we as scientists
expected to maintain disinformation?  Also, disinformation has been known to
backfire in the past (another risk?).

Again the usual disclaimer that these are the opinions of the individual
and not my employer, and STAR WARS is a trademark of Lucasfilm, Ltd.
despite what courts say.

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA

------------------------------

Date:     Mon, 7 Jul 86 12:32 EST
From:     "Steven H. Gutfreund" <GUTFREUND%cs.umass.edu@CSNET-RELAY.ARPA>
To:       risks@SRI-CSL.ARPA
Subject:  Educating to prevent RISKS

RE: Jan Lee (RISKS V3 N17) on the risks of not educating casual programmers.

Your problem (in a nutshell) seems to lie less with the students themselves
than with the administration, which needs to be made aware (educated) of the
risks of under-educated programmers.

To phrase this question in full generality:

	How do I make a person aware that his course of action 
	contains risks which he is underplaying or not cognizant of?

Classic examples of this are:

a) Try teaching a child not to touch the hot stove.
b) Teach your young and eager disciple that you have learned (via years
   of painful practical experience) that he needs to take a more cautious
   approach (e.g., to the design of large programming problems).
c) Teach your manager (who lacks modern engineering skills) that the project
   plan is too risky.


Approaches to attacking this include:

1) Let the kid touch the stove (or the project go down the tubes)
2) Turn the issue into a confrontation (boycott the project meetings,
   threaten the child with loss of privileges, etc.)
3) Try and instill the Fear of G-D in the person (long careful explanations,
   dissertations, memos, etc.)

There seems to be a fundamental problem with any form of direct attempt to
educate the unaware individual.  What you are basically trying to do is
increase the person's level of anxiety, fear, or distrust of his own
thought processes.  Since these emotions are not normally identified with
more "rational" attitudes, there is bound to be distrust of your motives.
As long as you proceed with any of the above-mentioned "direct" approaches,
he is bound to be AWARE of your efforts, and to draw negative conclusions.

It seems to me then that only indirect and subtle approaches will succeed.

This conclusion should be seen as especially relevant to RISKS contributors,
since most of them seem to be involved in publicizing fears and anxieties.

			- Steven Gutfreund

------------------------------

Date: Tue, 8 Jul 86 11:20:05 pdt
From: price@src.DEC.COM (Chuck Price)
Message-Id: <8607081820.AA04763@barnum.DEC.COM>
To: neumann@sri-csl.ARPA
Subject: Rash of 'Undeliverable mail' 
Subtitle: Risks of undigestification

Help! Ever since you published "License Plate Risks" in the Risks Forum,
I have been receiving a number of 'undeliverable mail' messages. A sample
is attached.

Is there any way we can stop this? I'm starting to feel like Robert Barbour.

-chuck
 
  ------- Forwarded Message  [...]

  Date: 8 Jul 1986 12:30:26-EDT
  From: netmailer%MIT-CCC@mit-mc
  Subject: Undeliverable mail
  Apparently-To: <price@SRC.DEC.COM>

-- Your letter to `ghuber@MIT-MARIE' is being returned because: --

	Mail undeliverable for too long

-- Returned letter follows: --

Date: 30 Jun 1986 12:32:31-EDT
From: price@SRC.DEC.COM@MIT-CCC
Date: Monday, 23 June 1986  12:56-EDT
To: RISKS-LIST:@XX.LCS.MIT.EDU, RISKS@SRI-CSL.ARPA
Subject:   License Plate Risks
ReSent-From: LENOIL@XX.LCS.MIT.EDU
ReSent-To: info-cobol@ccc
ReSent-Date: Mon 30 Jun 1986 01:50-EDT

    [Chuck's original message followed.  This could be another risk
     of undigestification.  If I simply remailed individually all of the
     messages in each issue of RISKS, then EACH contributor would have to
     put up with the enormous number of BARF messages that your moderator
     otherwise puts up with!  PGN]

------------------------------

End of RISKS-FORUM Digest
************************
-------