[comp.risks] RISKS DIGEST 8.15

RISKS@KL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (01/26/89)

RISKS-LIST: RISKS-FORUM Digest  Wednesday 25 January 1989   Volume 8 : Issue 15

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  More video piracy (Dave Curry)
  Computerized records of employee informers (Mike Trout)
  Censorship and computers (Anthony Finkelstein)
  Re: Object Oriented Programming (Benjamin Ellsworth)
  Structuring large systems (John Spragge)
  About non-redundant redundant systems (Elizabeth D. Zwicky)
  Engine-count and the Spirit of St. Louis (Michael McClary)
  Counting engines (Jordan Brown)
  Re: Space shuttle computer problems, 1981--1985 (Henry Spencer)
  Revised Computer Ethics Course Proposal (Bob Barger)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, and nonrepetitious.  Diversity is welcome.
* RISKS MOVES SOON TO csl.sri.com.  FTPable ARCHIVES WILL REMAIN ON KL.sri.com.
CONTRIBUTIONS to RISKS@CSL.SRI.COM, with relevant, substantive "Subject:" line
(otherwise they may be ignored).  REQUESTS to RISKS-Request@CSL.SRI.COM.
FOR VOL i ISSUE j / ftp KL.sri.com / login anonymous (ANY NONNULL PASSWORD) /
  get stripe:<risks>risks-i.j ... (OR TRY cd stripe:<risks> / get risks-i.j ...
  Volume summaries in (i.j)=(1.46),(2.57),(3.92),(4.97),(5.85),(6.95),(7.99).

----------------------------------------------------------------------

Date: Wed, 25 Jan 89 08:29:47 -0800
From: davy@riacs.edu    <Dave Curry>
Subject: More video piracy

Taken from the San Jose Mercury News, Jan. 23, 1989.

Video pirates disrupt Super Bowl broadcast on L.A. cable system

  LOS ANGELES (AP) - Cable television viewers of the Super Bowl said video
pirates disrupted the audio portion of the play-by-play Sunday with music
from "The Jetsons" cartoon show and an anti-Semitic slur.
  "First there was music from 'The Jetsons' cartoon show.  Then someone said
something about Century Cable and 'There's too many (expletive deleted) Jews
in this industry,'" said Doug Debber, a viewer in Santa Monica.    [.....]
  The interruption occurred about 3:15 p.m., when an audio signal invaded
Century's cable system, he [Bill Rosendahl, a company spokesman] said.
Viewers in West Los Angeles, Santa Monica and Beverly Hills reported the
intrusion, Rosendahl said.   [.....]
  Company officials have contacted the FBI and Santa Monica police and were
planning to contact the Federal Communications Commission Monday morning, he
said.

------------------------------

Date: 25 Jan 89 16:15:59 GMT
From: miket@brspyr1.brs.com (Mike Trout)
Subject: Computerized records of employee informers

The 16 May 1988 issue of _Flagship_News_ (employee publication of American
Airlines) includes a small article on a spiffy way for employees to rat on
their fellow workers.  It's part of a nationwide computerized database on
"business abuse," which is apparently a euphemism for workers who don't measure
up to management's standards.  Listed examples of business abuse include 
theft, drug and alcohol abuse, unsafe work habits, and "any act not in the best
interest" of the employer.  All you have to do to destroy your fellow workers
is call the National Business Crime Information Network Inc. (known as "The
Network"), at 1-800-241-5689.  You may do this anonymously, as each caller is
simply assigned a code number.  This also allows you to call back later and
check to see what action has been taken against that guy in the next cubicle
who took a pencil home.  The Network says that your information is relayed to
top management, who it is claimed will not take any disciplinary action on the
basis of the phone call alone.

Right.                                         Michael Trout 
BRS Information Technologies, 1200 Rt. 7, Latham, N.Y. 12110  (518) 783-1161

  [If you make your ratfink call from a phone with automatic calling
  identification, do they store YOUR phone number as well?  PGN]

------------------------------

Date: Wed, 25 Jan 89 12:02:54 GMT
From: acwf@doc.imperial.ac.uk
Subject: Censorship and computers

The following is taken from an advertisement which appeared in The Spectator (a
conservative review and comment journal of high repute) 14 Jan 1989.  The
advertisement was placed by INDEX ON CENSORSHIP a magazine which publishes
banned literature from all over the world, factual reports on writers and
journalists who have been silenced, as well as comment, interviews and a
country-by-country chronicle of censorship.

  "Dear Spectator Reader,

  Vaclav Havel, the well known Czechoslovak playwright, had his personal
  computer/word processor confiscated by police on 27 October 1988. I wonder
  if you would like to join with others in providing him with a replacement?

  Havel had the computer for just over a year, but had been using it for
  work and correspondence for only a month or two. It was obtained
  perfectly legally. He has written to the authorities to ask for his
  property back, but it has not yet been returned, nor is there any sign
  that it will be.

The letter continues by requesting contributions for a replacement.  This is of
interest because it reflects the risk that computers pose to oppressive states,
and the corresponding risk of police confiscation of a vital tool of modern
work and communication.

Anthony Finkelstein, Imperial College of Science, Technology & Medicine
(University of London). UK.

   [I imagine replacements would be confiscated even more quickly, especially
   if more continue to arrive.  The police may be developing a taste for
   computers.  Besides, they may have discovered that the storage provides a
   convenient record of what he has written.  I wonder whether Glasnostradamus
   predicted things like this.  PGN]

------------------------------

Date: Wed, 25 Jan 89 9:23:17 PST
From: Benjamin Ellsworth <ben%hpcvxben@hp-sde.sde.hp.com>
Subject: Re: Object Oriented Programming (Risks-8.14)

[Regarding adding functionality without changing the code:]
> I should hope the risks are obvious. [Ellsworth, RISKS-8.14]
 >>MBR@PELICAN-SPIT.ACA.MCC.COM        [Message to Ellsworth and RISKS]
 >>from "Mark Rosenstein" at Jan 25, 89 6:07 am
 >> ...Oh dear. They're not obvious to me. If change means modify existing
 >> code, then I can't quite see the problem, if change means add code, 
 >> yep you'll have to add code to get more functionality.          Mark.

To detail the exchange seems to me to be a bit maudlin, so let me just say that
we were talking about adding/changing functionality to an object.  The
professor's statements were clearly pointed toward no change being necessary to
the code comprising the object.

The risks are:

	- Merely parroting the party line (OOP eliminates changes to
	  operational code), and not thinking carefully about the question.
	  This seems especially dangerous when instructing the empowered
	  naive.  There were managers and engineers who were receiving
	  their first exposure to OOP in that class.  They were going to
	  try to use the information from that class in real products.
	  Their (the empowered naive) perceptions and beliefs are soon
	  going to affect other people's lives.
	
	- Management hearing the party line and accepting a "panacea" type
	  solution.  This is an "oldie-but-goodie" (maybe even in the
	  all-time top ten) in the category of "Engineer's Gripes," and
	  it's currently getting a thorough flogging in RISKS.

The above in no way reflects the views of Hewlett-Packard Company.

Benjamin Ellsworth    ben%hp-pcd@hp-sde.sde.hp.com

   [There are indeed lots of ways to get a program to do something else
   without modifying the code.  Moving it from one directory to another
   can have all sorts of side-effects, especially in a system with search
   strategies.  Not moving it but altering the search strategy for subtended
   programs is another way.  Redefining parameters, abbreviations, user
   profiles, etc., is another.  How about inadvertent effects resulting from
   someone else innocently introducing an operating system change?  All of this
   relates to the old saw about hardware degrades but software does not.  Not
   true.  PGN]
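   [PGN's search-strategy example is easy to make concrete.  A minimal Python
   sketch (the directory names are invented for illustration): the very same
   lookup resolves differently after only the search path, and not a single
   line of any program, has changed.

```python
# Which program a name resolves to depends on the search path, not on
# any edit to the program itself.  Directory names here are invented.
import os
import shutil

os.environ["PATH"] = "/usr/bin:/bin"
print(shutil.which("ls"))            # the usual system ls, e.g. /usr/bin/ls

os.environ["PATH"] = "/tmp/shadow"   # point the search somewhere else
print(shutil.which("ls"))            # None, or a completely different ls
```

   The program named "ls" did not degrade; its environment did.]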

------------------------------

Date:         Wed, 25 Jan 89 17:08-0500
From: John.Spragge@QueensU.CA
Subject:      Structuring large systems

By all means, we need new structures. There's no question about that.
The only question is, what should we build those new structures on?

I believe that the problem of relating the behaviour of a program to
its (human-readable) static representation has been solved at a
"micro" level.  And, pace the disbelievers in structured programming,
I believe that structured techniques represent the best solution at
the procedure level.  The question is then a matter of tying a large
number of procedures into a workable, consistent, large system.

The answer to that, it seems to me, is to envisage the system as a machine
(needless to say, programs are, in the strict sense, machines in the same way
computers are). The starting point for fulfilling the requirements of an
end-user who wants a particular software product is to ask what sort of
"special purpose" computer would be best at solving that problem. The program
can then be structured as an attempt to simulate that system on a
general-purpose computer.

For example, a good analogy for writing a spreadsheet would probably be a large
array (or "matrix") processor, in which every cell could simulate a "processor"
having access to a central series of processing functions. A windowing system
can be written as an "ideal" terminal device.
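The matrix-processor analogy can be sketched in a few lines of Python.
Everything below (the OPS table, the Cell class, the cell names) is an invented
toy illustrating the idea of cells as simulated processors sharing one central
table of functions, not any real spreadsheet's design:

```python
# Toy "matrix processor": each cell is a tiny simulated processor with
# access to one central, shared table of processing functions (OPS).
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

class Cell:
    def __init__(self, value=0):
        self.value = value
        self.formula = None          # (op_name, ref_a, ref_b) or None

    def step(self, sheet):
        """One simulation tick: recompute this cell from its inputs."""
        if self.formula:
            op, a, b = self.formula
            self.value = OPS[op](sheet[a].value, sheet[b].value)

sheet = {"A1": Cell(2), "A2": Cell(3), "A3": Cell()}
sheet["A3"].formula = ("add", "A1", "A2")
for cell in sheet.values():          # one pass of the simulated machine
    cell.step(sheet)
print(sheet["A3"].value)             # -> 5
```

Note the generality Spragge argues for: every cell goes through the one shared
OPS table, just as the hardware has one general-purpose adder rather than a
vast series of different adders.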

This approach has the advantage of encouraging the same sort of "generality" in
design that computer hardware benefits from; the adders on a system generally
work because the system has just one general purpose adder, not a vast series
of different adders.  In the same way, a wide variety of the functions in a
system which appear to be very different share many (if not most) critical
attributes, and sufficiently flexible routines can be devised, in many cases,
to apply to all the disparate functions required.

This is only one approach to the "larger" structure of systems design. But when
building a large building, it does no good at all to discard the girders.
Orthodox "structured" programming techniques are, I am convinced, at the heart
of building reliable procedures, without which no large system can be built. It
isn't possible to build a giant program on nothing but the knowledge of Ifs,
Whiles, and Cases; but they are essential components of good programming.

John G. Spragge, Computing Consultant, Box 2042, Kingston Ont. (SPRAGGEJ@QUCDN)

------------------------------

Date: Wed, 25 Jan 89 11:14:04 EST
From: Elizabeth D. Zwicky <zwicky@cis.ohio-state.edu>
Subject: About non-redundant redundant systems        (Re: RISKS-8.14)

Of our many computer rooms and labs, 2 have redundant air-conditioning systems.
One of them has two separate systems installed at two completely different
times by two different companies; it gained redundancy out of necessity because
the first air conditioner barely had the capacity.  The second one started out
with two air conditioners, because it seemed like a good idea. They were
installed at the same time, by the same company.  Less than a month later, that
room started getting hotter and hotter and hotter.  We called A/C repair. They
said they would log it as non-emergency, due to the second A/C in the room. We
pointed out that the second A/C was not air conditioning any more than the
first was. They grudgingly updated it to an emergency call, and in short order
one of Ohio State's people arrived. 5 minutes later he developed an amazed/
appalled look, and began to curse.  "What the hell sort of a redundant system
is this?  What do those jerks think they are playing at?" It seems that our two
A/Cs had but one thermostat, which had duly failed. Needless to say, Ohio State
made all sorts of grief for the vendor, who eventually managed to make the
systems more redundant. Nevertheless, reliability is *still* higher in the
cobbled-together, afterthought-redundant system, than in the "properly"
designed one.
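
The shared thermostat is a textbook common-mode (single-point) failure, and a
back-of-the-envelope calculation shows how much it costs.  The failure
probabilities below are invented purely for illustration:

```python
# Why the "redundant" A/C pair wasn't: a shared thermostat is a
# common-mode failure.  The p values are invented for illustration.
p_ac = 0.01      # assumed chance one A/C unit fails in a given month
p_thermo = 0.01  # assumed chance the shared thermostat fails

# Truly redundant: the room overheats only if BOTH independent units fail.
p_independent = p_ac ** 2

# Shared thermostat: the room overheats if both units fail OR the one
# thermostat fails -- the thermostat alone takes out "both" systems.
p_shared = 1 - (1 - p_ac ** 2) * (1 - p_thermo)

print(f"independent pair:  {p_independent:.6f}")   # 0.000100
print(f"shared thermostat: {p_shared:.6f}")        # 0.010099
```

With these (invented) numbers the shared thermostat makes the "redundant" pair
roughly a hundred times less reliable than genuine redundancy would be.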

Elizabeth D. Zwicky, Ohio State University Computer and Information Science

------------------------------

Date: Sat, 21 Jan 89 01:29:46 PST
From: xanadu!michael@uunet.UU.NET (Michael McClary)
Subject: Engine-count and the Spirit of St. Louis
Organization: Xanadu Operating Company, Palo Alto, CA

The more-is-less phenomenon of aircraft engine reliability has been noted
previously.

During the push to extend aircraft technology to non-stop trans-Atlantic
flight, most of the designs were multi-engined.  The designers of the
Spirit of St. Louis recognized:

 - they were on the edge of the technology, therefore
 - there was insufficient spare capacity to carry a dead engine, and
 - there was nowhere to land for repairs, therefore
 - all the engines would have to run for essentially the whole flight, so
 - assuming roughly equal engine mean-time-to-failure, the more engines,
   the greater the risk of failure (with loss of craft and pilot).

Thus the Spirit of St. Louis was designed with a single engine.

It's a classic example of the counter-intuitive nature of probability
theory and risk assessment.
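
The arithmetic behind the last point in that list is worth making concrete.
A minimal Python sketch (the 5% per-engine failure figure is invented, not
historical): when every engine must run the whole way, each engine added is
another chance to lose the flight.

```python
# Probability the flight is lost when ALL n engines must run the whole
# way (no capacity to carry a dead engine), given per-engine failure
# probability p over the full flight.  The p value is illustrative only.
def mission_failure_prob(n_engines: int, p_engine_fail: float) -> float:
    """P(at least one engine fails) = 1 - P(all n engines survive)."""
    return 1.0 - (1.0 - p_engine_fail) ** n_engines

p = 0.05  # assumed 5% chance a single engine dies during the crossing
for n in (1, 2, 3):
    print(f"{n} engine(s): P(loss) = {mission_failure_prob(n, p):.4f}")
# -> 0.0500, 0.0975, 0.1426
```

With equal mean-time-to-failure and no engine-out capability, one engine is
strictly the safest choice -- exactly the Spirit of St. Louis reasoning.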

(Of course, practical service had to wait a bit, until aircraft capacity
and airport availability improvements made single-engine-failure survivable.)

------------------------------

Date: Sun, 22 Jan 89 19:55:01 PDT
From: Jordan Brown <jbrown@herron.UUCP> <jbrown@jato.Jpl.Nasa.Gov>
Subject: Counting engines

Don Alvarez <boomer@space.mit.edu> writes:
> Imagine two planes which are identical except that one plane has 2
> Bratt&Zittley Foobar-900 engines, and the other has 3 B&Z F-900 engines.
> Well, clearly the second will fly better on n-1 engines, ...

This is the misconception that I'm trying to point out.  If you have an
airplane which flies fine on two B&Z F-900s (meets single-engine performance
requirements, etc) then no manufacturer would ever put another engine on that
airplane.  It just wouldn't make sense.  (This is for civilian applications;
military apps have other issues.)  The three-engine airplane discussed will
either be bigger or have wimpier engines.  The controlling factor is engine-out
performance.  The two-engine airplane with one out will have performance
comparable to the three-engine airplane with one out.

727 engines (3/airplane) are wimpy compared to DC-9 engines (2/airplane).
BAe-146 engines (4/airplane) are *really* wimpy.  (This assumes that
727s are approximately the same size as DC-9s.  BAe-146s are smaller.)
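
The sizing argument can be sketched numerically.  Assuming (hypothetically)
that certification drives the design through a single engine-out thrust
requirement, each of n engines must be sized to roughly that requirement
divided by n-1, so per-engine thrust shrinks as the engine count grows:

```python
# If the airplane must still meet thrust requirement t_required with ONE
# engine out, each of n engines must deliver t_required / (n - 1).  The
# numbers are invented purely to show why more engines = wimpier engines.
def per_engine_thrust(t_required: float, n_engines: int) -> float:
    """Thrust each engine needs so n-1 engines still meet t_required."""
    return t_required / (n_engines - 1)

T_REQ = 60_000.0  # hypothetical engine-out thrust requirement, in lbf
for n in (2, 3, 4):
    t = per_engine_thrust(T_REQ, n)
    print(f"{n} engines: {t:,.0f} lbf each, {n * t:,.0f} lbf installed")
# -> 60,000 each / 30,000 each / 20,000 each
```

Note the side effect: the twin needs the most total installed thrust, since
losing one engine costs it half its power rather than a third or a quarter.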

Jordan Brown 

------------------------------

Date: Tue, 24 Jan 89 00:25:52 -0500
From: attcan!utzoo!henry@uunet.UU.NET
Subject: Re: Space shuttle computer problems, 1981--1985

>STS-6, April 4, 1983: ... Landing gear must be manually deployed after computer
>fails to trigger its descent.

I wonder if this is not mistaken reporting at some level.  My recollection,
possibly incorrect, is that lowering of landing gear is specifically not
under computer control in the space shuttle -- it *has* to be done manually.
The reason is that once lowered, the shuttle's landing gear is *down* --
it can't be raised again in flight.

Possibly the problem was that the computer did not say "time to lower the gear"?
                                     Henry Spencer at U of Toronto Zoology
                                 uunet!attcan!utzoo!henry henry@zoo.toronto.edu

------------------------------

Date:     Wed 25 Jan 1989 09:34 CDT
From: Bob Barger <CFRNB@ECNCDC.BITNET>
Subject:  Revised Computer Ethics Course Proposal

The following revision is based on critiques received on a proposal published
in RISKS digest 7.75. Comments are still welcome (send to CFRNB@ECNCDC.BITNET).
Course Description: The course will investigate current ethical issues
involving computers.  While it is not a "computer course," students will make
frequent use of postings on the electronic bulletin board of the ECN mainframe
computer to research and discuss ethical issues.

Prerequisites: 75 semester hours and previous experience with computers.
[Class size limit = 15 students for the Fall 1989 semester.]

Outline of topics:
  Week 1:  Orientation to the course (introduction, explanation of course
           content, class procedures, and evaluation methodology).
           Consideration of ethical theory.
  Week 2:  Consideration of ethical theory (continued).
  Week 3:  On-line reading of the "Discussion of Ethics in Computing" list,
           the "Forum on Risks to the Public in Computers and Related
           Systems" digest, and the "Computers and Society" list (all are
           available on the ECN bulletin board); written reactions to these
           readings, and written commentary on other students' reactions.
           [The instructor will ensure that these activities equate to the
           activities of a traditional two-hour class meeting.]
  Week 4:  Consideration of professional ethics.
  Week 5:  Same activities as for Week 3.
  Week 6:  Consideration of liability for software design, manufacture,
           and use.
  Week 7:  Same activities as for Week 3.
  Week 8:  Consideration of privacy issues.
  Week 9:  Same activities as for Week 3.
  Week 10: Consideration of power/control issues.
  Week 11: Same activities as for Week 3.
  Week 12: Consideration of ownership and theft issues.
  Weeks 13 & 14: Same activities as for Week 3.
  Week 15: Seminar members will reconvene as a group for the last meeting,
           to allow for group reflection on the seminar experience and
           course evaluation.
  Semester exam week: Final Examination.

Writing component: Students will type thirteen 30-to-50 line (i.e., one-to-two
page) reactions to the on-line electronic bulletin board readings, and will
"post" these reactions (i.e., electronically send them to the mainframe
computer bulletin board set aside for members of this seminar).  In their
reactions, students will: 1) identify the particular publication or
publications to which they are reacting; 2) identify the particular issue or
issues raised in the publication(s); 3) identify the ethical implications of
the issue or issues; 4) identify the ethical paradigm used by the author;
5) add their own reasons for agreement or disagreement with the viewpoint of
the publication's author; and 6) offer an alternative solution or viewpoint to
that presented by the author, or present other appropriate considerations not
raised by the author or covered in the student's own previous comments.  The
instructor will send weekly, by confidential electronic mail, a grade on the
student's posted reaction, together with whatever comments the instructor
thinks helpful.  The student's original posted reaction will also be open to
public comment by the other students in the seminar [this is accomplished by
posting notes to the bulletin board, referencing the original posted
reaction].  These latter comments will be considered along with classroom
discussion in computing the "participation" factor of the student's semester
grade.

Evaluation: Each student's semester grade for the seminar will be calculated
according to the following weighted formula: 13 posted reactions (at 5% each)
= 65%; participation (based on class discussion and posted comments on other
students' reactions) = 20%; final exam = 15%.

Materials in the course will include: 1) Texts: Deborah Johnson, Computer
Ethics (Englewood Cliffs, NJ: Prentice-Hall, 1985); privately published notes
on systematic ethics from Dr. Barger's Philosophy 1800 class (furnished free
to seminar members); and postings on the above-mentioned ECN electronic
bulletin board lists.  2) Resource people: Computer professionals (e.g.,
administrators, systems analysts, programmers) will serve as guest
contributors to the class, through personal appearances as well as
electronically mediated conferencing (e.g., postings, e-mail, relay
round-tables).
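
As a quick sanity check on the weighted grading formula (13 reactions at 5%
each, participation 20%, final exam 15%), a short sketch with invented sample
scores:

```python
# The syllabus's weighted grade formula: 13 reactions at 5% each (65%),
# participation 20%, final exam 15%.  The sample scores are invented.
def semester_grade(reaction_scores, participation, final_exam):
    """Each argument is on a 0-100 scale; returns the weighted grade."""
    assert len(reaction_scores) == 13, "syllabus specifies 13 reactions"
    reactions = sum(s * 0.05 for s in reaction_scores)   # 13 * 5% = 65%
    return reactions + participation * 0.20 + final_exam * 0.15

print(semester_grade([90] * 13, 85, 80))  # -> 87.5
```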

------------------------------

End of RISKS-FORUM Digest 8.15
************************