[comp.risks] RISKS DIGEST 9.6

RISKS@KL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (07/19/89)

RISKS-LIST: RISKS-FORUM Digest  Monday 17 July 1989   Volume 9 : Issue 6

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Mitnick sentenced as an addict (Rodney Hoffman)
  Long addresses confuse bank's computer (Paul Leyland)
  Town Hall's computer snags trouble old age pensioners (Olivier Crepin-Leblond)
  Re: Automobile Electronic Performance Management (Charles Rader)
  Re: UK Defence Software Standard, non-determinism, recursion and armageddon
    (Victor Yodaiken, anonymous via Tim Shimeall, Bob Estell, Martin Minow)
  Telephone technicians tapping into other phone lines (Olivier Crepin-Leblond)
  Re: New Yorker Article on "radiation" risks (Gordon Hester)

RISKS SUMMER SLOWDOWN AHEAD.  CONTRIBUTORS PLEASE BE PATIENT.  THANKS.  PGN

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, and nonrepetitious.  Diversity is welcome.
* RISKS MOVES SOON TO csl.sri.com.  FTPable ARCHIVES WILL REMAIN ON KL.sri.com.
CONTRIBUTIONS to RISKS@CSL.SRI.COM, with relevant, substantive "Subject:" line
(otherwise they may be ignored).  REQUESTS to RISKS-Request@CSL.SRI.COM.
FOR VOL i ISSUE j, ftp KL.sri.com[CR]login anonymous (ANY NONNULL PASSWORD)[CR]
  get stripe:<risks>risks-i.j ... (OR TRY cd stripe:<risks>[CR]get risks-i.j 
  Vol summaries (i.j)=(1.46),(2.57),(3.92),(4.97),(5.85),(6.95),(7.99),(8.88).

----------------------------------------------------------------------

Date: 18 Jul 89 08:49:34 PDT (Tuesday)
From: Rodney Hoffman <Hoffman.ElSegundo@Xerox.com>
Subject: Mitnick sentenced as an addict

Kevin Mitnick is the hacker once called "as dangerous with a keyboard as a bank
robber with a gun."  (See RISKS 7.95, 8.1, 8.3, 8.43, 8.65, 8.70, and 8.76.)

His first plea bargain was rejected by U.S. District Judge Mariana R.
Pfaelzer as too lenient.  He subsequently reached a new agreement, with no
agreed-upon prison sentence, in which he pleaded guilty to stealing a DEC
security program and illegal possession of 16 long-distance telephone codes
belonging to MCI Telecommunications Corp.  If convicted of all counts,
Mitnick faced a maximum sentence of 20 years and a fine of $750,000.  

According to a story by Henry Weinstein in the 18 July 1989 'Los Angeles
Times', Judge Pfaelzer said Monday that she will sentence Mitnick to a year
in a rehabilitation center, where he can be treated for his "addiction."
It is believed to be the first time a person indicted for a
computer-hacking-related crime will be treated as an addict.

Harriet Rossetto, the director of the rehabilitation center, said that
Mitnick would benefit from the program.  She said that Mitnick's "hacking
gives a sense of self-esteem he doesn't get in the real world.... This is a
new and growing addiction.  There was no greed involved.  There was no
sabotage involved.... He's like a kid playing Dungeons and Dragons."

Asst. U.S. Attorney James R. Asperger told Pfaelzer that he was amenable to
the rehabilitation plan, in part because Mitnick has cooperated extensively
with the government in its case against DiCicco, Mitnick's one-time friend
who turned him in.  Asperger said that Mitnick had turned out to be
considerably less harmful than the government had originally thought,
particularly since he had not broken into DEC's computer system out of malice
or to make money.

Judge Pfaelzer said she will rule today on whether Mitnick should serve any
additional prison time, beyond the seven months he has so far spent in
federal custody.  DiCicco still faces one federal charge of illegally
transporting a stolen program (!).

------------------------------

Date: Tue, 18 Jul 89 15:04:00 BST
From: Paul Leyland <pcl@robots.oxford.ac.uk>
Subject: Long addresses confuse bank's computer

In today's copy of "The Times" (of London), there is a sketchy description of
problems which arose from the country's first flotation on the stock exchange
of a building society.  [For the benefit of non-UK readers, a "building
society" is an organisation whose purpose is to collect deposits from its
members; pay them interest on the money; provide mortgages secured on property
and collect the interest due on the loan.  Sort of like a bank, but more
restricted in what it can and cannot do.  I forget the name of the US
analogue.]  It is only recently that building societies have been allowed to
raise finance by public flotation and there has been much heated debate about
the morality of the operation.  In particular, people with accounts at the
Abbey National Building Society were guaranteed cheap shares at the flotation.
Lloyds Bank, who handled the flotation, are the third largest bank in the UK --
certainly *not* a tin-pot outfit.

The following is from "The Times", Tuesday 18 July 1989.


	Compensation offer for Abbey delay.

   Lloyds Bank Registrars are offering to compensate 120,000 Abbey shareholders
whose share certificates and return cheques have been held up in the post.
This group suffered because their addresses were jumbled up by the computer,
which was unable to read addresses with more than five lines.  It was
originally thought no more than 10,000 people were affected by this error, but
it emerged last night that 120,000 people are involved.

   The Abbey National has already said it will back-date -- to Wednesday, July
12 -- delayed cheques that are returning sums over-paid for shares, if they are
paid into an Abbey National Account.  But many would-be share owners say they
have borrowed large sums to buy shares and the delay in returning cheques is
costing them missed interest and also interest on loans.  All Abbey members who
applied for more than 600 shares at 130p were scaled down to 775 shares.  As a
result, some people who made massive applications are awaiting the return of
hefty cheques.

   Mr Charles Wootton, a member of the Abbey National Protest Group, who
applied for 100,000 shares at a cost of \pounds 130,000 [[approximately US$
200k]], said that he had lost interest on the money taken out of a higher
interest account.  "They have had our money for an awfully long time", he said.

   It is unprecedented for Lloyds Bank Registrars to offer compensation for a
bungled share allocation, but a Lloyds spokesman said: "The scale of the thing
was unprecedented".

   Lloyds is writing to the 120,000 people whose addresses were jumbled by the
computer asking if they had any special problems.  Each compensation claim will
be dealt with individually on its own merits.  Lloyds will want proof that a
loan was outstanding against the cash used to apply for shares.

   There is no question of any compensation for the slide in the Abbey share
price from a brief high on the first day's trading of 161p to 145p yesterday.
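
For illustration only -- the registrar's actual system is not described in the
article -- here is a minimal sketch, in C, of how a record holding a fixed five
lines of address could jumble a longer address rather than reject it:

    #include <string.h>

    #define MAX_ADDR_LINES 5
    #define LINE_LEN       40

    /* Purely hypothetical: store an n-line address in a record that holds
     * only MAX_ADDR_LINES lines.  Lines beyond the fifth wrap around and
     * overwrite earlier ones, jumbling the address instead of rejecting it.
     */
    static void store_address(char record[MAX_ADDR_LINES][LINE_LEN],
                               char *lines[], int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            strncpy(record[i % MAX_ADDR_LINES], lines[i], LINE_LEN - 1);
            record[i % MAX_ADDR_LINES][LINE_LEN - 1] = '\0';
        }
    }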


[I wonder what 120,000 personal letters and investigations are going to
cost....  
                                        Paul Leyland]

------------------------------

Date: 		Mon, 17 JUL 89 19:35:08 GMT
From: ZDEE699@elm.cc.kcl.ac.uk
Subject:        Town Hall's computer snags trouble old age pensioners

TOWN HALL'S COMPUTER SNAGS TROUBLE OAP'S    [OCL. Old Age Pensioners]
[From the Kensington and Chelsea Times, a [very] local newspaper]

    Many private tenants in Kensington & Chelsea are living in poverty as
a result of the council's inability to cope with its housing benefit
workload, claims Kensington & Chelsea "Put for the Elderly".

   "Everyone who claims housing benefit must fill-in a new benefic appli-
cation form every year", says the group. "If the application is not received 
by the Housing Benefit Office within four weeks, a reminder is sent-out and
if this is ignored, housing benefit is stopped."
   "However, in Kensington & Chelsea, payment is bein gstopped because council
staff are unable to feed the information into the computer quickly enough.
There is a huge backing of such information waiting to be logged into the
computer's memory."
   It is alleged that Housing Benefit payments have been stopped for "hundreds"
of people while their re-applications wait to be dealt-with. "It can be weeks
before payments resume."
   "This is particularly hard on senior citizens who are known as society's
most conscientious bill-payers. Many older people will go without, rather
than get into debt."
   The Royal Borough Council admits that there is a problem, and expresses
its intention to remedy the situation. "There has been a backlog but we
hope to have the situation under control by the end of the month".
   A report by the Royal Borough Council's Benefit Working Party analyses
the reasons for the build-up of work in the benefits division:
" The computer is not powerful enough to cope with the heavy demands that are
being placed upon it. Consequently, the officers are struggling to process
assessments sufficiently quickly to match incoming work.
There have been a number of computer crashes in recent months, resulting
in a lot of 'down' time."

    The report continues:
"The computer software developed a fault which resulted in our inability
to produce the weekly notification letters that have to be sent to the
claimants. This continued for some eight weeks during March and April this
year, and meant that when the fault was rectified, there was a large
backlog of letters produced.
   "Statuory case reviews have brought about large numbers of claims being
processed. Between April 1988 and March 1989 it was not possible to undertake
these reviews because of computer difficulties. 

Olivier Crepin-Leblond,  Computer Systems & Electronics,
Electrical & Electronics Engineering, King's College London, England 

------------------------------

Date: Mon Jul 17 01:13:23 1989
From: Charles Rader <cmr@carp.uucp>
Subject: Re: Automobile Electronic Performance Management

I have three anecdotes related to this topic.

First:

A personal experience several years ago suggests General Motors enforces 
the "performance envelope" on its cars. 

While I was descending a steep mountain grade in a 1981 Chevrolet Cavalier,
using engine braking to control speed, the automatic transmission shifted into
second gear even though the gearshift lever remained locked in first gear.

The engine speed was near the "yellow line" when it shifted, so I suspect 
this was a feature intended to prevent engine damage.  The behavior was 
reproducible.

I had left GM for my present job and didn't ask my GM contacts about it.  

In this case, I had good brakes.  What if the brakes had failed? 

Second:

I heard of an incident several years ago where electronic component failure 
caused a vehicle to exceed the design envelope.  During quality assurance 
testing at an assembly plant (on dynamometer rollers), a cruise control 
component failure caused wide-open throttle and loss of brakes.  The 
technician cut the ignition before the engine destroyed itself. 

The design problem was apparently fixed before it recurred.

I wasn't present when this happened so I'd rather not name the company or 
plant, but the story was reported by individuals I trust to have the facts. 

Third:

Bootleg PROMs exist in North America, too.  Some auto company engineers 
have mentioned programming them for personal use. 

Charles Rader, Systems Manager, Univ. of Detroit Computer Services, 313-927-1349  

------------------------------

Date: Sat, 15 Jul 89 20:14:52 EDT
From: yodaiken%ccs2@cs.umass.edu (victor yodaiken)
Subject: non-determinism, recursion and armageddon

I'm puzzled as to what Nancy Leveson means when she uses the term
``non-deterministic".  Leveson seems to be arguing that any program with hidden
side effects is non-deterministic. The dangerous techniques she mentions,
distributed processing, interrupts, dynamic memory allocation, recursion, have
in common effects on program state that are not exposed in the program text.
That is, to verify a recursive subroutine written in Pascal, one needs to also
verify the stack management methods of the compiler, and the memory limits of
the machine. As the entire Hoare/Floyd etc. approach to verification is based
on reasoning about program text, these techniques can pose problems for the
verification method. I have 4 critiques of this analysis (3 technical, 1
horrified):
	1. The inability of a verification system to handle a programming
	technique does not imply that the technique is at fault. The
	verification method might just be too weak.
	2. Even if we give up all these very useful techniques, there are
	still hidden side effects in programs, especially real-time
	programs, which cannot be deduced from the program text. Subroutine
	call stacks may overflow, even without recursion, and access to
	statically allocated memory is not necessarily uniform (suppose
	that A[i] is compiled to a single machine operation
	for i < 256 and requires segments and some other mess for i > 255).
	A sketch of the no-recursion overflow case appears after this list.
	3. Hidden side effects != non-deterministic. A non-deterministic
	system can react to one input in more than one way --- there is
	no way, even in principle, to deduce output from input. But, even
	poorly written programs are deterministic --- the same environment
	will cause the same execution trace. Am I confused here, or is
	non-determinism being used in some other way?
	4.  If X (name your favorite agency or company)
	is going to have the gall to develop programs that
	might kill millions of people when they fail,  e.g. nuclear reactor
	control programs or  missile control programs,  then X should at 
	least have the decency to hire the best programmers for the
	job and have the entire system checked out by technically
	competent experts (at least several independent certifications
	would also seem reasonable).
	Leveson seems to be saying that since military programmers, 
	and the people who certify military programs 
	are sometimes bad, we should force them to use very simple
	programming methods. The following quote (from Leveson)
	gave me the shakes.	
	>Although I have great confidence that David 
	>Parnas or Edsger Dijkstra could use ANY techniques with great skill, I
	>have less confidence that this is true for all the programmers writing 
	>military software.  
	...
	>Remember, we are talking about a potential nuclear 
	>armageddon here and with certifiers who may be no more knowledgeable 
	>than the programmers.  
	
	The rational approach here is to send a check to SANE (CND for
	Brits), not to try to forbid recursion.
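
A minimal sketch in C (illustrative only; the point is language-independent)
of a call stack overflowing with no recursion anywhere in the program text,
assuming a small fixed-size stack such as an embedded target might provide:

    #include <stdio.h>

    /* No recursion: just a linear chain of calls, each with a large local
     * buffer.  On a target with, say, a 4 KB stack, nothing in the program
     * text warns that reaching f3() exhausts the stack at run time.
     * Sizes are hypothetical.
     */
    static void f3(void) { char buf[2048]; buf[0] = '\0'; puts(buf); }
    static void f2(void) { char buf[2048]; buf[0] = '\0'; f3(); puts(buf); }
    static void f1(void) { char buf[2048]; buf[0] = '\0'; f2(); puts(buf); }

    int main(void) { f1(); return 0; }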

------------------------------

Date: Mon, 17 Jul 89 12:17:17 PDT
From: shimeall@cs.nps.navy.mil (Tim Shimeall x2509)
Subject: Re: UK Defence Software Standard

[Forwarder's Note: The following is a statement by a close friend, who wishes
to remain anonymous but asked me to forward this statement to risks.  His
comments do not apply to his current employer, and any identification of him
might generate misunderstandings.  My friend does read RISKS, and any personal
replies may be addressed to me and I will relay them.
                              					Tim]

I have some experience in working in a RISKy (non-aerospace, non-military)
branch of the software industry, and for this reason I have read with interest
and increasing concern the debate in RISKS over the UK Defence Software Standard.

My concerns are not so much with the standard itself, as with the academic
viewpoint reflected in the arguments stated there and their broader
implications for those of us who worry about software quality in the real
day-to-day world.  The basis of my concern is the assumption that either all
software engineers working on RISKy projects would understand well such
concepts as recursion, multi-tasking, and dynamic memory allocation, or that
only people with these qualifications would be assigned to work on such
projects.

I do not have the academic credentials of Professor Parnas, but I do have
a number of years of experience in QA and Test of software in the real world.
I have been responsible for the evaluation of well over 100 pieces of software.
Only a small fraction of the engineers working these projects
understood well the nuances of the methods listed above.

Most of them have never heard of Dijkstra, and of the few who have, none
to my reasonably accurate knowledge have ever read anything he has written.
Many of them have only high school diplomas or AA degrees in electronics.
The point I am trying to make here is that, rightly or wrongly, the real
world does not reflect the academic one in these areas.  In my experience,
the aerospace companies generally have much better educated software engineers,
but even there, many of them have not been to school in over 20 years.

For this reason, I think that instructing or allowing the average software
engineer in industry to use some of these techniques in life-critical
applications is a lot like putting a loaded .45 into the hands of a child.
Thus, in my humble and very pragmatic opinion, I consider statements by
Professor Parnas such as:

>   Nor, would I agree that non-determinism is bad.  Non-determinism has been
>   demonstrated by Dijkstra (and much earlier by Robert Floyd) to allow
>   programs that are much more easily verified than some deterministic ones.

>   I believe that organisations such as MoD would be better advised
>   to introduce regulations requiring the use of certain good
>   programming techniques, requiring the use of highly qualified
>   people, requiring systematic, formal, and detailed documentation,
>   requiring thorough inspection, requiring thorough testing, etc.
>   than to introduce regulations forbidding out the use of perfectly
>   reasonable techniques. 

to be very naive.

 Professor Parnas has experience with aerospace, but in many other software
industries with life-critical applications things are much worse.  These
industries would hire people with the kind of qualifications he discusses were
they available.  The fact of the matter is that the people are simply not
there.  Thus it seems to me that the best approach to take is to develop
standards designed to work with the present environment, rather than try to
build standards designed to work in the environment that should be.

Comments to me may be made via Dr. Shimeall.  Thank you for considering the
views of one eminently less qualified on academic grounds.  These opinions are
my own, and are not the opinions of either Dr. Shimeall or my present or
previous employers, both of whom would be very upset to find that I had
expressed them in public.

------------------------------

Date: 17 Jul 89 15:31:00 PDT
From: "FIDLER::ESTELL" <estell%fidler.decnet@nwc.navy.mil>
Subject: polling vs. interrupts: some perspective

A comment on the "interrupt vs. polling" debate, if I may.
I submit that which is better is very much a matter of perspective; and
further, that the perspective is scenario (or environment) dependent.

If one takes the point of view of the "operating system kernel" looking
from inside ONE processor, out to the world, AND *IF* that world is
simple and small, and *docile*, then polling is very straightforward.
Using Prof. Einstein's rule [... simple as possible, but no more ...]
perhaps then polling is to be preferred, in those cases.

However, as the world [system] grows larger, more complex, and less
well behaved, polling becomes enormously more complex.  Not the least
of the problems is deciding what algorithms to use, and when and how,
to handle emergencies [call them "interrupts" if you wish] that occur
randomly, in such a way that the priority of handling them MUST change.

Example: In a tactical combat system, implemented on several processors,
distributed among several sites (with at least 2 but at most say 7
processors per site), a normal goal is to share messages in real time;
under fire, however, most especially when one or more processors at any
given site may suffer damage, priority swiftly shifts towards defending
oneself; i.e., the "data link" module becomes somewhat less important than
the "track and shoot" modules.  However, since coordinated fire is often
more effective than "self defense" fire, some link should be maintained.
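
A minimal sketch in C of a polling loop whose priority order shifts that way;
the device names, the stubs, and the shift rule are invented for illustration:

    enum { DEV_TRACK, DEV_SHOOT, DEV_DATALINK, N_DEVICES };  /* hypothetical */

    /* Stubs standing in for real hardware tests and handlers. */
    static int under_fire(void)       { return 0; }
    static int device_ready(int dev)  { (void)dev; return 0; }
    static void service(int dev)      { (void)dev; }

    /* Poll the devices in priority order; the order is re-evaluated on every
     * pass, so that (for example) the data link drops below the track-and-
     * shoot functions when the site comes under fire.
     */
    void poll_loop(void)
    {
        static const int normal[N_DEVICES] = { DEV_DATALINK, DEV_TRACK, DEV_SHOOT };
        static const int battle[N_DEVICES] = { DEV_TRACK, DEV_SHOOT, DEV_DATALINK };
        const int *order;
        int i;

        for (;;) {
            order = under_fire() ? battle : normal;
            for (i = 0; i < N_DEVICES; i++)
                if (device_ready(order[i]))
                    service(order[i]);
        }
    }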

Perhaps we can learn from successful (?) systems significantly more complex
than those we seek to build; e.g., ourselves.  SOME of our sensors usually do
polling; e.g., our eyes; occasionally we get interrupted by flashes of light;
but more often, we scan the scene.  However, our ears most often operate on
interrupts.  And our brains use adequate algorithms to process and correlate
those diverse data.
                                             Bob

------------------------------

Date: 17 Jul 89 21:34
From: minow@bolt.enet.dec.com (Repent! Godot is coming soon! Repent!)
Subject: re: Non-deterministic interrupt handling

In risks 9.5, Dave Parnas writes:
>The non-determinism associated with interrupt handling comes from the unknown
>timing of the external events and is not affected by the replacement of
>interrupts with polling.

Much of the variation in interrupt handling comes from other operating
system processing, such as (in modern computer systems) page-faulting
and job/swap/memory management.  Also, in very modern systems, the hardware
instruction and data cache mechanisms introduce additional variation.
This is true in all aspects of interrupt processing: not just in the
response time as measured in the user's program.

It should also be noted that there are "real-time" operating systems
that are carefully designed to allow interrupt-driven I/O with minimal
variation.  Variation caused by the system itself (for example
"uninterruptable" instruction sequences for managing system queues)
is still a problem, however.
 
> If, because I am forbidden to
>use recursion, I write a complex program that does the stacking and
>backtracking hidden by recursion, I am likely to introduce more errors than
>were present in the compiler's well-tested implementation of recursion.

Perhaps, but these are usually the kinds of bugs that are caught during
initial testing.  "Recursion bugs" in bounded systems are often not
caught, but discovered when some combination of data causes the stack
to overflow into another variable's (or task's) data storage.  One "real-time"
system that I developed had a stack boundary check routine in its "idle
task." (This was *not* a safety-critical system, by the way.) [Confession: I
added these checks after taking Nancy Leveson's Software Safety seminar.]
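
A minimal sketch of such a check, assuming a known stack region pre-filled with
a sentinel pattern at startup (the details are invented; the original system is
not described here):

    #define STACK_WORDS 256
    #define SENTINEL    0xDEADBEEFUL

    /* Hypothetical task stack, assumed to grow downward from
     * task_stack[STACK_WORDS-1] toward task_stack[0], and filled with
     * SENTINEL before the task starts.
     */
    static unsigned long task_stack[STACK_WORDS];

    /* Called repeatedly from the idle task: if the words at the far end of
     * the stack region no longer hold the sentinel, the stack has grown
     * dangerously close to (or past) its boundary.
     */
    static int stack_ok(void)
    {
        int i;
        for (i = 0; i < 8; i++)
            if (task_stack[i] != SENTINEL)
                return 0;    /* boundary breached, or nearly so */
        return 1;
    }

The idle task would call stack_ok() each time around its loop and raise an
alarm (or halt the system) when it returns 0.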

>Similarly, if I introduce busy waiting and polling in my programs because I
>cannot use interrupts, I may again make things worse rather than better.
 
Good software engineering will make low-level mechanisms (I/O strategies)
invisible to the high-level programs.  On Unix, for example, device drivers
log errors by calling a printf() function that busy-waits its output to the
console.  The calling format of that function is identical to that of the
interrupt-driven printf() that normal user programs call.
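
A minimal sketch of the idea; the names (kprintf, uprintf) and the putc-level
routines are invented, and this is not the actual Unix kernel code:

    #include <stdarg.h>
    #include <stdio.h>

    /* Two output paths with the same calling format: the caller cannot tell
     * whether output is busy-waited to the console or queued for interrupt-
     * driven transmission.  The putc-level routines are stand-ins only.
     */
    static void console_putc_busywait(int c) { putchar(c); }   /* stand-in */
    static void tty_queue_putc(int c)        { putchar(c); }   /* stand-in */

    static void vprint_with(void (*put)(int), const char *fmt, va_list ap)
    {
        char buf[128];
        char *p;
        vsnprintf(buf, sizeof buf, fmt, ap);
        for (p = buf; *p != '\0'; p++)
            put(*p);
    }

    /* Driver-level printf: busy-waits, e.g. for error logging. */
    void kprintf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vprint_with(console_putc_busywait, fmt, ap);
        va_end(ap);
    }

    /* User-level printf: interrupt-driven path, same calling format. */
    void uprintf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vprint_with(tty_queue_putc, fmt, ap);
        va_end(ap);
    }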

Martin Minow                               minow%thundr.dec@decwrl.dec.com
The above does not represent the position of Digital Equipment Corporation

------------------------------

Date: 		Mon, 17 JUL 89 19:48:19 GMT
From: ZDEE699@elm.cc.kcl.ac.uk
Subject:        Telephone technicians tapping into other phone lines

There is a rumour going round in the South of France (French Riviera) 
concerning telephone technicians servicing grouping boxes, and since
"there is never smoke without a fire", I believe it has to be taken seriously.

	The technician traces a friend's line and calls him. He then connects
the line to someone else's line, and tells his friend that he can phone free
for the next 30 minutes. As a result, that person can call free of charge.
The person who pays is the owner of the other line. After the 30 minutes,
the technician interrupts the conversation, and eventually connects his
friend's line to another line, etc. etc. 
        The bills received by the people paying for the conversation are only
slightly higher than usual and the whole thing goes unnoticed. But word has
spread, and it seems that the trick has got out of hand. Normal users are
complaining of often being cut off, of a third person joining in their
conversation, and of varying phone bills.
        The answer from French Telecom is to ask for itemised bills.
And believe me, it comes as a shock when your bill says that you've been
calling St. Bartholomew and you don't even know where it is!

Olivier Crepin-Leblond,  Computer Systems & Electronics,
Electrical & Electronic Engineering, King's College London, UK

------------------------------

Date: Fri, 30 Jun 89 20:45:47 -0400 (EDT)
From: Gordon Hester <gh0t+@andrew.cmu.edu>
Subject: Re: New Yorker Article on "radiation" risks

There have been a couple of postings about the series of New Yorker articles by
Paul Brodeur recently. I happen to work (as a researcher) in the area of
electric and magnetic fields (EMF) risks.  (The use of the term "radiation" by
Brodeur is a complete misnomer, by the way - he's talking about fields.)  I'm
not a health effects researcher - my field is risk communication and public
policies for risk management. Anyway, I would like to pass along a couple of
comments.

The first comment is that no one really knows at this point whether there are
risks (i.e., adverse health effects) from EMF.  Scientific investigation in
this field is very complex and difficult.  There have been a lot of flawed
studies (with both positive and negative results). There are clearly
demonstrated biological effects, but no one knows whether these cause health
effects. (The alternative is that the body adjusts to the biological effects,
or that they are somehow unimportant.) There is certainly reason for sufficient
concern to continue to fund research, although money has been hard to come by
in this area as a rule. A little more support seems to be forthcoming recently.

Second, Brodeur's descriptions of the ways scientists have supposedly "cooked"
their data to suit the preferences of funding sources are, in large part, B.S.
Not that none of this has ever occurred - it has, and on both sides of the
issue. But it is the exception, not the rule by any means. There are some good
scientists working in this area, and they are men and women of high integrity.
Theirs are the results worth taking seriously, and fortunately they are
getting the lion's share of the funding these days.

The third thing, in case it's not obvious to anyone who read the articles, is
that Brodeur is anything but an unbiased observer.  He has reached his own
conclusions, and is seemingly out to convince people no matter what he has to
say to do it. Notice, for example, that according to him it's only the
scientists who come up with negative results (on health effects) who are
cooking their data. I would caution net readers against relying on these
articles as your sole source of information if you have a serious interest in
this topic. If you do, send me email if you want recommendations for other
sources.

BTW, despite my reservations about the Brodeur articles, the most recent
posting to the net on this topic (sorry, I don't have the name handy) did pick
out some interesting points from them.

gordon hester, carnegie mellon u., department of engineering and public policy
pittsburgh, pa 15213                                      gh0t+@andrew.cmu.edu

------------------------------

End of RISKS-FORUM Digest 9.6
************************
-------