[comp.risks] RISKS DIGEST 7.93

RISKS@KL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (12/14/88)

RISKS-LIST: RISKS-FORUM Digest  Tuesday 13 December 1988   Volume 7 : Issue 93

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Overrides of train controls in Japan (Jeff Schriebman)
  Re: Vincennes and over-reliance on automation (Victor Riley)
  Fake ATMs (Rick Adams) 
  `Trapdoor' -- War by Computer Virus (Rodney Hoffman)
  Re: "Hackers", "crackers", "snackers", and ethics (Douglas Jones)
  Hacking the etymology (Nigel Roberts)
  Re: design intent of worm (Rich Thomson)
  It's NOT a computer! (Martin Minow)
  There's no excuse (Aaron Harber via Martin Minow)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, and nonrepetitious.  Diversity is welcome.
CONTRIBUTIONS to RISKS@CSL.SRI.COM, with relevant, substantive "Subject:" line
(otherwise they may be ignored).  REQUESTS to RISKS-Request@CSL.SRI.COM.
FOR VOL i ISSUE j / ftp kl.sri.com / login anonymous (ANY NONNULL PASSWORD) /
  get stripe:<risks>risks-i.j ... (OR TRY cd stripe:<risks> / get risks-i.j ...)
  Volume summaries in (i, max j) = (1,46),(2,57),(3,92),(4,97),(5,85),(6,95).

----------------------------------------------------------------------

Date: Tue, 13 Dec 88 09:04:24+0900
From: jeff@jusoft.jusoft.junet (Jeff Schriebman)
Subject: Overrides of train controls in Japan
                            [@ucbvax.Berkeley.EDU,@unisoft:jeff@jusoft.UUCP]

Drivers Often Override ATS on Chuo-Sobu Line
Asahi Evening News, Thursday December 8, 1988

Tokyo, Japan -- Police investigating the Dec. 5 rear-end collision at
Higashi-Nakano Station on the East Japan Railway's Chuo Line, which killed
two [the driver and a passenger] and injured 102, have found that the train
drivers override the ATS (automatic train stop) system more than 10 times
during the run between Chiba and Mitaka stations.

	This is because the heavy train schedules result in trains becoming
close-packed on the tracks. Police also found that the drivers in many cases
do not apply the hand brakes after overriding the ATS, as they are required
to do, because applying the brakes would delay the trains further.

	This practice of overriding the ATS and not applying the hand brakes
is believed to be a background factor in the Dec. 5 rear-end collision.

	The investigations up to Wednesday showed that the brakes and ATS on
the train that ran into the stopped train were in good working order.

	JR East penalizes train drivers for being more than 30 seconds late when
calculating pay hikes and bonuses.  Drivers say JR East is stricter on this
point than the old Japanese National Railways (JNR).  The catchword is, "Don't
be late!".

	The train that ran into the stopped train was about four minutes behind
schedule and JR people say that driver Teruki Hirano could have been trying to
make up for lost time.

	The union claims that the tight schedules produced the accident, but JR
East points out that similar schedules have been used for more than 20 years.
During the peak commuting hour in the morning, there are trains every two and a
half minutes, but at the time of the accident, the interval was three and a
half minutes.

	This was the third accident on the down line between Okubo and
Higashi-Nakano stations, and all three were rear-end collisions. They all
occurred between 9 and 10 a.m. after the morning rush hour. Most passengers get
off at Shinjuku Station, and drivers tend to relax because they have only a
little more to go to Nakano Station or Mitaka Station [the end of their run].

	No improvements were made in signal and safety facilities after the
second accident in 1980. Some experts point out that there should be more
signals installed at shorter intervals because of the sharp curve in front of
the station and the short distance between the end of the curve and the
station.

------------------------------

Date: Tue, 13 Dec 88 15:16 CST
From: RILEY@csvax.src.honeywell.com
Subject: Re: Vincennes and over-reliance on automation

In RISKS 7.92, Randall Davis writes:

> Can automation and reliance on remote sensing be overdone? Of course.
> Is this an example of it, or an example of the opposite? [...] ...within
> almost any reasonable definition, the system worked to supply accurate
> and useful information in a form available in a "quick reference"; every
> report that comes out continues to make that clear.

And earlier in the same submission, "When the report came out that the system
had supplied accurate information, the silence [among RISKS contributors] was
deafening."

As I understand the final Pentagon-issued report on the Vincennes incident,
the initial classification of the aircraft's identity and altitude vector
were indeed in error.  The October 15 Science News summarizes the findings
thus: "The computerized surveillance system on the Vincennes first misread
the plane's altitude and identified it as an F-14 fighter jet, but then
corrected itself.  The Navy report concludes crewmen responsible for
evaluating surveillance did not closely analyze the initial computer mistake.
Furthermore, the skipper paid more attention to their increasingly heated
reports of an emergency than to new displays generated by the computer."

That being the case, one could make a case for this being an example of
over-reliance on automation.  The crew involved believed the initial
system identification and altitude reading and did not double check
them, nor did they change their evaluation when given new, conflicting
information.  However, when faced with an over-the-horizon threat, the
crew has no choice but to rely on remote sensing and automated target
identification, so "over-reliance" is hardly a meaningful charge.

I think this incident was primarily the result of an interaction
between automation and crew that system developers did not predict, and
in fact could not have anticipated without extremely extensive scenario
generation and analysis.  The crew was "primed" to accept the initial
misidentification because a fire-fight with Iranian gunboats twenty
minutes prior to the aircraft encounter had raised their expectation of
attack and, in effect, lowered their target detection "threshold" to
the point where the misidentification went unquestioned.
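
As a toy illustration of that "threshold" effect (purely a sketch, in no way
drawn from the Aegis system), a tiny Bayesian model shows how a raised prior
expectation of attack lets the very same ambiguous sensor evidence cross a
fixed decision threshold:

    def posterior_hostile(prior, likelihood_ratio):
        # Posterior P(hostile | evidence), from the prior probability
        # of attack and a likelihood ratio favoring "hostile".
        odds = (prior / (1.0 - prior)) * likelihood_ratio
        return odds / (1.0 + odds)

    THRESHOLD = 0.9  # commit to "hostile" above this
    EVIDENCE = 4.0   # the same ambiguous sensor evidence in both cases

    print(posterior_hostile(0.05, EVIDENCE) >= THRESHOLD)  # False: calm crew
    print(posterior_hostile(0.70, EVIDENCE) >= THRESHOLD)  # True: primed crew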

As automation becomes more complex, and as decision-making becomes more
automated, I think we'll see more incidents of this type.  System
developers need to realize that complex automation can produce subtle
and unintended consequences, due to the interactions between automation
and crew or between automated systems, and that these consequences can
lead to major errors.  I believe the major outcome of the Vincennes
incident should not be to assign blame to either automation or humans,
but rather to recognize that new analytic approaches should be developed
to uncover potential problems before such systems are fielded.

-Victor Riley, Honeywell Systems and Research Center
(all usual disclaimers apply)

------------------------------

Date: 13 Dec 88 03:47:20 GMT
From: rick@uunet.UU.NET (Rick Adams) 
Subject: Fake ATMs 
Organization: UUNET Communications Services, Arlington, VA

From Communications Week International, 21 Nov 88

One night last year, Italian banking clients using the ''bancomat,'' an
electronic teller, were pleased to discover that their bank branch had added an
extra terminal.  Delighted to be spared waiting time, the clients inserted
their cards into the machine. The bancomat presented the usual menu and
requested the clients' personal identification numbers.  The machine, however,
withheld the clients' cards, informing them that the cards were invalid and
that they should request information during banking hours.  It was only when
the clients returned to the bank the following day, ready to complain, that
they learned that they had been victims of a new kind of fraud. What had
appeared to be a bancomat was, in fact, a personal computer, placed on the bank
wall by an independent operator.  The thief/entrepreneur used the cards and
identification numbers to clean out their accounts.

This anecdote, revealed recently during a conference of banking security
experts, indicates the kind of problems faced by value-added services operators
in a country that has serious organizational difficulties.

    [This is the old system-spoof problem.  Authentication is a two-way street.
    The system needs some assurance that the user's identity is authentic, but
    the user equally needs assurance that the system's identity is authentic.
    PGN]
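
A minimal sketch of that two-way street, assuming a shared secret established
out of band and using modern primitives (HMAC postdates this digest; all names
here are hypothetical).  The point is only that the customer challenges the
terminal before trusting it with a card and PIN, just as the terminal
challenges the customer:

    import hmac, hashlib, os

    BANK_SECRET = b"shared-out-of-band"  # hypothetical shared key

    def response(secret, challenge):
        # Prove knowledge of `secret` without revealing it.
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    def verify(secret, challenge, proof):
        return hmac.compare_digest(response(secret, challenge), proof)

    # Customer challenges the terminal BEFORE entering a PIN.  A fake
    # bancomat never held the secret, so its answer will not verify.
    challenge = os.urandom(16)
    real = response(BANK_SECRET, challenge)  # genuine terminal
    fake = response(b"guess", challenge)     # impostor
    print(verify(BANK_SECRET, challenge, real))  # True
    print(verify(BANK_SECRET, challenge, fake))  # False
    # The terminal then challenges the customer in exactly the same way.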
    
------------------------------

Date: 12 Dec 88 17:51:52 PST (Monday)
From: Rodney Hoffman <Hoffman.ElSegundo@Xerox.com>
Subject: `Trapdoor' -- War by Computer Virus

In the 11 Dec 88 'Los Angeles Times Book Review' Times Book Editor Jack
Miles reviews a new novel, "Trapdoor" by Bernard J. O'Keefe (Houghton
Mifflin).  The headline for the review is WAR BY COMPUTER VIRUS.  It quotes
from the epilogue to the novel, in which the author calls it a "parable to
point out the complexity of modern technology and to demonstrate how one
error, one misjudgment, or one act of sabotage could lead to actions that
would annihilate civilization."

I have not read the book, but according to the review, an inside saboteur
plants a delayed-action "virus" (the review calls it one, although the
description doesn't really sound like one) in a Pentagon computer; the virus
messes with the public-key encryption codes required to fire US missiles
equipped with permissive action links.  "The result:  America no longer
knows the code necessary to launch its own weapons.  The nation is
defenseless."  (In the epilogue, the author points out that it can't quite
happen that way.)  Much more transpires.

The author, O'Keefe, earlier wrote the non-fiction "Nuclear Hostages,"
worked with Fermi and Oppenheimer on the Manhattan Project, and heads EG&G,
the company which the publisher says has "conducted all nuclear weapons
tests for the US government for the last 40 years and is the operating
contractor for the Kennedy Space Center."

The reviewer says, "there remains a lean and gripping parable hiding inside
an only slightly overweight thriller.  For all its novelistic faults, I
can't imagine a timelier read or a better Christmas gift for anyone serious
about computers or math.... If the computers that control our nuclear
weapons can be disabled, what about the computers that control our nuclear
power plants?  What about the computers that control our vote-counting and
our stock transactions?..."

------------------------------

Date: Tue, 13 Dec 88 09:05:08 CST
From: Douglas Jones <jones@herky.cs.uiowa.edu>
Subject: Re: "Hackers", "crackers", "snackers", and ethics

In Andy Goldstein's contribution of 12 Dec 88, he says, in reaction to
my previous comments:

> Many of the computer systems involved are crucial to a business's
> operation; some are critical to human life. The potential (and in some
> cases the actuality) is there for major losses from disruption of
> service and theft. The hackers are unknown outsiders in whom a serious
> organization can place no trust whatsoever.

One of my examples was Com-Share Incorporated.  This description exactly fits
Com-Share.  Com-Share had only one business:  Selling timesharing services.
Com-Share had more customers than any other timesharing service in the early
'70s, and the customers were distributed nationwide.  Security was of the
utmost importance; without it, the company would surely have failed in the
marketplace.

In this context, the reward offered by Com-Share to anyone discovering
a loophole in the system security served an important role.  Goldstein
describes the hackers who threaten a commercial service as follows:

> The hackers are unknown outsiders in whom a serious
> organization can place no trust whatsoever.

Without the reward, the company would clearly have had to react to hackers as
the above quote indicates.  With the reward, the company could not exactly
trust hackers, but rather, the company could make it more rewarding for a
hacker to tell the company what was wrong than to take a less desirable path
such as selling improperly obtained information to a third party.  A reward
does not automatically make all hacking constructive, but it offers an
incentive for constructive hacking.  By the same token, legal disincentives
for destructive hacking also can help.

What I oppose are blindly applied blanket actions taken against all hackers.
These provide an incentive to stay away from hacking, but if someone insists on
hacking, they provide no incentive towards constructive behavior.  I feel that
hacking is a sufficiently attractive activity that some people will hack
whether it is legal or not, and we must keep incentives in the system to direct
the behavior of such people to constructive ends.
                              				Douglas W. Jones

------------------------------

Date: Tue, 13 Dec 88 07:00:26 PST
From: roberts%untadh.DEC@decwrl.dec.com (Nigel Roberts, D-8043 Unterfoehring)
Subject: Hacking the etymology

The recent discussions of the etymology of the terms "hacker", "cracker", 
_et al_ & the recent spirited defence of the activity by one or two 
contributors (at least one of them being a self-confessed "hacker") have
set me to thinking.

In RISKS & elsewhere, I see a "generation gap" between what, for want of a
better term, I would describe as the "old-time hackers", who were
experimenters, and the current cyberpunks, the "hackers" of popular mediaspeak,
the eponymous "shatterers".

I think this apparent generation gap is fundamental to the discussion. 

The "old-style hackers" (of whom I am vain enough to claim I belong) learned
their computing in the 60s and 70s, often in a university or similar multiuser
environment, where, as often as not, hacking involved programming.

Today's stainless steel rats are much more likely to have discovered computers
in the home, courtesy of Apple, Commodore or IBM, and started their "network
tourist" activities by purchasing a modem.

The old school (& I include myself here) resents the way the term "hacker"
has been hijacked and is today being used to connote anti-social activity.
This is despite the ambiguous roots of the term (described by Weizenbaum
in _Computer Power & Human Reason_).

Today's cyberpunks are computer burglars, who are attempting to justify their
activities by claiming a common motivation with their arguably less anti-social
predecessors.

Like any story of generation conflict, there are elements of truth in the
claims of both sides.

It is going to be impossible to prevent the media from using the word "hacker"
in a way that the "old school" dislike. It would almost be easier to claim that
the word "gay" meant "happy, carefree".

But maybe the media and the collective unconscious understand the evolution 
of hackerism better than we do.

For just as there is at least a tiny thread of commonality with the hackers of
old in the network rats of the '80s, so I would say there was some small
element of today's network rats in the hackers of old.

But of course, there IS a distinction between hacking around a system whose
sole reason for being is to teach people about computers, and hacking into
systems which are being used for serious business purposes and where outsiders
have no right to be.

That difference is ethical, and has been well expounded here in RISKS already.

Seeing as we can't get rid of "hackers" in the popular media, I would like 
to coin the term "punk hackers" (an abbreviation of 'cyberpunk hackers')
to describe their anti-social activities.

It seems to fit only too well, just as "punk rock" is rock music with
swearing & spitting at the audience.

And using it would let us "old hackers" keep our self-respect!

	Nigel Roberts,  	Munich, W. Germany.

------------------------------

Date: Mon, 12 Dec 88 22:17:59 MST
From: thomson@wasatch.utah.edu (Rich Thomson)
Subject: Re: design intent of worm
Organization: Oasis Technologies

In RISKS DIGEST 7.92 cjosta@taux01.UUCP (Jonathan Sweedler) writes
>It seems that Robert Morris Jr. would not have done anything illegal
>(even under these new bills) if his virus had worked as it was designed
>to work: to propagate quietly from machine to machine.

Several times I've seen discussion here on RISKS of what the worm would
have done had it "worked as it was designed to work".  One of our
local compiler gurus, Donn Seeley, was the person who decompiled the worm
code from which Gene Spafford wrote his paper.  Donn Seeley has also
written a paper on the worm and I attended a talk he gave on the worm
here at the University of Utah.

He was asked the question "On USENET there has been discussion to the
effect that the author intended the worm to propagate slowly from machine
to machine, but a programming error caused the worm to replicate out of
control.  What evidence did you see in the code to support this?"

His answer was "None."  This business about the worm "doing what it was
designed to do" is merely a rumor going around USENET and has no
substantiation in fact, unless RTM himself starts claiming that there was a
design mistake.  Since RTM has so far remained silent, I'm inclined to
believe Donn Seeley.

There is NO EVIDENCE in the decompiled code to indicate that the worm was
intended to propagate slowly.  In fact, the minimum lifetime the worm could
have is fifteen minutes.  There is code in the worm that deals with "population
control", but this is oriented more towards making sure that not too many
copies of the worm are running so that the worm can get some "work" done.
Otherwise, the worm would never propagate out of the first few machines because
it would be so busy re-infecting them.
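
For readers without the decompilations at hand, here is a schematic sketch of
that mechanism as the published analyses describe it (the port number and the
one-in-seven odds come from Seeley's and Spafford's papers; this is an
illustration in Python, not RTM's code).  Even when a copy finds another
already running, it sometimes stays anyway, so the population is bounded only
loosely:

    import random, socket

    CHECK_PORT = 23357  # local rendezvous port reported in the analyses

    def another_copy_running():
        # Is some other copy already listening on the check port?
        try:
            socket.create_connection(("localhost", CHECK_PORT), 1).close()
            return True
        except OSError:
            return False

    def population_control():
        # Usually defer to an existing copy, but one time in seven keep
        # running regardless, so copies still accumulate on busy nets.
        if another_copy_running() and random.randrange(7) != 0:
            return "exit"
        return "keep working"  # listen on CHECK_PORT and carry on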
                        			-- Rich

------------------------------

Date: 13 Dec 88 14:17
From: minow%thundr.DEC@decwrl.dec.com
Subject: It's NOT a computer!

Reading the recent risks discussion (and listening to conversation at parties)
was an education.  So much magic:
	cr--r-----  1 root     sys        3,   0 Jan 10  1987 /dev/mem
	chown root /dev/mem /dev/kmem ...
	chgrp sys /dev/mem /dev/kmem ...
	chmod 440 /dev/mem /dev/kmem ...
(From Paul McKenney's note in Risks 7.87)

Friends, this thing under my desk isn't a computer, and I'm not a computer
programmer.  Of course, it looks like a computer, and the woman who services
it probably assumes it's a computer, and the guy in the next office who
designed it is quite certain it's a computer, (and the folks who pay
my salary hope I'm a computer programmer), but they're all wrong.

It's a tool, that's all; and, when I'm reading my mail or writing my programs,
I'm every bit as naive as the folks who say "hello" when the login program
tells them to.  It's a smart typewriter, and that's all it should be.

I don't want to program my workstation.  I don't want to become a Unix guru
to get my work done, and I don't want to have to play hide-and-seek with every
snake to slither out of a C programming course.  I want to take the stuff out
of the box, plug it in, and get some work done without having to worry whether
/dev/mem is owned by root.

Telling me how to repair the problem is missing the point altogether.
In fact, there is so much software inside that workstation and the
computer network it connects to: Ultrix, X-windows, Microemacs, Ethernet,
VMS, more Ethernet, disk servers, and whathaveyou, that I doubt that there
is any single person who can navigate through the entire collection.  I have
to trust the people who have "privileges" to do their job responsibly, and
the people who design the systems to limit my risk.

Now, where did I put my copy of "Normal Accidents"?

Martin Minow
minow%thundr.dec@decwrl.dec.com

The above does not represent the position of Digital Equipment Corporation.

------------------------------

Date: 13 Dec 88 13:58
From: minow%thundr.DEC@decwrl.dec.com
Subject: There's no excuse

Excerpts from an op-ed piece in the business section of the Boston Globe,
Tuesday, Dec 13, 1988:

  For Robert T. Morris Jr., hacker, there's no excuse      
  By Aaron Harber

[Harber teaches at the Kennedy School of Government at Harvard University
and is a director of two software companies.]

... With hackers around the country proclaiming Morris a superstar, he is
on the path to becoming a folk hero.  We must see that his punishment is
swift and severe so that his actions are immediately seen and understood
as undesirable and unacceptable.  To do any less will sow the seeds for
further undertakings by those who are as "bright and bored" as Morris.

... Morris' good intentions failed in two respects.  First, once he realized
his error, he had many opportunities to correct it... He panicked and failed
to seriously attempt to correct his monster.

More importantly, Morris knew he should not have made the attempt at all.
It was not only that something might go wrong.  He had many other ways of
proving his theories and, ironically, was someone people would have listened to
had he raised the concern in a legitimate forum.  In a supervised demonstration,
he could have made his point and received the attention and accolades he
may have sought.

His attempt was based on the premise accepted by far too many people: Computer
systems are different from other forms of property... It is considered by all
to be unethical and illegal to enter someone's business and examine records
without permission.... Yet hackers see invasion of a system as a challenge and
are often rewarded for their "successes."

... Students are taught skills that give them the power to do both positive
and negative deeds, yet they are rarely, if ever, schooled in the ethical
deployment of that power.  Given the constant demonstrations of a hacker's
potential power, why are computer ethics courses not mandatory? ...

Robert Tappan Morris Jr. is an example of how we have failed, and his example
is one that will be followed until we change society.  Hackers will continue
to see breaking into systems and implanting viruses as a game.  They know
they would never physically harm someone, yet do not comprehend the
violence of their seemingly benign actions.  They rarely see, in person, the
results of their activities and this distance promotes their insensitivity.

... The possibilities [for harm] are endless.  Unless and until new standards
are set and accepted by the country, we will continue to suffer from people
such as Robert Tappan Morris Jr. and their computer viruses.

[Excerpted by Martin Minow minow%thundr.dec@decwrl.dec.com
The above does not represent the position of Digital Equipment Corporation.]

------------------------------

End of RISKS-FORUM Digest 7.93
************************