[comp.protocols.tcp-ip] "Morris did it"--the new excuse?

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (11/13/88)

In article <1570@valhalla.ee.rochester.edu>, deke@valhalla (Dikran Kassabian) writes:
>Consider some of the less obvious consequences of his actions.

OK.

>Scientists and researchers at a university like mine were unable to use
>their computers and network links during the virus attack, and lost
>valuable time.  As always, some were up against deadlines and may well
>be hindered now in their chances for getting results before a confer-
>ence, or in getting a grant proposal out before deadlines.

When I've taught courses that use computers, I've told students that under
almost all circumstances, computer downtime would not be an excuse for
lateness.  The one exception I ever made involved granting everyone
a week's extension.  I've never worked assuming that the machines I use
are 100% reliable.  Do the scientists/researchers at your site do so--
even on critical stuff?  If someone has a grant proposal riding on
getting something done by a certain deadline, what happens if there's a
major disk crash at your site?

>The medical center/teaching hospital at my university is also network
>connected.  What if the network overload caused patient monitoring systems
>there to be sluggish and inadequate?  Would that be OK because Mr. Morris
>"did not do it on purpose"?  As it turns out, this was not a problem here,
>but it's not out of the question... it could have happened somewhere.

Are you saying that the patients at your university are in possible trouble
on days when the ARPANET is slow?  That if a machine crashes unexpectedly,
patients have nothing to rely on but prayer?  I find it frightening
that hospitals exist which have perhaps decided to rely heavily on some
computers working according to a perfect schedule.  Don't you?

Hospitals generally have a backup power supply.  For a very good reason.

>This is serious business!

Yes, this is *all* serious business.  Computers used primarily for USENET
or hacking or what-not can be dead for a while and merely inconvenience lots
of people.  But now you cite computers whose users cannot afford to have
them down for long--do the sites that run them operate without any
contingency plans whatsoever?  Such sites are irresponsible.

I find it remarkable that in such a computer-literate group as we all
supposedly represent--where we all know that "the computer did it" is NOT
an acceptable excuse--anyone, let alone the apparent hordes here,
would so quickly adopt "the worm did it".

What is the difference between:

	I'm sorry, Mrs Brown, your husband died because of a
	computer power failure.

and

	I'm sorry, Mrs Brown, your husband died because the
	Morris worm knocked out our computers.

?  To Mrs Brown, I would expect none whatsoever.

And you seem to be implying that the latter is to be blamed solely on RTM;
I believe the hospital that would or should be held culpable in the first
case is just as negligent in the latter, and should not be allowed to pass
the buck on responsibility.

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

hbo@sbphy.ucsb.edu (Howard B. Owen) (11/14/88)

In article <16915@agate.BERKELEY.EDU>, weemba@garnet.berkeley.edu 
(Obnoxious Math Grad Student) writes...

>   ...             I've never worked assuming that the machines I use
>are 100% reliable.  Do the scientists/researchers at your site do so--
>even on critical stuff?   ...

   Scientists at my site know that computers and networks go up and down.
Nevertheless, they tend to depend on both to get their work done.  One group
here does a lot of Monte Carlo-type work.  They use Cray time at SDSC.  If the
Internet link is down, their work stops.  Without supercomputers, and the
high-speed networks to connect them, a lot of physics research simply wouldn't
happen.  It doesn't matter that computers aren't 100% reliable; they are the
only tool for the job.

   While I agree with the idea that tool reliability should be carefully
considered when undertaking a job, I don't think failure to do so contributed
greatly to the damage done by the recent unpleasantness. The blame for lost
computer time and disrupted research lies not with unreasonable expectations
on the part of users, but with the originator of the worm.

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (11/14/88)

In article <978@hub.ucsb.edu>, hbo@sbphy (Howard B. Owen) writes:
>   Scientists at my site know that computers and networks go up and
>down.  Nevertheless, they tend to depend on both to get their work
>done.  One group here does a lot of Monte Carlo-type work.  They use
>Cray time at SDSC.  If the Internet link is down, their work stops.

So the work stops.  Is this something that happens once every four
years?  No.  So I don't understand why you bring this up.

>							       Without
>supercomputers, and the high-speed networks to connect them, a lot of
>physics research simply wouldn't happen. It doesn't matter that
>computers aren't 100% reliable; they are the only tool for the job.

Again, what's your point?

>   While I agree with the idea that tool reliability should be
>carefully considered when undertaking a job, I don't think failure to
>do so contributed greatly to the damage done by the recent
>unpleasantness. The blame for lost computer time and disrupted research
>lies not with unreasonable expectations on the part of users, but with
>the originator of the worm.

Again, what's your point?  From the user's point of view, there's always
one reason or another why the computers/networks are not available a
certain XX% of the time.  Every time they go down, do your users hunt
around for "whom" to blame?

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

robert@SPAM.ISTC.SRI.COM (Robert Allen) (11/15/88)

	
	>Scientists and researchers at a university like mine were unable to use
	>their computers and network links during the virus attack, and lost
	>valuable time.  As always, some were up against deadlines and may well
	>be hindered now in their chances for getting results before a confer-
	>ence, or in getting a grant proposal out before deadlines.
	
	When I've taught courses that use computers, I've told students that under
	almost all circumstances, computer downtime would not be an excuse for
	lateness.  The one exception I ever made involved granting everyone
	a week's extension.  I've never worked assuming that the machines I use
	are 100% reliable.  Do the scientists/researchers at your site do so--
	even on critical stuff?  If someone has a grant proposal riding on
	getting something done by a certain deadline, what happens if there's a
	major disk crash at your site?

    I would expect, and have in fact seen, professors give extensions in those
    cases where the loss of computing facilities CLEARLY had an unrecoverable
    impact on exams/assignments.  In most cases the loss of facilities was not
    CLEARLY to blame, since people often wait until the last minute before
    starting to work.  This is seldom the case with proposal or contract
    deadlines (in my experience).

	>This is serious business!
	
	Yes, this is *all* serious business.  Computers used primarily for USENET
	or hacking or what-not can be dead for a while and merely inconvenience lots
	of people.  But now you cite computers whose users cannot afford to have
	them down for long--do the sites that run them operate without any
	contingency plans whatsoever?  Such sites are irresponsible.

    My site is rather well equipped with computing facilities.  Almost every
    person has a Sun on their desk, and we also have several VAXen.  Even with
    this plethora of equipment, we were UNABLE to continue work for about 2 days
    when the virus hit.  If a group that has adequate facilities cannot survive
    such an outage, then certainly places such as universities, which usually
    have inadequate quantities of computers, will be very hard hit by such a
    virus as we are discussing.  Short of maintaining separate fallback
    hardware, it is my contention that it is IMPOSSIBLE to have a contingency
    plan other than what was used across the U.S., namely, lots of late-night
    work by system staff people who were trying to second-guess the designer
    of the virus/worm.

    I have not posted anything about the virus, since others were posting
    plenty.  Just this once, however, I'll make my opinion known.

    The designer of the virus clearly intended for it to secretly infiltrate
    other computers, and sit there, using up a quantity of CPU and memory.
    The code that implemented the infecting agent was designed to prevent
    anyone from deducing what the process was doing.  Although the virus did
    crash a few systems through swap-space exhaustion, it apparently (as far
    as we know) did nothing overtly malicious.  The reason the virus was so
    damaging was that the perpetrator DIDN'T TELL ANYONE what it was doing.
    For that reason we had to keep our systems down until we determined, as
    best we were able, that no trojan horses had been planted.
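
    To give an idea of what "preventing anyone from deducing what the
    process was doing" means in practice, here is a minimal sketch of the
    sort of camouflage reportedly involved -- my own reconstruction for
    illustration, NOT the worm's actual code:

        /* Sketch of process camouflage, reconstructed for illustration --
         * not the worm's actual source.  Overwriting argv[0] changes the
         * name that `ps' displays, and forking while the parent exits
         * keeps changing the process ID, so the intruder never shows up
         * as one long-running, oddly named process. */
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            size_t len = strlen(argv[0]);

            /* Make `ps' show an innocuous name such as "sh". */
            memset(argv[0], 0, len);
            strncpy(argv[0], "sh", len);

            /* Parent exits; the child carries on under a new PID. */
            if (fork() > 0)
                exit(0);

            /* ... the real work would continue here ... */
            return 0;
        }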

    If the perpetrator had "gone public" with the code, the fix, or even an
    overview of what the virus DIDN'T do, then perhaps people would be more
    willing to cut the guy some slack.

    I've known more than a few people who broke UNIX security at one time or
    another.  Some of them did it to get even with some system administrators;
    most did it to see if it could be done.  I do not automatically call for a
    pound of flesh from all "crackers" or "hackers" who break security.  In
    this case, however, I think that the negligence demonstrated by the
    perpetrator is rather gross.  He was playing with a dangerous thing to
    start with.  He also INTENDED that it `infect' other machines on a
    semi-permanent basis.  He DIDN'T tell anyone how to combat it when it got
    out of control, nor did he come forth to assure people that the virus was
    benign.  This is a mistake that I might expect of a freshman student, but
    certainly not a grad student.  This single fact is the most damning of the
    perpetrator, in my opinion.  Finally, I don't think that what he did
    required any great amount of brilliance.  As someone who spent some amount
    of time in stat labs with people who could break UNIX security at will, I
    can tell you that all that is really required is an inquisitive mind, lots
    of patience, and decent C programming experience.  It also requires a
    certain kind of mind-set.  If the perpetrator discovered the sendmail bug
    and the fingerd bug WITHOUT source code access, then I would consider
    using the word "brilliant" to describe him.  As it is, I would say he was
    a competent C and UNIX programmer.
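
    For what it's worth, the fingerd hole was as mundane as the public
    post-mortems describe: the daemon read its one-line request with gets(),
    which places no bound on input length.  The sketch below is my own
    reconstruction of that pattern, NOT the actual 4.3BSD source, but it
    shows the kind of bug involved:

        /* Sketch of the fingerd-style hole, reconstructed from public
         * accounts -- not the actual 4.3BSD source.  gets() reads until
         * newline with no idea how big the buffer is, so a request longer
         * than 512 bytes runs off the end and tramples the stack, return
         * address included. */
        #include <stdio.h>

        int main(void)
        {
            char line[512];     /* one-line request buffer on the stack */

            gets(line);         /* unbounded read: this is the hole */
            /* ... look up the requested user and write the reply ... */
            printf("finger request for: %s\n", line);
            return 0;
        }

    The fix is the obvious bounded read, fgets(line, sizeof line, stdin).
    The sendmail hole was of a similarly mundane flavor: a DEBUG mode, left
    enabled in many installations, that let a remote connection hand the
    mailer commands to execute as a "recipient".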

    As for punishment?  I agree with others that jail time is counterproductive
    in a case like this.  Community service of some type relating to computers,
    plus perhaps a fine, would be more conducive to getting the point across
    that some of us can't afford to sit around a stat lab (anymore) figuring out
    how to screw the system (not a really great challenge), and we can't afford
    to have our machines down for 2 days at a stretch.

    Robert Allen,
    robert@spam.istc.sri.com

pavlov@hscfvax.harvard.edu (G.Pavlov) (11/22/88)

In article <8811142005.AA02573@milk10>, robert@SPAM.ISTC.SRI.COM (Robert Allen) writes:
>       (re the author of the worm):
> 	
>   ......  He DIDN'T tell anyone how to combat it when it got out of control,
>   nor did he come forth to assure people that the virus was benign.  This
>   is a mistake that I might expect of a freshman student, but certainly not
>   a grad student.  This single fact is the most damning of the perpetrator,
>   in my opinion.

    I don't defend the individual.  But I would not interpret his failure to
    communicate as a sign of malevolence.  I assume that he became scared and
    panicked.  It's happened to the best of us.

    greg pavlov, fstrf, amherst, ny