[news.sysadmin] Possible Fines for Virus Perpetrator

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (11/07/88)

In article <12081@dscatl.UUCP>, lindsay@dscatl (Lindsay Cleveland) writes:
>>		  So, it was Robert T. Morris Jr., was it?

>I would surmise that a lot of the sites who *were* damaged by the
>virus and expended much real cash in man-hours (overtime!) chasing it
>down would be interested in proceeding with a class-action suit against
>the fellow to recover damages.

Well gee.  Divide $10K say by 10K computers say, and they each win $1.
Next you subtract off the lawyers' fees...  Hmmm...  Economics wasn't
your major, I presume?

>			         Whether or not it made it through all
>the courts, appeals, etc. is perhaps not as useful as the scare it
>would throw into some other clowns who might think of trying a similar
>worm/virus "just for a bit of fun!"

I see that you, like thousands of others, don't really understand.  Robert
T Morris Jr has done everyone a FAVOR.  Instead of thanking him for maybe
waking up people on the ARPANET to how DAMN EASY IT IS TO INFILTRATE, you,
like thousands of others, just think he's some annoying clown out there
who gets off on crashing the net.

Guess what?  Well, maybe he is an annoying clown, but that's irrelevant.

There are thousands of computers out there extremely vulnerable to attack.
Instead of wailing on about class-action suits to recover "damages", all
these sites that just maybe have woken up and plan to actually take
security seriously should pay RTM in moneys saved from the potential *BILLIONS*
that could be lost for being so many ostriches.  WAKE UP FOLKS!  This may
very well prove to be your last warning.

>Let me join in the chorus of applause for those many net.people who
>quickly came up with answers and solutions, and for their great
>efforts in spreading the word to the rest of the net.

Yup, good show there.  I hope you're not smugly counting on the next rogue
code to be so easy to notice and eliminate by some of my fellow Berkeley
grad students?  DO SOMETHING **NOW** TO PROTECT YOURSELVES!  WAKE UP FOLKS!

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

c91a-ra@franny.Berkeley.EDU (john kawakami reader) (11/07/88)

Hear hear!  I agree with weemba@garnet.  We should be glad the virus was not
intended to be malicious.  I heard Morris had managed to become root on some
machines.  The potential for damage is frightening.

John Kawakami:::::c91a-ra@franny.berkeley.edu:::::[      ]:::::::::::::::::::::
:::::::::::::::::::::::::::::::::::::::::::::::::[      ]::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::::::::::::::[      ]:::::::::::::::::::::::

spaf@cs.purdue.edu (Gene Spafford) (11/07/88)

In article <16600@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
>In article <12081@dscatl.UUCP>, lindsay@dscatl (Lindsay Cleveland) writes:
>>>		  So, it was Robert T. Morris Jr., was it?
>
>>I would surmise that a lot of the sites who *were* damaged by the
>>virus and expended much real cash in man-hours (overtime!) chasing it
>>down would be interested in proceeding with a class-action suit against
>>the fellow to recover damages.
>
>Well gee.  Divide $10K say by 10K computers say, and they each win $1.
>Next you subtract off the lawyers' fees...  Hmmm...  Economics wasn't
>your major, I presume?

That was an unkind comment, Weemba.  It also misses the fact that such
a class action suit could be filed for millions, not $10K.  I suspect
that Sun Microsystems will expend a few $100K on this -- not only to
eradicate the worm in their internal network, but they will have the
expense of FedEx'ing copies of patches to all their sites under
maintenance.  DEC will have similar costs.  Then there is BBN and....

Get the idea?  This was not a small-time problem.  The losses could
amount to millions.  I would not be surprised if Cornell was named
as a party to such suits, and maybe even AT&T.  Lawyers like
to name everybody that has deep pockets and might be partially at
fault.  Morris may not be able to pay a judgment that large,
but he may not be the only one sued.

>Yup, good show there.  I hope you're not smugly counting on the next rogue
>code to be so easy to notice and eliminate by some of my fellow Berkeley
>grad students?  DO SOMETHING **NOW** TO PROTECT YOURSELVES!  WAKE UP FOLKS!

It is nice to take pride in your fellow Berkeley-oids, but you are
insulting the professional staff and students at other locations where
the worm was cracked.  The folks at MIT did a lot of work with the
folks at Berkeley, for instance.  Here at Purdue we had a fix in place
before the fixes were published from Berkeley.  So it goes at many other
locations.

-- 
Gene Spafford
NSF/Purdue/U of Florida  Software Engineering Research Center,
Dept. of Computer Sciences, Purdue University, W. Lafayette IN 47907-2004
Internet:  spaf@cs.purdue.edu	uucp:	...!{decwrl,gatech,ucbvax}!purdue!spaf

spaf@cs.purdue.edu (Gene Spafford) (11/07/88)

In article <16600@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
>In article <12081@dscatl.UUCP>, lindsay@dscatl (Lindsay Cleveland) writes:
>>			         Whether or not it made it through all
>>the courts, appeals, etc. is perhaps not as useful as the scare it
>>would throw into some other clowns who might think of trying a similar
>>worm/virus "just for a bit of fun!"
>
>I see that you, like thousands of others, don't really understand.  Robert
>T Morris Jr has done everyone a FAVOR.  Instead of thanking him for maybe
>waking up people on the ARPANET to how DAMN EASY IT IS TO INFILTRATE, you,
>like thousands of others, just think he's some annoying clown out there
>who gets off on crashing the net.
>
>Guess what?  Well, maybe he is an annoying clown, but that's irrelevant.

That attitude is completely reprehensible!  That is the exact same
attitude that places the blame for a rape on the victim; I find it
morally repugnant.

Consider an analogy:

Locks built in to the handle of a door are usually quite poor;
deadbolts are a preferred lock, although they too are not always
secure.  These standard, non deadbolt locks can be opened in a few
seconds with a screwdriver or a piece of plastic by someone with little
training.

Now, if you have such a lock on your door, and you wake up in the
middle of the night to find that a stranger has broken into your home
and is wandering about, bumping into things in the dark and breaking
them, how do you react?  Do you excuse him because the lock is easy to
circumvent?  Do you thank him because he has shown you how poor your
locks are?  Do you think *you* should be blamed because you never got
around to replacing the lock with a better one and installing a
burglar alarm?

We have failed to imbue society with the understanding that computers
contain property, and that they are a form of business location.  If
someone breaks our computers, they put us out of work.  If someone
steals our information, it is really theft -- not some prank gone
awry, and it certainly isn't some public service!

We cannot depend on making our systems completely secure.  To do so
would require that we disconnect them from each other.  There will
always be bugs and flaws, but we try to cover that by creating a sense
of responsibility and social mores that say that breaking and cracking
are bad things to do.  Now we have to demonstrate to the world that
this is the case, and we will back it up with legal action, or we'll
continue to risk having bored students and anti-social elements
cracking whatever we replace the systems with until there is no longer
any network.  

-- 
Gene Spafford
NSF/Purdue/U of Florida  Software Engineering Research Center,
Dept. of Computer Sciences, Purdue University, W. Lafayette IN 47907-2004
Internet:  spaf@cs.purdue.edu	uucp:	...!{decwrl,gatech,ucbvax}!purdue!spaf

night@pawl11.pawl.rpi.edu (Trip Martin) (11/08/88)

In article <5332@medusa.cs.purdue.edu> spaf@cs.purdue.edu (Gene Spafford) writes:
>That attitude is completely reprehensible!  That is the exact same
>attitude that places the blame for a rape on the victim; I find it
>morally repugnant.

And by this same logic, someone who walks into the middle of a
battlefield isn't at fault if he gets shot.

We do have to take precautions if we expect to be reasonably safe 
from life's disasters.
 
>
>Consider an analogy:
>
>Locks built in to the handle of a door are usually quite poor;
>deadbolts are a preferred lock, although they too are not always
>secure.  These standard, non deadbolt locks can be opened in a few
>seconds with a screwdriver or a piece of plastic by someone with little
>training.
>
>Now, if you have such a lock on your door, and you wake up in the
>middle of the night to find that a stranger has broken into your home
>and is wandering about, bumping into things in the dark and breaking
>them, how do you react?  Do you excuse him because the lock is easy to
>circumvent?  Do you thank him because he has shown you how poor your
>locks are?  Do you think *you* should be blamed because you never got
>around to replacing the lock with a better one and installing a
>burglar alarm?
>
Okay, suppose a bank follows this logic and has generally poor locks
on their place of business.  While the guy who breaks into a bank is
still a criminal, the bank is also to blame, since it holds lots of 
money and is a very attractive target to criminals.  Security should
be a function of the value of the objects being protected.

Now think about how valuable the information stored on your computer
is...  If you don't think that there are people who would love to get
their hands on that information, or use your computer for their own
purposes, you have another thing coming...  Add to this the fact that 
the internet offers an unlimited supply of computers to hack and steal 
from...

Now from the logic from a paragraph ago, we should be taking great pains
to see that people can't get into our systems.  While it won't stop the
determined hacker, like bank security systems won't always stop the 
determined criminal, it will go a long way in stopping the casual
hacker.  And that alone could save us lots of grief.

>We have failed to imbue society with the understanding that computers
>contain property, and that they are a form of business location.  If
>someone breaks our computers, they put us out of work.  If someone
>steals our information, it is really theft -- not some prank gone
>awry, and it certainly isn't some public service!

You think that kind of logic is going to stop a criminal with real
goals?  The idea that murder is a serious crime has been passed down
for thousands of years, yet that hasn't stopped people from doing it.
 
What this guy did was a crime, but he also did us a real service.  
He got our attention in a big way.  He succeeded in breaking into
hundreds of computers in a matter of days.  Next time the intrusion
may not be so obvious, nor the damage done...

>We cannot depend on making our systems completely secure.  To do so
>would require that we disconnect them from each other.  There will
>always be bugs and flaws, but we try to cover that by creating a sense
>of responsibility and social mores that say that breaking and cracking
>are bad things to do.  Now we have to demonstrate to the world that
>this is the case, and we will back it up with legal action, or we'll
>continue to risk having bored students and anti-social elements
>cracking whatever we replace the systems with until there is no longer
>any network.  

Relying on social mores to protect your systems is a sorry policy.  We
certainly should have stiff legal penalties for hacking, but as everyone
knows, to be punished, you have to be caught.  And catching hackers
can often be near impossible.
--
Trip Martin
night@pawl.rpi.edu
userffs7@rpitsmts.bitnet

karn@jupiter.bellcore.com (Phil R. Karn) (11/08/88)

The many discussions I've heard on the morality of Robert Morris's actions
inevitably seem to include arguments based on analogies with other security
breaches.  Robbing banks to demonstrate lax security, tossing matches into
gasoline tanks, jimmying door locks in houses, etc, have all been mentioned.

Without necessarily taking issue with any particular analogy, I would like
to point out the pitfalls of such arguments. Laws, codes of ethics and moral
behavior, etc, have evolved over a much longer time than have computers and
computer networks. For most traditional acts, everyone has a pretty clear
idea of the difference between right and wrong. For example, the notion
that it is wrong to enter someone's house without permission has been well
established for many years. Most kids learn and understand this very early.

But what constitutes "permission" to "access" a computer system is not
always clear.  Logging into someone's account without their permission and
rummaging through their private files is now generally considered wrong.
But what about rummaging through a system's anonymous FTP directory?  In my
own mind, putting something in an anonymous FTP directory is tantamount to
placing a stack of copies by the curb next to a sign that says "FREE -- take
one", or posting it on a (physical) bulletin board for all to see. Does
everyone feel this way? How about the naive user who puts a file in
/usr/spool/ftp without knowing the convention? Can he later flame with any
justification about the "back door" in FTP that made his file freely
available to one and all?  Suppose someone finds something in an anonymous
FTP directory that the owner didn't really want made public -- is it still
up for grabs? That's not so easy to answer.  One man's "standard convention"
is another man's security hole.
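Karn's point about "standard conventions" can be made concrete: on Unix, whether something dropped in an FTP area is effectively "up for grabs" comes down to its permission bits.  A minimal sketch of such an audit (the function names are mine, and the path you'd pass is whatever your site uses for anonymous FTP, e.g. /usr/spool/ftp):

```python
import os
import stat

def world_readable(path):
    """True if the 'other' permission bit grants read access, i.e.
    anyone at all -- anonymous FTP included -- can read the file."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IROTH)

def audit_ftp_dir(directory):
    """Return the names of regular files under a drop directory that
    are readable by the world.  By the convention Karn describes,
    anything in this list has effectively been published."""
    exposed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and world_readable(path):
            exposed.append(name)
    return exposed
```

A naive user who drops a mode-644 file into the directory shows up in the audit; a mode-600 file does not.  The check says nothing about intent, which is exactly the ambiguity Karn is pointing at.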

Here's an analogy of my own that, in my opinion, is just as good as any I've
heard. Someone calls up a random person on the phone and says the following:
"Tape record what I am saying, and play it to three of your friends.  Then
get your gun and shoot yourself."  Suppose that a sizeable fraction of the
population actually *obeys* these instructions.  (Note that no threat was
expressed or implied).  Irresponsible?  Of course. Immoral?  Yes. But can
you really completely exempt the victim from *all* of the blame?  Of
course not!  And Morris's worm didn't ask the recipient to shoot itself.

Computers and networks are a whole new realm, and analogies made with more
conventional acts may be misleading.  New traditions as to "fault" when
something unpleasant happens in a computer network will have to evolve over
time, just as the rules and laws have for motor vehicle operation.  I am in
no way defending Morris or his actions; I am as angry at him as anyone else,
although some of my anger is also directed at those who created the holes he
exploited, either deliberately or through sloppy coding.

I am only asking that people look at things as they are, without arguing
solely by analogy.  Hopefully one of the results of this incident will be a
better defined consensus as to what the difference between right and wrong
is in the computer world, one that we can expect people to learn and respect
in the future.

Phil

peo@pinocchio.Encore.COM (Paul Orman) (11/08/88)

In article <5332@medusa.cs.purdue.edu> spaf@cs.purdue.edu (Gene Spafford)
writes:
> We cannot depend on making our systems completely secure.  To do so
> would require that we disconnect them from each other.  There will
> always be bugs and flaws, but we try to cover that by creating a sense
> of responsibility and social mores that say that breaking and cracking
> are bad things to do.  Now we have to demonstrate to the world that
> this is the case, and we will back it up with legal action, or we'll
> continue to risk having bored students and anti-social elements
> cracking whatever we replace the systems with until there is no longer
> any network.  

I agree 100%, but only to the extent that we know there will be the bugs
and flaws you mention.  This should never mean we do not strive towards
higher and better security on our systems.  As you are well aware the
Govt. has declared that all systems purchased by them must be C2 rated
by 1992.  This will include networks too.  As far as your lock analogy
goes - people usually lock their houses with what they can afford tempered
by the area they live in (ie - high crime rate or not).  If this type
of incident were to be continually repeated I'm sure most of us would
decide it's time to invest in *deadbolts* for our gateways at least.  I
have seen postings reflecting the extremes of reactions here.  While both
extremes may be valid to a point, I really hope the majority of us take a
more centered approach in the weeks to come and develop a balanced
attitude towards security and its implementations.  We should continue to
take steps to plug known holes while at the same time keeping the proper
emphasis on the social morals and responsibilities of all who exist off the
net.

...............................................................................
Paul Orman - peo@multimax.ARPA         |  I don't know enough to speak for
ENCORE Computer Corporation            |  myself, let alone my employer.
Marlborough, MA (508) 460-0500         |

joel@arizona.edu (Joel Snyder) (11/08/88)

The costs of sending out patches are completely irrelevant.  Are you saying that if
I discovered the same bug and brought it to the attention of Digital 
that their cost of sending out emergency patches wouldn't be the same?

This is not a "lock" which *I* put on my system and which someone has
forced.  This is a lock which I bought from a very large computer company
with the reasonable assurance that any J. Random Hacker couldn't pick
it in the time it takes me to pick my nose.  No matter how the fact that
there is a bug in the "lock" was brought to the attention of our various
and sundry computer vendors, they still have the same obligation to get
a fix out to their users as soon as possible.  

The only argument I would be willing to accept is that if I brought
the matter up with vendors privately, they might have more time to
thoroughly test things and make sure that there aren't other problems
of a similar ilk in other pieces of code.  As it is, a couple of programmers
are going to spend some long hours, some software distribution centers
are going to put in some overtime, and we're going to get a marginally
more expensive patch.  

I'm not necessarily in favor of thanking Morris for pointing out the
security hole; I think he should get the sh*t beaten out of him by
the people who had to spend long hours last week because he didn't have
the decency to bring up his discovery (which I guess was reasonably
well known to the sendmail gurus) in a little less sensational way.
But talking of fines, sentences, and class action suits sounds like a lot
of economic nonsense to me.

Joel Snyder
University of Arizona MIS Dep't
jms@mis.arizona.edu

brad@looking.UUCP (Brad Templeton) (11/08/88)

This is why I said the virus was a good thing.  If this bug had simply
been reported, what would have taken place?

 o Attempts would be made to make sure the information was never broadcast.

 o People would try to send the fix out to various sysadmins, half of whom
   would not fix it because they're lazy, and 1/4 of whom would not fix it
   because sysadmins are the only ones to know about it.

 o The fix would go in the next release, and after a few years, most people
   will have upgraded, except perhaps their server machines which run just
   fine and don't need the extensive work of an upgrade.

Now everybody has worked to plug it, and plug it fast.

To those who corrected me about this bug allowing root access:  You are
right, most sites do not run sendmail as root.  But remember the principle
that a corrupted system program is a corrupted system.  (Particularly
a mail program.)   How many systems make absolutely sure that system programs
owned by the sendmail owner *never* get executed by root, or the root-only
cron, etc.?
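Templeton's closing question can be turned into a mechanical audit: walk the system crontab and flag any entry that root runs but whose program file is owned by some other account.  A sketch under stated assumptions -- the helper names are mine, and the ownership lookup is injected as a function so the check can be exercised without a real filesystem (on a live system it would be `pwd.getpwuid(os.stat(path).st_uid).pw_name`):

```python
def cron_commands(crontab_text):
    """Yield (user, command) pairs from system-crontab-style lines:
    minute hour day month weekday user command..."""
    for line in crontab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split(None, 6)
        if len(fields) == 7:
            yield fields[5], fields[6]

def risky_entries(crontab_text, owner_of):
    """Flag programs that root executes from cron but which are owned
    by a non-root account -- the trust inversion described above.
    owner_of maps a program path to its owning username."""
    flagged = []
    for user, command in cron_commands(crontab_text):
        prog = command.split()[0]
        owner = owner_of(prog)
        if user == "root" and owner != "root":
            flagged.append((prog, owner))
    return flagged
```

Any entry this flags is a path by which a corrupted non-root account becomes a corrupted root: whoever owns the program file can rewrite what root will run at 2 a.m.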
-- 
Brad Templeton, Looking Glass Software Ltd.  --  Waterloo, Ontario 519/884-7473

spaf@cs.purdue.edu (Gene Spafford) (11/08/88)

In article <1676@imagine.PAWL.RPI.EDU> night@pawl11.pawl.rpi.edu (Trip Martin) writes:
>And by this same logic, someone who walks into the middle of a
>battlefield isn't at fault if he gets shot.

Whoa there... that's nonsense.  Are you equating working with computers
to war?  If so, you have a view pretty different than mine, and I
suspect quite different from most people.  We don't sign on to the
computer expecting a worm or virus to have corrupted the system.  I
expect some disasters like disk crashes and power outages, but criminal
vandalism is not expected behavior in this venue.

>Okay, suppose a bank follows this logic and has generally poor locks
>on their place of business.  While the guy who breaks into a bank is
>still a criminal, the bank is also to blame, since it holds lots of 
>money and is a very attractive target to criminals.  Security should
>be a function of the value of the objects being protected.

I sure hope you never serve on a jury for a criminal trial.

In both law and philosophy, the bank is *not* to blame.  Simply because
they don't have a vault adequate to stop a certain class of criminal
does not make them to blame in any way.  No matter how good the vault
is, enough criminals and enough determination can be found to crack it
and that does not put any blame on the bank.  To say otherwise is to
say that victims are always culpable for the crime -- and that is both
ludicrous and insulting.

>Now think about how valuable the information stored on your computer
>is...  If you don't think that there are people who would love to get
>their hands on that information, or use your computer for their own
>purposes, you have another thing coming...  Add to this the fact that 
>the internet offers an unlimited supply of computers to hack and steal 
>from...

You are espousing the philosophy of the cracker, not that of a
responsible user.  This is my work environment, not some
unlimited playground for immature individuals.  And the information
on my computer is of interest to very few, if any, people.

>>We have failed to imbue society with the understanding that computers
>>contain property, and that they are a form of business location.  If
>>someone breaks our computers, they put us out of work.  If someone
>>steals our information, it is really theft -- not some prank gone
>>awry, and it certainly isn't some public service!
>
>You think that kind of logic is going to stop a criminal with real
>goals?  The idea that murder is a serious crime has been passed down
>for thousands of years, yet that hasn't stopped people from doing it.

Get real!!  Breaking into computers is a different kind of crime than
murder.  Murder is usually a spur-of-the-moment crime.  Further, I
didn't claim that we should protect our systems *only* with education
and public mores.  But if we don't emphasize that outlook, no matter
how much security we try to put into place we won't have reasonably
safe systems because there will be no reason for crackers not to try.

>What this guy did was a crime, but he also did us a real service.  
>He got our attention in a big way.  He succeeded in breaking into
>hundreds of computers in a matter of days.  Next time the intrusion
>may not be so obvious, nor the damage done...

So?  Some of us have found or been one of the first to be informed of
ways to break into Unix systems.  We could have broken into hundreds of
systems, but instead arranged to inform people of how to plug the
holes.  Who has done the bigger service?

>Relying on social mores to protect your systems is a sorry policy.  We
>certainly should have stiff legal penalties for hacking, but as everyone
>knows, to be punished, you have to be caught.  And catching hackers
>can often be near impossible.

I never proposed that we rely solely on mores to protect our systems.
However, we need to emphasize that aspect more than we have.  Certainly
we need to take a more formal approach to securing our systems, but as
I said before we can *never* completely secure our systems if we wish
to continue the Internet.  As long as computers are connected together
and users allowed on them, there will be ways to break "security." What
we want is to increase the level of trust as well, and that includes
trusting other users.  I'd much rather make it clear that hacking is
just plain wrong rather than have to punish someone after the fact --
it is a better path for everyone involved.

guy@auspex.UUCP (Guy Harris) (11/09/88)

>Uh, this is the actual reason I responded. Do you *really* believe that
>they'll respond that quickly?

I can believe it -- they posted patches quite recently.

sewilco@datapg.MN.ORG (Scot E Wilcoxon) (11/09/88)

In article <5331@medusa.cs.purdue.edu> spaf@cs.purdue.edu (Gene Spafford) writes:
...
>>In article <12081@dscatl.UUCP>, lindsay@dscatl (Lindsay Cleveland) writes:
>>>I would surmise that a lot of the sites who *were* damaged by the
>>>virus and expended much real cash in man-hours (overtime!) chasing it

Each site should record how much the Internet worm cost them, and someone
(on the Internet--I'm too distant) should volunteer to collect that
information.  This information should be calculated now before it is lost
to history.

...
>that Sun Microsystems will expend a few $100K on this -- not only to
>eradicate the worm in their internal network, but they will have the
>expense of FedEx'ing copies of patches to all their sites under
>maintenance.  DEC will have similar costs.  Then there is BBN and....

Costs of eradicating the worm should be separated from costs of fixing
a bug, particularly known bugs.
-- 
Scot E. Wilcoxon  sewilco@DataPg.MN.ORG    {amdahl|hpda}!bungia!datapg!sewilco
Data Progress 	 UNIX masts & rigging  +1 612-825-2607
	I'm just reversing entropy while waiting for the Big Crunch.

clb@loci.UUCP (Charles Brunow) (11/09/88)

In article <16600@agate.BERKELEY.EDU>, weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
> 
> Yup, good show there.  I hope you're not smugly counting on the next rogue
> code to be so easy to notice and eliminate by some of my fellow Berkeley
> grad students?  DO SOMETHING **NOW** TO PROTECT YOURSELVES!  WAKE UP FOLKS!
> 

	Wouldn't it help to intermix computer types so that there wouldn't
	be so many like systems talking to each other?  If the worm works
	on Suns and BSDs and is passed between them, then it seems that
	putting some non-target system between them could interfere with
	its spread.  I don't know enough about the problem to really say,
	but this sounds like the same problem that can occur in agriculture
	if crops aren't rotated.
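The crop-rotation intuition is easy to model: treat the net as a graph, let the worm cross only between machines of types it can run on, and see how far it gets.  A toy simulation (host names and types invented for illustration; real topologies are rarely simple chains, so a single non-target machine only helps where it sits on a cut point):

```python
from collections import deque

def infected(net, types, vulnerable, start):
    """Breadth-first spread of a worm that can only run on machines
    whose type is in `vulnerable`; any other machine is a dead end.
    `net` maps each host to its list of neighbours."""
    if types[start] not in vulnerable:
        return set()
    seen = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for peer in net[host]:
            if peer not in seen and types[peer] in vulnerable:
                seen.add(peer)
                queue.append(peer)
    return seen

# A chain of five hosts: A - B - C - D - E.
net = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["D"]}

# All-BSD chain: the worm reaches every host.
uniform = dict.fromkeys(net, "bsd")

# Same chain with a non-target system in the middle:
# the worm stops at B and never reaches C, D, or E.
mixed = dict(uniform, C="vms")
```

On the uniform chain the infection covers all five hosts; with one dissimilar machine inserted at C, it covers only A and B.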


-- 
			CLBrunow - KA5SOF
	clb@loci.uucp, loci@csccat.uucp, loci@killer.dallas.tx.us
	  Loci Products, POB 833846-131, Richardson, Texas 75083

mason@tmsoft.UUCP (Dave Mason) (11/09/88)

In article <16600@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
>In article <12081@dscatl.UUCP>, lindsay@dscatl (Lindsay Cleveland) writes:
>>>		  So, it was Robert T. Morris Jr., was it?
>There are thousands of computers out there extremely vulnerable to attack.
>Instead of wailing on about class-action suits to recover "damages", all
>these sites that just maybe have woken up and plan to actually take secur-
>ity seriously should pay RTM in moneys saved from the potential *BILLIONS*
Hmmmm.....................^^^

I wonder if Mr. Morris really has a second middle name, like Fred :-)
Just to add a little content to this posting, I think spaf & weemba are
both right (did I hear 2 simultaneous gagging sounds? :-).

Yes this particular episode was expensive, yes our modern society (and its
logical extension, the net) lives by a set of morals and standards, and yes
we should enforce laws to make people realize that computer innards are REAL
ASSETS, just like BMW's and Lalique Crystal, and yes a lot of these problems
were known.....BUT

There are either:
a) a lot of sysadmins out there who don't think there's much point in taking
REASONABLE security precautions, like making sure that trusted programs like
mailers don't have wide-open DEBUG modes installed on production machines
-or-
b) a lot of sysadmins whose bosses don't think there's much point ....
and therefore have the sysadmins spend time & effort elsewhere.
-plus, of course-
c) sysadmins who haven't had the time/training to realize there are security
holes that need plugging.

I claim that this episode has helped (or at least should help) all 3 groups
to see the potential dangers and hopefully people will respond in a
positive way and work to plug OBVIOUS, WELL-KNOWN holes like this. 
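The wide-open DEBUG mode Mason mentions is the sendmail entry point the worm exploited: a server compiled with DEBUG would accept the `debug` SMTP command.  A minimal classifier for the reply, offline so it can be checked without probing anything (the reply texts are illustrative, not verbatim; only the leading reply-code digit matters -- by SMTP convention 2xx means the command was accepted and 5xx means it was rejected):

```python
def debug_accepted(reply):
    """Given an SMTP server's one-line reply to the 'debug' command,
    return True if the reply code indicates the command was accepted
    (the hole is open), False if it was rejected."""
    code = reply.strip()[:3]
    if not code.isdigit():
        raise ValueError("not an SMTP reply: %r" % reply)
    return code.startswith("2")
```

In practice one would connect to port 25, send `debug`, and feed the response line to this function; a 2xx answer means the machine still needs the patch.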

Someone should apply to NSF or ARPA for an ongoing grant to produce a set of
worms/viruses every year or so that would go out into the net, nose around,
and finally send mail back home & to root on the machines affected warning
about holes it has managed to wriggle into....if I were running a military
network (even a wide-open-friendly military *research* network), I'd certainly
do something like that.

Just to put in some perspective on Gene's analogy of people using simple
locks on their front doors (and how you'd probably not appreciate people
breaking in to show you how lax the security was), consider another analogy
(which I should point out is not necessarily MORE accurate):

If you left your BMW 7xx sitting unlocked on the street in front of your house,
and some neighbourhood kid started playing in it, slamming the doors, got a
little mud on the seats, you'd be pretty ticked off, and you'd probably start
locking the car, even though it's a little less convenient.  This would
doubtless irritate you...at least until your next-door neighbour's unlocked
Caddy is ripped off by an amateur car theft ring.

Just some ambiguous thoughts on recent events.
	../Dave

seibel@cgl.ucsf.edu (George Seibel) (11/09/88)

In article <7735@megaron.arizona.edu> jms@mis.arizona.edu (Joel Snyder) writes:
>
>security hole; I think he should get the sh*t beaten out of him by
>the people who had to spend long hours last week because he didn't have

   I keep seeing this kind of garbage on the net.  What's the matter,
your hack game get disrupted?   You had to stay late?  Oooohhhh...

George Seibel

jbrown@herron.uucp (Jordan Brown) (11/09/88)

spaf@cs.purdue.edu (Gene Spafford) writes:

> That was an unkind comment, Weemba.  It also misses the fact that such
> a class action suit could be filed for millions, not $10K.  I suspect
> that Sun Microsystems will expend a few $100K on this -- not only to
> eradicate the worm in their internal network, but they will have the
> expense of FedEx'ing copies of patches to all their sites under
> maintenance.  DEC will have similar costs.  Then there is BBN and....

It's not fair to count patch costs against the worm.  The patch costs
would have occurred even if the fellow had never run the program, only
written a letter describing the problem.

Too often you see a newspaper article which claims that some computer
break-in "cost $100K", and when you look closely there was little direct
damage and the $100K was to fix security so it couldn't happen again.
This is akin to claiming that the burglar is responsible for paying for
you putting in a security system.

Of course, if it hadn't been so spectacular, maybe they wouldn't have
distributed the patches.  Then when somebody does something actually
malicious using the hole, Sun (or whoever) would be up for one whopping
gross negligence-style suit.

By all means send the guy the bill for the manpower wasted eradicating
the worm, but don't ask him to pay for fixing all the systems in the
world so it can't happen again.

sl148033@silver.bacs.indiana.edu (Kevin_Clendenien) (11/09/88)

I've seen some pretty crazy things written here.  I think everyone, with
the exception of Matthew P. Wiener/Brahms Gang/Berkeley CA 94720, has
expressed valid points.  Mr. Wiener has been just plain rude, as well
as ridiculous.

I think both camps have valid points.  When an organization or individual
links into this network, they accept the responsibility for
looking after security on their system.  After all, if their system is
not secure, then they are compromising the security of all those systems
that they are linked with.  It seems apparent from the success of this
worm that many organizations/individuals have not lived up to their
responsibility to bring a secure system to the network.  Likewise, the
vendors have not lived up to their responsibility.

This does not lessen what the perpetrator of this worm has done.  They
have, without permission, eaten up thousands of CPU hours, and thousands
of man hours.  I find it hard to believe that anyone, even the
perpetrator, can reason that it was okay to use someone else's computer,
without permission.  Now, I know some of you are going to say that there
was implied consent, since the security on the violated systems allowed
the worm in.  This is hogwash.  Releasing the worm onto the network
was wrong!

Of course, these are just my opinions.  They aren't that important.  What
is important, is to realize that everyone is at fault.  The vendors,
the organizations using the network, and the creator of the worm.
Punishment should be meted out to all.  Well, the organizations using
the network have the man hours and CPU hours as punishment.  The
worm's creator should meet some punishment.  I would favor community
service.  I don't want to lessen the offense, but any judge will have
to look at the fact that the worm was not destructive.  The only ones
getting off here are the vendors.  How should they be punished?
----------------------------------------------------------------------
sl148033@silver.UUCP                          -Kevin Clendenien
"The best way to fight fire, is with fire..."
----------------------------------------------------------------------
 

mark@drd.UUCP (Mark Lawrence) (11/09/88)

(AP) WASHINGTON Nov 9, 1988 (w/o permission, of course)

...The FBI pressed forward with its criminal investigation considering
and then rejecting the idea of seeking grand jury subpoenas for
documents at Cornell which could help shed some light on the computer
virus incident, federal law enforcement sources said.
	University officials seem anxious to resolve the matter by swiftly
supplying federal investigators with as much information and as many 
documents as requested, said the sources, speaking on condition of anonymity.
...
	Charles Steinmetz, an FBI spokesman, had said Monday that the
preliminary inquiry of the computer virus incident was being upgraded to
a full-scale criminal investigation and that the bureau was examining
possible violations of the Computer Fraud and Abuse Act of 1986.
	The law carries a one-year maximum prison term on conviction for
intentionally gaining unauthorized access to a computer used by the U.S.
government and affecting the operation of the computer.  The law also
carries a five-year maximum prison term for intentionally gaining
unauthorized access to two or more computers in different states and
preventing authorized use of such computers or information. [In Spades!]
	The virus paralyzed more than 6000 university and military computers
nationwide last Wednesday and Thursday.
...

zeeff@b-tech.ann-arbor.mi.us (Jon Zeeff) (11/09/88)

In article <5347@medusa.cs.purdue.edu> spaf@cs.purdue.edu (Gene Spafford) writes:
>In article <1676@imagine.PAWL.RPI.EDU> night@pawl11.pawl.rpi.edu (Trip Martin) writes:

>>Okay, suppose a bank follows this logic and has generally poor locks
>>on their place of business.  While the guy who breaks into a bank is
>>still a criminal, the bank is also to blame, since it holds lots of 

>In both law and philosophy, the bank is *not* to blame.  Simply because
>they don't have a vault adequate to stop a certain class of criminal
>does not make them to blame in any way.  No matter how good the vault
>is, enough criminals and enough determination can be found to crack it
>and that does not put any blame on the bank.  To say otherwise is to

The law says that you have to take reasonable precautions.  For 
example, landlords *are* liable in cases where they didn't put locks on
doors in areas where there are security risks.  Of course the person
breaking in is guilty too.

--
I never thought much of sendmail anyway.
Jon Zeeff      			Ann Arbor, MI
umix!b-tech!zeeff  		zeeff@b-tech.ann-arbor.mi.us

gsh7w@astsun1.acc.virginia.edu (Greg Hennessy) (11/10/88)

In article <418@auspex.UUCP> guy@auspex.UUCP (Guy Harris) writes:
#>Uh, this is the actual reason I responded. Do you *really* believe that
#>they'll respond that quickly?
#
#I can - they posted patches quite recently.

When do I get the patch to make "at" safe?
-Greg Hennessy, University of Virginia
 USPS Mail:     Astronomy Department, Charlottesville, VA 22903-2475 USA
 Internet:      gsh7w@virginia.edu  
 UUCP:		...!uunet!virginia!gsh7w

rick@seismo.CSS.GOV (Rick Adams) (11/10/88)

In article <1988Nov9.033444.20788@tmsoft.uucp>, someone writes:
> I claim that this episode has helped (or at least should help) all 3 groups
> to see the potential dangers and hopefully people will respond in a
> positive way and work to plug OBVIOUS, WELL-KNOWN holes like this. 

In my continuing quest....

I still have not been able to get the name of ONE person who knew
that this was an "OBVIOUS" hole.

Name one and my quest will be over.

The fact that sendmail has a debug mode is not a bug. The fact that you
can run an arbitrary process from debug mode IS a bug. I hear lots of
people claiming that it was a well known problem, yet no one can name
one person who will admit to knowing it.

Another urban legend in the making...

---rick

gz@spt.entity.com (Gail Zacharias) (11/10/88)

In <44441@beno.seismo.CSS.GOV> rick@seismo.CSS.GOV (Rick Adams) writes:
>I still have not been able to get the name of ONE person who knew
>that this was an "OBVIOUS" hole.
>
>Name one and my quest will be over.

mdc@ht.ai.mit.edu.

--
gz@entity.com					...!mit-eddie!spt!gz
	  Unix: when you can't afford the very best.

honey@mailrus.cc.umich.edu (peter honeyman) (11/10/88)

Rick Adams writes:
>I hear lots of
>people claiming that it was a well known problem, yet no one can name
>one person who will admit to knowing it.

rick, paul vixie claims he has known about it for some time, and
someone says there is a years-old piece of paper by matt bishop
listing this and other bugs.  (haven't heard from matt, though.)

	peter

mark@sickkids.UUCP (Mark Bartelt) (11/10/88)

In article <5331@medusa.cs.purdue.edu> spaf@cs.purdue.edu (Gene Spafford)
comments, somewhat disingenuously, ...

>                                                             I suspect
> that Sun Microsystems will expend a few $100K on this -- not only to
> eradicate the worm in their internal network, but they will have the
> expense of FedEx'ing copies of patches to all their sites under
> maintenance.  DEC will have similar costs.  Then there is BBN and....

Perhaps true, but totally irrelevant to the question of whether the person
responsible for the virus/worm/whatever (actually, based on its behaviour,
I prefer to call it a caterpillar) is a bad boy or a hero.  Not true in any
event in the case of DEC, from what I've heard.  If Sun has to spend big
bucks to get fixes out to their customers, I'm afraid I won't be the least
bit inclined to shed a tear, since they'll be paying the price now for past
laziness.  As Geoff Goodfellow observed (quoted from comp.risks):

>                        Look how many manufacturers [...] just took the
> original computer-science-department developed code willy-nilly, put their
> wrapper and corporate logo on it, and resold it to customers.  That's the
> real travesty here, we build these systems, OK, that's great, but we rarely
> build them and then ask how they might be abused, broken, or circumvented.

Excellent point.  And it seems that if a large, successful company takes
the easy way out, namely grabbing a gigantic collection of software, much
of it written by graduate students (even undergraduates?) of widely varying
talent, and calling it a "product" without spending the time and effort to
apply the same quality control standards to it that they would normally use
for products written in-house, they're simply asking for trouble.  To their
credit (and, presumably, with gratitude from their customers), Digital seems
to have done the Right Thing with Ultrix.  Whether this was by design or by
accident, I have no way of knowing.

Of course, one can argue that sendmail is so huge, complex, and difficult
to understand that it's impossible to assess its quality (or lack of same).
This is certainly true.  For that reason, our site is, always has been, and
always will be a sendmail-free zone (my thanks to Geoff Collyer for coining
a good term to go along with a sensible concept).  One of the first things
I did when we brought up our system was to remove sendmail (and a number of
other setuid things as well).  It's just common sense to minimize the number
of things that run setuid (especially setuid root).  I don't bother worrying
about programs that I can get a mental handle on.  For example, su.c is all
of 187 lines long, so I can be certain that it does what it's supposed to do.
But sendmail is *seventeen thousand* lines of code!  As you old-timers will
recall, that's nearly twice the size of the Sixth Edition kernel!  We have
seen other security-related sendmail bugs in the past, and I wouldn't be at
all surprised if there will be more in the future.  How can anybody really
know for certain?  It's an atrocity and a monstrosity (hey, do I sound like
Jesse Jackson yet?), and should be avoided at all costs.  End of editorial.
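The setuid inventory Mark recommends is easy to take.  A sketch using
standard find(1) -- the starting path, the -user/-perm options, and the
stderr redirection are just one common way to invoke it, and exact option
support varies a little between systems:

```shell
# List every setuid-root file on the system.  Each one is a program
# that runs with root's privileges and deserves the same scrutiny
# sendmail is getting here.
find / -user root -perm -4000 -print 2>/dev/null
```

Anything in that list you can't account for, or can't get a mental handle
on, is a candidate for removal, just as sendmail was above.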

Mark Bartelt                          UUCP: {utzoo,decvax}!sickkids!mark
Hospital for Sick Children, Toronto   BITNET: mark@sickkids.utoronto
416/598-6442                          INTERNET: mark@sickkids.toronto.edu

karl@triceratops.cis.ohio-state.edu (Karl Kleinpaste) (11/11/88)

brad@looking.UUCP (Brad Templeton) writes:
   This is why I said the virus was a good thing.  If this bug had simply
   been reported, what would have taken place?

Lots of good things, that's what.

    o Attempts would be made to make sure the information was never broadcast.

No.

    o People would try to send the fix out to various sysadmins, half of whom
      would not fix it because they're lazy, and 1/4 of whom would not fix it
      because sysadmins are the only ones to know about it.

No again.

    o The fix would go in the next release, and after a few years, most people
      will have upgraded, except perhaps their server machines which run just
      fine and don't need the extensive work of an upgrade.

No a third time.

   Now everybody has worked to plug it, and plug it fast.

I submit as an example, yet again, the recent discovery of a security
hole in ftpd.  In juxtaposition against your 3 suggestions above:

	o The information was broadcast, "quietly," to a LOT of people
	  who had the wherewithal to do something about it.

	o There were fixes available FAST.  I learned of the mess
	  less than 6 hours after first report.  I had a sample fix
	  less than 30 minutes after that.  And that was at 11:30pm on
	  a Saturday night!  I eventually got 2 more fixes, unsolicited,
	  from other friends around the Internet.  Everyone I knew was
	  installing it as fast as they could get their shell to exec
	  /bin/make.

	o The fix was made public via a posting in ...ucb-fixes so
	  that everyone with a C compiler can upgrade NOW and not
	  wait for slow-as-molasses vendors to decide that it's
	  worth getting around to.  And I think it's important to
	  note that not all vendors are slow-as-molasses, either; I
	  sent a copy of what I initially received to Pyramid and
	  had the attention of csg@pyramid FAST - they began a
	  distribution of their fix within (I think) 2 days.

Consider that the ftpd bug was initially reported over the weekend
when X11R3 was released.  Did everyone notice lots of anon ftp sites
announcing that they were down/ftp-disabled for a while until they
could close down a `small security problem?'  Yeah, I thought you
noticed expo.lcs.mit.edu disappearing for a while, not to mention
<pick-your-favorite-archive-site>.  We had to do so, mentioning it in
gnu.something.

The system works when given a decent chance to try to work.

guy@auspex.UUCP (Guy Harris) (11/11/88)

>#>Uh, this is the actual reason I responded. Do you *really* believe that
>#>they'll respond that quickly?
>#
>#I can - they posted patches quite recently.
>
>When do I get the patch to make "at" safe?

To which patch are you referring?  No, you can't crack the SunOS 4.0
"at" by "chown"ing the job files, and I don't know that you can crack
the version in earlier releases, either.

jc@heart-of-gold (John M Chambers) (11/11/88)

In article <7159@pasteur.Berkeley.EDU>, c91a-ra@franny.Berkeley.EDU (john kawakami reader) writes:
> Hear hear!  I agree with weemba@garnet.  We should be glad the virus was not
> intended to be malicious.  I heard Morris had managed to be root on some 
> machines.  The potential for damage is frightening.
> 

Well, actually, if you are on the internet, it is trivially easy to be
root on just about any host that's running sendmail as a daemon:
	telnet <hostname> 25

This will get you a connection to the local sendmail, which is almost
always running as root.  You can now treat this sendmail as a "shell"
with a somewhat limited set of commands.  The first one you want to
type probably starts with "HELO"; consult the appropriate RFC for
further details.

What Morris's virus did, in fact, was take advantage of an undocumented
command in sendmail's repertoire that allowed the calling user to start
up a shell.  That should require no further comment.
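For illustration only -- the host names below are invented and the reply
texts approximate, with the numeric reply codes per RFC 821 -- such a
session might have looked roughly like this:

```
$ telnet somehost 25           (connect to sendmail's SMTP port)
220 somehost Sendmail ready    (sendmail answers, running as root)
HELO myhost                    (identify ourselves)
250 somehost Hello myhost
DEBUG                          (the undocumented command in question;
                                with debug mode on, a recipient
                                address could name a program for
                                sendmail to run)
```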

But just becoming root, well, that's not necessarily a security hole.
You can also get to be root briefly on most Unix systems by calling in 
and answering the "login:" prompt with "sync".  You will be root for a 
few milliseconds, and then you'll be logged off.  Can anyone give any
reason that this is a security problem?  
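For those unfamiliar with the trick: it works because many systems ship
a password-less /etc/passwd entry whose login "shell" is /bin/sync,
often with uid 0 -- which is what makes the "you will be root" part true
on those systems.  A typical entry looks something like this sketch
(fields vary between systems; this is illustrative, not from any one
vendor's distribution):

```
sync::0:1:sync the disks:/:/bin/sync
```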

-- 
From:	John Chambers <mitre-bedford.arpa!heart-of-gold!jc>
From	...!linus!!heart-of-gold!jc (John Chambers)
Phone	617/217-7780
[Send flames; they keep it cool in this lab :-]

klg@dukeac.UUCP (Kim Greer) (11/13/88)

In article <159@loci.UUCP> clb@loci.UUCP (Charles Brunow) writes:
=In article <16600@agate.BERKELEY.EDU>, weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
== 
== Yup, good show there.  I hope you're not smugly counting on the next rogue
== code to be so easy to notice and eliminate by some of my fellow Berkeley
== grad students?  DO SOMETHING **NOW** TO PROTECT YOURSELVES!  WAKE UP FOLKS!
== 
=
=	Wouldn't it help to intermix computer types so that there wouldn't
=	be so many like systems talking to each other?  If the worm works
=	on Sun's and BSD's and is passed between them then it seems that
=	putting some non-target system between them could interfere with
=	its spread.  I don't know enough about the problem to really say
=	but this sounds like the same problem that can occur in agriculture
=	if crops aren't rotated.
=-- 
=			CLBrunow - KA5SOF
=	clb@loci.uucp, loci@csccat.uucp, loci@killer.dallas.tx.us
=	  Loci Products, POB 833846-131, Richardson, Texas 75083

Yes, let's try this suggestion.  In a field, do not plant just corn.  Plant
corn, pineapple, oranges, grapes and kiwi-fruit.  That way, a plant disease
will not be able to spread from one like plant to another.  Yes, this requires
that no two like-kind plants be side by side, but I guess that's the price
you have to pay.  Yes, I guess you would have more trouble trying to 
individually fertilize, water, till, plant, harvest ... , but (insert
previously used phrase here).

And let's not forget the computers.  Make every manufacturer have his own
operating system that is entirely different from every other manufacturer.
Do not let commonality be the medium by which computer diseases spread.

And don't let any person on the planet speak the same language as another.
That way we never have one person insult another.

And don't let any automobile use the same type of gasoline as another.  That
way, every manufacturer of cars can have his own tank at the service station
so that a "bad batch" of gasoline never gets into both a Ford and a Chevie.

End-sarcasm-mode.

Come on, grow up and get real.  There's too much freaking incompatibility
in computer stuff as there is now.

If something is busted, then fix it, and get back to some real work.

-KG
-- 
Kim L. Greer                       
Duke University Medical Center		try: klg@orion.mc.duke.edu
Div. Nuclear Medicine  POB 3949            dukeac!klg@ecsgate
Durham, NC 27710  919-681-2711x223      ...!mcnc!ecsgate!dukeac!klg
fax: 919-681-5636

david@ms.uky.edu (David Herron -- One of the vertebrae) (11/15/88)

But Karl ...

What about the anecdote that <oh f*rt, I can't remember the name of
that guy at oddjob ... him tho ... Pooh's boyfriend ... yeah, that guy,
but what's his name?? Oh, Matt Crawford> posted to "phage" about "a friend
who works for some securities firm"  (Pooh?)  who found out that someone
had reported the ftpd bug to the vendor (if Pooh, that'd be Sun) over
a year ago ... AND THE VENDOR NEVER FIXED IT!

A lot of places won't fix this bug because they don't trust anything
their vendor doesn't provide.  Or maybe the vendor did something that's
on the verge of proprietary and you can't just plug Berkeley stuff
in and go on.

I've fixed the ftp&finger bugs here, had to have a small fight in order
to be able to do it, and even told the other people on campus that
it exists.  Have they fixed it?  Weeell... that would be telling.

It doesn't work very well.  Yes it works for those of us who have
the time to interact with the network all the time.  (Rather, are
paid to interact with the network all the time).  It doesn't work
for "the rest of them".
-- 
<-- David Herron; an MMDF guy                              <david@ms.uky.edu>
<-- ska: David le casse\*'      {rutgers,uunet}!ukma!david, david@UKMA.BITNET
<--
<-- Controlled anarchy -- the essence of the net.

karl@triceratops.cis.ohio-state.edu (Karl Kleinpaste) (11/16/88)

david@ms.uky.edu writes:
   A lot of places won't fix this bug because they don't trust anything
   their vendor doesn't provide.

And they run Berkeley[-derived] UNIX?