[comp.protocols.tcp-ip] Virus - did it infect "secure" machines

root@sbcs.sunysb.edu (root) (11/07/88)

Does anyone know whether the sendmail virus was able to infect
the machines protected by Kerebos?  No flames, please, the question
isn't a statement against Kerebos per se; I just wonder whether
clever people will always find ways into "secure" Unix boxes.
What about machines that have met with tempest specs?

					Rick Spanbauer
					SUNY/Stony Brook

wesommer@athena.mit.edu (William Sommerfeld) (11/08/88)

[FYI: it's spelled "Kerberos", not "Kerebos"]

In article <1792@sbcs.sunysb.edu> root@sbcs.sunysb.edu (root) writes:
>Does anyone know whether the sendmail virus was able to infect
>the machines protected by Kerberos? 

First of all, machines aren't (directly) protected by Kerberos;
network services are.  So, if you run sendmail with "debug" turned on,
or a fingerd without the range check, or a normal rlogind while
.rhosts files abound, you're vulnerable.  And yes, a few people who
administer systems here at Athena were a little careless, and
installed mailers with "debug" enabled, and some even left .rhosts
files in place.
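The "range check" missing from fingerd was a bounds check on input: the
daemon read a request line into a fixed-size buffer with C's gets(),
which copies until end-of-line no matter how long the line is.  A
hypothetical sketch (in Python, purely for illustration; buffer size and
function names are assumptions, not the real daemon's code):

```python
# Models the missing bounds check in fingerd.  unsafe_read stands in for
# gets(), which copies an arbitrarily long request; in C that overruns
# the stack buffer and lets the sender inject code.  safe_read models
# fgets(buf, BUF_SIZE, stdin), which never copies more than fits.

BUF_SIZE = 512  # an assumed fixed request-buffer size

def unsafe_read(request: bytes) -> bytes:
    # No range check: returns everything the sender supplied.
    return request

def safe_read(request: bytes) -> bytes:
    # Bounded copy: truncates to what the buffer can hold.
    return request[:BUF_SIZE - 1]

oversized = b"A" * 4096
assert len(unsafe_read(oversized)) > BUF_SIZE   # overflow in C terms
assert len(safe_read(oversized)) < BUF_SIZE     # safely truncated
```

The one-line moral: every copy into a fixed buffer must be bounded by
the buffer's size.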

The virus didn't get very far at Athena, mostly thanks to "second
order effects" of Kerberos--our fileservers don't run any more daemons
than they have to, don't allow remote logins by mere mortals, and most
of our operations staff have been educated not to use passwords which
are in a dictionary.

>No flames, please, the question
>isn't a statement against Kerberos per se; I just wonder whether
>clever people will always find ways into "secure" Unix boxes.

If you want to have some hope of containing things while connected to
a network, be _very_ careful about the network services you run, and
don't run any more servers than you need.

					- Bill

gkn@SDS.SDSC.EDU (Gerard K. Newman) (11/08/88)

>From:	 root@sbcs.sunysb.edu  (root)
>To:	 tcp-ip@sri-nic.arpa
>Subject: Virus - did it infect "secure" machines
>Date:	 7 Nov 88 13:29:10 GMT
>Organization: State University of New York at Stony Brook
>
>Does anyone know whether the sendmail virus was able to infect
>the machines protected by Kerebos?  No flames, please, the question
>isn't a statement against Kerebos per se; I just wonder whether
>clever people will always find ways into "secure" Unix boxes.
>What about machines that have met with tempest specs?
>
>					Rick Spanbauer
>					SUNY/Stony Brook

Rick:

TEMPEST is a specification for controlling the electromagnetic
emissions through which data on a computer system can be compromised.
TEMPEST-certified systems are usually housed in some sort of enclosure
(ranging in size from slightly larger than the machine to a computer
room) which prevents someone from being able to intercept these
emissions and make sense of them.  This in and of itself does not
make a machine immune to the kind of virus (worm) which infected the
internet last week.

Typically, a TEMPEST certified machine processes classified data.
It is ILLEGAL (a federal offense) to have a machine connected to the
internet which contains classified data.  Thus, machines which process
classified data do not in general have network connections to unclassified
networks.

If the virus managed to infect a machine which contains classified data
then someone (the CSSO in DOE-speak) is not doing their job, and is,
as they say in the south, in a heap of trouble.

gkn
----------------------------------------
Internet: GKN@SDS.SDSC.EDU
Bitnet:   GKN@SDSC
Span:	  SDSC::GKN (27.1)
MFEnet:   GKN@SDS
USPS:	  Gerard K. Newman
	  San Diego Supercomputer Center
	  P.O. Box 85608
	  San Diego, CA 92138-5608
Phone:	  619.534.5076

root@SBCS.SUNYSB.EDU (11/08/88)

Yes, I knew TEMPEST covered EM emissions.  I was referring to a
product that Sun recently announced which was said to meet
".....", a DoD security specification.  I mistakenly inserted
TEMPEST...

					Rick

root@sbcs.sunysb.edu (root) (11/14/88)

In article <881107224915.20c01427@Sds.Sdsc.Edu>, gkn@SDS.SDSC.EDU (Gerard K. Newman) writes:
> >What about machines that have met with tempest specs?
> >
> >					Rick Spanbauer
> >					SUNY/Stony Brook
> TEMPEST is a specification for controlling the electromagnetic
> emissions through which data on a computer system can be compromised.

	Gerard, yes thanks, I know TEMPEST is an EMI spec.  My apologies
	for the incorrect usage in the original article - I had mistakenly
	mixed up another spec with TEMPEST.

> gkn
> ----------------------------------------
> Internet: GKN@SDS.SDSC.EDU
> Phone:	  619.534.5076

					Rick Spanbauer
					SUNY/Stony Brook

smb@ulysses.homer.nj.att.com (Steven M. Bellovin) (11/14/88)

I've been thinking a lot about that question; tentatively, I don't see how
most ``secure'' machines would have escaped.  Consider, for example, a
B1-level UNIX system -- there are several, such as System V/MLS, undergoing
certification.  What would be accomplished by equipping such a system with
a TCP/IP that adhered to the Trusted Network Interpretation of the Orange
Book?

B1 provides two notable capabilities:  extensive logging, and ``mandatory
access controls''.  The logging might have helped trace the worm or
alert system administrators, but obviously wouldn't have blocked
it.  What about the access controls?  Would they have helped?  Probably not,
except in a minor way.

Mandatory access controls prevent a process from reading a file ``more
classified'' than the process's label, or writing to a file less classified
(in order to prevent leakage of classified information).  For the most
part, no such information was used by the worm.  It couldn't have gotten
at hashed passwords -- they're in a shadow file, not /etc/passwd -- nor,
most likely, could it have looked at .rhosts files or .forward files.
But the major means of transmission were the fingerd bug and the sendmail
bug, and unless /etc/hosts were marked classified -- not likely, unless
you want to say that only classified applications can talk over the net! --
attempts to exploit those bugs would not have been affected.
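The mandatory access rules Bellovin describes are the Bell-LaPadula
pair: no reading above your own label, no writing below it.  A minimal
sketch (level names and ordering are illustrative, not any particular
product's):

```python
# Bell-LaPadula style mandatory access control: a process may not read
# data classified above its own label ("no read up"), nor write data
# classified below it ("no write down", to prevent leakage downward).

LEVELS = {"unclassified": 0, "confidential": 1,
          "secret": 2, "top_secret": 3}

def can_read(process_level: str, file_level: str) -> bool:
    # No read up: the subject's label must dominate the object's.
    return LEVELS[process_level] >= LEVELS[file_level]

def can_write(process_level: str, file_level: str) -> bool:
    # No write down: writes may only go to the same or higher level.
    return LEVELS[process_level] <= LEVELS[file_level]

# A worm running unclassified (as a daemon would) is still free to read
# and write unclassified files and sockets -- exactly the channels it
# needs to spread, which is the point made above.
assert can_read("unclassified", "unclassified")
assert not can_read("unclassified", "secret")
assert can_write("unclassified", "secret")
assert not can_write("secret", "unclassified")
```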

The IP security option(s) can carry classification labelling information.
A process can only talk to a peer at the same level.  If fingerd or
sendmail were eligible to run at the unclassified level, the worm could
have infiltrated itself via those channels.  To be sure, the worm
executable would only have access to unclassified channels -- but that's
all it needs to spread further.  Fundamentally, what we had was a denial
of service attack, which is very difficult to guard against.

The heart of any secure system is a small, simple, ``security kernel''.
*All* access decisions must be made in this kernel; with luck, it's small
enough, and simple enough, that one can have reasonable confidence in
its correctness.  The danger points are in the other ``trusted programs'' --
programs (like mailers) that of necessity must cross security boundaries
of some sort.  But this worm didn't use any trusted programs, nor did
it call the security kernel.  Rather, it exploited bugs -- which we
can't eliminate -- in two network applications, and then behaved as
an ordinary user process.  The TNI would (assuming correct implementation)
have kept the worm out of the classified areas of the system, but would
not have kept the system functional.  (I don't accept the argument that
the sendmail bug was known, and that fingerd wouldn't be run by a secure
system.  True but irrelevant -- the real lesson here is that a competent
and determined individual can find bugs; the exact location of these
particular ones is mostly irrelevant.  Remember that this worm did not
use root privileges; as such, arguments about the inherent insecurity
of the UNIX system are not germane.)

I keep looking for a system model that would have blocked this sort of
attack.  Except for some sort of ``fairness scheduler'' -- one that would
have kept any one user, such as daemon or nobody, from chewing up the
whole CPU -- I don't see one.  I'd like to, though.
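One way to make the fairness-scheduler idea concrete is a per-uid cap
on CPU share.  The sketch below is a toy round-robin model, not a real
kernel scheduler; the table format and 50% cap are assumptions:

```python
# Toy model of a "fairness scheduler": no single uid may receive more
# than max_share of the CPU budget in a scheduling interval, so a
# runaway account (daemon, nobody) cannot starve everyone else.

from collections import defaultdict

def schedule(ready, quantum_budget, max_share=0.5):
    """ready: list of (uid, ticks_wanted) pairs.
    Returns a dict of ticks granted per uid, with no uid
    exceeding max_share of the total budget."""
    cap = int(quantum_budget * max_share)
    granted = defaultdict(int)
    budget = quantum_budget
    for uid, wanted in ready:
        give = min(wanted, cap - granted[uid], budget)
        if give > 0:
            granted[uid] += give
            budget -= give
    return dict(granted)

# "daemon" asks for the whole machine; the cap leaves room for others.
out = schedule([("daemon", 100), ("rick", 20)], quantum_budget=100)
assert out["daemon"] == 50
assert out["rick"] == 20
```

Under such a policy the worm's copies could still run, but could not
have consumed the whole CPU -- which matches Bellovin's framing of the
attack as denial of service.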

		--Steve Bellovin

renglish@hpisod1.HP.COM (Bob English) (11/15/88)

> / smb@ulysses.homer.nj.att.com (Steven M. Bellovin) /  3:08 pm  Nov 13, 1988 /

> I keep looking for a system model that would have blocked this sort of
> attack.  Except for some sort of ``fairness scheduler'' -- one that would
> have kept any one user, such as daemon or nobody from chewing up the
> whole CPU -- I don't see one.  I'd like to, though.

Since the "damage" that this worm produced was denial of service through
overload, a fairness scheduler would indeed have prevented the damage,
though it would not have prevented the worm from arriving.  Because such
a scheduler would be required to avoid denial of service in a trusted
system (I'm not sure what the appropriate level would be), such a system
would have behaved reasonably under the worm attack, though mail service
might have been interrupted for a time.

--bob--

tneff@dasys1.UUCP (Tom Neff) (11/15/88)

In article <10846@ulysses.homer.nj.att.com> smb@ulysses.homer.nj.att.com (Steven M. Bellovin) writes:
>I keep looking for a system model that would have blocked this sort of
>attack.  Except for some sort of ``fairness scheduler'' -- one that would
>have kept any one user, such as daemon or nobody from chewing up the
>whole CPU -- I don't see one.  I'd like to, though.

How about a daemon to kill orphan processes?  The Morris attack tried to
obscure its origins once installed.
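The worm hid its origin by forking and letting the parent exit, which
reparents the child to init (ppid 1).  A daemon like the one Neff
suggests could scan the process table for such orphans; a sketch over a
hypothetical table format (the whitelist is an assumption -- plenty of
legitimate daemons are children of init):

```python
# Scan a process table for processes reparented to init (ppid == 1)
# that are not on a whitelist of expected init children.  Table rows
# are (pid, ppid, command); the format is illustrative.

def find_orphans(proc_table, protected=("init", "getty", "cron")):
    """Return pids of orphaned, non-whitelisted processes."""
    return [pid for pid, ppid, cmd in proc_table
            if ppid == 1 and cmd not in protected]

table = [
    (1,   0,   "init"),
    (90,  1,   "cron"),    # expected child of init
    (211, 1,   "sh"),      # orphaned shell -- suspicious
    (212, 211, "finger"),
]
assert find_orphans(table) == [211]
```

A whitelist this crude would misfire on real systems (nohup'd jobs are
orphans too), so flagging for review is safer than killing outright.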



-- 
Tom Neff			UUCP: ...!cmcl2!phri!dasys1!tneff
	"None of your toys	CIS: 76556,2536	       MCI: TNEFF
	 will function..."	GEnie: TOMNEFF	       BIX: t.neff (no kidding)

jc@heart-of-gold (John M Chambers) (11/15/88)

In article <1792@sbcs.sunysb.edu>, root@sbcs.sunysb.edu (root) writes:
> Does anyone know whether the sendmail virus was able to infect
> the machines protected by Kerebos?  No flames, please, the question
> isn't a statement against Kerebos per se; I just wonder whether
> clever people will always find ways into "secure" Unix boxes.
> What about machines that have met with tempest specs?
> 
You should be a bit wary of accepting the answers.  I personally know of
several MilNet systems that were infected, but the public answer from their
administrators is "Of course not!"  Ask yourself:  Why should they tell the
truth?  I suspect that nobody (not even those in security agencies) will
ever know how widespread the infection really was.  Considering that no
real damage was done, it's very easy to cover up an infection and pretend
it never happened.

-- 
From:	John Chambers <mitre-bedford.arpa!heart-of-gold!jc>
From	...!linus!!heart-of-gold!jc (John Chambers)
Phone	617/217-7780
[Send flames; they keep it cool in this lab :-]

dsg@mitre-bedford.ARPA (David S. Goldberg) (11/16/88)

In article <170@heart-of-gold> jc@heart-of-gold (John M Chambers) writes:
>> 
>You should be a bit wary of accepting the answers.  I personally know of
>several MilNet systems that were infected, but the public answer from their
>administrators is "Of course not!"  Ask yourself:  Why should they tell the
>truth?  I suspect that nobody (not even those in security agencies) will
>ever know how widespread the infection really was.  Considering that no
>real damage was done, it's very easy to cover up an infection and pretend
>it never happened.
>
>-- 
>From:	John Chambers <mitre-bedford.arpa!heart-of-gold!jc>

John,

	It is not the case that all MILnet hosts are denying that they were
affected.  Mbunix (MITRE's corporate Ultrix systems, for those of you who
are not with MITRE) was attacked, although the worm didn't replicate itself
there (i.e., the connections were made, but no symptoms were ever felt),
and at least one local Sun network was infected.  I even spoke to a
reporter about it, so I know that we are not denying anything about being
hit.  If the question is whether machines containing classified info were
hit, then the answer is probably no, because, as I understand it, those
machines are not even allowed on MILnet or any other wide network.

-dave
--------------------------------------------------------------------------
Dave Goldberg	             ARPA: dsg@mitre-bedford.arpa
The Mitre Corporation        or    dsg@mbunix.mitre.org
MS B020                      UUCP: linus!mbunix!dsg
Bedford, MA 01730
617-271-2460

budden@tetra.NOSC.MIL (Rex A. Buddenberg) (11/16/88)

Steve,

Your observation that the B1 criteria, by themselves, would
not have stopped the worm is probably correct (sounds plausible
to me) as far as you've taken it.  But a real security system
goes farther.

The secure portion of the Defense Data Network is currently segregated
from the rest of the internet, and will remain so indefinitely.  In the
near future, the access control system will use an authentication node
that checks who you are when a connection is opened; it then orders a
key distribution node to issue you and your other party a unique
end-to-end password which evaporates at the conclusion of your session.
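The flow described above -- authenticate, issue a per-session secret
shared by the two parties, discard it at session close -- can be
sketched as follows.  All names and the key format here are
illustrative assumptions, not DDN's actual design:

```python
# Sketch of a key-distribution node that issues a unique per-session
# secret to two authenticated parties and discards it ("evaporates")
# when the session closes.

import secrets

class KeyDistributionNode:
    def __init__(self):
        self.sessions = {}  # session id -> secret, only while open

    def open_session(self, party_a, party_b):
        secret = secrets.token_hex(16)   # fresh 128-bit session secret
        sid = (party_a, party_b)
        self.sessions[sid] = secret
        return sid, secret

    def close_session(self, sid):
        del self.sessions[sid]           # secret evaporates at close

kdn = KeyDistributionNode()
sid, secret = kdn.open_session("rick", "gkn")
assert kdn.sessions[sid] == secret
kdn.close_session(sid)
assert sid not in kdn.sessions
```

The security value is exactly the ephemerality: a captured secret is
useless once the session it belonged to has ended.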

More important than the technical aspects are the personnel management
ones.  If you have a job that does not require access to a secure
system, then you lack a need to know and hence do not get in,
regardless of clearance level.  Every time I've had a clearance
issued, recertified, upgraded or terminated, I get some indoctrination
regarding the importance of classified information and system integrity
for the structure that we use to contain it (sometimes I give
the indoctrination).

Link encryption, end-to-end encryption, multi-level secure systems,
necessary segregation and personnel management/training/leadership
are all important parts of a classified system and none can do the
job alone.

Rex Buddenberg

smb@ulysses.homer.nj.att.com (Steven M. Bellovin) (11/16/88)

In article <713@tetra.NOSC.MIL>, budden@tetra.NOSC.MIL (Rex A. Buddenberg) writes:
.....
> Link encryption, end-to-end encryption, multi-level secure systems,
> necessary segregation and personnel management/training/leadership
> are all important parts of a classified system and none can do the
> job alone.

You raise some good points that are worth stating another way:  DoD does
not trust computer security that much; their policies rely on administrative
measures to complement technical ones.  Thus, computers containing classified
material are not connected to unclassified networks.  If a link must be
established across such a net (such as the public phone network), link-layer
encryption of the appropriate strength is used; that way, security is
guaranteed by the encryption unit, a much smaller, simpler, and hence
more trustworthy device than an entire computer system.

The Orange Book is well-known; there's a little-cited companion book that
deserves equal attention.  It could be called the Yellow Book; its title
is something like ``Technical Rationale for Applying the Computer Security
Criteria'', and it explains (among other things) how strong a system must
be for a given mix of user classification levels and data sensitivity.
I don't have the book (and hence the charts) handy, but one example is
worth mentioning:  for data classified as TOP SECRET -- MULTIPLE COMPARTMENTS
(a compartment is something like ``atomic submarines'', ``cryptology'', etc.),
even an A1 system may not have users with less than a SECRET clearance on
it.  Put another way, if uncleared users have access to the system, even
an A1 security rating does not permit storage of highly-classified data
on that machine.  That book makes another point:  the computer's security
is rated higher if it was developed only by cleared personnel.  There is
the assumption, of course, that security clearances are in some way related
to trustworthiness, but too often the question of ``who wrote the code''
is overlooked.  Often, people are the weakest link in the security chain;
if members of Congress can be bought (or at least rented), what do computer
operators or janitors in computer rooms cost?  (Aside:  as someone I know
once remarked about ABSCAM, ``I always knew politicians could be bought;
I didn't realize that I could afford one.'')

		--Steve Bellovin