[comp.unix.wizards] UNIX-WIZARDS Digest V3#078

black@ee.UCLA.EDU (Rex Black) (03/10/87)

> 	Would you wizards out there please send me (Chairman of local ACM)
> some detailed possible attacks on holes in UNIX OS's? I will pass the info
> on to our attack team, and to no one else (as I am also manager of an academic
> facility machine, I have a vested interest in maintaining such info secure).

I feel that Gould was *extremely* ill-advised to post such a challenge,
much less allow someone to take them up on it.  This so-called contest
really boils down to nothing more than an extremely advanced seminar
in how to destroy a Unix system.  By the time this ACM "attack team"
is finished with their "project", every one of these people is going
to be a veritable black-belt in system destruction.  It speaks pretty
poorly of Gould that they feel no compunction about encouraging people
to obtain this type of knowledge.  

Suppose that a nuclear energy facility had developed what they considered
an "unbreakable" security system for a plutonium reprocessing plant.
Would it then behoove the company to seek out a collection of Palestinian 
terrorists and dare them to steal 150 kilos of weapons-grade Pu?  I dare 
say that any company doing this would soon find that its management was 
cooling their heels in a max. sec. prison.  With Unix branching out into more
and more critical operations (banking, hospitals, national security, etc.),
what possible right does Gould have to assemble a team of "super-hackers",
no matter how reliable these people are?

I hope that Dr. Tullis is very careful in the screening of who gets into
his attack team...I personally would refuse because I *know* the temptation
would be too much for me; I wonder if every person to whom such power
would be an immense temptation would have the same scruples.  (Just how
often would *you* drive 55 if you had a Ferrari?)

Rex Black
	
black@ee.ucla.edu                                          ARPA        
...!{ihnp4,ucbvax,sdcrdcf,trwspp}!ucla-cs!uclaee!black     UUCP

Disclaimer:  The following are my own opinions and may or may not reflect
the official view of the University of California or any of its employees.

MRC%PANDA@SUMEX-AIM.Stanford.EDU (Mark Crispin) (03/10/87)

     Back in prehistoric times, when we old-timers were dealing with
dinosaur operating systems such as TOPS-20, there was a strong feeling
that we should *fix* all security bugs.  Of course, any security features
could be compromised by having an idiot as a system manager, but we did
our damned best to close security holes.  Oh, doubtless a TOPS-20 system
would eventually fall to a determined attack by someone at my level of
expertise, but it would take a fair amount of time.

     "Security through obscurity" is no security at all.  If you are
aware of a Unix security bug, you MUST assume that the crackers know of
it and take action to fix or at least work around it.  If you fail to
fix a known security bug, then you deserve to have your system trashed
by a cracker.  You knew the potential consequences of your actions when
you decided the security bug was "too obscure for anyone else to find
out about."  And if you fail to publicize the bug, you are indirectly
responsible for some other system getting trashed.  If you discovered
it, you should assume a cracker has discovered it (or is in the process
of discovering it).

     Of course, such an attitude would wipe out the fly-by-night vendors
of boxes running ancient versions of an old BSD tape.  Everyone would
know how to crack such systems, and only vendors who keep up on the
software technology will survive.

     I call it Natural Selection and A Good Thing.
-------

preece%mycroft@gswd-vms.arpa (Scott E. Preece) (03/10/87)

  black@ee.UCLA.EDU:
  [responding to a note about Gould challenging the local student
   ACM chapter to try to break into our Secure Unix product]
> By the time this ACM "attack team" is finished with their "project",
> every one of these people is going to be a veritable black-belt in
> system destruction.  It speaks pretty poorly of Gould that they feel no
> compunction about encouraging people to obtain this type of knowledge.
----------
Well, if they don't learn about what holes in operating systems look
like, they can't reasonably be expected to avoid them when they get
the chance to design systems themselves.  I presume their advisors
will counsel them on the appropriate use of this knowledge.  I guess
I generally favor the acquisition of knowledge, even if that knowledge
has potentially evil applications.
----------
> Suppose that a nuclear energy facility had developed what they
> considered an "unbreakable" security system for a plutonium reprocessing
> plant.  Would it then behoove the company to seek out a collection of
> Palestinian terrorists and dare them to steal 150 kilos of weapons-grade
> Pu?  I dare say that any company doing this would soon find that its
> management was cooling their heels in a max. sec. prison.
----------
I don't know about seeking out practicing terrorists to test your
security, but the hiring of tiger teams to test security systems on
computer systems and physical plant facilities is well known.  If your
security can be broken, you'd prefer to find out under controlled
circumstances rather than as the result of a real break in.
----------
> With Unix branching out into more and more critical operations (banking,
> hospitals, national security, etc.), what possible right does Gould have
> to assemble a team of "super-hackers", no matter how reliable these
> people are?
----------
I don't really think that's what the challenge is doing, but what I
said before still applies.  The use of break in attempts by independent
teams is a fairly normal thing to do.

What Black really doesn't like is (1) that the knowledge acquired
by the team in trying to break into our system can then be applied
to other, probably less secure, Unix systems and (2) that the
team will be made up of students, who he apparently considers less
trustworthy than himself.  I don't see the problems he does.  The
knowledge of how to break Unix systems is already spread far and
wide; from the paper on Unix security that accompanies the standard
documentation to the discussions in books like Tanenbaum's, this is
hardly arcane stuff.  As to the people involved, I can only point to
the many examples available of people thought to be irreproachable
professionals who turned out to be spies, embezzlers, and cheats.
The student chapter of the ACM at the University is made of people
who in a year or two will be functioning computer professionals,
just like the rest of us; I trust them as much as I do Rex Black.

[DISCLAIMER: Though I work for Gould, I don't speak for Gould in
this note or in general.]

-- 
scott preece
gould/csd - urbana
uucp:	ihnp4!uiucdcs!ccvaxa!preece
arpa:	preece@gswd-vms

mark@ems.UUCP (03/11/87)

In article <4836@brl-adm.ARPA> black@ee.UCLA.EDU (Rex Black) writes:
>> 	[ A request of details on holes in UNIX ]
>
>I feel that Gould was *extremely* ill-advised to post such a challenge,
>much less allow someone to take them up on it.  This so-called contest
>really boils down into nothing more than an extremely advanced seminar
>in how to destroy a Unix system.  By the time this ACM "attack team"
>is finished with their "project", every one of these people is going
>to be a veritable black-belt in system destruction.  It speaks pretty
>poorly of Gould that they feel no compunction about encouraging people
>to obtain this type of knowledge.  
>

Although I share Mr. Black's concerns about the possible
implications of allowing access to this kind of sensitive information,
I can also see Gould's and the ACM's point of view.

I think Mr. Black's worry -- that once the information about breaking a
Unix system is shared with this 'attack team', these same people will go
around breaking systems for the fun of it -- may be taking the issue a
little too far.  Face it: someone out there knows how to break the
system; that is why the ACM has solicited the response of the Unix
community.  They KNOW that there are people out there who can break a
system.  However, much care must be taken to make sure that the people
who form this attack team will not use the techniques that they learn
to harm other people.  After all, most black belts in martial arts do
not run around killing people just for the hell of it.

Gould is saying that they have produced the tightest system that they
know how to build.  However, they may have missed some holes, and they
want to make sure that these holes are plugged.  I think that it is
admirable (if somewhat cocky, based on their past 'competition') of
Gould to do this kind of QA.

Only by breaking a system can you hope to fix the holes in it.  How do
you break it?  Trial and error.  It's similar to the fact that if nobody
broke into houses, they would not be equipped with locks.  If a lock is
made to keep out burglars, what better way to test it than to have the
best burglar try to pick it?  Obviously, you must be able to have some
sort of trust in the burglar first...

This is done all the time in real life.  Who do you think banks hire to
repossess things?  Ex-cons.

Once again, I must reiterate that I do agree that care must be taken in the
selection of the attack team.  I would hope that whoever does the selection
is aware of the magnitude of the information that they are dealing with.
-- 
Mark H. Colburn          mark@ems.uucp      
EMS/McGraw-Hill          {rutgers|amdahl|ihnp4}!{dayton|meccts}!ems!mark
9855 West 78th Street     
Eden Prairie, MN 55344   (612) 829-8200 x235

m5d@bobkat.UUCP (03/11/87)

In article <4836@brl-adm.ARPA> black@ee.UCLA.EDU (Rex Black) writes:
-> [ ... Mr. Black thinks Gould was wrong to challenge hackers...]
->
->Suppose that a nuclear energy facility had developed what they considered
->an "unbreakable" security system for a plutonium reprocessing plant.
->Would it then behoove the company to seek out a collection of Palestinian 
->terrorists and dare them to steal 150 kilos of weapons-grade Pu?  I dare 
->say that any company doing this would soon find that its management was 
->cooling their heels in a max. sec. prison.  With Unix branching out into more
->and more critical operations (banking, hospitals, national security, etc.),
->what possible right does Gould have to assemble a team of "super-hackers",
->no matter how reliable these people are?
->
->Rex Black
->black@ee.ucla.edu                                          ARPA        
->...!{ihnp4,ucbvax,sdcrdcf,trwspp}!ucla-cs!uclaee!black     UUCP

A friend of mine who went to the Air Force Academy told me that the
Air Force (maybe the military in general) employs groups called
"tiger teams" to do exactly what Gould is encouraging.  I don't
know about stealing plutonium, though.  That would be dangerous,
and the military certainly wouldn't want to do something dangerous.

-- 
Mike McNally, mercifully employed at Digital Lynx ---
    Where Plano Road the Mighty Flood of Forest Lane doth meet,
    And Garland fair, whose perfumed air flows soft about my feet...
uucp: {texsun,killer,infotel}!pollux!bobkat!m5d (214) 238-7474

Cherry.XSIS@Xerox.COM (03/11/87)

Rex,  The way Unix is distributed, it is NOT a secure system.  It takes
a lot of time to close all the entrances, or at least require a key
(permission) or password to pass through them.  Since the environment is
designed to be a programmer's paradise, and in addition comes with a
multitude of tools, utilities, and *COMPILERS*, anybody with a sincere
notion of entering and accessing another system with intent to do harm
can generally do so.
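The "entrances" in question can be surveyed mechanically.  As a minimal
sketch (two classic find(1) sweeps any Unix administrator might run --
nothing here is specific to Gould's product or to Cherry's system), look
for set-uid programs and world-writable files; the demonstration below
runs on a scratch directory so the output stays small and predictable:

```shell
#!/bin/sh
# Two classic "find the open entrances" checks.  On a real system you
# would sweep from / :
#   find / -type f -perm -4000 -print   # set-uid: runs with the file
#                                       # owner's privileges
#   find / -type f -perm -0002 -print   # world-writable: any user may
#                                       # modify it

# Demonstration on a scratch directory with predictable contents:
dir=$(mktemp -d)
touch "$dir/tight" "$dir/loose"
chmod 600 "$dir/tight"          # owner-only: not reported
chmod 666 "$dir/loose"          # world-writable: reported
find "$dir" -type f -perm -0002 -print
rm -rf "$dir"
```

Closing an entrance found this way is then a matter of `chmod` (or of
asking why the file needed to be set-uid at all).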

When I was setting up my system, which required a great deal of
security, I removed anything of 'value' and put traces on all processes.
Then I +advertised+ it on various bulletin boards in the greater Los
Angeles area and let the hackers have at it.  The idea was to close any
form of entry that they could find.  Let me tell you, there are some
sharp, Unix-wise, and creative people out there.  They were also a great
asset in securing the system, and they even taught me a thing or two.

I imagine Gould was taught something also.


    For every action, event or entity of the universe there is a good
    side and there is a bad side.  A wise man knows how to look at both.
    An optimist sees the good side.  A pessimist only sees the bad.
						-- B.C. & Zot

       /|		PUP/GV : Cherry.XSIS
   \`o_O'		   XNS : Robert Cherry:XSIS:Xerox
     ( )  Aachk! Phfut!   ARPA | rocksanne!anb02!cherry%rochester:ARPA:Xerox
      U 		       : rocksanne!bob
			       : cherry.xsis@Xerox.COM
			  UUCP : rocksanne!bob
       :=work, |=home	       | {rocksanne | gryphon | wright}!anb02!cherry
       
	   		   TPC : (818) 351-2351 Ext. 2098
			   XPC : 8 * 844-2098

black@ee.UCLA.EDU (03/12/87)

> What Black really doesn't like is (1) that the knowledge acquired
> by the team in trying to break into our system can then be applied
> to other, probably less secure, Unix systems and (2) that the
> team will be made up of students, who he apparently considers less
> trustworthy than himself.  

I have a feeling I'm gonna get shredded on this issue, but I've
got to stick by my guns.  My main failure was not suggesting a
reasonable alternative; as usual, that resulted in misunderstanding.

The things I *really* don't like are:  

1) Gould is not going to take the results of these experiments and
   pass them on to other UNIX OS writers.  (I may be wrong.  However, 
   the posting did not mention any planned distribution of results.)  
   Under ordinary circumstances, Gould would be under no obligation to 
   share trade secrets that it had spent money to obtain.  However, in 
   this case it *is* obligated to share this info because, by the very 
   act of obtaining it, it has placed other, less secure sites in greater
   potential danger than they were in before it assembled this team.

2) I deliberately pointed out that I would personally refuse to be 
   involved in such an experiment.  It's kind of like Pandora's box--
   it's quite possible that everyone involved in this project will 
   find that this knowledge is not a temptation.  But, as a very
   wise fortune cookie once told me:  "The problem with resisting
   temptation is that it may never come again."

My solution would not be to "stick your head in the sand" as one
person suggested.  I would think that Gould could find a group of
excellent programmers--perhaps hire some professors or professionals
as consultants--and organize their own, paid attack team.  These
people would then have a vested interest in not misusing the 
information they'd obtained.  

'Nough said.

Rex Black
	
black@ee.ucla.edu                                          ARPA        
...!{ihnp4,ucbvax,sdcrdcf,trwspp}!ucla-cs!uclaee!black     UUCP

Disclaimer:  Once again, these opinions are my own and may or may not
be shared by the UCLA Administration or any of its employees.

gwyn@brl-smoke.ARPA (Doug Gwyn ) (03/15/87)

Attempts to attain security by enforcing ignorance are misguided
and actively dangerous (in that they lull the enforcers into a
false sense of security).  This mistake is often made, for example,
by White House advisers.  One would think that people would learn
from history (except that history shows that they often don't!).