df@sei.cmu.edu (Dan Farmer) (09/17/90)
Yes, it's the "S" word. I know you don't want to think about it, much less talk
about it. But I'd like to hear what you do at your site about security:
problems and solutions that you have dealing with multiple architectures,
getting security patches out to all of your machines, security audits and
auditing software, etc.

Tools -- what kind of security tools do you want/need? Do you use any tools to
sweep for known holes or problems? Do you want to see such tools made publicly
available, as in, say, posted to the net? If any of you use COPS, are there
things you'd like to see changed, or tuned for large sites?

Ok... thanks. Now you can go back to your tapes and optical disks and backups
and stuff.

dan
df@sei.cmu.edu
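[Moderator's note: the kind of sweep Dan is asking about can be sketched in a
few lines of shell. This is an illustration only, not COPS itself; the two
checks (world-writable files, empty password fields) are assumptions about
what such a tool would look for.]

```shell
# Minimal sketch of the kind of checks a sweep tool like COPS automates.
# Illustrative only; each check prints offending entries, one per line.

check_world_writable() {   # $1 = directory to scan
    # -perm -0002 matches any file whose "other write" bit is set
    find "$1" -type f -perm -0002 -print 2>/dev/null
}

check_empty_passwords() {  # $1 = passwd-format file
    # field 2 is the encrypted password; empty means no password at all
    awk -F: '$2 == "" { print $1 }' "$1"
}

# Example usage (run as root for full coverage):
# check_world_writable /etc
# check_empty_passwords /etc/passwd
```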
wojcik@crl.dec.com (Ted Wojcik) (09/27/90)
In article <8611@fy.sei.cmu.edu>, df@sei.cmu.edu (Dan Farmer) writes:
|>
|> Yes, it's the "S" word. I know you don't want to think about, much
|> less talk about it. But I'd like to hear what you do at your site about
|> security. Problems and solutions that you have dealing with multiple
|> architectures, getting security patches out to all of your machines, security
|> audits and auditing software, etc. ........
|>
|> dan
|> df@sei.cmu.edu

Actually Dan, while I suspect that it's more a reflection of the type of
organization that owns them, IMHO large systems have more severe security
problems than small Unix systems just because of the scale. Unfortunately, the
relative lack of security in Unix-based systems has scaled up poorly to large
installations, either mainframe or many-workstationed. In other words, scaling
up the size has magnified the problem. Somewhere along the line, though, there
was a non-linearity that messed up the scaling, so that just doing more of
what you were already doing didn't hack it anymore.

I think that, in the same way you cannot test for the absence of bugs (only
their presence), you cannot test for a secure system. You only get secure (or
bug-free) systems by design. Since Unix-based systems were designed to be
fairly non-intrusive security-wise, it's damned near impossible to get any
satisfactory security added on.

In general, I've found that corporate security folks don't care that you have
no tools - they just want a secure system - whatever that is - which you can't
demonstrate to them to their satisfaction. I've got COPS, but by itself COPS
isn't sufficient. My user community considers security to be my problem, and
they aren't interested in any more of it - until, of course, we get broken
into - then it hits the fan. On the other hand, my user community wants to
give access to anyone who asks.
In a large organization, it's a problem just to get informed when someone
leaves the company, never mind that they've been transferred to Nome, AK and
won't be needing their account. It's also tough to get everyone to agree to
allow an inactive account's files to be deleted. Someone usually wants to
"just keep it around, just in case". Under these circumstances, even the
best-managed system will get out of control and leave lots of windows of
vulnerability open.

A couple of thoughts: Computer accounts need to be kept track of just like
machine tools are in a machine shop. When someone is terminated, the systems
administrator should get notified just like payroll, the tool crib, etc. This
gets accounts closed before an angry (ex)employee can delete the payroll
database.

Second, directory trees ought to get archived when the account is closed. This
might keep any viruses or worms from activating. (Yes, I know that sounds
paranoid. So what? IMHO computer security is an exercise in applied paranoia.)
You say Joe used to work on the payroll system? Did anyone audit the changes
made to the payroll programs? No? How do you know that he didn't put a
timebomb into the payroll system that activates when his employee number
disappears from the data? You don't. You pays your money and you takes your
chances - a poor bet.

Network connections are difficult to control in a secure way. My current
opinion is that security in a networked environment is a dangerous fiction.
Show me a connection and I'll show you a loophole. Security isn't something
you add on; it has to be designed into the organizational and computational
systems we use. Further, you've got to have policies and procedures, and those
procedures have to be followed - every time, to the letter, no exceptions - or
they're useless. Unfortunately, people are human and do make mistakes, which
makes it tough to guarantee security.
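[Moderator's note: the lock-then-archive step Ted describes could be scripted
along these lines. A sketch only; the paths, the archive location, and the
account-locking mechanism are all assumptions about a particular site.]

```shell
# Sketch of closing an account: lock the login, archive the home
# directory under a dated name for later audit, then remove the live
# tree only once the archive exists.

close_account() {          # $1 = username, $2 = home dir, $3 = archive dir
    user=$1; home=$2; arch=$3

    # 1. Disable the login first, so nothing runs during the archive.
    #    (Needs root; mechanism varies by system.)
    # passwd -l "$user"

    # 2. Archive the whole home tree with a dated name.
    stamp=$(date +%Y%m%d)
    tar cf "$arch/$user-$stamp.tar" \
        -C "$(dirname "$home")" "$(basename "$home")" || return 1

    # 3. Remove the live tree only after the archive is in place.
    [ -f "$arch/$user-$stamp.tar" ] && rm -rf "$home"
}
```

Keeping the archive instead of the live tree both frees the account and
preserves the evidence trail Ted's payroll-timebomb scenario calls for.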
Summary: many organizations haven't yet internalized that information systems
are just as valuable as physical things, and require more care to ensure that
they continue to operate and that the data contained therein is correct.

Adding many users and network connections to an organizational system without
adding additional checks and balances is a recipe for disaster - yet many
companies do, because they don't understand what the possible results might
be. Companies that will chase a terminated employee to the ends of the earth
for a $25 hard hat will also neglect to tell the MIS folks that the employee
is gone and would they please disable the account - until something happens.

Fix the mindset - fix the problem.

Just my $.02

/Ted
--
Standard Disclaimer: The opinions expressed above are those of the author and
do not represent the official views of Digital Equipment Corporation.

Ted Wojcik, Systems Manager ( wojcik@crl.dec.com )
Digital Equipment Corporation
Cambridge Research Lab
1 Kendall Sq. Bldg. 700 Flr. 2
Cambridge, MA 02139, USA
(617)621-6652
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (09/27/90)
In article <1990Sep26.180538.9484@crl.dec.com> wojcik@crl.dec.com writes:
> I think that
> in the same way you cannot test for the absence of bugs, (only the presence)
> you cannot test for a secure system.

That isn't always true. I can, for example, inspect a directory tree, observe
that the directory tree has no setuid files, and be sure that a chroot()ed
process with one uid will not be able to affect files with a different uid
unless kernel security is flawed.

> Security isn't something you add on, it has to be designed into the
> organizational and computational systems we use.

Not necessarily. The system with the simplest security rules has the best
chance of obeying those rules to the letter, and is easiest to test for a
particular security policy. I don't disagree with the point you're making, but
some of your arguments are a little weak.

---Dan
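[Moderator's note: Dan's inspection can be mechanized - confirm a tree holds
no setuid or setgid files before handing it to a chroot()ed process. A sketch;
the function name is an invention for illustration.]

```shell
# Verify a directory tree is free of setuid/setgid executables before
# using it as a chroot jail.  Lists offenders and returns nonzero if
# any exist; returns zero on a clean tree.
tree_is_setuid_free() {    # $1 = root of the tree to inspect
    offenders=$(find "$1" -type f \( -perm -4000 -o -perm -2000 \) -print)
    if [ -n "$offenders" ]; then
        echo "$offenders"
        return 1
    fi
    return 0
}
```

The check is only as strong as the moment it runs: if anyone can write to the
tree afterwards, it must be re-verified before each use.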
bernie@DIALix.UUCP (Bernd Felsche) (09/28/90)
In article <1990Sep26.180538.9484@crl.dec.com> wojcik@crl.dec.com writes:
>
>Actually Dan, while I suspect that it's more of a reflection of the type of
>organization that own them, IMHO large systems have more severe security
>problems than small Unix systems just because of the scale. Unfortunately,
>the relative lack of security in Unix-based systems has scaled up poorly to
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I suggest that you read Kochan & Wood's "UNIX System Security" to get
informed. UNIX system security is largely a matter of management. If your
system lacks security, the reason is self-evident.

The only security which does not scale well is the superuser.
Compartmentalised administrative system access can be achieved via setuid
programs which refer to access control lists. For any installation, at any
time, there should only be one person who knows the root password.
Installation size is irrelevant. In case of DDD (disaster, disease or death)
the password can be retrieved from a sealed envelope, stored in a secure but
visible location. If the seal has been broken, you know the password has been
compromised.

>large installations, either mainframe or many-workstationed. In other words,
>scaling up the size has magnified the problem. Somewhere along the line
>though, there was a non-linearity that messed up the scaling so that just
>doing more of what you were already doing didn't hack it anymore. I think that
>in the same way you cannot test for the absence of bugs, (only the presence)
>you cannot test for a secure system. You only get secure (or bug-free) systems
                                                           ^^^^^^^^
Where do you get one of those? One cannot reasonably design for all possible
cases (unless one is wearing blinkers).

>by design. Since Unix-based systems were designed to be fairly non-intrusive
>security-wise, it's damned near impossible to get any satisfactory security
>added on. In general, I've found that corporate security folks don't care

Security is built into the kernel.
It's a matter for you to determine how much of it you want to use.

>that you have no tools - they just want a secure system - whatever that is -
>which you can't demonstrate to them to their satisfaction. I've
>got COPS but by itself COPS isn't sufficient. My user community considers that
>security is my problem and they aren't interested in any more security - until,

They couldn't be more wrong! I don't remember how many times I've walked into
an office to find stickers with passwords and account names plastered on
terminal bezels. Heck, I've seen those fancy document holders with account
names and passwords laminated onto them. Of course, this all happens on secure
systems, like government departments and large corporations, doesn't it? Sure,
they have those fancy keys stuck in the side to stop unauthorised use... who
_are_ they kidding?

>of course, we get broken into - then it hits the fan. On the other hand, my
>user community wants to give access to anyone who asks. In a large
>organization, it's a problem just to get informed when someone leaves the
>company, never mind that they've been transferred to Nome, AK and won't be
>needing their account. It's also tough to get everyone to agree to allow an
>inactive account's files to be deleted. Someone usually wants to "just keep it
>around, just in case". Under these circumstances, even the best managed system
>will get out of control and leave lots of windows of vulnerability open.
>
>A couple of thoughts: Computer accounts need to be kept track of just like
>machine tools are in a machine shop. When someone is terminated, the systems
>administrator should get notified just like payroll, and the tool crib, etc.
>This gets accounts closed before an angry (ex)employee can delete the payroll
>database.

You can monitor the password expiration date on all users, daily. If a
password has been expired for over a week, change it - the account is not
being used. First, try to contact the user by other means, e.g. phone.
If the user is on vacation, you'll hear from him/her. For occasions like this,
it's wise to save the original encrypted password in a non-public file, so
that it can be restored easily... What do you mean, you can't log in... you
must be typing it in wrong... try it again... now :-)

Change the password each time it ticks over, so that you can tell how often
it's expired since the account became dormant. (Use digits... you can count to
a high number in the password field.) When the number of changes reaches a
threshold, get in touch with the pay office and ask them what's happened. All
of the above can be done automatically, using cron to run jobs and to mail
you exception reports.

>
>Second, directory trees ought to get archived when the account is closed. This
>might keep any viruses or worms from activating. (Yes, I know that
>sounds paranoid. So what? IMHO computer security is an exercise in applied
>paranoia.) You say Joe used to work on the Payroll system? Did anyone audit
>the changes made to the payroll programs? No? How do you know that he didn't
>put a timebomb into the payroll system that activates when his employee number
>disappears from the data? You don't. You pays your money and you takes your
>chances - a poor bet.

Application programs often have many more security holes than the UNIX
operating system. To make matters worse, applications often require
permissions to be set up in such a way as to compromise security and
application integrity.

>Network connections are difficult to control in a secure way. My
>current opinion is that security in a networked environment
>is a dangerous fiction. Show me a connection and I'll show you a loophole.

In practice, network security is an oxymoron. At present. Until the technology
arrives to support such things as public-key encryption, protecting the
network is an expensive exercise.

>Security isn't something you add on, it has to be designed into the
>organizational and computational systems we use.
>Further, you've got to have
>policies and procedures and those procedures have to be followed - every time,
>to the letter, no exceptions or they're useless. Unfortunately people are
>human and do make mistakes - makes it tough to guarantee security.
>
>Summary: many organizations haven't yet internalized that information systems
>are just as valuable as physical things and require more care to ensure
>that they continue to operate and the data contained therein is correct.
>
>Adding many users and network connections to an organizational system without
>adding additional checks and balances is a recipe for disaster - yet many
>companies do - because they don't understand what the possible results might
>be. Companies who will chase a terminated employee to the ends of the earth
>for a $25 hard hat will also neglect to tell the MIS folks that the employee
>is gone and would they please disable the account - until something happens.
>
>Fix the mindset - fix the problem.
>
>Just my $.02

And now mine... although we won't have the coins much longer.

bernie
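[Moderator's note: Bernd's daily dormant-account watch might look like this
when run from cron. A sketch only; the shadow-format field layout is the
common one (field 3 = last change, field 5 = maximum age, both in days), and
the one-week grace period is his assumption, parameterized here.]

```shell
# List accounts whose passwords have been expired for more than a week,
# from a shadow-format file.  A cron job could mail this output to the
# administrator as an exception report.

list_dormant() {           # $1 = shadow-format file, $2 = today in days since epoch
    awk -F: -v today="$2" '
        # expiry day is last-change + max-age; dormant if a week past it
        $3 != "" && $5 != "" && (today - ($3 + $5)) > 7 { print $1 }
    ' "$1"
}

# Example crontab entry (assumed paths), computing "today" in days:
# 0 6 * * * list_dormant /etc/shadow $(( $(date +%s) / 86400 )) | mail admin
```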
shwake@raysnec.UUCP (Ray Shwake) (10/02/90)
bernie@DIALix.UUCP (Bernd Felsche) writes:

> I suggest that you read Kochan & Wood's "UNIX System Security" to
> get informed.

ABSOLUTELY! I picked up a copy shortly after its appearance, and found much on
which to build. [Query: Anyone know what's been added/changed in the Second
Edition?]

> UNIX system security is largely a matter of management. If your
> system lacks security, the reason is self-evident.

VERY TRUE! Admittedly, one can do more with C2/B1/... systems, and others
designed specifically to enhance the essential security provided in UNIX. A
security guide developed years ago by our security task force included, up
front, guidance for managers, administrators and users in support of computer
security.

On the other hand, some "secure" implementations are such administrative
headaches, and require so much in the way of additional resources, that people
do what they can to keep them out of their way - i.e., they compromise them.
Any comments from System V/MLS users?

> For any installation, at any time, there should only be one
> person who knows the root password. Installation size is
> irrelevant. In case of DDD (disaster, disease or death) the
> password can be retrieved from a sealed envelope, stored in a
> secure but visible location.

In many organizations, this is simply unrealistic. I served for several years
as LEAD administrator over a small group that I could rely on as necessary.
"Sealed envelopes" may serve Karnak's requirements, but don't usually serve
those of system administrators.
rxxgap@minyos.xx.rmit.oz (Greg Price) (10/03/90)
> On the other hand, some "secure" implementations are such administrative
> headaches and require so much in the way of additional resources that
> people do what they can to keep it out of their way - i.e. they compromise
> it. Any comments from System V/MLS users?
>
> > For any installation, at any time, there should only be one
> > person who knows the root password. Installation size is
> > irrelevant. In case of DDD (disaster, disease or death) the
> > password can be retrieved from a sealed envelope, stored in a
> > secure but visible location.

I would have to agree... The problem I get when chasing a problem is first
establishing whether it comes from other SUs, the system, or maybe an
uninvited guest.

As for System V/MLS and System V/Enhanced Security, it would be nice if
educational facilities could get AT&T source like SVR4. Anyone from AT&T
(apart from my rep) listening out there? ;-)

Greg
----------------------------------------------------------------------------
Greg Price, Computer Centre, Systems Programmer.
Royal Melbourne Institute of Technology,
P.O. Box 2476V, Melbourne. 3001.        (124 Latrobe St., Melbourne.)
Australia.
ACSnet: rxxgap@minyos.xx.rmit.oz
CSNET:  rxxgap@minyos.xx.rmit.oz.au
ARPA:   rxxgap@minyos.xx.rmit.oz.au@uunet.uu.net
BITNET: rxxgap@minyos.xx.rmit.oz.au@CSNET-RELAY
PHONE:  +61 3 660 2934                  FAX: +61 3 663 5652
UUCP:   ...!uunet!munnari!minyos.xx.rmit.oz.au!rxxgap
----------------------------------------------------------------------------