nevin1@ihlpf.ATT.COM (00704a-Liber) (04/09/88)
In the recent version of the Unix vs. VMS war (I do NOT want to start up THAT again!!), I stated that DEC believes in 'security through obscurity'; i.e., what the users (and system administrators) don't know can't hurt them. In a recent posting to comp.risks, Andy Goldstein from DEC's VMS Development group stated what DEC's policy is. I am posting his article in its entirety here, to see what the VMS user community thinks of DEC's security bug policy. (I will save my own comments on this subject for a followup posting. When quoting, please change the author from N. Liber to Andy Goldstein.)

------------------------------

Date: Mon, 4 Apr 88 15:28:53 PDT
From: goldstein%star.DEC@decwrl.dec.com (Andy Goldstein)
Subject: Re: Notifying users of security problems

> Date: 31 Mar 88 01:25:29 PST (Thursday)
> From: "hugh_davies.WGC1RX"@Xerox.COM
> Subject: Re: Notifying users of security problems
>
> In RISKS 6.50, Andy Goldstein (goldstein%star.DEC@decwrl.dec.com) states..
> "Sending out notice of the presence of a bug without a correction or
> workaround is of course even more irresponsible." ...
> When I first saw this I couldn't believe what I was reading. ...
> Please, Andy, tell me I've got it wrong!

Maybe you did misunderstand me; I should have been more precise in the statement you quoted. I was referring specifically to security bugs. That said, I stand by my statement. Let me try to explain...

When a piece of software is shipped containing a bug, knowledge of that bug is contained in the software, in a manner of speaking. At the same time, in most cases knowledge of the bug is not held by any person. That is, the bug was created inadvertently and unknowingly by the author(s) of the software, and no one has discovered it yet.

A bug does its damage when it is somehow invoked, by use or misuse of a certain feature, or by the unhappy confluence of certain conditions. By and large, ordinary bugs are encountered by users innocently going about their business. That is, no prior knowledge of the bug by the user is involved in encountering the bug; knowledge of the bug by the system is sufficient. Furthermore, the effect of the bug is in general to cause system behavior which is undesirable to the user. Consequently, knowledge of the bug will often permit the user to work around it or defend against it. Since a virus spreads without knowledge of the user, it too falls into this category. Sharing information about most types of bugs, including the existence and nature of particular viruses, is productive and worthwhile.

Now let us compare security bugs to ordinary bugs. I define a security bug as one which permits a user to violate a system's security controls in some significant way (e.g., allowing an ordinary user to become superuser or whatever). Security bugs are by and large not encountered by people innocently going about their business. They are usually found by the adventurous by inspecting system sources, and are invoked only through creative abuse of obscure system features. (I cannot argue this point with logic, but many years of experience dealing with security bugs tell me it is so.)

Most system users (I mean users, not administrators) do not care about security bugs; such bugs do not stand in the way of getting their work done. The people who care about security bugs are hackers (and of course the system managers trying to fend them off). From the point of view of the hacker, a security bug is an undocumented feature of the system that allows him to do what he wants to do.
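To make the distinction concrete, here is a minimal hypothetical sketch (in C; this is invented for illustration and is not VMS code -- the function names and the "SYS$"-prefix hole are made up). An innocent user going about his business never trips the hole; only someone who has read the source and abuses it deliberately does:

    #include <stdio.h>
    #include <string.h>

    /* Stand-in privilege check: uid 0 is the superuser. */
    static int is_privileged(int uid)
    {
        return uid == 0;
    }

    /* A privileged file service.  Returns 0 on success, -1 on refusal.
       The security bug: names beginning with "SYS$" skip the privilege
       check entirely -- an "undocumented feature" no innocent user will
       ever encounter, but one a hacker can invoke at will. */
    int read_protected(int uid, const char *name, char *buf, size_t len)
    {
        FILE *fp;

        if (strncmp(name, "SYS$", 4) != 0 && !is_privileged(uid))
            return -1;                  /* the ordinary, checked path */

        fp = fopen(name, "r");          /* the "SYS$..." path: unchecked */
        if (fp == NULL)
            return -1;
        if (fgets(buf, (int)len, fp) == NULL)
            buf[0] = '\0';
        fclose(fp);
        return 0;
    }

    int main(void)
    {
        char line[128];
        /* uid 1000 is an ordinary user.  The first call is refused by
           the privilege check; the second bypasses that check entirely
           and fails, if at all, only because no such file exists here. */
        printf("plain name: %d\n",
               read_protected(1000, "notes.txt", line, sizeof line));
        printf("SYS$ name:  %d\n",
               read_protected(1000, "SYS$params.dat", line, sizeof line));
        return 0;
    }

Note that every documented use of read_protected() behaves exactly as advertised; the system misbehaves only when a user supplies the magic prefix on purpose. That is what makes knowledge of the bug, rather than the bug itself, the dangerous article.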
So we get to the critical distinction between security bugs and others: because invocation of a security bug requires a deliberate, unusual action, a security bug is only harmful to an installation when malicious users gain knowledge of the bug. The best analogy I can think of is a lock manufacturer discovering that one of its locks can be easily picked using a previously unknown technique.

The challenge we have with security bugs, therefore, is (1) not shipping them to begin with, (2) fixing them as promptly as possible when they are discovered, and (3) keeping knowledge of them out of the hands of the bad guys until they can be fixed. Points (1) and (2) are of course mere matters of engineering, manufacturing and distribution. Because we will never achieve instantaneous development and distribution of bug fixes, (3) is the kicker.

I have heard many arguments that system managers should be permitted to learn about security bugs, either from the manufacturer or informally via the grapevine. With respect to the VAX/VMS user community, I disagree with this conclusion for several reasons:

(1) The knowledge won't do them any good. We are long past the time when every computer installation had its wizard who knew (or thought he knew) how to fix every problem that might come up. [Digression: I'm sure half the university system managers have just hit the ceiling. Universities are unique in having available a large pool of cheap, highly talented labor. Among our engineering and commercial customers, technically skilled labor is expensive and hard to come by. Our working assumption is that the majority of our customer base does not, or would rather not have to, understand the internals of VMS to use it.]

(2) The news may do them harm. Would you, as DP manager of Bank of America, install a "security patch" that originated from, say, UC Berkeley?

(3) The knowledge may do them harm. Nowadays, any fairly well-off high school kid can buy himself a MicroVAX and be a bona fide system manager. There is no practical way to tell the good guys from the bad guys anymore. The larger the number of people who know of the existence of a security problem, the more likely it is that a bad guy will gain the knowledge necessary to exploit it.

Consequently, DEC has taken the following approach to dealing with security bugs in the future:

(1) When a security bug is discovered, engineering will develop a fix as rapidly as possible. The fix will be distributed to customers as rapidly as circumstances warrant. To the extent possible, the fix will be constructed so as to make it difficult to reverse-engineer the bug from the fix.

(2) Once the fix has been distributed, all customers will be notified of the existence of the problem and informed of the urgency of installing the fix.

Thus we let the cat out of the bag (hopefully) only after users have been given the tools with which to skin it. While this policy of secrecy does carry the possibility that a small number of users may incur duplicated effort investigating a security bug, we feel this is a worthwhile trade toward ensuring the safety of the majority of the customer base. I also emphasize that this policy applies only to security bugs that have no operational workaround.

Andy Goldstein, VMS Development

------------------------------
--
 _   __         NEVIN J. LIBER  ..!ihnp4!ihlpf!nevin1  (312) 510-6194
' )  )          "The secret compartment of my ring I fill
 /  / _ , __o  ____  with an Underdog super-energy pill."
/  (_</_\/ <__/ / <_  These are solely MY opinions, not AT&T's, blah blah blah