[comp.protocols.tcp-ip] the worm and internet security

romkey@asylum.sf.ca.us (John Romkey) (07/28/89)

I agree and I disagree. Mostly agree, and wish to agree more.

Yeah, there're a lot of problems left and the government agencies are
*not* being useful. I think that centralized administration of the
Internet is absolutely the wrong thing, and that government
administration would be a disaster.

On the other hand, I'm scared of throwing open the whole Internet for
security testing. The Internet Engineering Task Force met this week at
Stanford. According to the NIC, an automated survey of the domain
system returned more than 118,000 host names, and several major sites,
such as Stanford and CMU, didn't return any data. Probably a better
estimate of the number of hosts on the Internet is 150,000 [my
opinion]. Right now I just don't think the system is good enough to
be able to coordinate that many systems. I mean, we can't even get a
lot of system maintainers to install the latest version of sendmail.
I'm afraid that declaring next Tuesday open season on the Internet
would cause utter chaos. 

I don't think this is a *good* state to be in, mind you. Just that
this is where we are now, and I can't change the state of the world
overnight. Maybe periodically declaring open season for a day would
help the world grow up faster, but I'm afraid it might only hinder the
growth and get the government involved more, and that, I really don't
want.

Some people are recognizing the need for testing. The IAB is pushing
to get funding for the "Internet testbed" where they can have an
Internet in miniature and do this kind of testing. Some statements
from them today made that concern pretty clear. God, this paragraph
sounds like politicalese. Anyway, I don't know if they'll really do
it. I don't know if it'll really be effective. But they do seem to be
pushing for it, and I'd feel a lot more comfortable about doing the
testing in a smaller, more controlled environment.

There's some senator who's trying to introduce legislation that would
make it illegal to write a worm or virus. I think worms could actually
be very interesting for doing certain kinds of distributed computation
or network management.

I also think more vendor responsibility would go a long way. Some of
the problems that the worm took advantage of were well known. Sun and
DEC shouldn't have released sendmail with debug mode left in. An awful
lot of vendors pick up 4.x TCP, get it running on their system, and
never really understand what's in it. I do not blame Berkeley for
this.

And I don't know how much security is enough. I don't tend to like
much at all, myself. At some point to gain the security, you'll have
to start making some really big changes. If you want real security,
you'll end up not sending passwords and userid's in plain text over
telnet and rlogin. You'll probably end up with link encryption, and
even stronger authentication than what MIT is doing with Kerberos. And
those are pretty big changes to the way things are done now. It's
going to take a while, and still won't cope with the kind of hardcore
password cracking you can attempt when you've got a 1Mbit/s channel
into my computer instead of a 1200 baud dialin, and can finger @asylum
to find out user names...it's not the network that's insecure there,
it's just that the existence of the connection makes it easier to
exploit (what didn't used to be) "weaknesses" in the existing
operating system.
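
To put rough numbers on that bandwidth difference (these are my own
back-of-the-envelope guesses, not measurements): a 1200 baud line moves
about 150 bytes a second, a 1 Mbit/s channel about 125,000, so raw
login attempts can go something like 800 times faster.  A throwaway C
program to play with the assumptions - the 300 bytes of overhead per
attempt is purely an assumption on my part:

/* Back-of-the-envelope comparison of raw password-guessing
 * throughput over a 1200 baud dialin versus a 1 Mbit/s channel.
 * The 300 bytes per login attempt is an assumed cost, not a
 * measured one. */
#include <stdio.h>

int main(void)
{
    double dialin_bps    = 1200.0;      /* ~1200 bits/second dialin */
    double network_bps   = 1000000.0;   /* 1 Mbit/s channel */
    double bytes_per_try = 300.0;       /* assumed overhead per attempt */

    printf("dialin:  %.1f tries/second\n", dialin_bps / 8.0 / bytes_per_try);
    printf("network: %.1f tries/second\n", network_bps / 8.0 / bytes_per_try);
    printf("speedup: about %.0f times\n", network_bps / dialin_bps);
    return 0;
}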

These issues give me headaches. Yes, I wish we could do open testing
all over the Internet. We could test security; we could also take pot
shots with finger of death packets to find old releases of software
that are running on systems and encourage their administrators to run
up to date stuff. And more. I don't think it's practical in the
current environment, but I do think it is important, regardless.
				- john

jkp@cs.HUT.FI (Jyrki Kuoppala) (08/06/89)

In article <8907280211.AA09340@asylum.sf.ca.us>, romkey@asylum (John Romkey) writes:
>On the other hand, I'm scared of throwing open the whole Internet for
>security testing. The Internet Engineering Task Force met this week at
>Stanford. According to the NIC, an automated survey of the domain
>system returned more than 118,000 host names, and several major sites,
>such as Stanford and CMU, didn't return any data. Probably a better
>estimate of the number of hosts on the Internet is 150,000 [my
>opinion]. Right now I just don't think the system is good enough to
>be able to coordinate that many systems. I mean, we can't even get a
>lot of system maintainers to install the latest version of sendmail.
>I'm afraid that declaring next Tuesday open season on the Internet
>would cause utter chaos. 

It's been proposed that security problems like those the worm
used, whenever found, should first be published on a restricted-access
mailing list as soon as possible.  This mailing list should have all
major Un*x vendors on it, so that they can rush bug fixes to their
clients as quickly as possible.  Then, after a delay of, say, three
months, the list would be relayed to a Usenet newsgroup.

I think this approach would work quite well.  The knowledge that the
bug will be made public in a few months should make the vendors'
support much better.  Perhaps then we wouldn't be seeing all these
various sendmail, ftpd, rcp, rdist, rwall/wall, fingerd, nfs, rexd,
lpr, ptrace, uucp, yp and who knows what else (they're too numerous to
remember already!) security bugs months or even years after they've
become at least partly public knowledge.

The Berkeley ucb-fixes list already does a very good job at this -
but apparently it isn't enough, as many vendors seem to neglect the
security fixes which Berkeley puts out.  For example, how many have
fixed the one with rshd and rlogind accepting connections from ports
under 512?  It seems that someone has to make public the information
on how to use the bug before the vendors believe it.
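
For those who haven't looked at the servers: the whole trust model
there is that only root can bind a TCP port below 1024, so rshd and
rlogind are supposed to refuse any connection whose source port isn't
in the reserved range.  Here is a stripped-down sketch of that kind of
check - this is not the actual Berkeley code, and the exact range the
fixed versions enforce may differ:

/* Sketch of the source-port sanity check an rshd/rlogind-style
 * daemon makes on a freshly accepted connection.  Not the real BSD
 * code; the range shown (512..1023) is the conventional one. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef IPPORT_RESERVED
#define IPPORT_RESERVED 1024
#endif

/* Returns 1 if the peer connected from a reserved port, 0 otherwise.
 * A daemon would call this right after accept() and drop the
 * connection on a 0 return. */
int source_port_ok(int fd)
{
    struct sockaddr_in from;
    socklen_t len = sizeof(from);
    unsigned short port;

    if (getpeername(fd, (struct sockaddr *)&from, &len) < 0)
        return 0;
    if (from.sin_family != AF_INET)
        return 0;
    port = ntohs(from.sin_port);
    if (port >= IPPORT_RESERVED || port < IPPORT_RESERVED / 2)
        return 0;               /* not in 512..1023: refuse */
    return 1;
}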

Also, some way should be found to make vendors ship an
out-of-the-box system that is even somewhat acceptable.  Sun especially
loses badly on this.  I think they still have + in their /etc/hosts.equiv.
They have extremely bad manners in other things, too, such as shipping
/etc/utmp world-writable.  And even after the rwalld / wall bug was
published, apparently they STILL don't plan to change that.  They're
practically asking for trouble.
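
If you want to check your own machine for those two particular sins,
something along these lines will do; the pathnames are the usual
BSD/SunOS ones and may well be different on your system:

/* Quick check for the two out-of-the-box problems mentioned above:
 * a "+" wildcard line in /etc/hosts.equiv (trust every host) and a
 * world-writable /etc/utmp.  Pathnames are the usual BSD/SunOS ones. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
    FILE *fp;
    char line[256];
    struct stat st;

    if ((fp = fopen("/etc/hosts.equiv", "r")) != NULL) {
        while (fgets(line, sizeof(line), fp) != NULL) {
            line[strcspn(line, "\n")] = '\0';
            if (strcmp(line, "+") == 0)
                printf("WARNING: /etc/hosts.equiv trusts every host (\"+\")\n");
        }
        fclose(fp);
    }

    if (stat("/etc/utmp", &st) == 0 && (st.st_mode & S_IWOTH))
        printf("WARNING: /etc/utmp is world-writable\n");

    return 0;
}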

Perhaps there should be some kind of `security rating' given to an
operating system.  I don't mean ratings like C2 or things like that;
just an estimate of how many known security bugs a system has and
whether it is suitable for use on the Internet out of the box, or
whether it needs a few weeks of going over with a fine-toothed comb to
prevent J. Random User on the Internet from getting root access on it
in five minutes by reading the tips in `The History of BSD Unix'.

>Some people are recognizing the need for testing. The IAB is pushing
>to get funding for the "Internet testbed" where they can have an
>Internet in miniature and do this kind of testing. Some statements
>from them today made that concern pretty clear. God, this paragraph
>sounds like politicalese. Anyway, I don't know if they'll really do
>it. I don't know if it'll really be effective. But they do seem to be
>pushing for it, and I'd feel a lot more comfortable about doing the
>testing in a smaller, more controlled environment.

Sounds good.

>There's some senator who's trying to introduce legislation that would
>make it illegal to write a worm or virus. I think worms could actually
>be very interesting for doing certain kinds of distributed computation
>or network management.

That kind of legislation sounds extremely silly and dangerous to me.
Computers are nothing but a tool.  Why should they be treated any
differently from any other tools in legislation?  If a person deliberately
causes harm to others - say, by destroying all the data on a police
computer - there are certainly already laws which can be used against
that person.  Of course, some laws about official documents may need to
be changed to cover documents stored on computer systems, but the need
for a separate `computer fraud law' is not clear to me.

Actually, I find the idea of a `computer fraud law' quite disturbing.
If it is made criminal to, for example, feed wrong information to a
computer, it leads to a great reduction of individuals' basic
rights.  As an example, I'm appending the State of Wisconsin Computer
Fraud Law to the end of this message (since a part of a law is hardly
of any use).  As I have little experience reading legalese, perhaps I
have misunderstood the law, but to me it seems that there's no mention
of the purpose for which the computer system in question is used.  Also,
headings like `(3) OFFENSES AGAINST COMPUTERS, COMPUTER EQUIPMENT
OR SUPPLIES.' seem quite strange to me - I thought laws were there
to protect people, not machines (the heading sounds like those you see
in sci-fi novels describing societies ruled by computers ;-).

Of course, I can't be sure if the document is real as I've gotten it
via the computer networks, so please tell me if it isn't.

>These issues give me headaches. Yes, I wish we could do open testing
>all over the Internet. We could test security; we could also take pot
>shots with finger of death packets to find old releases of software
>that are running on systems and encourage their administrators to run
>up to date stuff. And more. I don't think it's practical in the
>current environment, but I do think it is important, regardless.
>				- john

Perhaps there should even be `an Internet requirement' of sufficient
security; that is, if a site runs software with all the five-year-old
network bugs unfixed, it isn't allowed to be on the Internet.  That
way, good will in the net is maintained, as random pranksters don't get
access to machines they don't have official accounts on nearly as
easily.  Please note that this shouldn't be extended to administrative
policies, just the security bugs (much like the RFC requirements).  Ah
well, just an idea.

//Jyrki

----------------------------------------------------------------------

         -- Computer Law - State of Wisconsin Statute --

                    Chapter 293, Laws of 1981

                     943.70 Computer crimes.

(1) DEFINITIONS. In this section:

   (a) "Computer" means an electronic device that performs
       logical, arithmetic and memory functions by manipulating
       electronic or magnetic impulses, and includes all input,
       output, processing, storage, computer software and
       communication facilities that are connected or related to
       a computer in a computer system or computer network.

   (b) "Computer network" means the interconnection of
       communication lines with a computer through remote
       terminals or a complex consisting of 2 or more
       interconnected computers.

   (c) "Computer program" means an ordered set of instructions or
       statements that, when executed by a computer, causes the
       computer to process data.

   (d) "Computer software" means a set of computer programs,
       procedures or associated documentation used in the
       operation of a computer system.

   (dm) "Computer supplies" means punchcards, paper tape,
       magnetic tape, disk packs, diskettes and computer output,
       including paper and microform.

   (e) "Computer system" means a set of related computer
       equipment, hardware or software.

   (f) "Data" means a representation of information, knowledge,
       facts, concepts or instructions that has been prepared or
       is being prepared in a formalized manner and has been
       processed, is being processed or is intended to be
       processed in a computer system or computer network. Data
       may be in any form including computer printouts, magnetic
       storage media, punched cards and as stored in the memory
       of the computer. Data are property.

   (g) "Financial instrument" includes any check, draft, warrant,
       money order, note, certificate of deposit, letter of
       credit, bill of exchange, credit or credit card,
       transaction authorization mechanism, marketable security
       and any computer representation of them.

   (h) "Property" means anything of value, including but not
       limited to financial instruments, information,
       electronically produced data, computer software and
       computer programs.

   (i) "Supporting documentation" means all documentation used in
       the computer system in the construction, clarification,
       implementation, use or modification of  the software or
       data.

(2) OFFENSES AGAINST COMPUTER DATA AND PROGRAMS.

   (a) Whoever willfully, knowingly and without authorization
       does any of the following may be penalized as provided in
       par. (b):

   1.  Modifies data, computer programs or supporting
       documentation.

   2.  Destroys data, computer programs or supporting
       documentation.

   3.  Accesses data, computer programs or supporting
       documentation.

   4.  Takes possession of data, computer programs or supporting
       documentation.

   5.  Copies data, computer programs or supporting
       documentation.

   6.  Discloses restricted access codes or other restricted
       access information to unauthorized persons.

   (b) Whoever violates this subsection is guilty of:

   1.  A Class A misdemeanor unless subd. 2, 3 or 4 applies.

   2.  A Class E felony if the offense is committed to defraud or
       to obtain property.

   3.  A Class D felony if the damage is greater than $2,500 or
       if it causes an interruption or impairment of governmental
       operations or public communication, of transportation or
       of a supply of water, gas or other public service.

   4.  A Class C felony if the offense creates a situation of
       unreasonable risk and high probability of death or great
       bodily harm to another.


(3) OFFENSES AGAINST COMPUTERS, COMPUTER EQUIPMENT OR SUPPLIES.

   (a) Whoever willfully, knowingly and without authorization
       does any of the following may be penalized as provided in
       par. (b):

   1.  Modifies computer equipment or supplies that are used or
       intended to be used in a computer, computer system or
       computer network.

   2.  Destroys, uses, takes or damages a computer, computer
       system, computer network or equipment or supplies used or
       intended to be used in a computer, computer system, or
       computer network.

   (b) Whoever violates this subsection is guilty of:

   1.  A Class A misdemeanor unless subd. 2, 3 or 4 applies.

   2. A Class E felony if the offense is committed to defraud or
       obtain property.

   3.  A Class D felony if the damage to the computer, computer
       system, computer network, equipment or supplies is greater
       than $2,500.

   4.  A Class C felony if the offense creates a situation of
       unreasonable risk and high probability of death or great
       bodily harm to another.

                 -- Penalties for Infractions --

939.50(3) Penalties for felonies are as follows:

   (a) For a Class A felony, life imprisonment.

   (b) For a Class B felony, imprisonment not to exceed 20 years.

   (c) For a Class C felony, a fine not to exceed $10,000 or
       imprisonment not to exceed 10 years, or both.

   (d) For a Class D felony, a fine not to exceed $10,000 or
       imprisonment not to exceed 5 years, or both.

   (e) For a Class E felony, a fine not to exceed $10,000 or
       imprisonment not to exceed 2 years, or both.

939.51(3) Penalties for misdemeanors are as follows:

   (a) For a Class A misdemeanor, a fine not to exceed $10,000 or
       imprisonment not to exceed 9 months, or both.

   (b) For a Class B misdemeanor, a fine not to exceed $1,000 or
       imprisonment not to exceed 90 days, or both.

   (c) For a Class C misdemeanor, a fine not to exceed $500 or
       imprisonment not to exceed 30 days, or both.
-- 
Jyrki Kuoppala    Helsinki University of Technology, Finland.
Internet :        jkp@cs.hut.fi           [128.214.3.119]
BITNET :          jkp@fingate.bitnet      Gravity is a myth, the Earth sucks!

bzs@ENCORE.COM (Barry Shein) (08/07/89)

To some extent I think what the public has just discovered about
networks is:

	"Oh my, someone could just come along with a rock and throw it through
	this glass-stuff?! Someone ought to do something about that, people
	could be hurt!"

	-Barry Shein

Software Tool & Die, Purveyors to the Trade
1330 Beacon Street, Brookline, MA 02146, (617) 739-0202

kwe@bu-cs.BU.EDU (kwe@bu-it.bu.edu (Kent W. England)) (08/07/89)

>
>To some extent I think what the public has just discovered about
>networks is:
>
>	"Oh my, someone could just come along with a rock and throw it through
>	this glass-stuff?! Someone ought to do something about that, people
>	could be hurt!"
>
>	-Barry Shein
>

But the glass had been painted to look like brick.

randall@uvaarpa.virginia.edu (Randall Atkinson) (08/08/89)

The GAO report was frankly disappointing.  I hope that
the powers that be at DARPA, NSF, et al. understand
what happened and why more clearly than the GAO seems
to have.  Moreover, I hope that they apply their own 
knowledge and experience to the problems in network
security rather than just adding "yet another" 
coordinating committee or group or agency as the GAO
seems to suggest.

jdp@polstra.UUCP (John D. Polstra) (08/08/89)

One of the problems that surfaces over and over in this forum is the
fact that the major vendors don't bother to fix the known security
problems in their products.  The reason they don't fix these problems
is that they don't have much motivation to do so.  I would like to
suggest a way to provide the missing motivation.

Somebody (the DoD, a major university, or an interested member of the
press) ought to organize an annual competition, in which each of the
vendors would try to crack its competitors' systems.  A mini-network
would be set up, and each vendor's tiger team would try to crack as
many other systems in as many ways as possible during some fixed time
interval.  The results would be published openly so that potential
customers could take security issues into account when choosing
vendors.

The vendors would be doubly motivated to keep abreast of all known
security weaknesses.  First, they would be looking for ways to
embarrass the competition.  Second, they would be trying to minimize
their own vulnerability as much as possible.

The press would love it, because security issues sell newspapers these
days.  Also, members of the press IMHO get a charge out of embarrassing
people and (especially) corporations.

There would be no need to openly publish the methods used for breaking
into systems, so the rest of the Internet would not need to worry about
zillions of evil computer hackers suddenly finding out how to mess with
their systems.  On the other hand, the rules could require that vendors
share their successful methods with the manufacturers of the systems
that were defeated by them.  (This could be part of the process of
validating a break-in.)

If the competition were held periodically, say once or twice a year,
then one could also keep track of weaknesses which had been previously
exposed and remained uncorrected.

Comments, anyone?

-- John Polstra               jdp@polstra.UUCP
   Polstra & Co., Inc.        ...{uunet,sun}!practic!polstra!jdp
   Seattle, WA                (206) 932-6482

PADLIPSKY@A.ISI.EDU (Michael Padlipsky) (08/08/89)

Slightly less prejudicially (or perhaps reverse-biased):
    "You mean some kid playing ball could could break this glass-stuff?!
Someone ought to ...."

    cheers, map
-------

stev@VAX.FTP.COM (08/09/89)

*Somebody (the DoD, a major university, or an interested member of the
*press) ought to organize an annual competition, in which each of the
*vendors would try to crack its competitors' systems.  A mini-network
*would be set up, and each vendor's tiger team would try to crack as
*many other systems in as many ways as possible during some fixed time
*interval.  The results would be published openly so that potential
*customers could take security issues into account when choosing
*vendors.

*Comments, anyone?


i doubt any of the major vendors would show up, unless they were forced to
somehow, like the government requiring all equipment in bids to show up at
these "meetings". even then, i am not sure anything would get fixed. i
think instead a lot of things would become "non-standard". things like
finger and rcp and rlogin and such would be moved to the "unsupported
networking tape". you probably can't force the big guys to play ball if
they don't want to, and you probably can't organize enough of the customer
base to make them want to.

sorry if i seem pessimistic, but i have been around for this before, and
only seen it work once. (you need to get *only* the engineers together. if
*anyone* else shows up, you should forget it.)

stev knowles
stev@ftp.com
617-246-0900
ftp software

little@SAIC.COM (Mike Little) (08/09/89)

John Polstra wrote and suggested an annual security rodeo for major vendors
with visitors and press to record the results.  Winners would likely get
to roast the losers in marketing ads.  I'd like to point out a problem with
this scheme: the systems brought to the competition are not necessarily
those I buy.  One would need to employ a stock-car racing analogy, where
some modifications are allowed - change default passwords, set the machine
up as a "standard" (and what would THAT mean?) host on a network, etc.  At
some point what is allowed goes beyond what you or I would do as an
administrator, at which point the purpose is forgotten in favor of the
competition and the trophies.  However, I agree the approach is time tested.
Competition is an age-old method of determination; perhaps the challenge
here is to determine the appropriate contest(s).

					-Mike

hughes@silver.bacs.indiana.edu (08/11/89)

jdp@polstra.UUCP writes:

> One of the problems that surfaces over and over in this forum is the
> fact that the major vendors don't bother to fix the known security
> problems in their products.  The reason they don't fix these problems
> is that they don't have much motivation to do so.  I would like to
> suggest a way to provide the missing motivation.
>
> Somebody (the DoD, a major university, or an interested member of the
> press) ought to organize an annual competition, in which each of the
> vendors would try to crack its competitors' systems.  A mini-network
> would be set up, and each vendor's tiger team would try to crack as
> many other systems in as many ways as possible during some fixed time
> interval.  The results would be published openly so that potential
> customers could take security issues into account when choosing
> vendors.
> ...

I see your point, but I have a few comments.

First, this assumes that the "tiger teams" would each stand on
equal footing, which is probably not the case.  If this route
were taken, a more effective approach might be to have
an impartial third party try to break each system.

Second, I know people who are excellent at breaking programs, yet 
are not very good at designing or implementing programs.  And an 
operating system is, after all, just a collection of programs.

Third, and to me more important, I think this type of
competition would do more harm than good.  We can motivate
others by rewarding or punishing them, and there is a place
for both.  But to rely more on punishing will certainly
take the heart out of the industry...which is already suffering
enough from fierce and greedy competition.

In other words...shouldn't our motivation as programmers
to produce a quality product be coming more from an
internal inspiration, rather than from a fear of what
others will say or think?

/=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\
|| Larry J. Hughes, Senior Programmer ||  hughes@silver.bacs.indiana.edu   ||
||        Indiana University          ||  hughes@iujade.bitnet             ||
||   University Computing Services    ||                                   ||
||    750 N. State Road 46 Bypass     ||  "The person who knows            ||
||      Bloomington, IN  47405        ||     everything has a lot          ||
||         (812) 855-9255             ||       to learn."                  ||
\=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=/

A01MES1@NIU.BITNET (Michael Stack) (08/12/89)

> One of the problems that surfaces over and over in this forum is the
> fact that the major vendors don't bother to fix the known security
> problems in their products.  The reason they don't fix these problems
> is that they don't have much motivation to do so. ...

The problem is much more difficult than that:

   KEEP THE NETWORKS OPEN.  Virtually all of the people involved in a
   network are basically well-meaning and careful.  The challenge is
   protecting them and the system from the tiny number who are
   malicious or foolish.  Making it impossible for the latter to carry
   out their nefarious activities might seriously inconvenience
   everyone else.  We must seek out ways of controlling aberrant
   activities without impeding communication.

                                          James H. Morris
                                          Professor of Computer Science
                                          Carnegie Mellon University
                                          from CACM, June 1989, p 661

Needless to say, the above is taken from a very long letter and must
be considered out of context.  Nonetheless, it is an expression
of the view that too much security can be a barrier to convenient use of
computer networks.  As it relates to the problem of fixing security holes,
I suspect that this view is a much greater obstacle than vendor motivation.


Michael Stack
Northern Illinois University

dave@celerity.uucp (Dave Smith) (08/16/89)

In article <24248@santra.UUCP> jkp@cs.HUT.FI (Jyrki Kuoppala) writes:
>The Berkeley ucb-fixes list already does a very good job at this -
>but apparently it isn't enough, as many vendors seem to neglect the
>security fixes which Berkeley puts out.  For example, how many have
>fixed the one with rshd and rlogind accepting connections from ports
>under 512 ?  It seems that someone has to make public the information
>how to use the bug before the vendors believe it.

One problem we have had in implementing the fixes is the reluctance of
BSD to explain what the problem is!  Our code has diverged from the
BSD stuff in parts, and receiving a ten-line context diff which doesn't
apply to our code along with a "fix this now!" is really not very helpful.
Especially since, in order to be sure that we've fixed the bug, we need to
know what the bug is so we can test for it afterwards.  In addition, since
our architecture is different from the VAX's, some bugs (like the fingerd
hole) don't happen on our machine or may happen in a different way.

The idea of a security list circulating amongst the vendors and then going
public after a few months is an excellent idea.  Pretending the problems
don't exist is silly.

(these views, of course, are mine and not the property of FPS Computing,
 who would probably disown me if they knew the kind of silly stuff I have
 been posting)
David L. Smith
FPS Computing, San Diego
ucsd!celerity!dave
"Repent, Harlequin!," said the TickTock Man

neil@zardoz.UUCP (Neil Gorsuch) (08/18/89)

In article <24248@santra.UUCP> jkp@cs.HUT.FI (Jyrki Kuoppala) writes:
>It's been proposed that security problems like those what the worm
>used, whenever found, should first be published on a restricted-access
>mailing list as soon as possible.  This mailing list should have all
>major Un*x vendors on it, so that they can rush bug-fixes to their
>clients as soon as possible.  Then, with for example a three-month or
>so delay, this mailing list would be relayed to a Usenet newsgroup.

The restricted access unix security mailing list already exists.  It
is primarily meant to be an advance warning system of newly found
security holes and problems.  I am appending the official blurb, which
explains it more fully.  Please mail requests to join (after reading
the rest of this message, of course 8<) to security-request@cpd.com,
rather than directly to me at neil@cpd.com.  Be patient if I don't
answer your request in a timely manner; my mailbox is overloaded at
best, and I receive hundreds of new messages each week from various
sources.  Of course, if I don't respond to you within 2 or 3 weeks, it
can't hurt to try again; mail has been known to disappear 8<).

Neil Gorsuch
neil@cpd.com
uunet!zardoz!neil
(AKA security-request)

----------------------------------------------------------------------------

UNIX SECURITY MAILING LIST

The unix security mailing list exists for these reasons:

1. To notify system administrators and other appropriate people of
   serious security dangers BEFORE they become common knowledge.
2. To provide security enhancement information.

Most unix security mailing list material has been explanations of, and
fixes for, specific security "holes".  I DO NOT believe in security
through obscurity, but I certainly don't spread "cracking" methods to
the world at large as soon as they become known.  The unix security
list is, in my opinion, an excellent compromise between the two ideas.
It is not intended for the discussion of theoretical security
techniques or "Should we thank Mr. Morris?" types of subjects; there
is no need for secrecy regarding such matters, and appropriate usenet
news groups already exist that serve those purposes.  It is, however,
appropriate to post security checkup programs and scripts, and
specific security enhancement methods to this list in addition to the
proper news groups.  I assume that since the members of the list made
a special effort to join, they might appreciate appropriate material
being sent via email so that they don't have to sort through many news
groups to "catch" everything.

zardoz is well connected, having 51 uucp links including uunet, and is
in the process of becoming part of the Internet.  Reliable delivery is
available to any bang path or internet address.  Each mailing list
destination can choose to receive either automatically "reflected"
postings of all received material, or moderated digests that are sent
out about once a week.  There is a separate posting address for
emergencies that reflects the received material to the entire mailing
list without any intervention on my part.

I typically require that destinations have an interest in unix site
security, or are involved in adding security enhancement software to
unix, but I am flexible.  To apply for membership, send email from one
of the following or send email requesting that I contact one of the
following (please arrange the former; it saves me time):

1.	For uucp sites with a uucp map entry, the listed email contact,
	map entry writer, or root.
2.	For internet sites, the NIC "WHOIS" listed site contact, or root.

Please include the following:

1.	The uucp map entry and map name to find it in, or the WHOIS
	response from the NIC and the request handle.
2.	The actual email destination you want material sent to.  It
	can be a person or alias, but must be on the same machine
	that you use as a reference, or in a sub-domain of said machine.
3.	Whether you want immediate reflected postings, or the weekly
	moderated digests.
4.	The email address and voice phone number of the administrative
	contact if different from the above.
5.	The organization name, address, and voice phone number if not
	listed already.

Please don't do any of the following:

1.	send email from root on machine_17.basement.podunk_U.edu and
	expect that to be sufficient for membership.  With
	workstations being so prevalent, and being so EASY to "crack",
	root doesn't mean much these days.
2.	send email from root on the uucp map entry listed site
	toy-of-son and expect that to be sufficient.  If you would prefer
	material sent to a home machine, verify your credentials through
	one of the previously mentioned methods.
3.	send mail from a network that I don't have any way to verify,
	such as bitnet or others.  I can verify uucp and internet sites.
	Send me some way to verify your credentials if you can't use
	an appropriate listed uucp or internet site.
4.	send me mail saying I can verify your identity and credentials
	by telephoning a long distance number.  I will continue to donate
	the extra computer capacity required for sending and archiving
	this list, and I will continue to spend the money on the extra
	uucp/internet communication costs that this list requires, but I
	draw the line at spending money on voice long distance phone calls.
5.	send me an application request that involves a lot of time and
	special procedures for verification.  Please try to make my
	processing of your application an easy matter.

All email regarding this list should be sent to:

security-request@cpd.com (INTERNET sites)
uunet!zardoz!security-request (UUCP sites)

neil@zardoz.UUCP (Neil Gorsuch) (08/18/89)

In article <433@celit.UUCP> dave@celerity.UUCP (Dave Smith) writes:
>The idea of a security list circulating amongst the vendors and then going
>public after a few months is an excellent idea.  Pretending the problems
>don't exist is silly.

Sheesh, I should probably hire an ad agency; nobody knows about the
unix security mailing list 8<).  Please see my previous posting for
more information, or write to security-request@cpd.com.

Neil Gorsuch
(AKA security-request)
neil@cpd.com
uunet!zardoz!neil