[comp.protocols.tcp-ip] Worm report fails to address the problem

gnu@hoptoad.uucp (John Gilmore) (07/27/89)

I found the OTA worm report to not be very helpful.

It recommends central control of Internet security.  Of course, this is
what a government would tend to recommend -- centralization of authority.
We have several centralized authorities for security and privacy now --
the NSA, NIST, and CERT.  NSA is habitually silent (protecting its own
security, not anyone else's), NIST doesn't seem to have the expertise,
and CERT seems to be following the NSA model (all information flows
inward).

It discusses several research projects, including the use of cryptography
for email, Kerberos, and formal proofs of programs.  What it fails to point
out is that none of this research would have had any effect on the worm.

It talks about existing laws and proposed bills that would criminalize the
release of a worm -- while conveniently ignoring that these bills would
criminalize things that all of us do daily.

I think that *responsibility* for security should still rest on the
individual hosts and networks.  However, there should be public *testing*
of security by any interested parties, in the spirit of fire drills.

Responsibility for security should remain decentralized because one
model is not appropriate for all sites.  A central bureaucracy will not
have experts in each type of machine on the Internet.  And central rules
will necessarily be compromises -- too loose for some sites, too strict
for others.

The key to making decentralized security work is public testing.  On the
third Tuesday of each month, say, it's open season on breaking into
other people's machines over the Internet -- IF you provide a
transcript of your actions afterward.  Organizations with particular
security concerns can fund people to test their own security, or can
swap with another organization and test each other's security.  DARPA
and NSF can fund a few people to do more widespread "scattergun"
testing.  And there will always be plenty of volunteers because people
like to solve puzzles.

The key is to make this a regularly scheduled, publicly sanctioned
event.  You could even award prizes, as in the computer Go tournaments
or the Obfuscated C contest -- highest volume of systems cracked, most
obscure hole found, hardest-to-fix problem, least visible intrusion,
etc.  These could even carry cash prizes -- if you break into a NASA
computer during such a fire drill, and document your breakin, we'll pay
you $1000.  This would fund the best crackers so they could afford to
continue providing high quality testing.  Entire classes of
undergraduates and grad students could do security testing projects,
using the Internet as their testbed.  This would educate lots of folks
about how to provide good security, and make the Internet the most
secure network, by constantly testing and fixing its security.

Of course, people who didn't *want* to fix their security could just
drop off the net one Tuesday a month.  But public disclosure of the holes
found in other systems would make their systems more vulnerable the
rest of the time, and they would have strong incentive to either clean
up their security, or drop off the Internet.  In either case the Internet
is left more secure.

The only way I can see to keep the Internet secure is "eternal vigilance".
A central security bureau will not be eternally vigilant -- it will become
bureaucratic and lazy.  And it will have no incentive to reveal what it
has learned about security, except to small numbers of people (e.g. the
people who maintain 'sendmail' at various vendors).  In fact, revealing
breakins will DECREASE its reputation -- "No one has ever escaped from
Stalag 13!"

There seems to be a meme loose today that wants to criminalize all sorts
of activities -- that views making something illegal as a "fix" for the
problems it presents.  But the problems persist regardless of what the laws say.
I think something closer to a "fix" would be to bring the activity out
of the underground and diffuse it through society, in a cooperative rather
than combative situation.

*If we keep treating security testing as something only criminals will do,
only criminals will do it!*
-- 
John Gilmore      {sun,pacbell,uunet,pyramid}!hoptoad!gnu      gnu@toad.com
      "And if there's danger don't you try to overlook it,
       Because you knew the job was dangerous when you took it"

tli@sargas.usc.edu (Tony Li) (07/27/89)

In article <8136@hoptoad.uucp> gnu@hoptoad.uucp (John Gilmore) writes:
    ...
    and CERT seems to be following the NSA model (all information flows
    inward).
    
I think that this is most unfair, especially in light of the message
which was sent to the sun-managers list today.

Tony Li - USC University Computing Services
Internet: tli@usc.edu	Uucp: usc!tli	Bitnet: tli@gamera, tli@ramoth
This is a test.  This is only a test.  In the event of a real life
you would have been given instructions.

scs@itivax.iti.org (Steve C. Simmons) (07/27/89)

gnu@hoptoad.uucp (John Gilmore) writes:

>I think that *responsibility* for security should still rest on the
>individual hosts and networks.  However, there should be public *testing*
>of security by any interested parties, in the spirit of fire drills.

>The key to making decentralized security work is public testing.  On the
>third Tuesday of each month, say, it's open season on breaking into
>other people's machines over the Internet -- IF you provide a
>transcript of your actions afterward. . . .

While I agree with John that testing is key, this is the wrong way to
go about it.  Several times I've made deals with other sysadmins to crack
each other's systems, but that is a far cry from 'open season'.  Testing
should be private and controlled.  What should be open is the immediate
dissemination of how to close any holes that are found.

Many shops (not naming any names here) have implicitly or explicitly
decided not to beef up security -- they may feel it isn't worth the effort
or have decided to trust the Internet community.  Whether you agree or
disagree with this is irrelevant.  Declaring 'open season' on them will
cause these shops a great deal of distress, will likely make them angry,
and may well stimulate the same repressive legislation and central
bureaucracy you oppose.
-- 
Steve Simmons		          scs@vax3.iti.org
Industrial Technology Institute     Ann Arbor, MI.
"Velveeta -- the Spam of Cheeses!" -- Uncle Bonsai

karl@cheops.cis.ohio-state.edu (Karl Kleinpaste) (07/27/89)

I would like to make a small echo of the sentiment to avoid making the
entire Internet into an open hunting ground for any form of attack.
My most visible systems are also the ones which work the hardest, and
they can't _all_ afford a day off each month.

On the other hand, I would be extremely interested if I could offer,
for example, one of my systems (or perhaps one of each type) once a
month as an open-season victim testbed.  If a list of such
available-for-abuse systems could be compiled, we could accomplish the
goals both of having a large pool of available victims and keeping the
bulk of the Internet still operating sanely even during such abusive
testing.

Would anyone care to register ABUSE.NET, containing nothing but CNAMEs
for available systems?  Anyone wanting to test could set their system
up as a secondary server, thus giving them a dump of the available
masochists.
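
To make that concrete, here is a rough sketch of what the zone's master
file might look like.  Every name and timer value below is a hypothetical
placeholder, not a real registration:

    ; abuse.net -- hypothetical registry of volunteer test targets.
    ; Testers set themselves up as secondaries for this zone and pull
    ; the current victim list with an ordinary zone transfer.
    abuse.net.   IN  SOA  ns.abuse.net. hostmaster.abuse.net. (
                          1        ; serial -- bump when the list changes
                          86400    ; refresh
                          3600     ; retry
                          604800   ; expire
                          86400 )  ; minimum TTL
    abuse.net.   IN  NS     ns.abuse.net.
    ; one CNAME per machine volunteered for abuse:
    victim1      IN  CNAME  some.volunteer-host.edu.
    victim2      IN  CNAME  another.masochist.edu.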

Just a thought,
--Karl

DSTEVENS@DSRM12.STEVENS-TECH.EDU (David L. Stevens) (07/28/89)

   
In Message <8136@hoptoad.uucp> John Gilmore
<pacbell!hoptoad!gnu@ames.arc.nasa.gov> writes:

> I found the OTA worm report to not be very helpful.
>      
> It recommends central control of Internet security.  Of course, this is
> what a government would tend to recommend -- centralization of authority.

    However, without some form of central authority you end up with anarchy,
    and you need someone with sufficient clout to punish people who violate
    security.

    [intervening text omitted]

> I think that *responsibility* for security should still rest on the
> individual hosts and networks.  However, there should be public *testing*
> of security by any interested parties, in the spirit of fire drills.
>      
> Responsibility for security should remain decentralized because one
> model is not appropriate for all sites.  A central bureaucracy will not
> have experts in each type of machine on the Internet.  And central rules
> will necessarily be compromises -- too loose for some sites, too strict
> for others.
>      
> The key to making decentralized security work is public testing.  On the
> third Tuesday of each month, say, it's open season on breaking into
> other people's machines over the Internet -- IF you provide a
> transcript of your actions afterward.  Organizations with particular

    That's a mighty big IF.  Declaring a day as open season to break into
    any system on the network is like declaring a day as open season to
    break into any bank in the country.  So what if you leave a transcript
    saying that you got in by breaking the window -- who's going to pay for
    the window afterwards?

> security concerns can fund people to test their own security, or can
> swap with another organization and test each other's security.  DARPA
> and NSF can fund a few people to do more widespread "scattergun"
> testing.  And there will always be plenty of volunteers because people
> like to solve puzzles.

    If you make arrangements with someone to specifically attempt to break
    into your system, that's fine; at least you'll know who's doing what.
    But to allow any random schmoe on the network to try to break in leaves
    no accountability.  How would you be able to tell a "helpful" cracker,
    if there is such a beast, from a "harmful" cracker?

> The key is to make this a regularly scheduled, publicly sanctioned
> event.  You could even award prizes, as in the computer Go tournaments
> or the Obfuscated C contest -- highest volume of systems cracked, most
> obscure hole found, hardest-to-fix problem, least visible intrusion,
> etc.  These could even carry cash prizes -- if you break into a NASA
> computer during such a fire drill, and document your breakin, we'll pay
> you $1000.  This would fund the best crackers so they could afford to
> continue providing high quality testing.  Entire classes of
> undergraduates and grad students could do security testing projects,
> using the Internet as their testbed.  This would educate lots of folks
> about how to provide good security, and make the Internet the most
> secure network, by constantly testing and fixing its security.

    Undergraduates and graduate students already try to break into systems,
    either on their own campuses or off across the networks.  The last thing
    college computing center staffs need to deal with is encouragement,
    especially monetary encouragement, for them to continue!

> Of course, people who didn't *want* to fix their security could just
> drop off the net one Tuesday a month.  But public disclosure of the holes
> found in other systems would make their systems more vulnerable the
> rest of the time, and they would have strong incentive to either clean
> up their security, or drop off the Internet.  In either case the Internet
> is left more secure.

    What about those of us who just don't like the idea of people trying to
    break into our systems?  You're taking away an important research
    resource so that people can burn bandwidth trying to break into places
    where they don't belong.

> The only way I can see to keep the Internet secure is "eternal vigilance".

    Eternal vigilance is what's needed in order to keep systems secure.  But
    we need laws so that we can punish people who violate our security and
    break into, or contaminate, our systems.

> A central security bureau will not be eternally vigilant -- it will become
> bureaucratic and lazy.  And it will have no incentive to reveal what it
> has learned about security, except to small numbers of people (e.g. the

    [ rest of text omitted ]

> --
> John Gilmore      {sun,pacbell,uunet,pyramid}!hoptoad!gnu      gnu@toad.com
>       "And if there's danger don't you try to overlook it,
>        Because you knew the job was dangerous when you took it"


===============================================================================
|                                  |                                          |
| David L. Stevens                 | CCnet:    SITVXC::DSTEVENS               |
| Senior Systems Programmer        | BITnet:   DSTEVENS@STEVENS               |
| Stevens Institute of Technology  | INTERnet: DSTEVENS@VAXC.STEVENS-TECH.EDU |
|                                  |                                          |
===============================================================================
[ ...self realization, I was thinking of those immortal words of Socrates     ]
[    when he said: 'I drank what ?'     -   Val Kilmer  -  Real Genius        ]

brent@capmkt.COM (Brent Chapman) (07/28/89)

tli@sargas.usc.edu (Tony Li) writes:

>In article <8136@hoptoad.uucp> gnu@hoptoad.uucp (John Gilmore) writes:
>    ...
>    and CERT seems to be following the NSA model (all information flows
>    inward).
>    
>I think that this is most unfair, especially in light of the message
>which was sent to the sun-managers list today.

Yes, they finally sent it out.  I was informed of it (through other
channels) and fixed it on my systems over a month ago.  I'm not particularly
well-connected in the security community; CERT must surely have learned of
the problem before I did.  Why did they take so long to get the word out?
Does CERT have a formal policy of sitting on a security problem for some
period of time before releasing it to the "general public"?  What _is_
CERT's charter and policy?

Before anyone starts flaming here, note that I'm not criticizing CERT (yet);
I don't know enough about them.  I'm asking for information about them,
so that I can form an informed opinion about them.


-Brent
--
Brent Chapman					Capital Market Technology, Inc.
Computer Operations Manager			1995 University Ave., Suite 390
brent@capmkt.com				Berkeley, CA  94704
{apple,lll-tis,uunet}!capmkt!brent		Phone:  415/540-6400

grr@cbmvax.UUCP (George Robbins) (08/02/89)

In article <89627152623.c83.DSTEVENS> DSTEVENS@DSRM12.STEVENS-TECH.EDU (David L. Stevens) writes:
> 
>    
> In Message <8136@hoptoad.uucp> John Gilmore
> <pacbell!hoptoad!gnu@ames.arc.nasa.gov> writes:
> 
> > I found the OTA worm report to not be very helpful.
> >      
> > It recommends central control of Internet security.  Of course, this is
> > what a government would tend to recommend -- centralization of authority.
> 
>     However, without some form of central authority you end up with anarchy,
>     and you need someone with sufficient clout to punish people who violate
>     security.

Say what?  It is not at all obvious that anybody needs to be punished for
violation of this precious "security", in the absence of any further malign
intent or actions.  "Security" need not be used as a catchall excuse that
lays all the fault on the perpetrator and excuses the victims for their
failure to understand their exposure and secure their systems to whatever
degree they deem appropriate.  I'd leave the "enforcement" of security up
to the organizations that have a compelling need for it, and prefer not to
have government supply a new set of rules and regulations over a network of
diverse organizations and interests.  One might still be regretting such
rules long after the battle of the worm has faded into legend...

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)