[mod.protocols.tcp-ip] My Broadcast

jkh@VIOLET.BERKELEY.EDU.UUCP (04/02/87)

By now, many of you have heard of (or seen) the broadcast message I sent to
the net two days ago. I have since received 743 messages and have
replied to every one (either with a form letter, or more personally
when questions were asked). The intention behind this effort was to
show that I hadn't done what I did maliciously, and that I wasn't interested
in hiding out afterwards and avoiding the repercussions. One of the
people who received my message was Dennis Perry, the Inspector General
of the ARPAnet (in the Pentagon), and he wasn't exactly pleased.
(I hear his Interleaf windows got scribbled on)

So now everyone is asking: "Who is this Jordan Hubbard, and why is he on my
screen??"

I will attempt to explain.

I head a small group here at Berkeley called the "Distributed Unix Group".
What that essentially means is that I come up with Unix distribution software
for workstations on campus. Part of this job entails seeing where some of
the novice administrators we're creating will hang themselves, and hopefully
preventing them from doing so. Yesterday, I finally got around to looking
at the "broadcast" group in /etc/netgroup, which was set to "(,,)". It
was obvious that this was set up for rwall to use, so I read the documentation
on "netgroup" and "rwall". A section of the netgroup man page said:

  ...

     Any of three fields can be empty, in which case it signifies
     a wild card.  Thus

                universal (,,)

     defines a group to which everyone belongs.  Field names that ...
  ...


Now "everyone" here is pretty ambiguous. Reading a bit further down, one
sees discussion on yellow-pages domains and might be led to believe that
"everyone" was everyone in your domain. I know that rwall uses point-to-point
RPC connections, so I didn't feel that this was what they meant, just that
it seemed to be the implication.
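
For those who have never looked at the file, a netgroup is just a name
followed by (host, user, domain) triples, and an empty field matches
anything. As shipped, the dangerous entry, together with a more restrictive
one (the second group and its host names are purely hypothetical), would
look something like this:

	broadcast	(,,)
	dug-hosts	(violet,,) (iris,,)

Point rwall at the first group (something like "rwall -n broadcast", if
memory serves) and it will happily go after every host it can find; point
it at the second and only the two machines named get the message.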

Reading the rwall man page turned up nothing about "broadcasts". It doesn't
even specify the communications method used. One might infer that rwall
did indeed use actual broadcast packets.

Failing to find anything that might suggest that rwall would do anything
nasty beyond the bounds of the current domain (or at least up to the IMP),
I tried it. I knew that rwall takes a while to do its stuff, so I left
it running and went back to my office. I assumed that anyone who got my
message would let me know. Boy, was I right about that!
After the first few mail messages arrived from Purdue and Utexas, I began
to understand what was really going on and killed the rwall. I mean, how
often do you expect to run something on your machine and have people
from Wisconsin start getting the results of it on their screens?

All of this has raised some interesting points and problems.

1. Rwall will walk through your entire hosts file and blare at anyone
   and everyone if you use the (,,) wildcard group. Whether this is a bug
   or a feature, I don't know.

2. Since rwall is an RPC service, and RPC doesn't seem to give a damn
   who you are as long as you're root (which is trivial to be on a
   workstation), I have to wonder what other RPC services are open holes.
   We've managed to do some interesting, unauthorized things with the YP
   service here at Berkeley; I wonder what the implications of this are.

3. Having a group called "broadcast" in your netgroup file (which is how
   it comes from Sun) is just begging for some novice admin (or operator
   with root) to use it in the mistaken belief that he/she is getting to
   all the users. I am really surprised (as are many others) that this has
   taken this long to happen.

4. Killing rwall is not going to solve the problem. Any fool can write an
   rwall, and just about any fool can get root privilege on a Sun workstation.
   It seems that the place to fix the problem is on the receiving ends. The
   only alternative would be to tighten up all the IMP gateways so that they
   forward packets only from "trusted" hosts. I don't like that at all,
   since it would reduce convenience and productivity. Also, since
   many places are adding hosts at a phenomenal rate (ourselves especially),
   it would be hard to keep such a database up to date. Many perfectly well-
   behaved people would suffer for the potential sins of a few.


I certainly don't intend to do this again, but I'm very curious as to
what will happen as a result. A lot of people got wall'd, and I would think
that they would be annoyed that their machine would let someone from the
opposite side of the continent do such a thing!

						Jordan Hubbard
						jkh@violet.berkeley.edu
						(ucbvax!jkh)

					Computer Facilities & Communications.
					U.C. Berkeley

LYNCH@A.ISI.EDU.UUCP (04/04/87)

Jordan,  I think you did a good thing, testing for idiotic holes in
the "system".  Now, if you could only figure out a way to get
them plugged.  I remember years ago being annoyed at the
loose security in the Tenex operating system that was prevalent
on the early Arpanet.  I couldn't get the wizards at BBN to "fix"
the problems by the "usual" means.  So, one day I took advantage of
the holes and, across the net, all by myself with no confederates,
obtained the password of the wizard of all wizards and sent it to him 
in a one word mail message.  No other communication was necessary.
He plugged the holes as fast as his fingers could type.  I was a "good
guy" and he knew it, but it took an actual event to drive the point home 
it wouldn't be too long before someone else would figure out the
method i used an dperhaps not be so benign.  
Can you think of a similar thing to do?  Or have you already done it?
(I think not because what you are pointing out is going to take
lots of thinking to solve.  But, it has to start somewhere.

Dan
-------

Rudy.Nedved@H.CS.CMU.EDU.UUCP (04/05/87)

Whoa!

Encouraging people to find holes and then use them to make the local system
programmers work on them is wrong. It is like encouraging people to find out
whether their neighbors lock their door during the day so that they will start
locking it. Do you really want that, or do you want the thieves to be caught?
I want the thieves to be caught and the ability to leave my door open. I don't
want to fear my neighborhood or my users.

I could spend many man-years working on Unix security alone. The same is
true of TOPS-20. The worst is when you fix a security problem and some
yo-yo finds out about it and then attacks some other remote site. The remote
site gets pissed, says it cannot afford to fix it, and asks you to please
deal with the yo-yo.

-Rudy

usenet@ucbvax.UUCP (04/06/87)

Dan -

     I'm afraid you (and I, and any of the other old-timers who
care about security) are banging our heads against a brick wall.
The philosophy behind Unix largely seems quite reminiscent of the
old ITS philosophy of "security through obscurity;" we must
entrust our systems and data to an open-ended set of youthful
hackers (the current term is "gurus") who have mastered the
arcane knowledge.

     The problem is further exacerbated by the multitude of slimy
vendors who sell Unix boxes without sources and without an
efficient means of dealing with security problems as they
develop.

     I don't see any relief, however.  There are a lot of
politics involved here.  Some individuals would rather muzzle
knowledge of Unix security problems and their fixes than see them
fixed.  I feel it is *criminal* to have this attitude on the DDN,
since our national security in wartime might ultimately depend
upon it.  If there is such a breach, those individuals will be
better off if the Russians win the war, because if not there will
be a Court of Inquiry to answer...

     It may be necessary to take matters into our own hands, as
you did once before.  I am seriously considering offering a cash
reward for the first discoverer of a Unix security bug, provided
that the bug is thoroughly documented (with both cause and fix).
There would be a sliding cash scale based on how devastating the
bug is and how many vendors' systems it affects.  My intention
would be to propagate the knowledge as widely as possible with
the express intention of getting these bugs FIXED everywhere.

     Knowledge is power, and it properly belongs in the hands of
system administrators and system programmers.  It should NOT be
the exclusive province of "gurus" who have a vested interest in
keeping such details secret.

-- Mark --

PS: Crispin's definition of a "somewhat secure operating system":
A "somewhat secure operating system" is one that, given an
intelligent system management that does not commit a blunder that
compromises security, would withstand an attack by one of its
architects for at least an hour.

Crispin's definition of a "moderately secure operating system": a
"moderately secure operating system" is one that would withstand
an attack by one of its architects for at least an hour even if
the management of the system are total idiots who make every
mistake in the book.
-------

PERRY@VAX.DARPA.MIL.UUCP (04/06/87)

Jordan, you are right in your assumption that people will get annoyed
that what happened was allowed to happen.

By the way, I am the program manager of the Arpanet in the Information
Science and Technology Office of DARPA, located in Rosslyn (Arlington), not
the Pentagon.

I would like suggestions as to what you, or anyone else, think should be
done to prevent such occurrences in the future.  There are many drastic
choices one could make.  Is there a reasonable one?  Perhaps someone
from Sun could volunteer what their action will be in light of this
revelation.  I certainly hope that the community can come up with a good
solution, because I know that if the problem gets solved from the top,
the solution will reflect the top's concerns.

Think about this situation and I think you will all agree that this is
a serious problem that could cripple the Arpanet and any other net that
lets things like this happen without control.

dennis
-------

jkh@VIOLET.BERKELEY.EDU.UUCP (04/06/87)

Dennis,

Sorry about the mixup on your location and position within DARPA. I got
the news of your call to Richard Olson second hand, and I guess details
got muddled along the way. I think the best solution to this problem (and
other problems of this nature) is to tighten up the receiving ends. Assuming
that the network is basically hostile seems safer than assuming that it's
benign when deciding which services to offer.

I don't know what Sun has in mind for Secure RPC, or whether they will move
the release date for 4.0 (which presumably incorporates these features)
closer, but I will be changing rwalld here at Berkeley to use a new YP
database containing a list of "trusted" hosts. If it's possible to change
RPC itself, without massive performance degradation, I may do that as well.
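
Building such a map is the easy part; roughly something like the following,
where the file name, map name, and the choice of keying the map on hostname
are all just illustrative (and untested):

	# /etc/trusted.hosts: one trusted hostname per line.
	# makedbm wants "key value" pairs on its standard input,
	# so use the hostname as both key and value.
	awk '{ print $1, $1 }' /etc/trusted.hosts | makedbm - trusted.hosts

The real work is making rwalld look the calling host up in that map, and
refuse to write on terminals when the lookup fails, without slowing
everything else down.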

My primary concern is that people understand where and why unix/network
security holes exist. I've gotten a few messages from people saying that
they would consider it a bug if rwall *didn't* perform in this manner, and
that hampering their ability to communicate with the rest of the network
would be against the spirit of all it stands for. There is, of course, the
opposite camp which feels that IMP's should only forward packets from hosts
registered with the NIC. I think that either point of view has its pros and
cons, but that it should be up to the users to make a choice. If they wish
to expose themselves to potential annoyance in exchange for being able to,
uh, communicate more freely, then so be it. If the opposite is true, then
they can take appropriate action. At least an informed choice will have been
made.

		Yours for a secure, but usable, network.

					Jordan Hubbard

PALLAS@SUSHI.STANFORD.EDU.UUCP (04/06/87)

I see that, as usual, Mark Crispin has tried to turn a constructive
discussion into a diatribe against Unix.  If there's a point to his
flame, however, it escapes me.  It does bear an entertaining
resemblance to some conspiracy theories I've heard, I must admit.

    Crispin's definition of a "somewhat secure operating system":
    A "somewhat secure operating system" is one that, given an
    intelligent system management that does not commit a blunder that
    compromises security, would withstand an attack by one of its
    architects for at least an hour.

"You stupid fool, who told you to turn the damn thing on?!?"

    Crispin's definition of a "moderately secure operating system": a
    "moderately secure operating system" is one that would withstand
    an attack by one of its architects for at least an hour even if
    the management of the system are total idiots who make every
    mistake in the book.

The first mistake in the book is to believe that the security of the
operating system implies the security of the data, or rather that the
system is an isolated entity which can be made "secure" independent of
its environment.

joe
-------

LYNCH@A.ISI.EDU.UUCP (04/06/87)

Well, it looks like we have (re)uncovered a can of mixed worms here.
The example I gave was definitely in the "security" area and you
should note that the method used to get it fixed involved exactly
one "outside site", the site of the author of the operating system.

The example of the broadcast that went "astray" is more accurately
described as an "integrity" issue.  With integrity one is concerned
that the "system/facility" stay alive and functional under both
normal use and many forms of abnormality.  What we are learning
with some of the facilities for message sending is that our
"internet" is very highly connected and even can be considered
to be too highly connected for some forms of (even innocent)
misbehavior.  How do we benefit from what we have learned thus far?

Dennis has suggested that one of the manufacturers fix some code
and/or defaults and/or procedures in its releases.  I'm sure other
manufacturers can do likewise should they also exhibit the
same misfeatures in their offerings.  But the big thing that we
need to understand is that we do not understand how to live
in these highly connected internets yet.  Much more research needs
to happen in the area of intergroup interactions.  And much more
tolerance needs to be exhibited towards those who are probing
the edges of all this.

Dan
-------

robert@SPAM.ISTC.SRI.COM.UUCP (04/06/87)

>> 						..... we must
>> entrust our systems and data to an open-ended set of youthful
>> hackers (the current term is "gurus") who have mastered the
>> arcane knowledge.

	Only because these 'youthful hackers' are the only ones
	willing (or able to find the time) to look for the problems
	they discover.
>> 
>>    ....
>> 
>>   Knowledge is power, and it properly belongs in the hands of
>> system administrators and system programmers.  It should NOT be
>> the exclusive province of "gurus" who have a vested interest in
>> keeping such details secret.

	Mark,

	I agree that system administrators should have the know-how
	to protect their systems.  However, I have not seen a
	concerted effort by gurus to keep security problems
	secret from the administrators.  Rather, I have seen administrators
	keeping such holes secret from the users, and then complaining
	when the users discover and use them.

>> 
>> -- Mark --
>> 
>> PS: Crispin's definition of a "somewhat secure operating system":
>> A "somewhat secure operating system" is one that, given an
>> intelligent system management that does not commit a blunder that
>> compromises security, would withstand an attack by one of its
>> architects for at least an hour.

	...except for the case where one has physical access to
	the hardware.

Robert Allen,
robert@spam.istc.sri.com

Disclaimer: I am not a guru, and I don't advocate break-ins, but if a
	    feature is there (such as telnet to port 25), and is used,
	    I think that the administrators should share responsibility
	    with the user for any problems that result.

Rudy.Nedved@H.CS.CMU.EDU.UUCP (04/06/87)

Dan,

My two areas of frustration are abuse of mail resources and abuse of
network resources. Each year, people send mail to as many mailing lists
as they can, asking to be put on them instead of on the request lists. Several
times a year, someone configures a mailer so as to cause a huge loop that sends
many megabytes of mail messages to many people on many different
systems. At least once a year, I see a piece of code written under the
assumption that the network is a quiet, high-speed, high-reliability
medium -- the code retransmits quickly and has very short timeouts.

Lastly, we have several systems that take a list of hosts and broadcast
messages to them to update databases. This is similar in flavor to
Grapevine. It is not unlikely that some company could set up such a
system and want a broadcast facility similar to the one that started
this discussion. At that point, it is no longer a security problem
but a feature.

If I had concrete improvements that I could implement, I would act on them.

Maybe the system will change to charge for mail and to charge for network
access and usage. People would then be more responsive to their utilization
of those resources.

-Rudy

faustus@IC.BERKELEY.EDU.UUCP (04/06/87)

What type of security are we really talking about, anyway?  Military security?
If so, maybe it's better that there are well-known loopholes, so that nobody
places too much faith in their system and everyone makes use of techniques like
public-key encryption when it really matters.  No matter how secure your
network and OS are, if you assume that it's OK to rely on their security alone
for very sensitive data, you'll get burned sooner or later.  It's much
safer to assume the worst and take proper precautions.

	Wayne

robert@SPAM.ISTC.SRI.COM.UUCP (04/06/87)

>> Whoa!
>> 
>> Encouraging people to find holes and then use them to make the local system
>> programmers work on them is wrong. It is like encouraging people to find out
>> whether their neighbors lock their door during the day so that they will start
>> locking it. Do you really want that, or do you want the thieves to be caught?
>> I want the thieves to be caught and the ability to leave my door open. I don't
>> want to fear my neighborhood or my users.

While this doesn't deal directly with TCP-IP, it is a *very* important
consideration in the Internet in particular, and any network in general.

Often a so-called 'break-in' does not even require that a user maliciously
"try their neighbors' doors" to see if they can gain restricted permissions
or access.  Often curiosity alone is enough to cause problems. Example 1:
a first-time UNIX user was learning about the file system, and in particular
how to delete files.  He was told that he could only delete files owned by
him, and by way of counterexample his mentor typed "rm /etc/passwd".
Surprise, /etc was writable and the file was gone.  Example 2: the recent
rlogin break-ins at Stanford.  Example 3: Obviously, if you have hardware
access to the transmission medium, you can unintentionally wreak havoc merely
by using someone else's IP address.

I too would like to live in a world where I can leave my "door unlocked".
Unfortunately, it doesn't take more than a very few nasty or ignorant persons
to cause problems.  Because computers have evolved in an
atmosphere of sharing (time sharing, memory sharing, source sharing...),
we have yet to realize the responsibilities and risks of trusting them too
much.  I.e., there is a big difference between leaving your door
unlocked but closed, and spreading $20.00 bills on your front lawn.
In the case of J. Hubbard's 'wall' to the Net, the problem was not
caused by a malicious person, but by simple curiosity.

At the recent TCP/IP Conference in Monterey, CA, some discussion was
given to "network security".  From the military standpoint, they want
the ability to send data through a network, such that anyone who
captures the data won't be able to read or use it.  While this may
be a prerequisite for the military, I don't think that 'normal' users
should expect that their Email be any more secure than their USMail.
The best method of keeping something secure on a network is to physically
separate it.  Or, do what I do, and don't put anything on the system
that you wouldn't want read by someone else under the worst-case scenario.

Fixing security 'features' is obviously important, and should be pursued.
Catching malicious persons doing damage is also extremely important.  But
"catching the theives" is not the answer to a lack of network security.
If your network rolls out a red-carpet to someone then don't be surprised
if you find muddy footprints on it the next morning.  I leave you with
two examples quoted from the January 1987 issue of the ACM Software
Engineering Notes...

	"The computer security administrator at Roche ... had been
	 plagued by a hacker who auto-dialed the entire Roche phone
	 system in sequence.  .... They laid a hacker trap on one of
	 the PC's and traced the call.  Once the suspect was found,
	 it was even harder to get him arrested since he was in
	 New York, and Roche in New Jersey (which got the FBI involved).
	 The perp was brought into the police station and had the riot
	 act read to him...  He was not charged -- because there wasn't
	 a **no-trespassing** sign on the hacker trap identifying the
	 system as private property of Roche."

	 " "Welcome to the ______ System" ... A Mass. financial firm
	  that had attempted to prosecute a hacker who had penetrated
	  their system.  The defense lawyer argued that the system had
	  a greeting that welcomed people to the system, and that was 
	  tantamount to welcoming someone into your home.  The judge
	  threw out the case, accepting the arguments of the defense.."

Robert Allen,
robert@spam.istc.sri.com

bzs@BU-CS.BU.EDU.UUCP (04/06/87)

Mark Crispin --

I think your attack on UNIX is utterly unwarranted and devoid of
content. How you can compare it to ITS, where everyone was effectively
a wheel, is quite beyond me.

UNIX exhibits no worse characteristics than other commonly used
systems. Even your beloved TOPS-20 had this charming feature of
unencrypted passwords, so anyone gaining access to a privileged
terminal for a few seconds could print every password on the system in
clear text with one command. Sure, that's fixed, but the fix came
recently, after DEC had dumped the product. We had to live with this
for years (and show me the local hack patches that "fixed" this and
I'll show you the local hack patches that fix any UNIX security flaw
you see.)

For the love of God, Mark: Jordan broadcast a message to a lot of terminals.

That's it.

BFD, sure it could be annoying, but the originating site (and user,
although I admit that could be faked easily) was clearly printed and
easily identified (see etherfind, for example). To say your "systems
and data" were endangered by this broadcast is hyperbole, at best.

Can you condemn the entire UNIX operating system because a user was
able to SHOUT to a bunch of hosts he didn't own? Sounds flimsy to
me.

As to "muzzling" of unix security problems, there's an entire, active
mailing list on the internet devoted to nothing but discussing UNIX
security issues. What other operating system can claim this? (Ok,
these things are also freely discussed on some of the TOPS-20 lists,
no argument, but name another? I've seen this stuff specifically
stifled and people severely flamed on at least one other O/S's list.)

	-Barry Shein, Boston University

P.S. One thing I do agree with Mark about is that without the sources
you might be a sitting duck. This is one major reason I discourage
people from buying VMS.

bzs@BU-CS.BU.EDU.UUCP (04/06/87)

From Dennis Perry:
>I would like suggestions as to what you, or anyone else, think should be
>done to prevent such occurrences in the future.

Solution:

Edit /etc/servers and remove the rwalld line. This will disable the
remote service. The local "write to all users" program, 'wall', can
still be used on any individual system. To shout to all systems in an
area, either have the operators log in and run wall locally, or execute
it via 'rsh system wall' (feeding the message on standard input) from a
locally trusted site (as per the rsh restrictions). A command file which
simulates "rwall" for a selected set of sites could be created trivially:

#!/bin/sh
# Prompt for a one-line message and wall it to each host in the list.
echo "Enter message to be sent to all systems (one line)"
echo -n 'MESSAGE: '
read msg
for i in host1 host2 etc...
do
	# wall reads the message from its standard input
	echo "$msg" | rsh $i wall
done

(I didn't test this, but I think the point is clear.)

This can be further enhanced by removing the rwall binary from your
systems, but if you don't support the daemon, you're not going to see
any broadcasts, so it's under your control. Done.

	-Barry Shein, Boston University

PERRY@VAX.DARPA.MIL.UUCP (04/06/87)

Jordan, thanks for the note.  I agree that we should discover and FIX holes
found in the system.  But at the same time, we don't want to have to
shut the thing down until such a fix can be made.  Misuse of the system
gets us all in a lot of trouble.  The Arpanet has succeeded because of
its self-policing community.  If this type of potential for disruption
gets used by very many people, I guarantee that none of us will like the
solution or fix proposed.

dennis
-------

PERRY@VAX.DARPA.MIL.UUCP (04/06/87)

Barry, thanks for your suggestion.  It seems to me like a good solution
to start with, i.e., one where we trust the users to implement the fix.
Ultimately, we have to build a network that will protect itself somehow.

Can I ask all of you out there to implement such a scheme, or any better one
that may come out of this discussion?

Thanks,
dennis
-------

PERRY@VAX.DARPA.MIL.UUCP (04/07/87)

Dan, you are right on one point, anyway.  That is, we need to explore the
issues of intergroup interaction.  But, as in any society where our
collective well-being depends upon the behavior of others outside
our control, we need to be sensitive about how we explore the edges of these
interactions.  I posit that this should take place in a controlled
experiment that maximizes the benefits and minimizes the disruptions.

After all, the Arpanet is still an 'experimental' network.  Let's plan
our experiments, rather than just letting them happen.

dennis
-------

galvin@dewey.UUCP (04/07/87)

	From:     Robert Allen <robert@spam.istc.sri.com>

	I don't think that 'normal' users should expect that their
	Email be any more secure than their USMail.

I don't buy this.  Why should we restrict or constrain current technology
based on what we are used to?  There is no reason that electronic mail can't
be more secure than USMail.  Isn't it self-defeating to assume otherwise?

	From:     Rudy.Nedved@h.cs.cmu.edu

	Encouraging people to find holes and then use them to make the
	local system programmers work on them is wrong. It is like
	encouraging people to find out whether their neighbors lock their
	door during the day so that they will start locking it. Do you
	really want that, or do you want the thieves to be caught? I want
	the thieves to be caught and the ability to leave my door open.
	I don't want to fear my neighborhood or my users.

This analogy doesn't hold in the internet (small i intended).  It is not your
neighbors you are worried about.  You can live in a "friendly" network just
like you can live in a "friendly" neighborhood.  The problem is, your friendly
network is a great deal "closer" to the unfriendly ones than your friendly
neighborhood is to unfriendly ones.

Isn't this what Dan Lynch meant when he said:

	From: Dan Lynch <LYNCH@a.isi.edu>

	What we are learning with some of the facilities for message
	sending is that our "internet" is very highly connected and
	even can be considered to be too highly connected for some
	forms of (even innocent) misbehavior.  How do we benefit from
	what we have learned thus far?

	... But the big thing that we need to understand is that we do
	not understand how to live in these highly connected internets
	yet.  Much more research needs to happen in the area of
	intergroup interactions.  And much more tolerance needs to be
	exhibited towards those who are probing the edges of all this.

Jim

don@SRI-LEWIS.ARPA.UUCP (04/07/87)

Just an interested bystander here,

Perhaps the fundamental difference between UNIX and most other operating
systems is that the basic user has to get so much more intimate with the
command set. In addition, a goodly number of these users are writing software
that requires them to, at some point in time, have a hook or two into
the workings of the operating system.

Also, one must remember that the ability to cross-mount entire file
systems is a FEATURE, not a quirk.

Believe me, when the feds come crashing through your bedroom window
and seize your equipment, you will know you shouldn't have been doing
what you were doing.

TIA, (That Is All)
Don

NJG@CORNELLA.BITNET.UUCP (04/07/87)

>As to "muzzling" of unix security problems, there's an entire, active
>mailing list on the internet devoted to nothing but discussing UNIX
>security issues. What other operating system can claim this? (Ok,
>these things are also freely discussed on some of the TOPS-20 lists,
>no argument, but name another? I've seen this stuff specifically
>stifled and people severely flamed on at least one other O/S's list.)
I couldn't resist:
The VM Group of the IBM users' group SHARE has a conferencing system
(VMSHARE) which has an ongoing discussion of security problems.

ahill@CC7.BBN.COM.UUCP (04/07/87)

	I would like to hear why Mark Crispin's second definition isn't
a more practical approach to the security issue.  If Unix is still vulnerable
after a decade of availability, should we ever expect it to be safe?  Also, why
should we pick on Unix (except that it's a good subject for evoking flames)?  My
years of experience in dealing with security indicate that data should
be encrypted whenever practical.  Relying on software or system administrators
is folly.

Alan

MRC%PANDA@SUMEX-AIM.STANFORD.EDU.UUCP (04/08/87)

Wayne -

     In a sense your message is very reminiscent of the attitude of the
architects of MIT ITS.  It is a useful attitude in certain environments;
it has been argued that the security/integrity consciousness of TOPS-20
and Multics hampered tools development (or limited it to system wizards)
compared to systems such as ITS, WAITS, and Unix.  But this does not
mean that it is right for all environments.  Even in an environment in
which rwalld is useful, it's important to have safeguards in place to
limit its range.  In the present state of affairs, such safeguards are
either absent, not enabled, or inadequately documented.

     Just as an example, why did Dennis Perry's system at DARPA accept an
rwall from a machine somewhere at Berkeley?  Maybe Berkeley is doing such
time-critical research that breakthroughs must be announced by such
"network shouts", but I think it's much more likely that nobody at DARPA
even knew that such a facility existed or was running on their machine.

     Think of what would happen if our IP gateways supported an IP address
of FF.FF.FF.FF (the famous and as-yet mythical "Godzilla-gram").
Fortunately, no gateway does.  The same sort of sanity check needs to
be extended to higher-level protocols.

-- Mark --

PS: I could envision a security bug caused by the ability to broadcast
arbitrary characters to terminals on other systems.  Are all the rwalld
implementations clever enough to filter out control characters?  Also,
those of us who are old enough to know what "cookie bear" was know that
broadcasting messages CAN effectively stop all work...
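
A receiving implementation could at least pass the incoming text through
something like this before writing it on anyone's terminal (untested; the
octal ranges keep tab, newline, and printable ASCII only):

	tr -cd '\011\012\040-\176'

That wouldn't make rwalld trustworthy, but it would take the escape-sequence
games out of the picture.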
-------

MRC%PANDA@SUMEX-AIM.STANFORD.EDU.UUCP (04/08/87)

Alan -

     I consider it a given that Unix will never be "safe" (although it can
be made a lot "safer").  The whole point of my message (or flames, if you
will) was to knock a hole into some of the complacency surrounding "standard"
software.  The incident that started this entire discussion demonstrated an
instance in which a particular operating system lacked a sanity check.

     It is my contention that certain (but not all) versions of this operating
system (Unix) have an endemic lack of such sanity checks.  Since this operating
system is the primary operating system on the DDN, it is crucial that these
oversights be corrected. Otherwise, as long as there are other operating systems
or crackers on the network, there will be similar incidents.

     If I didn't care, I'd keep my mouth shut.

-- Mark --
-------

preece%mycroft@GSWD-VMS.ARPA.UUCP (04/08/87)

  From: "Alan R. Hill" <ahill@cc7.bbn.com>
> My years of experience in dealing with security indicates that data
> should be encrypted whenever practical.  Relying on software or system
> adminstrators is folly.
----------
Unless you're doing all your work on a trusted workstation, to which
only you have access, encryption isn't enough.  The processor on which
the work is done has to encrypt and decrypt, meaning that the data
has to be floating around in plaintext form some of the time.  If your
software is ignorably secure, then so is your transitory plaintext.

-- 
scott preece
gould/csd - urbana
uucp:	ihnp4!uiucdcs!ccvaxa!preece
arpa:	preece@gswd-vms

ahill@CC7.BBN.COM.UUCP (04/08/87)

Mark,
	History indicates that "whistle blowing" is not generally appreciated,
regardless of its well-meaning intent.  Unless DoD specifies requirements
for Unix use on the internet, I doubt that anything will change.

	Although there are lots of security problems with Unix and its
network code, I thought I would relate my experience with this type of
problem.  Many years ago I had responsibility for a Unix system that
was used by competing contractors and a government agency.  My job was
to prevent importation of non-approved code that could compromise the
integrity of the system.  I also had to keep the various groups from
digging into each other's files.  I modified the access code for the
network and the file system.  It took me roughly two hours of work, and
I was able to restrict access by user and source location.  I also logged
all access attempts, good or bad.  My point is that the effort to
dramatically improve control is not costly.

	I suggest that this discussion is no longer useful to the
TCP-IP mailing list and can be continued off-line.  I generally
approve of comments that will evoke an emotional response since they
will generate much more data than those that are more benign.

Alan

bzs@BU-CS.BU.EDU.UUCP (04/08/87)

>...If Unix is still vulnerable
>after a decade of availability, should we ever expect it to be safe?

Look, folks, we're really flying off the handle here on innuendo and
virtually no facts.

Let me try to rehash what I believe happened that started all this:

Sun provided a utility, rwalld, a daemon that listens for
certain informational message broadcasts on a network (such as a
scheduled system shutdown) and displays them on terminals.

This was an extension of the single-system 'wall' (Write ALL) program
that most UNIX systems come with. Wall, as it is normally provided,
is not the issue. Most O/S's come with some utility to broadcast a
message within a local system (i.e., no network involved.) Sun simply
extended this service to a network facility (leaving the older method
intact; thus it was an optional extension.)

The security breach was that someone discovered that many systems were
configured to accept these broadcasts fairly indiscriminately.

This, in turn, was due not to a lack of security inherent in the
system, but to the fact that the system allows this as it comes off
the tape. All O/S's come off of distribution tapes inherently
insecure (e.g., the super-user password is typically generic or null.)

There is a facility (netgroups) which is supposed to be set up by the
concerned system administrator with those machine groups that are to be
allowed to issue such broadcasts (well, that's a little backwards: you
list the systems you will accept such messages from.)

So, in complete contradiction to Mark Crispin's analysis, the problem
behind this unclosed security hole was not the "random gurus" but,
rather, the sysadmins who never even opened the manuals to find out
what daemons they were running and how to administer security (there
aren't that many; look at your start-up files and your services/servers
files.)

Or perhaps, as was the case with my system (which received the
broadcast), they didn't particularly care if someone sent such a
message, and viewed it in the same light as someone sending mail to
someone who didn't want it: a nuisance that can be dealt with easily
if a problem arises (I am still not convinced there is much of a
problem with this particular event.) We never really took a vote on
how many people even agree that a security breach worthy of concern
has occurred on their systems; even that fundamental observation
remains an opinion.

Thus, as is almost always the case with system security, it was not
the fault of the system providers but entirely the fault of those
charged with the responsibility of maintaining the system. If they
were concerned about this (later), then they have only themselves to
blame. They simply left the barn door wide open, ignoring the door,
the lock, and the key.

To add insult to the system administrators' injury, not only was it
within their normal administrative power to limit such an event with a
simple file edit, they never even had to run the utility at all.  Finding
out about other systems' status changes on your network is purely icing
on the cake, and the service can be removed by adding one comment
character to one line in the system's start-up file, with no real ill
effect on the operation of the system (remember, only Sun's UNIX even
has this particular utility.)
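
Finding the line in question takes about a minute; something like the
following will do, though the exact file names vary from release to
release:

	grep rwall /etc/servers /etc/rc*

Comment the line out (or delete it), and rwalld simply never gets started.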

I really wish that people who send notes discussing these issues would
ask themselves if their notes actually contain any factual information.

If nothing else, this whole interchange has pointed out how
ill-informed many folks are about security management on various
systems and how they have turned to folk tales and philosophizing
to fill that void.

This, in my opinion, is a far worse problem. I do not deny that there
are security and integrity problems on an internet. I only claim that
little of the discussion I have seen has moved us any closer to
measuring and rectifying the problems instead of just finding someone
to blame (we has met the enemy and they is us.)

	-Barry Shein, Boston University