[net.unix-wizards] A possible security bug fix

mrd@wjh12.UUCP (Douglas) (07/22/83)

Not too long ago there was a news item pointing out the potential 
for working mischief by running programs with file descriptors 0, 
1 or 2 closed - for example, the program could open a file which 
would end up having file descriptor 2, and then write a message 
to standard error, modifying the file.
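
The failure mode can be sketched in a few lines (modern Python used for illustration; the file name and messages are invented, and the original discussion is about C programs, but the lowest-free-descriptor rule is the same):

```python
import os
import tempfile

# A scratch file standing in for the program's data file (name is arbitrary).
path = os.path.join(tempfile.mkdtemp(), "data")

pid = os.fork()
if pid == 0:
    # Child: simulate a program that was exec'd with fd 2 already closed.
    os.close(2)
    # open() returns the lowest free descriptor -- with 0 and 1 still
    # open, that is descriptor 2, the conventional standard error.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(fd, b"intended output\n")
    # A later diagnostic aimed at "standard error" lands in the data file.
    os.write(2, b"error: something broke\n")
    os._exit(0)

os.waitpid(pid, 0)
with open(path, "rb") as f:
    print(f.read().decode(), end="")
```

The data file ends up containing both the intended output and the stray diagnostic, which is exactly the mischief described above.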

Would there be any problems with requiring file descriptors 0, 1 and 2
to be open during an exec?  (making an exception for pid 1, so the
initial exec of /etc/init works).

gwyn@brl-vld@sri-unix.UUCP (07/24/83)

From:      Doug Gwyn (VLD/VMB) <gwyn@brl-vld>

Yes, there is a problem with requiring FDs 0..2 to be open on exec;
that would keep people from shutting up stderr by
	$ foo 2>&-

Better to write the programs in a paranoid style (check everything that
might go wrong).  Too many UNIX programs fail disastrously when they
could recover from unexpected situations, or at least terminate gracefully.
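
One move in the paranoid style being recommended: treat every write as something that can fail, and check it. A sketch in Python (`careful_write` is my own name, not from the thread):

```python
import os

# Stand-in for a descriptor that has gone away out from under the program.
fd = os.open("/dev/null", os.O_WRONLY)
os.close(fd)

def careful_write(fd, data):
    # Paranoid style: report the failure instead of losing output silently
    # (or, worse, scribbling on whatever file later reused the descriptor).
    try:
        os.write(fd, data)
        return True
    except OSError:
        return False

print(careful_write(fd, b"diagnostic\n"))  # a closed fd yields False
```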

bob%ucla-locus@sri-unix.UUCP (07/28/83)

From:            Bob English <bob@ucla-locus>

I don't think the '$ foo 2>&-' complaint is a valid one.  It
could be easily addressed either by '$ foo 2>& /dev/null' or
by having the shell itself do the /dev/null connection when the
user attempts to disconnect one of the standard outputs.

--bob--

davis@hplabs.UUCP (Jim Davis) (07/30/83)

		With reference to Bob English's comment:

		"I don't think the '$ foo 2>&-' complaint is a 
		valid one.  It could be easily addressed either
		by '$ foo 2>& /dev/null' or by having the
		shell itself do the /dev/null connection when
		the user attempts to disconnect one of the
		standard outputs."

	First, the '$ foo 2>& /dev/null' solution has nothing to
do with the security aspects.  Simply because the user has the option
not to attempt to break security does not cause a system to BE secure.
Second, the solution of having the shell disallow leaving a standard
stream unconnected does solve a small part of the problem.  However,
it has two disadvantages.  One may actually wish to have a standard
stream disconnected.  (I don't know why, but let's think before we
restrict functionality.)  A more serious flaw is that the shell does
not spawn all programs.  A user wishing to break security will spawn
programs herself.

	Either programs should be prepared to handle standard
streams being unconnected (the point of the original submission),
or the operating system must force all programs to have valid
standard streams.  I prefer the first approach; others may
prefer otherwise.  Comments, anyone?
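
One way to take the first approach is a defensive idiom at program startup (a Python sketch of my own, not from the thread): since open returns the lowest free descriptor, opening /dev/null in a loop plugs any holes among 0, 1 and 2 before the program opens anything it cares about.

```python
import os

def ensure_std_fds():
    # open() returns the lowest free descriptor, so each call fills the
    # lowest hole among 0, 1 and 2.  The first descriptor above 2 proves
    # there are no holes left; it is surplus, so close it and stop.
    while True:
        fd = os.open("/dev/null", os.O_RDWR)
        if fd > 2:
            os.close(fd)
            return

ensure_std_fds()
```

After this runs, any later open is guaranteed a descriptor above 2, so diagnostics can no longer land in a data file.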

					Jim Davis (James W Davis)
					...!ucbvax!hplabs!davis
					davis.HP-Labs@UDel-Relay
----------------------------------------------------------------

bob@ucla-locus@sri-unix.UUCP (08/01/83)

From:            Bob English <bob@ucla-locus>

My comment was in reply to a criticism of a suggestion that
exec require the presence of an open stdin, stdout, and stderr.
The reason this was suggested was to prevent opens after the
exec from falling accidentally into fd's 0, 1 or 2, resulting
in interference between program output and diagnostic output.

The criticism was indeed that a user might want to close 0, 1, or
2.  I replied that redirecting the stream to /dev/null might be
wiser.  Perhaps exec should open /dev/null itself if 0, 1, or 2
are closed (but that sounds like a real bad idea).

I'm more worried about what the shell does, anyway.  If some
hacker wants to close a conventional file descriptor and take
his chances on subsequent exec's, I see no reason to stop him. A
naive user, however, should be protected from such foolishness.

I should point out that I have yet to hear any reasons (good or
bad) for allowing 0, 1, or 2 to be closed.  The only one I can
think of involves the number of keystrokes typed at the command
level in currently existing shells...

--bob--

clark.wbst@PARC-MAXC.ARPA@sri-unix.UUCP (08/02/83)

The main thing that makes UNIX "good" is its elegance and simplicity.
These qualities have two advantages:

	1) It is easier to learn and remember how to use UNIX, and to
	   find bugs/interpret errors.

	   If you think about it, the documentation for UNIX is not very
	   extensive; yet from the time I started using it I never remember
	   feeling the unbelievable frustration I feel almost every day as
	   I try to get RSX, VMS, or good ol' VersaDOS (Motorola) to do
	   what I want to do in spite of itself.  I have spent MONTHS
	   trying to do things that would have taken MINUTES on UNIX.

	   UNIX's indication of errors is not the best either.  System calls
	   have a tendency to return an informative -1, which is then
	   printed as some descriptive 3-word message as the utility
	   exits.  To be fair(er), you can look at errno, but I never have.
	   The interesting point is that a stupid little -1 usually tells me
	   exactly what I need to know.  Such is not the case with the
	   many-digit (or letter-equivalent) error number that comes out
	   of most operating systems, pointing you to a very specific
	   and extensive paragraph in an error code manual.  On such
	   systems I am usually almost as confused as before I looked,
	   or at best am not sure exactly what to do.

	   Having the source is nice too, because you can look and see
	   exactly what the system does.  It beats reading an ambiguous
	   and voluminous pile of boring manuals.

	2) It does not restrict the user/programmer.  The system should
	   not (in my opinion) try to anticipate all applications and
	   provide flexible facilities to do anything anyone might want
	   to do.  First because it gets ridiculously complicated/BIG
	   and it takes a 5-year veteran to know all the ins and outs to be
	   able to use it efficiently and easily (i.e. to avoid spending
	   90% of your time plowing through manuals), but more
	   important, YOU CANNOT ANTICIPATE EVERYTHING
	   ANYBODY MIGHT WANT TO DO!!!!!!!!  If you try, your
	   result will get in the way of the poor sap (me) who just
	   wants to do something real simple, like open a file and read
	   it in 5 lines or less, or read the bits off a simple mag tape and
	   put them in a simple file.
	

This is changing as more and more simple little patches are made to 'fix'
'problems' that crop up.  Like 0, 1, or 2 being closed.  Or funny bits in
the file name.  Sure, why not restrict it to printable ASCII?  Seems reasonable.
But then, why not add an informative extension, so that people know what
a file contains without looking at it?  3 characters seems like enough.

Just a few more little details to remember, to get around... 

UNIX has this wonderfully elegant concept of 'standard I/O' if you
will... The idea that open gives you the lowest free file descriptor, and
dup() dups them, and open opens them and close closes them... and fork
forks them too... and exec propagates them... think about it - those simple
ideas make the redirection of I/O and pipes possible, in one fell
swoop.  A very trivial, yet powerful idea.  And yet these pipes and > <
I/O redirection from the shell are what most non-UNIX types think of
as UNIX!  They are just a creative application of the existing system
calls by the shell.  Do we really want to throw a monkey wrench in
there by introducing an inconsistency into the basic design?
You would break most of my programs if it were changed so you could
not close 0, 1, or 2.  I usually close them, then open the file, so I can
use printf.
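
The close-then-open idiom described above can be sketched as follows (Python standing in for C; the file name is arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out")

pid = os.fork()
if pid == 0:
    # Child: close descriptor 1, then open the output file.  open()
    # returns the lowest free descriptor, so the file lands on fd 1,
    # and anything written to "standard output" (printf, in C) goes
    # straight into the file -- no special calls needed.
    os.close(1)
    os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(1, b"redirected without any special calls\n")
    os._exit(0)

os.waitpid(pid, 0)
```

This is precisely how a shell implements `foo > file`: close (or dup2 over) the descriptor, open the target, then exec the program, which inherits the arrangement without knowing about it.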

The other half of what makes UNIX great is the simple file system with 
no funny restrictions, and NO FILE TYPES, just byte stream files.  I 
fully anticipate that someone will shortly suggest that we really need 
to add fixed-length records and variable-length records and... in spite
of the fact that with UNIX system calls all of these could be implemented
in zero to a few lines of code.  But I am wandering.  The important point is
that a file is just a pile of bits.  I don't give a * how the system keeps 
them, I just want them back in the same order.
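
As an illustration of how little code fixed-length records need on top of byte-stream files, here is a sketch (Python for illustration; `RECLEN`, `put_record` and `get_record` are invented names):

```python
import os
import tempfile

RECLEN = 16  # fixed record length in bytes; the value is arbitrary

def put_record(fd, n, data):
    # Pad to the record length and write at the record's byte offset.
    os.lseek(fd, n * RECLEN, os.SEEK_SET)
    os.write(fd, data.ljust(RECLEN, b"\0"))

def get_record(fd, n):
    # Seek to the record's byte offset, read one record, strip padding.
    os.lseek(fd, n * RECLEN, os.SEEK_SET)
    return os.read(fd, RECLEN).rstrip(b"\0")

fd = os.open(os.path.join(tempfile.mkdtemp(), "recs"),
             os.O_RDWR | os.O_CREAT, 0o600)
put_record(fd, 0, b"first")
put_record(fd, 3, b"fourth")
print(get_record(fd, 3))  # b'fourth'
```

Seek plus read/write over a plain pile of bytes is all the "record support" needed, which is the point being made above.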

Moral:

	Keep It Simple !   Or at least, please keep all the wierd stuff 
out of the kernel.  It should be kept clean, simple, and elegant.  If 
you want weird system calls, slide a layer of subroutines called 
wexec, wclose, etc. to do your funny stuff.  That way your 
application code won't be full of that code, and applications that
don't want it won't get it - they can implement what THEY need.

As UNIX gets more like VMS and VMS gets more like UNIX, I wish
something were getting more like an updated V6.

					--Ray Clark
					   Xerox, Webster N.Y.
					   {ucbvax!}Clark.wbst@parc-maxc

bob@ucla-locus@sri-unix.UUCP (08/03/83)

From:            Bob English <bob@ucla-locus>

I think you missed the point.

What I am trying to avoid (and remember that I wasn't the one
to suggest this) is the sudden and inexplicable appearance of
standard error messages in data files.  I don't think that's an
unreasonable goal, but I don't think it should be the
responsibility of individual programs to check such things.  I
don't remember suggesting that fd's 0, 1, or 2 be immune to close
except at the command level (where they cannot be explicitly
re-opened).

Given that the stdio package exists, and is widely used, I think
it makes sense to protect its users in some fashion.  A more
elegant solution would be to prohibit opens on 0, 1, or 2 until
an explicit close has been done by that program.  That would, at
least, prevent the accidental re-opening of a stdio descriptor
(dup2 closes any file already open on the target descriptor
before reusing it, so that's no problem).
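
The dup2 behavior relied on here can be seen directly (a Python sketch; both descriptors point at /dev/null purely for illustration):

```python
import os

a = os.open("/dev/null", os.O_WRONLY)
b = os.open("/dev/null", os.O_WRONLY)

# dup2(a, b) first closes whatever is open on b, then makes b refer to
# the same open file as a.  The target slot is replaced in one step, so
# an unrelated open can never sneak into descriptor b in between.
os.dup2(a, b)

os.write(b, b"x")  # b is live again, now an alias for a
```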

I don't see how this adds complexity.  Nor do I see how this is a
merge with VMS.  In fact, I'm a little confused by your comments
altogether.

--bob--